Welcome to our series on demystifying Artificial Intelligence! In this inaugural post, we're going to take the first exciting step into the world of neural networks by building a simple one from scratch. No complex deep-learning libraries needed for this initial dive – just Python, a little NumPy for the arithmetic, and a thirst for understanding.

[Image: Conceptual diagram of a neural network]

What is a Neural Network?

At its core, a neural network is a computational model inspired by the structure and function of biological neural networks, like the human brain. It consists of interconnected nodes, called neurons, organized in layers. These networks learn to perform tasks by processing data, identifying patterns, and adjusting their internal connections (weights) without being explicitly programmed for every specific task.

The Building Blocks: Neurons and Layers

A single neuron receives input signals, processes them through a weighted sum, adds a bias, and then passes the result through an activation function. This output is then passed on to other neurons in the next layer.
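In other words, for a neuron with two inputs the computation is simply:

output = activation(w1 * x1 + w2 * x2 + b)

where x1 and x2 are the inputs, w1 and w2 are the corresponding weights, and b is the bias. This is exactly what the code later in this post implements.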

  • Input Layer: Receives the raw data.
  • Hidden Layers: Perform computations and feature extraction.
  • Output Layer: Produces the final result.
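
To make the idea of a layer concrete, here is a minimal sketch. The layer_forward function and the specific weights and biases are illustrative assumptions for this sketch only – they are not part of the single-neuron example we build below. A layer is just several neurons that all read the same inputs, so we can evaluate them together with a weight matrix whose rows are the individual neurons' weights:

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each row of `weights` holds one neuron's weights, so one matrix-vector
    # product computes every neuron's weighted sum at once.
    return sigmoid(np.dot(weights, inputs) + biases)

# Hypothetical hidden layer: 3 neurons, each looking at the same 2 inputs
weights = np.array([[ 0.2,  0.8],
                    [-0.5,  0.5],
                    [ 0.1, -0.3]])
biases = np.array([0.0, 0.1, -0.1])

print(layer_forward(np.array([0.8, 0.7]), weights, biases))  # 3 outputs, one per neuron

Stacking layers like this is exactly what we'll do when we move on to multi-layer networks in later posts.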

Our Simple Network: A Single Neuron

Let's start with the simplest possible neural network: a single neuron. This neuron will take a few inputs, multiply them by weights, add a bias, and then apply a simple activation function. We'll use the sigmoid function for this example, which squashes values between 0 and 1.


import numpy as np

class SingleNeuron:
    def __init__(self, weights, bias):
        # Store the neuron's learnable parameters
        self.weights = np.array(weights)
        self.bias = bias

    def sigmoid(self, x):
        # Squash any real number into the range (0, 1)
        return 1 / (1 + np.exp(-x))

    def predict(self, inputs):
        # Weighted sum of the inputs plus the bias, then the activation function
        inputs = np.array(inputs)
        linear_output = np.dot(inputs, self.weights) + self.bias
        return self.sigmoid(linear_output)

# Example Usage:
# Define weights and bias for a neuron that might learn to detect if a fruit is an apple
# (e.g., based on roundness and color intensity)
# Let's assume input 1: roundness (0 to 1), input 2: color intensity (0 to 1)
weights = [0.5, -0.5]  # Example weights
bias = 0.1             # Example bias

neuron = SingleNeuron(weights, bias)

# Sample inputs: [roundness, color_intensity]
sample_input = [0.8, 0.7]
prediction = neuron.predict(sample_input)
print(f"Input: {sample_input}, Prediction: {prediction:.4f}")

sample_input_2 = [0.2, 0.3]
prediction_2 = neuron.predict(sample_input_2)
print(f"Input: {sample_input_2}, Prediction: {prediction_2:.4f}")
                    

The Forward Pass

The code above demonstrates a "forward pass." We take inputs, combine them with weights, add bias, and apply an activation function to get an output. This output is the neuron's prediction.
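
You can trace the first sample by hand using the weights (0.5 and -0.5), bias (0.1), and inputs [0.8, 0.7] from the code above:

linear_output = (0.8 * 0.5) + (0.7 * -0.5) + 0.1 = 0.40 - 0.35 + 0.10 = 0.15
prediction = sigmoid(0.15) ≈ 0.5374

The second sample works the same way: (0.2 * 0.5) + (0.3 * -0.5) + 0.1 = 0.05, and sigmoid(0.05) ≈ 0.5125. The values the script prints should land very close to these.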

What's Next?

This is just the very beginning! In future posts, we'll explore how to create multi-layer perceptrons (MLPs), introduce the concept of training using backpropagation and gradient descent, and discuss various activation functions and optimizers. Stay tuned!

Conclusion

Building your first neural network, even a simple one, is a significant milestone. It lays the groundwork for understanding more complex AI models. We've seen how a single neuron can process information and produce an output. The journey into AI is continuous, and each step brings new insights.

Don't hesitate to experiment with different weights and biases to see how the predictions change. Happy coding!