Understanding Neural Networks
Neural networks are a cornerstone of modern artificial intelligence. Inspired by the human brain, they consist of layers of interconnected nodes (neurons) that can learn complex patterns from data.
Key Concepts
- Neuron: Basic processing unit that applies a weighted sum and an activation function.
- Layers: Input, hidden, and output layers define the network architecture.
- Activation Functions: Introduce non‑linearity (e.g., ReLU, Sigmoid, Tanh).
- Training: Adjusts weights using backpropagation and gradient descent.
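The weighted-sum-plus-activation idea behind a single neuron can be sketched directly in a few lines. The input values, weights, and bias below are made-up numbers for illustration, and ReLU is chosen as the activation:

```python
import numpy as np

def relu(z):
    # ReLU activation: passes positive values through, clips negatives to 0
    return np.maximum(0, z)

x = np.array([0.5, -1.0, 2.0])   # inputs to the neuron (illustrative values)
w = np.array([0.4, 0.3, -0.2])   # one weight per input (illustrative values)
b = 0.1                          # bias

z = w.dot(x) + b                 # weighted sum: 0.2 - 0.3 - 0.4 + 0.1 ≈ -0.4
a = relu(z)                      # ReLU clips the negative sum to 0.0
print(z, a)
```

Swapping `relu` for a sigmoid or tanh changes only the final squashing step; the weighted sum is the same in every case.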
Simple Example in Python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Input data: all four XOR input combinations
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Expected output (XOR of each input pair)
y = np.array([[0], [1], [1], [0]])

# Initialize weights and biases
np.random.seed(42)
W1 = np.random.randn(2, 2)   # input -> hidden weights
b1 = np.zeros((1, 2))        # hidden biases
W2 = np.random.randn(2, 1)   # hidden -> output weights
b2 = np.zeros((1, 1))        # output bias

# Forward pass through both layers
def forward(X):
    Z1 = X.dot(W1) + b1   # hidden pre-activation
    A1 = sigmoid(Z1)      # hidden activation
    Z2 = A1.dot(W2) + b2  # output pre-activation
    A2 = sigmoid(Z2)      # output activation
    return A2

print(forward(X))
This code defines a minimal two-layer network and prints its outputs for the four XOR inputs. Note that the weights are random and untrained, so the outputs will not yet match y: actually learning XOR requires a training loop that adjusts the weights with backpropagation and gradient descent.
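To make the network actually learn XOR, the forward pass above can be paired with a backpropagation training loop. The sketch below is one minimal way to do it, with choices the text does not specify and which are therefore assumptions: four hidden units rather than two (a wider hidden layer makes XOR easier to learn), a binary-cross-entropy loss paired with the sigmoid output, full-batch gradient descent, and a fixed learning rate:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

# Assumed architecture: 2 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 4))
b1 = np.zeros((1, 4))
W2 = rng.standard_normal((4, 1))
b2 = np.zeros((1, 1))

lr = 0.5  # learning rate (an assumed hyperparameter)
for step in range(5000):
    # Forward pass
    Z1 = X.dot(W1) + b1
    A1 = sigmoid(Z1)
    Z2 = A1.dot(W2) + b2
    A2 = sigmoid(Z2)

    # Backward pass. With binary cross-entropy loss and a sigmoid
    # output, the gradient at the output simplifies to A2 - y.
    dZ2 = A2 - y
    dW2 = A1.T.dot(dZ2)
    db2 = dZ2.sum(axis=0, keepdims=True)
    dZ1 = dZ2.dot(W2.T) * A1 * (1 - A1)  # sigmoid'(Z1) = A1 * (1 - A1)
    dW1 = X.T.dot(dZ1)
    db1 = dZ1.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(np.round(A2).ravel())  # trained outputs should approximate [0, 1, 1, 0]
```

After training, rounding the network's outputs recovers the XOR truth table, which the untrained forward pass alone cannot do.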