Backpropagation

What is Backpropagation?

Backpropagation is the core algorithm for training neural networks. It computes the gradient of the loss function (the error between predicted and actual outputs) with respect to each weight by applying the chain rule of calculus; a gradient-descent step then uses these gradients to update the weights.
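
For example, if the loss is a composition L = f(g(w)), the chain rule gives ∂L/∂w = f′(g(w)) · g′(w); backpropagation applies this rule repeatedly, layer by layer, from the output back toward the inputs.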

Mathematical Overview

Δw = −η ∂L/∂w
where:
η – learning rate
L – loss function
w – weight

The gradient ∂L/∂w is obtained by propagating the error backwards through the network layers.
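
As a concrete instance (matching the code example below), take a single neuron z = σ(a) with pre-activation a = w·x + b and squared-error loss L = (y − z)² for one training example. The chain rule unrolls the gradient step by step:

∂L/∂w = ∂L/∂z · ∂z/∂a · ∂a/∂w = 2(z − y) · σ(a)(1 − σ(a)) · x

and similarly ∂L/∂b = 2(z − y) · σ(a)(1 − σ(a)), since ∂a/∂b = 1.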

Code Example (Python)

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_deriv(x):
    s = sigmoid(x)
    return s * (1 - s)

# simple 1-neuron network learning the mapping 0 -> 0, 1 -> 1
w = np.random.randn()
b = np.random.randn()
lr = 0.1

x = np.array([0.0, 1.0])            # inputs
y = np.array([0.0, 1.0])            # targets

for epoch in range(1000):
    a = w * x + b                   # pre-activation (computed once per step)
    z = sigmoid(a)                  # forward pass
    loss = ((y - z) ** 2).mean()    # mean squared error
    # backward pass: dL/dz = 2(z - y), then chain rule through the sigmoid
    delta = 2 * (z - y) * sigmoid_deriv(a)
    dw = (delta * x).mean()         # ∂L/∂w
    db = delta.mean()               # ∂L/∂b
    w -= lr * dw                    # gradient step: Δw = -η ∂L/∂w
    b -= lr * db
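
After training, w and b should settle near values that map 0 to 0 and 1 to 1. A common sanity check for hand-derived gradients is a finite-difference comparison; the sketch below is not part of the original example and assumes the sigmoid, x, y, w, and b defined above:

def loss_fn(w, b):
    z = sigmoid(w * x + b)
    return ((y - z) ** 2).mean()

eps = 1e-6
dw_numeric = (loss_fn(w + eps, b) - loss_fn(w - eps, b)) / (2 * eps)
db_numeric = (loss_fn(w, b + eps) - loss_fn(w, b - eps)) / (2 * eps)
# both should closely match the analytic dw and db from the training loop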