Regularization
Regularization techniques are essential for preventing overfitting in machine learning models. By adding a penalty term to the loss function, they constrain model complexity and improve generalization.
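The penalty-term idea can be written out directly. Below is a minimal sketch (the function name and data are illustrative, not from the original) of a mean-squared-error loss with an L2 penalty added, where `lam` plays the role of the regularization strength λ:

```python
import numpy as np

def penalized_mse(w, X, y, lam):
    """MSE data loss plus an L2 penalty on the weights."""
    residual = X @ w - y                 # prediction error
    data_loss = np.mean(residual ** 2)   # ordinary squared-error loss
    penalty = lam * np.sum(w ** 2)       # L2 penalty constrains weight magnitude
    return data_loss + penalty

# Toy data where w = [1.0] fits perfectly, so only the penalty remains.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([1.0, 2.0, 3.0])
print(penalized_mse(np.array([1.0]), X, y, lam=0.1))  # 0.1: pure penalty term
```

Minimizing this combined objective trades a little training-set fit for smaller, better-generalizing weights.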
Common Types
- L1 (Lasso): penalizes the sum of the absolute values of the coefficients; can drive some weights exactly to zero, producing sparse models.
- L2 (Ridge): penalizes the sum of the squared coefficients; shrinks all weights smoothly toward zero without zeroing them out.
- Elastic Net: combines the L1 and L2 penalties, balancing sparsity and smooth shrinkage.
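The sparsity difference between L1 and L2 is easy to see empirically. The sketch below (synthetic data; the setup is an assumption, not from the original) fits scikit-learn's `Lasso` and `Ridge` to data where only two of ten features matter and counts zeroed coefficients:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features carry signal; the other eight are pure noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
ridge = Ridge(alpha=0.1).fit(X, y)

# L1 typically drives the irrelevant coefficients exactly to zero;
# L2 only shrinks them, leaving them small but nonzero.
print('Lasso zero coefficients:', int(np.sum(lasso.coef_ == 0)))
print('Ridge zero coefficients:', int(np.sum(ridge.coef_ == 0)))
```

With this setup, the Lasso fit zeros out most of the noise features while Ridge keeps small nonzero weights on all of them.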
Python Example
```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)  # seed so the results are reproducible
X = rng.normal(size=(100, 1))
y = 3 * X.squeeze() + rng.normal(size=100) * 0.5  # linear signal plus noise

model = Ridge(alpha=0.1)  # alpha is the L2 penalty strength (the λ above)
model.fit(X, y)
pred = model.predict(X)
print('MSE:', mean_squared_error(y, pred))
```
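To see the shrinkage effect directly, one can sweep `alpha` on the same kind of data. This is a small sketch (the alpha grid is an arbitrary choice): as the penalty grows, the fitted coefficient is pulled from its least-squares value toward zero.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 1))
y = 3 * X.squeeze() + rng.normal(size=100) * 0.5  # true slope is 3

# Larger alpha -> stronger L2 penalty -> smaller fitted coefficient.
for alpha in [0.01, 1.0, 100.0, 10000.0]:
    coef = Ridge(alpha=alpha).fit(X, y).coef_[0]
    print(f'alpha={alpha:>8}: coef={coef:.3f}')
```

At small `alpha` the coefficient stays near the unregularized estimate of about 3; at very large `alpha` it is shrunk close to zero, which is the smooth shrinkage behavior described for Ridge above.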