4. Regularization
● (My) definition: Techniques to prevent overfitting
● Frequentists’ viewpoint:
○ Regularization = suppress model complexity
○ “Usually” done by inserting a term representing model complexity into the objective function:
○ min_w  E_train(w) + λ · Ω(w)
○ where E_train(w) is the training error, Ω(w) represents model complexity, and λ is the trade-off weight
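As a minimal sketch (not from the slides; the data, the squared-error loss, and the squared L-2 penalty are illustrative choices), the regularized objective E_train(w) + λ·Ω(w) can be written directly:

```python
# Regularized objective J(w) = E_train(w) + lam * Omega(w).
# Here E_train is mean squared error of a linear model and Omega is the
# squared L-2 norm; both are just illustrative choices.

def squared_error(w, xs, ys):
    """Mean squared training error of the linear model y ~ w[0] + w[1]*x."""
    return sum((w[0] + w[1] * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def l2_penalty(w):
    """Model-complexity term Omega(w) = ||w||_2^2."""
    return sum(wi ** 2 for wi in w)

def objective(w, xs, ys, lam):
    return squared_error(w, xs, ys) + lam * l2_penalty(w)

xs, ys = [0.0, 1.0, 2.0], [0.1, 0.9, 2.1]
small_w, big_w = [0.0, 1.0], [0.0, 10.0]
# With lam > 0 the penalized objective favours the smaller weights.
assert objective(small_w, xs, ys, lam=0.1) < objective(big_w, xs, ys, lam=0.1)
```

The trade-off weight λ controls how strongly complexity is suppressed relative to fitting the training data.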
5. VC dimension & VC bound
● Why suppressing model complexity?
○ A theoretical bound on the testing error, called the Vapnik–Chervonenkis (VC) bound, states the following: with probability at least 1 − δ,
○ E_test ≤ E_train + √( (8/N) · ln( 4 · ((2N)^d_VC + 1) / δ ) )
● To reduce the testing error, we prefer:
○ Low training error (E_train ↓)
○ Big data (N ↑)
○ Low model complexity (d_VC ↓)
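The penalty term of the bound can be evaluated numerically; the sketch below uses the standard form with the growth function bounded by (2N)^d_VC + 1, and the specific N, d_VC, δ values are just examples:

```python
import math

# VC generalization penalty: with probability >= 1 - delta,
#   E_test <= E_train + sqrt( (8/N) * ln( 4 * ((2N)^d_vc + 1) / delta ) ).

def vc_penalty(n, d_vc, delta=0.05):
    growth = (2 * n) ** d_vc + 1          # upper bound on the growth function
    return math.sqrt(8.0 / n * math.log(4.0 * growth / delta))

# The bound tightens with more data (N up) and loosens with complexity (d_vc up):
assert vc_penalty(10_000, d_vc=3) < vc_penalty(1_000, d_vc=3)
assert vc_penalty(1_000, d_vc=10) > vc_penalty(1_000, d_vc=3)
```

This makes the three preferences on the slide concrete: each of E_train ↓, N ↑, d_VC ↓ shrinks the right-hand side of the bound.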
6. VC dimension & VC bound
● Definition: VC dimension
○ We say a hypothesis set H has VC dimension d_VC = N iff N is the largest number of instances for which there exists a certain set of instances that can be binary-classified into any combination of class labels (i.e., shattered) by H.
● Example: H = {straight lines in 2D space}
○ [Figure: various straight lines splitting points labeled 0/1 into different class combinations]
9. VC dimension & VC bound
● Definition: VC dimension
○ We say a hypothesis set H has VC dimension d_VC = N iff N is the largest number of instances for which there exists a certain set of instances that can be binary-classified into any combination of class labels (i.e., shattered) by H.
● Example: H = {straight lines in 2D space}
○ N=2: {0,0}, {0,1}, {1,0}, {1,1}
○ N=3: {0,0,0}, {0,0,1},……, {1,1,1}
○ N=4: fails, e.g. in the XOR-like case where diagonally opposite points share a label, so no single straight line realizes that labeling → d_VC = 3 for straight lines in 2D
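The N=2/N=3/N=4 argument can be checked by brute force. The sketch below (my own construction, not the slides') uses the perceptron rule as a linear-separability test: it converges when a labeling is separable, and the epoch cap flags the non-separable XOR labelings for these tiny point sets:

```python
import itertools

# Check whether straight lines (with bias) can shatter a 2D point set.
# separable() runs the perceptron rule; if a labeling is linearly separable
# it converges to zero mistakes. The epoch cap makes "not separable" a
# heuristic, but it is reliable for these tiny, low-margin-free examples.

def separable(points, labels, epochs=2000):
    w = [0.0, 0.0, 0.0]                       # [bias, w_x, w_y]
    for _ in range(epochs):
        mistakes = 0
        for (x, y), lab in zip(points, labels):
            s = 1 if lab == 1 else -1         # map {0,1} -> {-1,+1}
            if s * (w[0] + w[1] * x + w[2] * y) <= 0:
                w[0] += s; w[1] += s * x; w[2] += s * y
                mistakes += 1
        if mistakes == 0:
            return True
    return False

def shattered(points):
    """True iff every labeling of the points is realizable by some line."""
    return all(separable(points, labs)
               for labs in itertools.product([0, 1], repeat=len(points)))

triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]      # 3 non-collinear points
xor_square = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
assert shattered(triangle)        # some 3-point set is shattered -> d_vc >= 3
assert not shattered(xor_square)  # the XOR labeling defeats any straight line
```

This matches the slide: every labeling of 3 non-collinear points is realizable, but the 4-point XOR case fails, so d_VC = 3.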
10. Regularization – Frequentist viewpoint
● In general, more model parameters
↔ higher VC dimension
↔ higher model complexity
↔ looser bound on the testing error
11. Regularization – Frequentist viewpoint
● ……Therefore, reducing model complexity
↔ reducing the VC dimension
↔ reducing the number of free parameters
↔ reducing ‖w‖₀ (the L-0 norm of the parameter vector)
↔ sparsity of the parameters!
12. Regularization – Frequentist viewpoint
● The L-p norm of a K-dimensional vector x: ‖x‖_p = ( Σ_{k=1..K} |x_k|^p )^{1/p}
1. L-2 norm: ‖x‖₂ = √( Σ_k x_k² )
2. L-1 norm: ‖x‖₁ = Σ_k |x_k|
3. L-0 norm: defined as the number of nonzero entries of x (not a true norm)
4. L-∞ norm: ‖x‖_∞ = max_k |x_k|
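The four norms above are one-liners; this small sketch checks them on the classic 3-4-5 example:

```python
import math

# L-p norms of a K-dimensional vector x:
#   L-2: sqrt(sum x_k^2)      L-1: sum |x_k|
#   L-0: # of nonzero entries (not a true norm)   L-inf: max |x_k|

def l2(x):   return math.sqrt(sum(v * v for v in x))
def l1(x):   return sum(abs(v) for v in x)
def l0(x):   return sum(1 for v in x if v != 0)
def linf(x): return max(abs(v) for v in x)

x = [3.0, -4.0, 0.0]
assert l2(x) == 5.0     # sqrt(9 + 16)
assert l1(x) == 7.0
assert l0(x) == 2       # only two nonzero entries -> sparsity measure
assert linf(x) == 4.0
```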
13. Regularization – Frequentist viewpoint
● However, since the L-0 norm is hard to incorporate into the objective function (∵ discontinuous and non-convex), we turn to the other, more tractable L-p norms
● E.g. Linear SVM:
○ min_{w,b}  λ‖w‖₂² + (1/N) Σ_n max(0, 1 − y_n(w·x_n + b))
○ λ: trade-off weight; ‖w‖₂²: L-2 regularization (a.k.a. large margin); max(0, 1 − y_n(w·x_n + b)): hinge loss
● Linear SVM = hinge loss + L-2 regularization!
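A sketch of that objective in code (the toy data and the particular λ are made up for illustration; labels are in {−1, +1}):

```python
# Linear SVM objective:
#   J(w, b) = lam * ||w||_2^2 + (1/N) * sum_n max(0, 1 - y_n * (w . x_n + b))
# i.e. hinge loss plus L-2 regularization, with lam the trade-off weight.

def svm_objective(w, b, data, lam):
    hinge = sum(max(0.0, 1.0 - y * (sum(wi * xi for wi, xi in zip(w, x)) + b))
                for x, y in data) / len(data)
    return lam * sum(wi * wi for wi in w) + hinge

data = [([2.0], 1), ([-2.0], -1)]
# w = 1, b = 0 separates this data with functional margin 2 > 1,
# so the hinge term vanishes and only the L-2 penalty remains:
assert svm_objective([1.0], 0.0, data, lam=0.01) == 0.01
```

Note how the two terms pull in opposite directions: the hinge loss wants margins of at least 1, while the L-2 term keeps ‖w‖ (and hence the model complexity) small.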
15. L1 Regularization – An Intuitive Interpretation
● Now we know we prefer sparse parameters
○ ↔ small L-0 norm
● ……but why do people say that minimizing the L1 norm induces sparsity?
● “For most large underdetermined systems of linear equations, the minimal L1-norm solution is also the sparsest solution”
○ Donoho, D. L., Communications on Pure and Applied Mathematics, 2006.
16. L1 Regularization – An Intuitive Interpretation
● An intuitive interpretation: the L-p norm ≣ our preference over parameters
○ L-2 norm: the equal-preference contours in parameter space are circles
○ L-1 norm: the equal-preference contours are diamonds (squares rotated 45°), with tips on the axes
○ [Figure: equal-preference lines in <Parameter Space>]
19. L1 Regularization – An Intuitive Interpretation
● Intuition: with L1 regularization, the minimal training error is more likely to occur at a tip point of the parameter-preference contour
○ Assume the equal-training-error contours are concentric circles; then the minimum occurs at a tip point iff the center of those circles lies in the shaded areas shown in the figure, which is relatively probable — and at a tip point some coordinates of w are exactly zero, i.e. the solution is sparse!
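The same intuition can be made exact in one dimension (my own worked example, not from the slides): with an L1 penalty the minimizer is soft-thresholding, which snaps small coefficients to exactly zero, while the L2 minimizer only shrinks them:

```python
# 1-D sketch of why L1 yields exact zeros while L2 only shrinks.
#   min_w (w - a)^2 + lam * |w|   ->  soft-thresholding at lam/2
#   min_w (w - a)^2 + lam * w^2   ->  w = a / (1 + lam), never exactly 0

def lasso_1d(a, lam):
    # From subgradient optimality: w = sign(a) * max(|a| - lam/2, 0)
    shrunk = abs(a) - lam / 2.0
    return 0.0 if shrunk <= 0 else (shrunk if a > 0 else -shrunk)

def ridge_1d(a, lam):
    # From setting the derivative 2(w - a) + 2*lam*w to zero
    return a / (1.0 + lam)

# A small unregularized coefficient (a = 0.3) is snapped exactly to zero
# by the L1 penalty, but only shrunk by the L2 penalty:
assert lasso_1d(0.3, lam=1.0) == 0.0
assert ridge_1d(0.3, lam=1.0) != 0.0
```

The threshold at λ/2 is the 1-D counterpart of the diamond's tip points: a whole interval of data values maps to the sparse solution w = 0.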
21. Regularization – Bayesian viewpoint
● Bayesian: model parameters are probabilistic.
● Frequentist: model parameters are deterministic.
[Figure: two diagrams — Frequentist: a fixed yet unknown universe is sampled to produce random observations, from which parameters are estimated; Bayesian: given the observation, parameters are estimated assuming the universe is a certain type of model]
22. Regularization – Bayesian viewpoint
● To conclude:
              Data       Model parameter
Bayesian      Fixed      Variable
Frequentist   Variable   Fixed yet unknown
23. Regularization – Bayesian viewpoint
● E.g. L-2 regularization
● Assume the parameters w come from a Gaussian prior with zero mean and identity covariance: p(w) ∝ exp(−‖w‖₂² / 2)
○ Then MAP estimation maximizes p(D|w)·p(w), i.e. minimizes −log p(D|w) + ‖w‖₂²/2 — the L-2 penalty is exactly the negative log-prior
○ [Figure: circular equal-probability lines in <Parameter Probability Space>]
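A quick numeric sketch of that correspondence (standard Gaussian density; the example vector is arbitrary): the negative log-density of a zero-mean, identity-covariance Gaussian prior is half the squared L-2 norm plus a constant, so MAP estimation under this prior is L-2 regularization.

```python
import math

# For w ~ N(0, I) in K dimensions:
#   -log p(w) = 0.5 * ||w||_2^2 + (K/2) * log(2*pi)
# The w-dependent part is exactly the L-2 penalty.

def neg_log_gaussian_prior(w):
    k = len(w)
    return 0.5 * sum(wi * wi for wi in w) + 0.5 * k * math.log(2.0 * math.pi)

w = [1.0, -2.0]
penalty = 0.5 * (1.0 + 4.0)                   # 0.5 * ||w||^2
constant = 0.5 * 2 * math.log(2.0 * math.pi)  # independent of w
assert abs(neg_log_gaussian_prior(w) - (penalty + constant)) < 1e-12
```

Since the constant does not depend on w, minimizing −log p(D|w) − log p(w) over w is the same as minimizing the data loss plus ‖w‖₂²/2.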
26. Early Stopping
● Early stopping – stop training before the training error reaches its minimum
● Often used in MLP training
● An intuitive interpretation:
○ Training iteration ↑
○ → number of updates of weights ↑
○ → number of active (far from 0) weights ↑
○ → complexity ↑
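In practice early stopping is usually driven by a validation set. The sketch below (my own illustration; the patience rule and the made-up loss curve are not from the slides) halts once validation loss stops improving:

```python
# Early stopping with patience: stop when the validation loss has not
# improved for `patience` consecutive epochs, and report the best epoch.

def early_stop(val_losses, patience=2):
    """Return the index of the epoch whose weights should be kept."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break                 # overfitting suspected: stop training
    return best_epoch

# A made-up validation curve that dips and then rises (overfitting):
val = [1.0, 0.6, 0.5, 0.55, 0.7, 0.9]
assert early_stop(val) == 2           # keep the weights from epoch 2
```

This matches the slide's intuition: halting early limits the number of weight updates, keeping most weights near 0 and the effective complexity low.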
27. Early Stopping
● Theoretical proof sketch:
○ Consider a perceptron trained with the hinge loss ℓ(w; x_n, y_n) = max(0, −y_n w·x_n)
○ Assume the optimal separating hyperplane is w*, with maximal margin ρ
○ Denote the weight at the t-th iteration as w_t, with margin ρ_t
35. Conclusion
● Regularization: Techniques to prevent overfitting
○ L1-norm: Sparsity of parameter
○ L2-norm: Large Margin
○ Early stopping
○ ……etc.
● The philosophy of regularization
○ Occam’s razor: “Entities must not be multiplied beyond necessity.”
36. Reference
● Learning From Data - A Short Course
○ Yaser S. Abu-Mostafa, Malik Magdon-Ismail, Hsuan-Tien Lin
● Ronan Collobert and Samy Bengio, “Links Between Perceptrons, MLPs and SVMs”, in Proc. ICML, 2004.