L2 Regularization from Probabilistic Perspective

26 Aug 2017


Hi everyone! In the previous post we noted that least-squares regression is very prone to overfitting. Due to the assumptions used to derive it, the L2 loss function is also sensitive to outliers, i.e. outliers contribute very large penalties to the L2 loss and can mess up the model entirely. You may already be aware that adding a regularization term helps to improve the robustness of the model. The regularized L2 loss is expressed as follows:

$$\mathcal{L}(\mathbf{w}) = \sum_{i=1}^{N} \left(y_i - \mathbf{w}^T\mathbf{x}_i\right)^2 + \lambda \sum_{j=1}^{D} w_j^2$$

where $\mathbf{w}$ are the model weights, $(\mathbf{x}_i, y_i)$ are the $N$ descriptor–target pairs, $D$ is the number of weights, and $\lambda$ is the regularization constant.

In this post, we show how this regularization term can be derived from a probabilistic perspective. Enjoy!
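Before we dive in, here is a minimal sketch of this loss in code. The toy data, the weights, and the value of $\lambda$ (`lam`) below are hypothetical choices made purely for illustration:

```python
import numpy as np

# A minimal sketch of the regularized L2 loss above.
# The toy data, true weights, and lam (lambda) are hypothetical values chosen
# purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                   # 20 samples, 3 descriptors
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=20)

def regularized_l2_loss(w, X, y, lam):
    residuals = y - X @ w                      # prediction errors
    return np.sum(residuals ** 2) + lam * np.sum(w ** 2)

print(regularized_l2_loss(w_true, X, y, lam=0.1))
```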

Assumptions

Recall the assumptions we used to derive least-squares regression: the target is a linear function of the descriptors plus a random error, $y_i = \mathbf{w}^T\mathbf{x}_i + \epsilon_i$, and the errors are independent and identically distributed according to a gaussian with mean 0 and variance $\sigma^2$, i.e. $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$.

In overfitting, our model fits the noise or random error instead of the relationship between the descriptors and the target values. When this happens, the model parameters or weights usually become excessively large: viewed as a vector $\mathbf{w}$ in the weight space, they have a very high norm. In other words, the value of each element is very large, as the sketch below illustrates.
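To make this concrete, here is a small sketch that fits polynomials of increasing degree by ordinary least squares to a hypothetical noisy toy dataset; the higher-degree fits, which chase the noise, typically end up with a much larger coefficient norm:

```python
import numpy as np

# Sketch: overfit models tend to have weights with a very large norm.
# We fit polynomials of increasing degree to a small, noisy, hypothetical
# toy dataset and compare the norms of the fitted coefficient vectors.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=10)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, deg=degree)      # ordinary least-squares fit
    print(f"degree {degree}:  ||w|| = {np.linalg.norm(coeffs):.1f}")
```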

To avoid this, we introduce another assumption: a prior assumption that the model parameters $\mathbf{w}$ are distributed according to a multivariate gaussian with mean $\mathbf{0}$ and covariance matrix $\Sigma$, i.e. $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \Sigma)$. With this prior, we encourage the values of $\mathbf{w}$ to stay small, or sufficiently close to 0.

We can in fact take this assumption even further and suppose that the variables of this multivariate normal are independent of each other. The covariance matrix then decomposes into the identity matrix multiplied by a scalar $\sigma_w^2$, namely $\Sigma = \sigma_w^2 I$. We can think of $\sigma_w^2$ as the constant that controls the width of the gaussian curve.
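As a small sketch of what this prior does, the snippet below draws weight vectors from $\mathcal{N}(\mathbf{0}, \sigma_w^2 I)$ for a few hypothetical values of $\sigma_w$ and reports the average norm of the samples; the smaller $\sigma_w$ is, the closer the weights stay to 0:

```python
import numpy as np

# Sketch: draw weight vectors from the prior N(0, sigma_w^2 * I).
# D and the values of sigma_w below are hypothetical; smaller sigma_w
# concentrates the sampled weights closer to 0.
rng = np.random.default_rng(0)
D = 3                                          # number of weights
for sigma_w in (0.1, 1.0, 10.0):
    cov = sigma_w ** 2 * np.eye(D)             # Sigma = sigma_w^2 * I
    w_samples = rng.multivariate_normal(np.zeros(D), cov, size=1000)
    print(f"sigma_w = {sigma_w:5.1f}  ->  mean ||w|| = "
          f"{np.linalg.norm(w_samples, axis=1).mean():.2f}")
```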

Probabilistic Point of View

We have previously shown that minimizing the L2 loss is equivalent to maximizing the likelihood of the model parameters $\mathbf{w}$, which is the same as maximizing the probability of generating the data $\mathcal{D} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ given the model parameters, $p(\mathcal{D} \mid \mathbf{w})$. By the conditional probability rule, we can express the following equalities:

$$p(\mathbf{w} \mid \mathcal{D})\, p(\mathcal{D}) = p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w}) \quad\Longleftrightarrow\quad p(\mathbf{w} \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w})}{p(\mathcal{D})}$$

From the above equalities, we can see that by maximizing the right-hand side, the product of the likelihood and the prior, we are also maximizing the posterior term $p(\mathbf{w} \mid \mathcal{D})$ on the left-hand side. One should ask: why should we maximize the posterior instead of the likelihood?

Notice that by explicitly maximizing the posterior, we have the advantage of incorporating the prior into our estimation. This way, our assumption that $\mathbf{w} \sim \mathcal{N}(\mathbf{0}, \sigma_w^2 I)$ can be put to use. Lastly, since the denominator $p(\mathcal{D})$ does not depend on $\mathbf{w}$ and only acts as a constant in this maximization, we can simply ignore it.

Now, we recast our goal from finding the $\mathbf{w}$ that maximizes the likelihood to finding the $\mathbf{w}$ that maximizes the posterior. We refer to this goal as the maximum a-posteriori (MAP) estimate of $\mathbf{w}$, which we denote as $\mathbf{w}_{\text{MAP}}$.
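Written out as a worked equation (using the notation above, with the last step following because $p(\mathcal{D})$ is a constant with respect to $\mathbf{w}$), the recast goal reads:

$$\mathbf{w}_{\text{MAP}} = \arg\max_{\mathbf{w}}\, p(\mathbf{w} \mid \mathcal{D}) = \arg\max_{\mathbf{w}} \frac{p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w})}{p(\mathcal{D})} = \arg\max_{\mathbf{w}}\, p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w})$$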

Having cleared up all this intuition, we can now start deriving our L2 loss function! See how $p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w})$ can be further decomposed as follows:

$$p(\mathcal{D} \mid \mathbf{w})\, p(\mathbf{w}) = \left[\prod_{i=1}^{N} \mathcal{N}\!\left(y_i \mid \mathbf{w}^T\mathbf{x}_i,\, \sigma^2\right)\right] \mathcal{N}\!\left(\mathbf{w} \mid \mathbf{0},\, \sigma_w^2 I\right) = \left[\prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(y_i - \mathbf{w}^T\mathbf{x}_i)^2}{2\sigma^2}\right)\right] \frac{1}{\sqrt{(2\pi)^D \det(\sigma_w^2 I)}} \exp\!\left(-\frac{1}{2}\mathbf{w}^T (\sigma_w^2 I)^{-1} \mathbf{w}\right)$$

At this point, we can use the following useful properties of the identity matrix:

$$(\sigma_w^2 I)^{-1} = \frac{1}{\sigma_w^2} I, \qquad \det(\sigma_w^2 I) = (\sigma_w^2)^D, \qquad \mathbf{w}^T I\, \mathbf{w} = \mathbf{w}^T \mathbf{w} = \sum_{j=1}^{D} w_j^2$$

As we already know, finding the MAP estimate by maximizing the posterior is equivalent to minimizing its negative log. Therefore, dropping all the terms that do not depend on $\mathbf{w}$ (the normalizing factors), we can restate $\mathbf{w}_{\text{MAP}}$ as follows:

$$\mathbf{w}_{\text{MAP}} = \arg\min_{\mathbf{w}} \left[ \sum_{i=1}^{N} \frac{(y_i - \mathbf{w}^T\mathbf{x}_i)^2}{2\sigma^2} + \sum_{j=1}^{D} \frac{w_j^2}{2\sigma_w^2} \right]$$

Note: We cannot simply ignore and remove the remaining denominators $2\sigma^2$ and $2\sigma_w^2$, since they weight the two terms differently; dropping them would change the function being minimized, and its solution would differ from the solution of the original function.

We can, however, perform a simple mathematical manipulation by multiplying both terms by the constant $2\sigma^2$, which does not change the minimizer:

$$\mathbf{w}_{\text{MAP}} = \arg\min_{\mathbf{w}} \left[ \sum_{i=1}^{N} (y_i - \mathbf{w}^T\mathbf{x}_i)^2 + \sum_{j=1}^{D} \frac{\sigma^2}{\sigma_w^2} w_j^2 \right]$$

Finally, we can introduce a constant $\lambda$, where $\lambda = \frac{\sigma^2}{\sigma_w^2}$, take it outside of the summation, and voila! We arrive at the equation shown at the beginning of this post:

$$\mathbf{w}_{\text{MAP}} = \arg\min_{\mathbf{w}} \left[ \sum_{i=1}^{N} (y_i - \mathbf{w}^T\mathbf{x}_i)^2 + \lambda \sum_{j=1}^{D} w_j^2 \right]$$
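If you would like to convince yourself numerically, here is a small sketch (with hypothetical toy data and hypothetical values of $\sigma$ and $\sigma_w$) that minimizes both the negative log posterior and the regularized L2 loss with $\lambda = \sigma^2 / \sigma_w^2$, and checks that the two minimizers agree:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: check numerically that minimizing the negative log posterior and
# minimizing the regularized L2 loss with lam = sigma**2 / sigma_w**2 give
# the same weights. The toy data, sigma, and sigma_w are hypothetical values.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=50)
sigma, sigma_w = 0.3, 1.0
lam = sigma ** 2 / sigma_w ** 2

def neg_log_posterior(w):
    return (np.sum((y - X @ w) ** 2) / (2 * sigma ** 2)
            + np.sum(w ** 2) / (2 * sigma_w ** 2))

def regularized_l2(w):
    return np.sum((y - X @ w) ** 2) + lam * np.sum(w ** 2)

w0 = np.zeros(3)
w_map = minimize(neg_log_posterior, w0).x
w_ridge = minimize(regularized_l2, w0).x
print(np.allclose(w_map, w_ridge, atol=1e-4))  # the two minimizers should coincide
```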

Additional Notes

We have seen how the L2 regularization term is derived, and what the magic scalar $\lambda$ is. We saw that $\sigma^2$ controls how fat the tail of the error distribution is, while $\sigma_w^2$ controls the shape (or width) of the gaussian prior distribution over the weights $\mathbf{w}$. Therefore, $\lambda = \frac{\sigma^2}{\sigma_w^2}$ implicitly controls both quantities. Namely, as $\lambda$ gets larger, we allow the model to tolerate more error (by having a fatter-tailed error distribution), and at the same time we narrow the weight distribution down closer to 0. This way, we avoid weights that are too large, and also prevent outliers from penalizing the loss function too heavily.
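As a final sketch, the snippet below uses the closed-form ridge solution $\mathbf{w} = (X^TX + \lambda I)^{-1}X^T\mathbf{y}$ on a hypothetical toy dataset to show how the norm of the fitted weights shrinks as $\lambda$ grows:

```python
import numpy as np

# Sketch: the effect of lambda on the size of the fitted weights, using the
# closed-form ridge solution w = (X^T X + lam * I)^(-1) X^T y.
# The toy data and the lambda values are hypothetical.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=30)

for lam in (0.0, 1.0, 10.0, 100.0):
    w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
    print(f"lambda = {lam:6.1f}  ->  ||w|| = {np.linalg.norm(w):.3f}")
```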


That is all for now. I hope you enjoyed it, and see you in the next post.

Thank you!
