Courage to Learn ML: Demystifying L1 & L2 Regularization (part 3)
I have a question based on our last discussion. I noticed that for the Lp norm, the value of p can be any number greater than 0. Why don't we use p values between 0 and 1? What's the reason there's no L0.5 regularization?
Why do we care about whether a problem or a constraint is non-convex here? Aren't most deep learning problems non-convex?
Why don’t we think about using higher norms, like L3 and L4, for regularization?
Is it possible to mix L1 and L2 regularization?
What's the advantage of using Elastic Net regularization? And if it works so well, why don't we use it more often?

I'm glad you brought up this question. To get straight to the point, we typically avoid p values lower than 1 because they lead to non-convex optimization problems. Let me illustrate this with a picture showing the shape of Lp norms for different p values. Take a close look at p = 0.5; you'll notice that the shape is decidedly non-convex.

The shape of Lp norms for different p values. Source: https://lh5.googleusercontent.com/EoX3sngY7YnzCGY9CyMX0tEaNuKD3_ZiF4Fp3HQqbyqPtXks2TAbpTj5e4tiDv-U9PT0MAarRrPv6ClJ06C0HXQZKHeK40ZpVgRKke8-Ac0TAqdI7vWFdCXjK4taR40bdSdhGkWB
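To see the non-convexity concretely, here's a minimal NumPy sketch (the points and values are my own illustration, not from the article). It checks whether the midpoint of two points on the unit "ball" stays inside that ball, which is exactly what convexity requires:

```python
import numpy as np

def lp_norm(x, p):
    """Compute (sum |x_i|^p)^(1/p); this is a true norm only for p >= 1."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

a = np.array([1.0, 0.0])   # lies on the unit "ball" for every p
b = np.array([0.0, 1.0])   # lies on the unit "ball" for every p
mid = 0.5 * (a + b)        # midpoint of the segment joining a and b

for p in [0.5, 1.0, 2.0]:
    print(f"p = {p}: ||mid||_p = {lp_norm(mid, p):.3f}")

# p = 0.5: ||mid||_p = 2.000 -> midpoint falls OUTSIDE the unit ball: non-convex
# p = 1.0: ||mid||_p = 1.000 -> on the boundary: convex
# p = 2.0: ||mid||_p = 0.707 -> strictly inside: convex
```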

This becomes even clearer when we look at a 3D representation, assuming we're optimizing over three weights. In this case, it's evident that the problem isn't convex, with numerous local minima appearing along the boundary.

A 3D view of Lp norm surfaces. Source: https://ekamperi.github.io/images/lp_norms_3d.png

The reason we typically avoid non-convex problems in machine learning is their complexity. With a convex problem, you're guaranteed a global minimum, which makes it generally easier to solve. Non-convex problems, however, often come with multiple local minima and can be computationally intensive and unpredictable. These are exactly the kinds of challenges we aim to sidestep in ML.

When we use techniques like Lagrange multipliers to optimize a function under certain constraints, it's crucial that those constraints are convex functions. This ensures that adding them to the original problem doesn't alter its fundamental properties or make it harder to solve. This aspect is critical; otherwise, the constraints themselves would introduce new difficulties into the original problem.
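To make this concrete, in rough notation (L is the original loss, w the weights, t the constraint budget, and λ the multiplier; these symbols are mine, not from the article), the constrained view and the penalized view of Lp regularization correspond as:

$$\min_{w} \; L(w) \quad \text{subject to} \quad \|w\|_p \le t \qquad \Longleftrightarrow \qquad \min_{w} \; L(w) + \lambda \, \|w\|_p, \quad \lambda \ge 0$$

This back-and-forth between the two forms is well behaved when the constraint region {w : ||w||_p ≤ t} is convex, which is exactly the case p ≥ 1.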

Your question touches on an interesting aspect of deep learning. It's not that we prefer non-convex problems; it's more accurate to say that we frequently encounter them and have to deal with them in the field of deep learning. Here's why:

  1. The nature of deep learning models leads to a non-convex loss surface: Most deep learning models, particularly neural networks with hidden layers, inherently have non-convex loss functions. This is due to the complex, non-linear transformations that occur within these models. The combination of these non-linearities and the high dimensionality of the parameter space typically results in a loss surface that is non-convex.
  2. Local minima are no longer a major issue in deep learning: In the high-dimensional spaces typical of deep learning, local minima are not as problematic as they can be in lower-dimensional spaces. Research suggests that many of the local minima in deep learning are close in value to the global minimum. Moreover, saddle points, where the gradient is zero but the point is neither a maximum nor a minimum, are far more common in such spaces and pose the bigger challenge.
  3. Advanced optimization techniques are effective in dealing with non-convex spaces: Methods such as stochastic gradient descent (SGD) and its variants have proven particularly effective at finding good solutions in these non-convex spaces (see the short sketch after this list). While these solutions may not be global minima, they are often good enough to achieve high performance on practical tasks.
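As a small illustration of point 3, here's a toy sketch (the loss function and hyperparameters are invented for demonstration) of noisy, SGD-style updates on a one-dimensional non-convex loss. The gradient noise mimics the effect of minibatching and often helps the iterate settle in a low-loss basin despite the many local minima:

```python
import numpy as np

# Toy non-convex loss with many local minima.
def loss(w):
    return np.sin(3 * w) + 0.1 * w ** 2

def grad(w):
    return 3 * np.cos(3 * w) + 0.2 * w

rng = np.random.default_rng(0)
w = rng.uniform(-5, 5)   # random initialization
lr = 0.05                # learning rate

for step in range(500):
    noisy_grad = grad(w) + rng.normal(scale=0.5)  # noise stands in for minibatch sampling
    w -= lr * noisy_grad

print(f"final w = {w:.3f}, loss = {loss(w):.3f}")
```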

Even though deep learning models are non-convex, they excel at capturing complex patterns and relationships in large datasets. Moreover, research into non-convex optimization is continually progressing and improving our understanding. Looking ahead, we may be able to handle non-convex problems more efficiently, with fewer concerns.

Recall the image we discussed earlier showing the shapes of Lp norms for different values of p. As p increases, the shape of the Lp norm evolves. For instance, at p = 3 it resembles a square with rounded corners, and as p approaches infinity it becomes a perfect square.

The shape of Lp norms for different p values. Source: https://lh5.googleusercontent.com/EoX3sngY7YnzCGY9CyMX0tEaNuKD3_ZiF4Fp3HQqbyqPtXks2TAbpTj5e4tiDv-U9PT0MAarRrPv6ClJ06C0HXQZKHeK40ZpVgRKke8-Ac0TAqdI7vWFdCXjK4taR40bdSdhGkWB

In the context of our optimization problem, consider higher norms like L3 or L4. Just as in L2 regularization, where the loss function's contours meet the constraint region at its rounded edges, these higher norms would encourage weights to approach zero without zeroing them out. (If this part isn't clear, feel free to revisit Part 2 for a more detailed explanation.) Based on this observation, we can discuss the two main reasons why L3 and L4 norms aren't commonly used:

  1. L3 and L4 norms show similar effects to L2 without offering significant new benefits: they push weights close to zero but not exactly to zero (a short numerical sketch after this list illustrates this). L1 regularization, in contrast, zeroes out weights and introduces sparsity, which is useful for feature selection.
  2. Computational complexity is another vital aspect. Regularization affects the complexity of the optimization process, and L3 and L4 norms are computationally heavier than L2, making them less feasible for most machine learning applications.
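A quick way to see the first point numerically: the gradient of the penalty term λ·|w|^p determines how hard a small weight is pushed toward zero. The sketch below (the λ value and evaluation points are arbitrary) shows that for p = 1 the push is constant, while for p ≥ 2 it fades as the weight shrinks, so weights get small but rarely become exactly zero:

```python
import numpy as np

def penalty_grad(w, p, lam=0.1):
    """Gradient of lam * |w|^p with respect to w (for w != 0)."""
    return lam * p * np.sign(w) * np.abs(w) ** (p - 1)

for p in [1, 2, 3, 4]:
    grads = [penalty_grad(w, p) for w in (1.0, 0.1, 0.01)]
    print(f"p = {p}:", [f"{g:.5f}" for g in grads])

# p = 1: 0.10000, 0.10000, 0.10000 -> constant pull, drives small weights to exactly 0
# p = 2: 0.20000, 0.02000, 0.00200 -> pull fades near 0, weights shrink but stay nonzero
# p = 3 and p = 4: even weaker near 0, qualitatively the same behavior as L2
```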

To sum up, while L3 and L4 norms could be used in theory, they don't provide unique advantages over L1 or L2 regularization, and their computational inefficiency makes them a less practical choice.

Yes, it's indeed possible to combine L1 and L2 regularization, a technique known as Elastic Net regularization. This approach blends the properties of both L1 (lasso) and L2 (ridge) regularization and can be useful, though it comes with its own challenges.

Elastic Net regularization is a linear combination of the L1 and L2 regularization terms: it adds both the L1 and the L2 norm of the weights to the loss function. As a result, it has two parameters to tune, lambda1 and lambda2.
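Written out for a linear regression model with design matrix X, targets y, and coefficients β, the objective (as in the standard Elastic Net formulation) is:

$$\hat{\beta} = \underset{\beta}{\operatorname{argmin}} \, \left( \|y - X\beta\|^2 + \lambda_2 \|\beta\|_2^2 + \lambda_1 \|\beta\|_1 \right)$$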

Elastic Net regularization. Source: https://wikimedia.org/api/rest_v1/media/math/render/svg/a66c7bfcf201d515eb71dd0aed5c8553ce990b6e

By combining both regularization techniques, Elastic Net can improve a model's generalization, reducing the risk of overfitting more effectively than either L1 or L2 alone.

Let's break down its benefits:

  1. Elastic Net provides more stability than L1. L1 regularization can lead to sparse models, which is useful for feature selection, but it can also be unstable in certain situations. For example, among highly correlated variables, L1 regularization tends to select one arbitrarily while driving the others' coefficients to zero, whereas Elastic Net distributes the weights more evenly among them (a small sketch after this list illustrates this).
  2. L2 regularization is more stable than L1, but it doesn't encourage sparsity. Elastic Net aims to balance these two aspects, potentially leading to more robust models.
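Here's a small scikit-learn sketch of the first point (the data generation is made up purely for illustration): two nearly identical features carry the same signal, and we compare how Lasso and Elastic Net distribute the coefficients between them:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

# Two highly correlated (near-duplicate) features that carry the same signal.
rng = np.random.default_rng(42)
x = rng.normal(size=(200, 1))
X = np.hstack([x, x + 0.01 * rng.normal(size=(200, 1))])
y = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

print("Lasso coefficients:      ", lasso.coef_)   # tends to favor one feature and drop the other
print("Elastic Net coefficients:", enet.coef_)    # tends to spread the weight across both
```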

Nonetheless, Elastic Net regularization introduces an additional hyperparameter that demands careful tuning. Achieving the right balance between L1 and L2 regularization, and with it optimal model performance, involves extra computational effort. This added complexity is why it isn't used more frequently.
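If you do reach for Elastic Net, libraries can shoulder part of that tuning. For example, scikit-learn's ElasticNetCV cross-validates over both the penalty strength and the L1/L2 mix (the dataset below is synthetic, just to make the snippet runnable):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV

# Synthetic regression data purely for illustration.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

# Cross-validate over both the overall strength (alpha) and the L1/L2 mix (l1_ratio);
# this grid search is exactly the extra tuning cost discussed above.
enet_cv = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.7, 0.9, 0.95, 0.99, 1.0], cv=5)
enet_cv.fit(X, y)
print("chosen alpha:", enet_cv.alpha_, "chosen l1_ratio:", enet_cv.l1_ratio_)
```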
