From the course: Deep Learning with Python: Optimizing Deep Learning Models
Applying L1 regularization to a deep learning model
- [Instructor] In this video, you will learn how to apply L1 regularization, also known as Lasso regularization, to a deep learning model in order to reduce overfitting. I will be running the code in the 02_03e file. You can follow along by completing the empty code cells in the 02_03b file. Make sure to run the previously written code to import and preprocess the data, as well as to build and train the baseline model. I've already done so, so we can see the results from the previous model. A clear indicator of overfitting is a divergence between the training and validation loss curves, which is visible in the training curves above. L1 regularization adds a penalty proportional to the absolute values of the weights during training. This encourages sparsity, meaning the model learns to rely only on the most important features. To apply L1 regularization to the baseline model we created above, we set the kernel_regularizer argument within each hidden layer of the network to L1. In…
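The step described above can be sketched in Keras as follows. Note that the baseline model's actual architecture is not shown in this excerpt, so the input shape, layer sizes, and penalty strength below are illustrative assumptions, not the course's exact code.

```python
# Sketch: applying L1 regularization to the hidden layers of a Keras model.
# Layer sizes, input shape, and the 0.001 penalty are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

model = tf.keras.Sequential([
    layers.Input(shape=(20,)),  # assumed number of input features
    # kernel_regularizer=regularizers.l1(...) adds an L1 penalty on the
    # layer's weights to the training loss, encouraging sparse weights.
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(0.001)),
    layers.Dense(32, activation="relu",
                 kernel_regularizer=regularizers.l1(0.001)),
    layers.Dense(1, activation="sigmoid"),  # assumed binary classification head
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

Training then proceeds exactly as for the baseline model; the regularization penalty is added to the loss automatically, so larger L1 factors push more weights toward exactly zero.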