From the course: Full-Stack Deep Learning with Python
Logging metrics, parameters, and artifacts in MLflow - Python Tutorial
- [Instructor] Now that we have our data set up for training our model, let's play around with MLflow and understand how it works. The first thing we need to do is create an MLflow experiment. You can do this programmatically by calling mlflow.create_experiment and specifying a name for the experiment, or you can do it through the UI. An experiment is made up of runs: a run is a single execution of a model, and it holds all of the metrics and parameters that are logged during that execution. Once you've created an experiment, head over to the MLflow UI; under Experiments you should find our newly created experiment. You'll find the test experiment over on the left, and if you select it, you'll see that it's completely empty. That's because we haven't created any runs yet within this experiment. Back in our notebook, let me open up the left sidebar and show you the contents of the mlruns…
Contents
- Loading and exploring the EMNIST dataset (4m 53s)
- Logging metrics, parameters, and artifacts in MLflow (6m 6s)
- Set up the dataset and data loader (3m 47s)
- Configuring the image classification DNN model (4m 56s)
- Training a model within an MLflow run (4m 6s)
- Exploring parameters and metrics in MLflow (4m 19s)
- Making predictions using MLflow artifacts (5m 21s)