From the course: Full-Stack Deep Learning with Python

Logging metrics, parameters, and artifacts in MLflow

- [Instructor] Now that we have our data set up for training our model, let's play around with MLflow and understand how it works. The first thing we need to do is create an MLflow experiment. You can do this programmatically by calling mlflow.create_experiment and specifying a name for the experiment, or you can do it through the UI. An experiment is made up of runs: a run is a single execution of a model, and it holds all of the metrics and parameters logged during that execution. Once you've created an experiment, head over to the MLflow UI, and under Experiments you should find our newly created experiment. You'll find the test experiment over on the left, and if you select it, you'll see that it's completely empty. That's because we haven't created any runs within this experiment yet. Back in our notebook, let me open up the left sidebar and show you the contents of the mlruns…
