From the course: Learning Amazon SageMaker AI

Evaluating model performance

- [Instructor] Did you know that 80% of a machine learning model's success depends on how well you can monitor and evaluate its performance during training? It's not just about feeding it data; keeping track of loss, accuracy, and other metrics in real time is what really makes or breaks a model's ability to predict outcomes. Loss measures how well or poorly the model's predictions match the actual values. The lower the loss, the better the model is at making accurate predictions. You'll want to see the loss decrease steadily over time during training; if it levels off too early, the model might not be learning enough from the data. Training accuracy is the percentage of correct predictions your model makes on the training dataset. It helps you understand how well your model is fitting the data it's seen. While high training accuracy is good, it's important to ensure the model isn't overfitting, meaning it's doing well on the training data but may perform poorly on unseen data…
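To make this kind of real-time metric tracking concrete, here is a minimal sketch of how loss and training accuracy could be surfaced as SageMaker training metrics using the SageMaker Python SDK. The image URI, IAM role, instance type, S3 path, metric names, and log regex patterns are illustrative assumptions, not values from the course.

```python
# Minimal sketch (illustrative, not from the course): declaring loss and
# accuracy as SageMaker training metrics so they can be monitored while
# the training job runs. All concrete values below are placeholders.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<your-training-image-uri>",        # placeholder
    role="<your-sagemaker-execution-role-arn>",   # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",                 # placeholder
    # SageMaker scans the training job's log output with these regexes and
    # publishes each match as a metric, so you can watch the loss curve fall
    # (or level off too early) and training accuracy rise during training.
    metric_definitions=[
        {"Name": "train:loss", "Regex": r"loss: ([0-9\.]+)"},
        {"Name": "train:accuracy", "Regex": r"accuracy: ([0-9\.]+)"},
    ],
)

# estimator.fit({"train": "s3://<your-bucket>/train/"})  # placeholder S3 path
```

Once a job has run, the captured metrics can be reviewed after the fact as well, for example through the SDK's TrainingJobAnalytics helper or in the CloudWatch console, which makes it easier to spot a loss curve that flattens early or a training-accuracy/validation gap that suggests overfitting.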
