XGBoost Hyperparameters: Learning Rate, max_depth, gamma

Pushkar Saini

Senior Associate - Data Science

🚀 Top 3 XGBoost Hyperparameters You Must Know

1️⃣ Learning Rate (η / learning_rate)
What it does: Scales how much each new tree contributes to the ensemble's predictions.
Effect:
Smaller → slower learning, less overfitting, but more trees needed
Larger → faster learning, but may overfit
Example: learning_rate = 0.1 → each tree's output is shrunk to 10% before being added to the predictions

2️⃣ max_depth
What it does: Maximum depth of each tree (the most splits allowed along any root-to-leaf path).
Effect:
Smaller → simpler trees, less overfitting
Larger → more complex trees that capture more patterns but may overfit
Example: max_depth = 3 → any path has at most 3 splits, so each path can “use” at most 3 features

3️⃣ gamma (γ / min_split_loss)
What it does: Minimum loss reduction required to make a split.
Effect:
Higher → fewer splits, simpler trees → less overfitting
Lower → more splits → more complex trees
Example: gamma = 1 → a node is split only if doing so improves the loss by at least 1

💡 Pro Tip: Start with the default values, tune max_depth and gamma first to control complexity, then lower learning_rate (with more trees) for gradual improvement. A runnable sketch of both steps follows below.

#DataScience #Python #SQL #Statistics #InterviewPrep #MachineLearning #Analytics
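
To make the three knobs concrete, here is a minimal sketch using the scikit-learn-style XGBClassifier API from the xgboost package. The synthetic dataset and the specific parameter values are illustrative assumptions, not recommendations from the post.

```python
# Minimal sketch: the three hyperparameters on a toy binary-classification task.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = XGBClassifier(
    learning_rate=0.1,  # each tree's output is scaled by 0.1 (eta) before being added
    max_depth=3,        # at most 3 splits along any root-to-leaf path
    gamma=1.0,          # a split is kept only if it reduces the loss by at least 1.0
    n_estimators=200,   # a smaller learning rate usually needs more trees
)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on the held-out split
```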
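And a hedged sketch of the tuning order the tip suggests: fix a moderate learning rate, search max_depth and gamma first to control complexity, then refine learning_rate together with n_estimators. The grid values are assumptions for illustration, and the snippet reuses X_train and y_train from the sketch above.

```python
# Two-step tuning sketch following the tip: complexity first, learning rate second.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Step 1: tune the complexity knobs with the learning rate held fixed.
complexity_grid = {"max_depth": [3, 5, 7], "gamma": [0, 1, 5]}
step1 = GridSearchCV(XGBClassifier(learning_rate=0.1), complexity_grid, cv=3)
step1.fit(X_train, y_train)

# Step 2: keep the best complexity settings, then tune the learning rate
# and the number of trees together (they trade off against each other).
rate_grid = {"learning_rate": [0.01, 0.05, 0.1], "n_estimators": [200, 500]}
step2 = GridSearchCV(XGBClassifier(**step1.best_params_), rate_grid, cv=3)
step2.fit(X_train, y_train)
print(step1.best_params_, step2.best_params_)
```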
