Classification Using Tree Models
About the course
Classification problems account for roughly 80% of the modeling challenges in business, and this badge will help you tackle them effectively. Tree-based learning is versatile and consistently high-performing, making it a valuable tool with countless applications. As a result, there is also a staggering number of implementations of these methods available. In this practical course, you will learn to construct decision trees using the Gini criterion, combine them to maximize predictive power, and apply and interpret powerful methods such as Random Forest and XGBoost. For each technique, you will learn the criteria for determining whether a model suits your data and how to evaluate the results. After just two days, you will be prepared to implement tree-based models and handle the various challenges of classification modeling.
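To make the "decision trees using Gini" idea above concrete, here is a minimal pure-Python sketch (illustrative only, not course material) of the Gini impurity a tree uses to score candidate splits: 1 minus the sum of squared class proportions at a node, so lower means purer.

```python
# Illustrative sketch: Gini impurity as used to choose decision-tree splits.
# Not taken from the course material; function name is our own.
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a collection of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((count / n) ** 2 for count in Counter(labels).values())

print(gini_impurity(["a", "a", "a", "a"]))  # pure node -> 0.0
print(gini_impurity(["a", "a", "b", "b"]))  # evenly mixed node -> 0.5
```

A CART-style tree greedily picks the split that most reduces the weighted Gini impurity of the resulting child nodes.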
Why this is for you
Do you want to implement advanced classification models and thereby create a positive impact on your business? The popularity of these methods brings with it continuously evolving techniques and a mass of often unreliable online resources. Classification Using Tree Models will help you grasp the potential of tree-based models by steering you through the endless content and opinions to the methods most immediately useful for your business. This will allow your business to generate new insights and boost predictive accuracy across a range of applications.
This course is designed for Data Scientists as well as non-technical groups with limited prior modeling experience. It follows on from our Machine Learning Processes (3201) course, so earning that badge is mandatory before embarking on this next step.
What you’ll learn
We will spend the first day going through the theory of decision trees, bagging and boosting, applying it to a case throughout the course. On the second day, you will focus on applying these techniques to real-life data and choosing the most appropriate evaluation metric by which to assess them.
- An introduction to classification
- Explain and apply CART decision trees
- Explain and apply Random Forest
- Explain and apply AdaBoost
- Explain and apply XGBoost
- Choose the appropriate classification method for your data using clear criteria
Theory and practical use
- Decision Tree – Understanding tree classification models through the explanation of simple decision trees and hands-on experience
- Random Forest – Learning to apply a bagging tree classification technique through hands-on experience
- AdaBoost – Learning to apply a boosting tree classification technique through hands-on experience
- XGBoost – Gaining a deeper understanding of the most popular boosting tree method, frequently used to win data science competitions
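The hands-on sessions above work with tree ensembles of exactly this kind. As a hedged preview (the actual datasets, libraries and settings used in the course may differ), here is how the three scikit-learn variants can be fit with a common interface; the iris dataset and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch, not course material: fitting the tree-based
# classifiers named above with scikit-learn on a toy dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier          # CART, Gini criterion by default
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "CART decision tree": DecisionTreeClassifier(criterion="gini", random_state=42),
    "Random Forest (bagging)": RandomForestClassifier(n_estimators=100, random_state=42),
    "AdaBoost (boosting)": AdaBoostClassifier(n_estimators=50, random_state=42),
}
# XGBoost lives in the separate `xgboost` package (xgboost.XGBClassifier)
# and follows the same fit/predict interface.

scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: accuracy {scores[name]:.2f}")
```

Because all of these estimators share the same fit/score interface, comparing a single tree against its bagged and boosted ensembles takes only a few lines.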
All trainings in the GAIn portfolio combine high-quality standardized training material with theory sessions from experts and hands-on experience in which you directly apply the material to real-life cases. Each training is developed by top-of-the-field practitioners, which means it is full of industry examples, practical challenges and know-how, fueling the interactive discussions during training. We believe this multi-level approach creates the ideal learning environment for participants to thrive.