Proprietary Information created by Parth Khare
Machine Learning
Classification & Decision Trees
04/01/2013
Contents
• Recursive Partitioning
• Classification
• Regression/Decision
• Bagging
• Random Forest
• Boosting
• Gradient Boosting
• Questions
Detail and flow
• What is the difference between supervised and unsupervised learning?
• What is ML? How is it different from classical statistics?
• Supervised learning by a machine -> one application: trees
• The most elementary analysis: CART
• Trees
Basics
• Supervised Learning:
  - Called "supervised" because the outcome variable is present to guide the learning process
  - Builds a learner (/model) to predict the outcome for new, unseen objects
• Alternatively, Unsupervised Learning:
  - We observe only the features and have no measurements of the outcome
  - The task is rather to describe how the data are organized or clustered
Machine Learning vs. Statistics
'learning' vs. 'fitting'
• Machine learning, a branch of artificial intelligence, is about the construction and study of systems that can learn from data.
• Statistics bases everything on probability models: assuming your data are samples from a random variable with some distribution, then making inferences about the parameters of that distribution.
• Machine learning may use probability models, and when it does, it overlaps with statistics.
  - It isn't so committed to probability, and may use other approaches to problem solving that are not based on probability.
• The basic optimization concept for trees is the same as that of parametric techniques: minimizing an error metric. Instead of a squared-error function or MLE, machine learning optimizes entropy, node impurity, etc.
• An application -> Trees
Decision Tree Approach: Parlance
• A decision tree represents a hierarchical segmentation of the data
• The original segment is called the root node and is the entire data set
• The root node is partitioned into two or more segments by applying a series of simple rules over an input variable
  - For example, risk = low, risk = not low
• Each rule assigns an observation to a segment based on its input value
• Each resulting segment can be further partitioned into sub-segments, and so on
  - For example, risk = low can be partitioned into income = low and income = not low
• The segments are also called nodes, and the final segments are called leaf nodes or leaves
• A final node surviving all partitions is called a terminal node
Decision Tree Example: Risk Assessment (Loan)

Income
├── < $30k  -> Age
│   ├── < 25   -> not on-time
│   └── >= 25  -> on-time
└── >= $30k -> Credit Score
    ├── < 600  -> not on-time
    └── >= 600 -> on-time
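For concreteness, here is a minimal sketch of the rules this tree encodes, written in R (the function name is illustrative, and income is assumed to be in dollars):

# Hand-coded rules for the loan-risk tree above (illustrative only)
predict_on_time <- function(income, age, credit_score) {
  if (income < 30000) {
    # low income: split on age
    if (age < 25) "not on-time" else "on-time"
  } else {
    # higher income: split on credit score
    if (credit_score < 600) "not on-time" else "on-time"
  }
}

predict_on_time(income = 25000, age = 30, credit_score = 550)  # "on-time"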
CART: Heuristic and Visual
• Generic supervised learning problem:
  - Given a bunch of data (x1, y1), (x2, y2), ..., (xn, yn) and a new point 'x', the supervised learning objective is to associate a 'y' with this new 'x'
• Main idea: form a binary tree and minimize the error in each leaf
• Given a dataset, a decision tree chooses a sequence of binary splits of the data
Growing the tree
• Growing the tree involves successively partitioning the data - recursive partitioning
• If an input variable is binary, then its two categories can be used to split the data (relative concentration of '0's and '1's)
• If an input variable is interval, a splitting value is used to classify the data into two segments (a sketch of this exhaustive split search follows below)
  - For example, if household income is interval and there are 100 possible incomes in the data set, then there are 100 possible splitting values
  - For example, income < $30k, and income >= $30k
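To make "trying every splitting value" concrete, here is a minimal sketch in R of an exhaustive search over the observed values of an interval input, scoring each candidate split by misclassification count (all names and data are illustrative):

best_split <- function(x, y) {
  candidates <- sort(unique(x))[-1]   # drop the minimum so both segments are non-empty
  errs <- sapply(candidates, function(s) {
    left  <- y[x <  s]
    right <- y[x >= s]
    # label each segment with its majority class, count misclassifications
    sum(left != round(mean(left))) + sum(right != round(mean(right)))
  })
  candidates[which.min(errs)]
}

income  <- c(10, 20, 25, 40, 55, 70) * 1000
on_time <- c(0, 0, 1, 1, 1, 1)
best_split(income, on_time)   # 25000: cleanly separates the 0s from the 1s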
Classification Tree: again (reference)
• Represented by a series of binary splits.
• Each internal node represents a value query on one of the variables, e.g. "Is X3 > 0.4". If the answer is "Yes", go right, else go left.
• The terminal nodes are the decision nodes. Typically each terminal node is dominated by one of the classes.
• The tree is grown using training data, by recursive splitting.
• The tree is often pruned to an optimal size, evaluated by cross-validation.
• New observations are classified by passing their X down to a terminal node of the tree, and then using majority vote (see the rpart sketch below).
Evaluating the partitions
• When the target is categorical, a chi-square statistic is computed for each partition of an input variable
• A contingency table is formed that maps responders and non-responders against the partitioned input variable
• For example, the null hypothesis might be that there is no difference between people with income <$30k and those with income >=$30k in making an on-time loan payment
• The lower the significance or p-value, the more confidently we reject this hypothesis, meaning that the income split is a discriminating factor (a worked example follows below)
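A sketch of that test in R; the contingency-table counts here are made up purely for illustration:

# rows: income group; columns: loan payment outcome (made-up counts)
tab <- matrix(c(60, 40,
                85, 15),
              nrow = 2, byrow = TRUE,
              dimnames = list(income  = c("<30k", ">=30k"),
                              payment = c("on-time", "not on-time")))
chisq.test(tab)   # a small p-value suggests the income split discriminates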
Splitting Criteria: Categorical
• Information gain -> entropy
• The rarity of an event is defined as: -log2(pi)
• Impurity measure (entropy):
  -Pr(Y=0) * log2[Pr(Y=0)] - Pr(Y=1) * log2[Pr(Y=1)]
  e.g., check at Pr(Y=0) = 0.5: entropy reaches its maximum of 1 (see the sketch below)
• Entropy sums up the rarity of response and non-response over all observations
• Entropy ranges from the best case of 0 (all responders or all non-responders) to 1 (an equal mix of responders and non-responders)
Link: http://www.youtube.com/watch?v=p17C9q2M00Q
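A sketch of the impurity computation in R, answering the slide's check: at Pr(Y=0) = 0.5 entropy is maximal:

entropy <- function(p) {                 # p = Pr(Y = 0)
  q <- 1 - p
  # treat 0 * log2(0) as 0 so that pure nodes get entropy 0
  -ifelse(p == 0, 0, p * log2(p)) - ifelse(q == 0, 0, q * log2(q))
}

entropy(0.5)   # 1: worst case, equal mix of responders and non-responders
entropy(0)     # 0: best case, a pure node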
Splitting Criteria: Continuous
• An F-statistic is used to measure the degree of separation of a split for an interval target, such as revenue
• Similar to the sum-of-squares discussion under multiple regression, the F-statistic is based on the ratio of the sum of squares between the groups to the sum of squares within groups, both adjusted for the number of degrees of freedom
• The null hypothesis is that there is no difference in the target mean between the two groups; a one-way ANOVA sketch follows below
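A sketch of the same test as a one-way ANOVA in R (the revenue data are simulated purely for illustration):

set.seed(1)
revenue <- c(rnorm(50, mean = 100), rnorm(50, mean = 120))
segment <- factor(rep(c("income<30k", "income>=30k"), each = 50))

# F = (between-group SS / df) / (within-group SS / df)
summary(aov(revenue ~ segment))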
Contents
• Recursive Partitioning
• Classification
• Regression/Decision
• Bagging
• Random Forest
• Boosting
• Gradient Boosting
Bagging
• Ensemble models: combine the results from different models
• An ensemble classifier using many decision tree models
• Bagging: bootstrapped samples of the data
Working: Random Forest
• A different subset of the training data is selected (~2/3), with replacement, to train each tree
• The remaining training data (out-of-bag, OOB) are used to estimate error and variable importance
• Class assignment is made by the number of votes from all of the trees; for regression, the average of the results is used
• A randomly selected subset of variables is used to split each node
• The number of variables used is decided by the user (the mtry parameter in R); see the randomForest sketch below
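A minimal sketch with R's randomForest package (iris is used purely for illustration; the ntree and mtry values are arbitrary):

library(randomForest)

set.seed(42)
rf <- randomForest(Species ~ ., data = iris,
                   ntree = 500,        # number of bootstrapped trees
                   mtry  = 2,          # variables tried at each split
                   importance = TRUE)

rf$err.rate[500, "OOB"]   # error estimated on the ~1/3 held-out (OOB) rows
importance(rf)            # variable importance, also computed from OOB data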
Bagging: Stanford
• Suppose C(S, x) is a classifier, such as a tree, based on our training data S, producing a predicted class label at input point x.
• To bag C, we draw bootstrap samples S*1, ..., S*B, each of size N, with replacement from the training data.
• Then:
  Ĉbag(x) = Majority Vote{ C(S*b, x) : b = 1, ..., B }
• Bagging can dramatically reduce the variance of unstable procedures (like trees), leading to improved prediction.
• However, any simple structure in C (e.g. a tree) is lost (a hand-rolled sketch follows below).
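To make the notation concrete, a hand-rolled sketch of bagging with rpart as the base classifier C (the function names, B, and the use of iris are illustrative):

library(rpart)

bag_trees <- function(S, B = 25) {
  lapply(seq_len(B), function(b) {
    idx <- sample(nrow(S), replace = TRUE)   # bootstrap sample S*_b of size N
    rpart(Species ~ ., data = S[idx, ], method = "class")
  })
}

bag_predict <- function(trees, newdata) {    # Ĉbag(x): majority vote over trees
  votes <- sapply(trees, function(t) as.character(predict(t, newdata, type = "class")))
  apply(votes, 1, function(v) names(which.max(table(v))))
}

trees <- bag_trees(iris)
table(bag_predict(trees, iris), iris$Species)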
Bootstrapped samples
Contents
• Recursive Partitioning
• Classification
• Regression/Decision
• Bagging
• Random Forest
• Boosting
• Gradient Boosting
Boosting
Make copies of the data
• Boosting idea: based on the "strength of weak learnability" principle
• Example:
  IF Gender=MALE AND Age<=25 THEN claim_freq.='high'
  A combination of weak learners increases accuracy
• Simple or "weak" learners are not perfect!
• Every "boosting" algorithm can be interpreted as optimizing a loss function in a "greedy stage-wise" manner
Working: Gradient Descent
• A first tree is created and its residuals are observed
• Then a tree is fitted on the residuals of the first tree, and so on
• In this way, boosting grows trees in series, with later trees dependent on the results of previous trees
• Key tuning parameters: shrinkage, CV folds, interaction depth (illustrated in the gbm sketch below)
• Variants: AdaBoost, DirectBoost, Laplace loss (Gaussian boost)
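A minimal sketch with R's gbm package showing the three knobs named above (the binary target built from iris and all settings are illustrative):

library(gbm)

set.seed(1)
d <- data.frame(y = as.numeric(iris$Species == "virginica"), iris[, 1:4])

fit <- gbm(y ~ ., data = d,
           distribution = "bernoulli",
           n.trees = 2000,             # trees grown in series on residuals
           shrinkage = 0.01,           # learning rate
           interaction.depth = 3,      # depth of each weak learner
           cv.folds = 5)               # folds for internal cross-validation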
GBM
• Gradient tree boosting is a generalization of boosting to arbitrary differentiable loss functions. GBRT is an accurate and effective off-the-shelf procedure that can be used for both regression and classification problems.
• What it does, essentially:
  - By sequentially learning from the errors of the previous trees, gradient boosting in a way tries to 'learn' the distribution of the target variable. So, analogous to how we use different types of distributions in GLM modeling, GBM replicates the distribution in the given data as closely as possible.
  - This comes with an additional risk of over-fitting, mitigated by methods like internal cross-validation, minimum observations per node, etc.
• Parameters at work: OOB data/error
  - The first tree of GBM is built on the training data and the subsequent trees are developed on the errors from the earlier trees. This process carries on.
  - For OOB, the training data is also split in two parts: the trees are developed on one part, and the trees developed on the first part are tested on the other. This second part is called the OOB data, and the error obtained on it is known as the OOB error. (A sketch follows below.)
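Continuing the sketch above, gbm exposes both estimates for choosing the number of trees:

best_cv  <- gbm.perf(fit, method = "cv")    # iterations chosen by cross-validation
best_oob <- gbm.perf(fit, method = "OOB")   # iterations chosen by OOB error
# (gbm itself warns that the OOB estimate tends to be conservative)

predict(fit, newdata = d, n.trees = best_cv, type = "response")[1:5]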
Summary: RF and GBM
Main similarities:
• Both derive many benefits from ensembling, with few disadvantages
• Both can be applied to ensembling decision trees
Main differences:
• Boosting performs an exhaustive search for the best predictor to split on; RF searches only a small subset
• Boosting grows trees in series, with later trees dependent on the results of previous trees
• RF grows trees in parallel, independently of one another
• RF cannot work with missing values; GBM can
More differences between RF and GBM
• The algorithmic difference:
  - Random forests are trained with random samples of the data (even more randomized variants are available, such as feature randomization) and trust randomization to give better generalization performance outside the training set.
  - At the other end of the spectrum, the gradient boosted trees algorithm additionally tries to find an optimal linear combination of trees (the final model is a weighted sum of the predictions of the individual trees) with respect to the given training data. This extra tuning might be deemed the difference. Note that there are many variations of these algorithms as well.
• On the practical side, owing to this tuning stage:
  - Gradient boosted trees are more sensitive to noisy data. The final tuning stage makes GBT more likely to overfit, so if the test cases diverge substantially from the training cases, the algorithm starts to lag.
  - Random forests, on the contrary, are more resistant to overfitting, although they lag in the other direction.
Questions
• Concept / Interpretation
• Application
For further details contact:
Parth Khare
https://www.linkedin.com/profile/view?id=43877647&trk=nav_responsive_tab_profile