GBM PACKAGE IN R
7/24/2014
Presentation Outline
• Algorithm Overview
• Basics
• How it solves problems
• Why use it
• Deeper investigation while going through live code
What is GBM?
• Predictive modeling algorithm
• Classification & Regression
• Decision tree as a basis*
• Boosted
• Multiple weak models combined algorithmically
• Gradient boosted
• Iteratively solves residuals
• Stochastic
(some additional references on last slide)
* technically, GBM can take on other forms such as linear, but decision trees are the dominant usage,
Friedman specifically optimized for trees, and R’s implementation is internally represented as a tree
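The "iteratively solves residuals" idea can be sketched in a few lines of base R: each round fits a weak learner (here a one-split stump, a simplified stand-in for gbm's regression trees) to the current residuals, and adds a shrunken copy to the ensemble. All names here are illustrative, not gbm internals.

```r
# Toy gradient boosting for squared loss: fit stumps to residuals.
# Illustrative only -- gbm's trees, loss gradients, and subsampling
# are more sophisticated than this sketch.
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.2)

fit_stump <- function(x, r) {
  # best single split on x, predicting the mean of r on each side
  cuts <- quantile(x, probs = seq(0.1, 0.9, by = 0.1))
  best <- NULL; best_sse <- Inf
  for (ct in cuts) {
    left <- r[x <= ct]; right <- r[x > ct]
    sse <- sum((left - mean(left))^2) + sum((right - mean(right))^2)
    if (sse < best_sse) {
      best_sse <- sse
      best <- list(cut = ct, left = mean(left), right = mean(right))
    }
  }
  best
}

boost <- function(x, y, n_trees = 100, shrinkage = 0.1) {
  pred <- rep(mean(y), length(y))    # start from the mean (gbm's initF)
  for (i in seq_len(n_trees)) {
    s <- fit_stump(x, y - pred)      # fit the current residuals
    pred <- pred + shrinkage * ifelse(x <= s$cut, s$left, s$right)
  }
  pred
}

pred <- boost(x, y)
mean((y - pred)^2) < mean((y - mean(y))^2)  # boosted fit beats the mean
```

The `shrinkage * residual-fit` update is the same mechanism controlled by gbm's `shrinkage` parameter later in this deck.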
Predictive Modeling Landscape:
General Purpose Algorithms
(for illustrative purposes only; not to scale, precise, or comprehensive; author’s perspective)
• Linear Models: Linear Models (lm), Generalized Linear Models (glm), Regularized Linear Models (glmnet)
• Decision Trees: Classification And Regression Trees (rpart), Random Forest (randomForest), Gradient Boosted Machines (gbm)
• Others: Nearest Neighbor (kNN), Neural Networks (nnet), Support Vector Machines (kernlab), Naïve Bayes (klaR), Splines (earth)
(the original diagram arranged these along a complexity axis)
More Comprehensive List: http://caret.r-forge.r-project.org/modelList.html
GBM’s decision tree structure
Why GBM?
• Characteristics
• Competitive Performance
• Robust
• Loss functions
• Fast (relatively)
• Usages
• Quick modeling
• Variable selection
• Final-stage precision modeling
Competitive Performance
• Competitive with high-end algorithms such as
RandomForest
• Reliable performance
• Avoids nonsensical predictions
• Rare to produce worse predictions than simpler models
• Often in winning Kaggle solutions
• Cited within winning solution descriptions in numerous
competitions, including $3M competition
• Many of the highest ranked competitors use it frequently
• Used in 4 of 5 personal top 20 finishes
Robust
• Explicitly handles NAs
• Scaling/normalization is unnecessary
• Handles more factor levels than random forest (1024 vs 32)
• Handles perfectly correlated independent variables
• No [known] limit to number of independent variables
Loss Functions
• Gaussian: squared loss
• Laplace: absolute loss
• Bernoulli: logistic, for 0/1
• Huberized: hinge, for 0/1
• Adaboost: exponential loss, for 0/1
• Multinomial: more than one class (produces probability matrix)
• Quantile: flexible alpha (e.g. optimize for 2 StDev threshold)
• Poisson: Poisson distribution, for counts
• CoxPH: Cox proportional hazard, for right-censored
• Tdist: t-distribution loss
• Pairwise: rankings (e.g. search result scoring)
• Concordant pairs
• Mean reciprocal rank
• Mean average precision
• Normalized discounted cumulative gain
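As one concrete example of the flexible-alpha quantile loss: minimizing the pinball loss with parameter alpha over a constant prediction recovers the alpha-quantile of the data. A base-R check (the function names here are illustrative; in gbm itself one would pass something like distribution = list(name = "quantile", alpha = 0.9)):

```r
# Pinball (quantile) loss of a constant prediction q against data y
pinball <- function(q, y, alpha) {
  mean(ifelse(y >= q, alpha * (y - q), (1 - alpha) * (q - y)))
}

set.seed(42)
y <- rexp(5000)      # skewed data
alpha <- 0.9
q_hat <- optimize(pinball, range(y), y = y, alpha = alpha)$minimum
# q_hat lands near the empirical 90th percentile of y
abs(q_hat - quantile(y, alpha)) < 0.2
```

This is why quantile loss is useful for thresholds such as the "2 StDev" example above: the model optimizes a chosen point of the predictive distribution rather than its center.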
Drawbacks
• Several hyper-parameters to tune
• I typically use roughly the same parameters to start, unless I
suspect the data set might have peculiar characteristics
• For creating a final model, tuning several parameters is advisable
• Still has capacity to overfit
• Despite internal cross-validation, it is still particularly prone to
overfit ID-like columns (suggestion: withhold them)
• Can have trouble with highly noisy data
• Black box
• However, GBM package does provide tools to analyze the resulting
models
Deeper Analysis via Walkthrough
• Hyper-parameter explanations (some, not all)
• Quickly analyze performance
• Analyze influence of variables
• Peek under the hood…then follow a toy problem
For those not attending the presentation, the code at the back is run at this
point and discussed. The remaining four slides were mainly to supplement
the discussion of the code and comments, and there was not sufficient
time.
Same analysis with a simpler data set
Note that one can recreate the predictions of this first tree by finding the terminal node for any observation and using the Prediction value (final column in the data frame). Summing those values across all desired trees, plus the initial value (the mean, for gaussian), gives the prediction.
Matches predictions 1 & 3
Matches predictions 2, 4 & 5
Same analysis with a simpler data set
Explanation
1 tree built.
Tree has one decision only, node 0.
Node 0 indicates the tree split on the 3rd field (SplitVar: 2, zero-indexed): values below 1.5 (ordered values 0 & 1, i.e. a & b) went to node 1; values above 1.5 (ordered values 2 & 3, i.e. c & d) went to node 2; missing values (none here) go to node 3.
Node 1 (X3 = a/b) is a terminal node (SplitVar: -1) and predicts the mean plus -0.925.
Node 2 (X3 = c/d) is a terminal node and predicts the mean plus 1.01.
Node 3 (missing) is a terminal node and effectively predicts the mean plus 0.
Later saw that gbm1$initF will show the intercept, which in this case is the mean.
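The reconstruction described above can be checked by hand. With toy numbers standing in for the slide's fitted values (the intercept is the training mean for gaussian; the per-node offsets come from the Prediction column of pretty.gbm.tree), a one-tree prediction is just intercept plus the matched terminal value:

```r
# Toy reconstruction of a one-tree GBM prediction. The numbers are
# illustrative stand-ins, not taken from a fitted model.
initF <- 5.0   # gaussian intercept = mean of training y (gbm1$initF)
node_pred <- c(a = -0.925, b = -0.925, c = 1.01, d = 1.01)  # terminal offsets by level of X3

predict_one_tree <- function(x3) initF + node_pred[[x3]]

predict_one_tree("a")  # initF - 0.925 = 4.075
predict_one_tree("c")  # initF + 1.01  = 6.01
```

With more trees, the same lookup is repeated per tree and the offsets are summed onto initF.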
gbm: fit a GBM to data
gbm(formula = formula(data),
    distribution = "bernoulli",
    n.trees = 100,
    interaction.depth = 1,
    n.minobsinnode = 10,
    shrinkage = 0.001,
    bag.fraction = 0.5,
    train.fraction = 1.0,
    cv.folds = 0,
    weights,
    data = list(),
    var.monotone = NULL,
    keep.data = TRUE,
    verbose = "CV",
    class.stratify.cv = NULL,
    n.cores = NULL)
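A minimal usage sketch of the signature above, on synthetic data, letting internal cross-validation pick the tree count via gbm.perf. Guarded with requireNamespace so the sketch degrades gracefully if the gbm package is absent; the data and parameter values are illustrative.

```r
# Fit a small gaussian GBM with 3-fold CV and use the CV curve to
# choose the number of trees for prediction.
if (requireNamespace("gbm", quietly = TRUE)) {
  set.seed(1)
  d <- data.frame(x1 = runif(500), x2 = runif(500))
  d$y <- d$x1 + rnorm(500, sd = 0.1)
  m <- gbm::gbm(y ~ x1 + x2, data = d, distribution = "gaussian",
                n.trees = 500, shrinkage = 0.05, interaction.depth = 2,
                n.minobsinnode = 10, cv.folds = 3, verbose = FALSE)
  best <- gbm::gbm.perf(m, method = "cv", plot.it = FALSE)  # CV-optimal tree count
  pred <- predict(m, d, n.trees = best)
}
```

Predicting with fewer trees than were fit (here, `best`) is the standard way to pull back from overfitting without refitting the model.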
Effect of shrinkage & trees
Source: https://www.youtube.com/watch?v=IXZKgIsZRm0 (GBM explanation by SciKit author)
Code Dump
• The code has been copied from a text R script into PowerPoint, so
the format isn’t great, but it should look OK if copying and pasting
back out to a text file. If not, here it is on Github.
• The code shown uses a competition data set that is comparable to
real world data and uses a simple GBM to predict sale prices of
construction equipment at auction.
• A GBM model was fit against 100k rows with 45-50 variables in about
2-4 minutes during the presentation. It improves the RMSE of
prediction against the mean from ~24.5k to ~9.7k, when scored on
data the model had not seen (and future dates, so the 100k/50k splits
should be valid), with fairly stable train:test performance.
• After predictions are made and scored, some GBM utilities are used
to see which variables the model found most influential, see how the
top 2 variables are used (per factor for one; throughout a continuous
distribution for the other), and see interaction effects of specific
variable pairs.
• Note: GBM was used by my teammate and me to finish 12th out of 476
in this competition (albeit with a complex ensemble of GBMs)
Code Dump: Page1
library(Metrics) ##load evaluation package
setwd("C:/Users/Mark_Landry/Documents/K/dozer/")
##Done in advance to speed up loading of data set
train<-read.csv("Train.csv")
## Kaggle data set: http://www.kaggle.com/c/bluebook-for-bulldozers/data
train$saleTransform<-strptime(train$saledate,"%m/%d/%Y %H:%M")
train<-train[order(train$saleTransform),]
save(train,file="rTrain.Rdata")
load("rTrain.Rdata")
xTrain<-train[(nrow(train)-149999):(nrow(train)-50000),5:ncol(train)]
xTest<-train[(nrow(train)-49999):nrow(train),5:ncol(train)]
yTrain<-train[(nrow(train)-149999):(nrow(train)-50000),2]
yTest<-train[(nrow(train)-49999):nrow(train),2]
dim(xTrain); dim(xTest)
sapply(xTrain,function(x) length(levels(x)))
## check levels; gbm is robust, but still has a limit of 1024 per factor; for the initial model, remove those factors
## after iterating through the model, one would want to go back and compress these factors to investigate
## their usefulness (or other information analysis)
xTrain$saledate<-NULL; xTest$saledate<-NULL
xTrain$fiModelDesc<-NULL; xTest$fiModelDesc<-NULL
xTrain$fiBaseModel<-NULL; xTest$fiBaseModel<-NULL
xTrain$saleTransform<-NULL; xTest$saleTransform<-NULL
Code Dump: Page2
library(gbm)
## Set up parameters to pass in; there are many more hyper-parameters available, but these are the most common to control
GBM_NTREES = 400
## 400 trees in the model; can scale back later for predictions, if desired or overfitting is suspected
GBM_SHRINKAGE = 0.05
## shrinkage is a regularization parameter dictating how fast/aggressively the algorithm moves across the loss gradient
## 0.05 is somewhat aggressive; default is 0.001; values below 0.1 tend to produce good results
## decreasing shrinkage generally improves results, but requires more trees, so the two should be adjusted in tandem
GBM_DEPTH = 4
## depth 4 means each tree will evaluate four decisions;
## will always yield [3*depth + 1] nodes and [2*depth + 1] terminal nodes (depth 4 = 9)
## because each decision yields 3 nodes, but each decision will come from a prior node
GBM_MINOBS = 30
## regularization parameter dictating how many observations must be present to yield a terminal node
## higher means a more conservative fit; 30 is fairly high, but good for exploratory fits; default is 10
## Fit model
g<-gbm.fit(x=xTrain,y=yTrain,distribution = "gaussian",n.trees = GBM_NTREES,shrinkage = GBM_SHRINKAGE,
interaction.depth = GBM_DEPTH,n.minobsinnode = GBM_MINOBS)
## gbm fit; provide all remaining independent variables in xTrain; provide targets as yTrain;
## gaussian distribution will optimize squared loss;
Code Dump: Page3
## get predictions; first on train set, then on unseen test data
tP1 <- predict.gbm(object = g,newdata = xTrain,GBM_NTREES)
hP1 <- predict.gbm(object = g,newdata = xTest,GBM_NTREES)
## compare model performance to default (overall mean)
rmse(yTrain,tP1) ## 9452.742 on data used for training
rmse(yTest,hP1) ## 9740.559 ~3% drop on unseen data; does not seem to be overfit
rmse(yTest,mean(yTrain)) ## 24481.08 overall mean; cut error rate (from perfection) by 60%
## look at variables
summary(g) ## summary will plot and then show the relative influence of each variable to the entire GBM model (all trees)
## test dominant variable mean
library(sqldf)
trainProdClass<-as.data.frame(cbind(as.character(xTrain$fiProductClassDesc),yTrain))
testProdClass<-as.data.frame(cbind(as.character(xTest$fiProductClassDesc),yTest))
colnames(trainProdClass)<-c("fiProductClassDesc","y"); colnames(testProdClass)<-c("fiProductClassDesc","y")
ProdClassMeans<-sqldf("SELECT fiProductClassDesc,avg(y) avg, COUNT(*) n FROM trainProdClass GROUP BY fiProductClassDesc")
ProdClassPredictions<-sqldf("SELECT case when n > 30 then avg ELSE 31348.63 end avg
FROM ProdClassMeans P LEFT JOIN testProdClass t ON t.fiProductClassDesc = P.fiProductClassDesc")
rmse(yTest,ProdClassPredictions$avg) ## 29082.64 ? peculiar result on the fiProductClassDesc means, which seemed fairly stable and useful
##seems to say that the primary factor alone is not helpful; full tree needed
Code Dump: Page4
## Investigate actual GBM model
pretty.gbm.tree(g,1) ## show underlying model for the first decision tree
summary(xTrain[,10]) ## underlying model showed variable 9 to be the first point in the tree (9 with 0 index = 10th column)
g$initF ## view what is effectively the "y intercept"
mean(yTrain) ## equivalence shows gaussian y intercept is the mean
t(g$c.splits[1][[1]]) ## show whether each factor level should go left or right
plot(g,10) ## plot fiProductClassDesc, the variable with the highest rel.inf
plot(g,3) ## plot YearMade, continuous variable with 2nd highest rel.inf
interact.gbm(g,xTrain,c(10,3)) ## compute H statistic to show interaction; integrates
interact.gbm(g,xTrain,c(10,3)) ## example of an uninteresting interaction
Selected References
• CRAN
• Documentation
• vignette
• Algorithm publications:
• Greedy Function Approximation: A Gradient Boosting Machine; Friedman, 2/99
• Stochastic Gradient Boosting; Friedman, 3/99
• Overviews
• Gradient boosting machines, a tutorial: Frontiers (4/13)
• Wikipedia (pretty good article, really)
• Video of author of GBM in Python: Gradient Boosted Regression
Trees in scikit-learn
• Very helpful, but R’s implementation does not use decision “stumps”, so some details differ in R (e.g. the number of trees need not be as high)

 
Insurance Churn Prediction Data Analysis Project
Insurance Churn Prediction Data Analysis ProjectInsurance Churn Prediction Data Analysis Project
Insurance Churn Prediction Data Analysis Project
 

GBM PACKAGE IN R: A GUIDE TO GRADIENT BOOSTED MACHINES

  • 1. GBM PACKAGE IN R (7/24/2014)
  • 2. Presentation Outline
    • Algorithm overview
      • Basics
      • How it solves problems
      • Why to use it
    • Deeper investigation while going through live code
  • 3. What is GBM?
    • Predictive modeling algorithm
      • Classification & regression
    • Decision tree as a basis*
    • Boosted
      • Multiple weak models combined algorithmically
    • Gradient boosted
      • Iteratively solves residuals
    • Stochastic
    (some additional references on the last slide)
    * Technically, GBM can take on other forms such as linear, but decision trees are the dominant usage; Friedman specifically optimized for trees, and R’s implementation is internally represented as a tree.
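The "iteratively solves residuals" idea can be sketched in a few lines of base R. This is a hypothetical toy booster for squared loss with depth-1 "stumps" as the weak learner, not the gbm package internals; all names here (`fit_stump`, `boost`, etc.) are invented for illustration.

```r
## Toy gradient boosting sketch (NOT the gbm package internals):
## squared loss, depth-1 regression "stumps" as the weak learner.
set.seed(42)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.1)

## A stump: pick the split on x that minimizes squared error of two means
fit_stump <- function(x, r) {
  best <- NULL; best_sse <- Inf
  for (s in quantile(x, probs = seq(0.05, 0.95, by = 0.05))) {
    left <- r[x <= s]; right <- r[x > s]
    sse <- sum((left - mean(left))^2) + sum((right - mean(right))^2)
    if (sse < best_sse) {
      best_sse <- sse
      best <- list(split = s, left = mean(left), right = mean(right))
    }
  }
  best
}
predict_stump <- function(st, x) ifelse(x <= st$split, st$left, st$right)

shrinkage <- 0.1
pred <- rep(mean(y), length(y))  ## initial fit is the mean (like gbm's initF)
for (i in 1:100) {
  resid <- y - pred              ## negative gradient of squared loss
  st <- fit_stump(x, resid)      ## weak model fit to the residuals
  pred <- pred + shrinkage * predict_stump(st, x)  ## shrunken update
}
mse <- mean((y - pred)^2)        ## far below var(y) after boosting
```

Each round fits a small tree to what the current ensemble still gets wrong, then adds a shrunken copy of it; that is the "multiple weak models combined algorithmically" on this slide.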
  • 4. Predictive Modeling Landscape: General Purpose Algorithms
    (for illustrative purposes only; not to scale, precise, or comprehensive; author’s perspective)
    • Linear models: linear models (lm), generalized linear models (glm), regularized linear models (glmnet)
    • Decision trees: classification and regression trees (rpart), random forest (randomForest), gradient boosted machines (gbm)
    • Others: nearest neighbor (kNN), neural networks (nnet), support vector machines (kernlab), naïve Bayes (klaR), splines (earth)
    (slide arranges the algorithms along a complexity axis)
    More comprehensive list: http://caret.r-forge.r-project.org/modelList.html
  • 6. Why GBM?
    • Characteristics
      • Competitive performance
      • Robust
      • Loss functions
      • Fast (relatively)
    • Usages
      • Quick modeling
      • Variable selection
      • Final-stage precision modeling
  • 7. Competitive Performance
    • Competitive with high-end algorithms such as randomForest
    • Reliable performance
      • Avoids nonsensical predictions
      • Rare to produce worse predictions than simpler models
    • Often in winning Kaggle solutions
      • Cited within winning solution descriptions in numerous competitions, including a $3M competition
      • Many of the highest-ranked competitors use it frequently
      • Used in 4 of 5 personal top-20 finishes
  • 8. Robust
    • Explicitly handles NAs
    • Scaling/normalization is unnecessary
    • Handles more factor levels than random forest (1024 vs. 32)
    • Handles perfectly correlated independent variables
    • No [known] limit to the number of independent variables
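The 1024-level limit is easy to check up front. This is a small sketch with made-up data showing the kind of factor-cardinality audit the code dump later performs with `sapply`; the data frame and column names are invented for illustration.

```r
## Sketch: audit factor cardinality before fitting, since gbm handles up to
## 1024 levels per factor (randomForest historically capped at 32).
df <- data.frame(
  big   = factor(paste0("lvl", 1:200)),   ## 200 levels: fine for gbm
  small = factor(rep(c("a", "b"), 100)),  ## 2 levels
  y     = rnorm(200)                      ## numeric target, not a factor
)
n_levels <- sapply(df, function(col) if (is.factor(col)) nlevels(col) else NA)
n_levels                             ## big = 200, small = 2, y = NA
any(n_levels > 1024, na.rm = TRUE)   ## FALSE: no factor exceeds gbm's limit
```

Columns that do exceed the limit (like the free-text model descriptions dropped in the code dump) would need to be removed or compressed before fitting.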
  • 9. Loss Functions
    • Gaussian: squared loss
    • Laplace: absolute loss
    • Bernoulli: logistic, for 0/1
    • Huberized: hinge, for 0/1
    • Adaboost: exponential loss, for 0/1
    • Multinomial: more than one class (produces a probability matrix)
    • Quantile: flexible alpha (e.g. optimize for a 2-standard-deviation threshold)
    • Poisson: Poisson distribution, for counts
    • CoxPH: Cox proportional hazard, for right-censored data
    • Tdist: t-distribution loss
    • Pairwise: rankings (e.g. search result scoring)
      • Concordant pairs
      • Mean reciprocal rank
      • Mean average precision
      • Normalized discounted cumulative gain
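The loss choice matters because each boosting iteration fits a tree to the negative gradient of the loss, so different losses give the trees different "working residuals". A minimal numeric sketch, using the textbook gradient formulas (following Friedman's papers, not the gbm source):

```r
## Why the loss function matters: the "residual" the next tree chases
## depends on the loss. Toy current state with one outlier.
y <- c(1.0, 2.0, 10.0)   ## note the outlier at 10
f <- c(1.5, 1.5, 1.5)    ## current model predictions

## Gaussian (squared loss): negative gradient is the raw residual,
## so the outlier dominates the next tree
gauss_grad <- y - f          ## -0.5  0.5  8.5

## Laplace (absolute loss): negative gradient is just the sign of the
## residual, so the outlier pulls no harder than any other point
laplace_grad <- sign(y - f)  ## -1  1  1
```

This is why Laplace (or Tdist) is often suggested when a regression target has heavy tails: the outlier's influence on each tree is capped.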
  • 10. Drawbacks
    • Several hyper-parameters to tune
      • I typically use roughly the same parameters to start, unless I suspect the data set might have peculiar characteristics
      • For creating a final model, tuning several parameters is advisable
    • Still has the capacity to overfit
      • Despite internal cross-validation, it is still particularly prone to overfitting ID-like columns (suggestion: withhold them)
    • Can have trouble with highly noisy data
    • Black box
      • However, the gbm package does provide tools to analyze the resulting models
  • 11. Deeper Analysis via Walkthrough
    • Hyper-parameter explanations (some, not all)
    • Quickly analyze performance
    • Analyze influence of variables
    • Peek under the hood…then follow a toy problem
    For those not attending the presentation: the code at the back is run at this point and discussed. The remaining four slides were mainly to supplement the discussion of the code and comments, and there was not sufficient time.
  • 12. Same analysis with a simpler data set
    Note that one can recreate the predictions of this first tree by finding the terminal node for any prediction and using the Prediction value (final column in the data frame). Those values for all desired trees, plus the initial value (the mean here), give the prediction.
    (Callouts on the slide: matches predictions 1 & 3; matches predictions 2, 4 & 5.)
  • 13. Same analysis with a simpler data set (explanation)
    • 1 tree built. The tree has one decision only, node 0.
    • Node 0 indicates it split the 3rd field (SplitVar: 2): values below 1.5 (ordered values 0 & 1, which are a & b) went to node 1; values above 1.5 (2/3 = c/d) went to node 2; missing values (none here) go to node 3.
    • Node 1 (X3 = a/b) is a terminal node (SplitVar: -1) and predicts the mean plus -0.925.
    • Node 2 (X3 = c/d) is a terminal node and predicts the mean plus 1.01.
    • Node 3 (missing) is a terminal node and effectively predicts the mean plus 0.
    • Later saw that gbm1$initF will show the intercept, which in this case is the mean.
  • 14. gbm: fit a GBM to data
    gbm(formula = formula(data),
        distribution = "bernoulli",
        n.trees = 100,
        interaction.depth = 1,
        n.minobsinnode = 10,
        shrinkage = 0.001,
        bag.fraction = 0.5,
        train.fraction = 1.0,
        cv.folds = 0,
        weights,
        data = list(),
        var.monotone = NULL,
        keep.data = TRUE,
        verbose = "CV",
        class.stratify.cv = NULL,
        n.cores = NULL)
  • 15. Effect of Shrinkage & Trees
    Source: https://www.youtube.com/watch?v=IXZKgIsZRm0 (GBM explanation by the scikit-learn author)
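The shrinkage/trees trade-off can be seen in the simplest possible case: boosting toward a single constant target, where each "tree" predicts the current residual exactly and the update leaves (1 - shrinkage)^n of the error after n trees. This is a hypothetical illustration (the `boost_const` helper is invented), not gbm behavior on real data, but the geometric decay is the same mechanism.

```r
## Shrinkage/trees trade-off, reduced to one constant target y.
## Each iteration: pred <- pred + shrinkage * (y - pred),
## so remaining error after n trees is y * (1 - shrinkage)^n.
boost_const <- function(shrinkage, n_trees, y = 100) {
  pred <- 0
  for (i in 1:n_trees) pred <- pred + shrinkage * (y - pred)
  y - pred                ## remaining error
}
e_fast  <- boost_const(0.5,  10)   ## aggressive shrinkage: tiny error in 10 trees
e_slow  <- boost_const(0.05, 10)   ## small shrinkage, few trees: most error remains
e_catch <- boost_const(0.05, 200)  ## small shrinkage catches up with many more trees
```

This is why the code-dump comments say shrinkage and n.trees should be adjusted in tandem: halving shrinkage roughly doubles the trees needed to reach the same training error, while generally improving generalization.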
  • 16. Code Dump
    • The code has been copied from a text R script into PowerPoint, so the format isn’t great, but it should look OK if copied and pasted back out to a text file. If not, here it is on GitHub.
    • The code shown uses a competition data set that is comparable to real-world data and uses a simple GBM to predict sale prices of construction equipment at auction.
    • A GBM model was fit against 100k rows with 45-50 variables in about 2-4 minutes during the presentation. It improves the RMSE of prediction against the mean from ~24.5k to ~9.7k when scored on data the model had not seen (and future dates, so the 100k/50k splits should be valid), with fairly stable train:test performance.
    • After predictions are made and scored, some gbm utilities are used to see which variables the model found most influential, see how the top 2 variables are used (per factor for one; throughout a continuous distribution for the other), and see interaction effects of specific variable pairs.
    • Note: GBM was used by my teammate and me to finish 12th out of 476 in this competition (albeit with a complex ensemble of GBMs).
  • 17. Code Dump: Page 1
    library(Metrics) ## load evaluation package
    setwd("C:/Users/Mark_Landry/Documents/K/dozer/")
    ## Done in advance to speed up loading of data set
    train<-read.csv("Train.csv") ## Kaggle data set: http://www.kaggle.com/c/bluebook-for-bulldozers/data
    train$saleTransform<-strptime(train$saledate,"%m/%d/%Y %H:%M")
    train<-train[order(train$saleTransform),]
    save(train,file="rTrain.Rdata")
    load("rTrain.Rdata")
    xTrain<-train[(nrow(train)-149999):(nrow(train)-50000),5:ncol(train)]
    xTest<-train[(nrow(train)-49999):nrow(train),5:ncol(train)]
    yTrain<-train[(nrow(train)-149999):(nrow(train)-50000),2]
    yTest<-train[(nrow(train)-49999):nrow(train),2]
    dim(xTrain); dim(xTest)
    sapply(xTrain,function(x) length(levels(x)))
    ## check levels; gbm is robust, but still has a limit of 1024 per factor; for the initial model, remove;
    ## after iterating through the model, would want to go back and compress these factors to investigate
    ## their usefulness (or other information analysis)
    xTrain$saledate<-NULL; xTest$saledate<-NULL
    xTrain$fiModelDesc<-NULL; xTest$fiModelDesc<-NULL
    xTrain$fiBaseModel<-NULL; xTest$fiBaseModel<-NULL
    xTrain$saleTransform<-NULL; xTest$saleTransform<-NULL
  • 18. Code Dump: Page 2
    library(gbm)
    ## Set up parameters to pass in; there are many more hyper-parameters available, but these are the most common to control
    GBM_NTREES = 400
    ## 400 trees in the model; can scale back later for predictions, if desired or if overfitting is suspected
    GBM_SHRINKAGE = 0.05
    ## shrinkage is a regularization parameter dictating how fast/aggressively the algorithm moves across the loss gradient
    ## 0.05 is somewhat aggressive; default is 0.001; values below 0.1 tend to produce good results
    ## decreasing shrinkage generally improves results but requires more trees, so the two should be adjusted in tandem
    GBM_DEPTH = 4
    ## depth 4 means each tree will evaluate four decisions;
    ## will always yield [3*depth + 1] nodes and [2*depth + 1] terminal nodes (depth 4 = 9 terminal nodes)
    ## because each decision yields 3 nodes, but each decision comes from a prior node
    GBM_MINOBS = 30
    ## regularization parameter dictating how many observations must be present to yield a terminal node
    ## a higher number means a more conservative fit; 30 is fairly high, but good for exploratory fits; default is 10
    ## Fit model
    g<-gbm.fit(x=xTrain,y=yTrain,distribution="gaussian",n.trees=GBM_NTREES,shrinkage=GBM_SHRINKAGE,
      interaction.depth=GBM_DEPTH,n.minobsinnode=GBM_MINOBS)
    ## gbm fit; provide all remaining independent variables in xTrain; provide targets as yTrain;
    ## gaussian distribution will optimize squared loss
  • 19. Code Dump: Page 3
    ## get predictions; first on train set, then on unseen test data
    tP1 <- predict.gbm(object=g,newdata=xTrain,GBM_NTREES)
    hP1 <- predict.gbm(object=g,newdata=xTest,GBM_NTREES)
    ## compare model performance to default (overall mean)
    rmse(yTrain,tP1)         ## 9452.742 on data used for training
    rmse(yTest,hP1)          ## 9740.559; ~3% drop on unseen data; does not seem to be overfit
    rmse(yTest,mean(yTrain)) ## 24481.08 overall mean; cut error rate (from perfection) by 60%
    ## look at variables
    summary(g)
    ## summary will plot and then show the relative influence of each variable on the entire GBM model (all trees)
    ## test dominant variable mean
    library(sqldf)
    trainProdClass<-as.data.frame(cbind(as.character(xTrain$fiProductClassDesc),yTrain))
    testProdClass<-as.data.frame(cbind(as.character(xTest$fiProductClassDesc),yTest))
    colnames(trainProdClass)<-c("fiProductClassDesc","y"); colnames(testProdClass)<-c("fiProductClassDesc","y")
    ProdClassMeans<-sqldf("SELECT fiProductClassDesc, avg(y) avg, COUNT(*) n FROM trainProdClass GROUP BY fiProductClassDesc")
    ProdClassPredictions<-sqldf("SELECT CASE WHEN n > 30 THEN avg ELSE 31348.63 END avg
      FROM ProdClassMeans P LEFT JOIN testProdClass t ON t.fiProductClassDesc = P.fiProductClassDesc")
    rmse(yTest,ProdClassPredictions$avg)
    ## 29082.64 ? peculiar result on the fiProductClassDesc means, which seemed fairly stable and useful
    ## seems to say that the primary factor alone is not helpful; the full tree is needed
  • 20. Code Dump: Page 4
    ## Investigate actual GBM model
    pretty.gbm.tree(g,1)  ## show the underlying model for the first decision tree
    summary(xTrain[,10])  ## the underlying model showed variable 9 to be the first point in the tree (9 with 0 index = 10th column)
    g$initF               ## view what is effectively the "y intercept"
    mean(yTrain)          ## equivalence shows the gaussian y intercept is the mean
    t(g$c.splits[1][[1]]) ## show whether each factor level should go left or right
    plot(g,10)            ## plot fiProductClassDesc, the variable with the highest rel.inf
    plot(g,3)             ## plot YearMade, the continuous variable with the 2nd highest rel.inf
    interact.gbm(g,xTrain,c(10,3)) ## compute H statistic to show interaction
    interact.gbm(g,xTrain,c(10,3)) ## example of an uninteresting interaction
  • 21. Selected References
    • CRAN: package documentation and vignette
    • Algorithm publications:
      • Greedy Function Approximation: A Gradient Boosting Machine; Friedman, 2/1999
      • Stochastic Gradient Boosting; Friedman, 3/1999
    • Overviews:
      • Gradient boosting machines, a tutorial; Frontiers (4/2013)
      • Wikipedia (pretty good article, really)
    • Video by the author of GBM in Python: Gradient Boosted Regression Trees in scikit-learn
      • Very helpful, but the implementation in R does not use decision “stumps”, so some things differ in R (e.g. the number of trees need not be so high)