Machine Learning by Analogy
Colleen M. Farrelly
 Many machine learning methods exist in the
literature and in industry.
◦ What works well for one problem may not work well for
the next problem.
◦ In addition to poor model fit, an incorrect application of
methods can lead to incorrect inference.
 Implications for data-driven business decisions.
 Low future confidence in data science and its results.
 Lower quality software products.
 Understanding the intuition and mathematics
behind these methods can ameliorate these
problems.
◦ This talk focuses on building intuition.
◦ Links to theoretical papers underlying each method.
 Total variance of a normally-
distributed outcome as cookie jar.
 Error as empty space.
 Predictors accounting for pieces of
the total variance as cookies.
◦ Based on relationship to predictor and to
each other.
 Predictors accounting for the same piece of
variance as cookies smooshed together.
 Many statistical assumptions need
to be met.
www.zelcoviacookies.com
 Extends multiple
regression.
◦ Many types of outcome
distributions.
◦ Transform an outcome
variable to create a linear
relationship with
predictors.
 Sort of like silly putty
stretching the outcome
variable in the data
space.
 Does not work on high-
dimensional data where
predictors >
observations.
 Only certain outcome
distributions.
◦ Exponential family as
example.
tomstock.photoshelter.com
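A minimal GLM sketch in Python (statsmodels assumed available); the Poisson outcome and coefficients below are simulated purely for illustration:

```python
# Minimal GLM sketch: a log link "silly-puttys" the mean onto the linear
# predictor scale for an exponential-family (Poisson) outcome.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # three predictors
eta = 0.5 + X @ np.array([1.0, -0.5, 0.25])      # linear predictor
y = rng.poisson(np.exp(eta))                     # Poisson-distributed outcome

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
result = model.fit()
print(result.summary())
```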
 Impose size and/or variable
overlap constraints (penalties) on
generalized linear models.
◦ Elastic net as hybrid of these
constraints.
◦ Can handle large numbers of
predictors.
 Reduce the number of predictors.
◦ Shrink some predictor estimates to 0.
◦ Examine sets of similar predictors.
 Similar to eating irrelevant cookies
in the regression cookie jar, or a
cowboy at the origin roping in
coefficients that get too close.
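As a rough illustration, scikit-learn's ElasticNet blends the LASSO and ridge penalties; the simulated data and penalty settings here are arbitrary:

```python
# Elastic net sketch: L1 + L2 penalties on a problem with more predictors
# than observations; many coefficients end up exactly zero.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)   # predictors > observations

model = ElasticNet(alpha=1.0, l1_ratio=0.5)          # blend of LASSO and ridge penalties
model.fit(X, y)

# The "eaten" cookies: coefficients shrunk exactly to zero.
print("nonzero coefficients:", np.sum(model.coef_ != 0))
```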
 Homotopy arrow example
◦ Red and blue arrows can be
deformed into each other by
wiggling and stretching the
line path with anchors at
start and finish of line
◦ Yellow arrow crosses holes
and would need to backtrack
or break to the surface to
freely wiggle into the blue or
red line
 Homotopy method in
LASSO/LARS wiggles an
easy regression path into
an optimal regression
path
◦ Avoids obstacles that can
trap other regression
estimators (peaks, valleys,
saddles…)
 Homotopy as path
equivalence
◦ Intrinsic property of
topological spaces (such
as data manifolds)
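A small sketch of the LARS/homotopy solution path using scikit-learn's lars_path on the bundled diabetes data (stand-in data, not the talk's example):

```python
# The homotopy/LARS algorithm traces a piecewise-linear path from the trivial
# model (all coefficients zero) toward the full least-squares fit.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

X, y = load_diabetes(return_X_y=True)

alphas, active, coefs = lars_path(X, y, method="lasso")
print("path steps:", len(alphas), "| variables entering, in order:", active)
```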
 Instead of fitting model to data, fit model to
tangent space (what isn’t the data).
◦ Deals with collinearity, as parallel vectors share a
tangent space (only one selected of collinear
group).
◦ LASSO and LARS extensions.
◦ Rao scoring for selection.
 Effect estimates (angles).
 Model selection criteria.
 Information criteria.
 Deviance scoring.
 Fit all possible models and
figure out likelihood of each
given observed data.
◦ Based on Bayes’ Theorem and
conditional probability.
◦ Instead of giving likelihood
that data came from a specific
parameterized population
(univariate), figure out
likelihood of a set of data
coming from sets of
populations (multivariate).
◦ Can select naïve prior (no
assumptions on model or
population) or make an
informed guess (assumptions
about population or important
factors in model).
◦ Combine multiple models
according to their likelihoods
into a blended model given
data.
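A toy Bayesian-model-averaging sketch, assuming BIC as a rough approximation to each model's likelihood; dedicated BMA packages use more careful priors, so treat this only as an illustration of the weighting idea:

```python
# Fit every subset of three predictors with OLS, then weight each model by an
# approximate posterior probability derived from its BIC.
from itertools import combinations
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=150)

models, bics = [], []
for k in range(4):
    for subset in combinations(range(3), k):
        design = sm.add_constant(X[:, subset]) if subset else np.ones((150, 1))
        fit = sm.OLS(y, design).fit()
        models.append(subset)
        bics.append(fit.bic)

bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))       # BIC -> approximate model probability
weights /= weights.sum()
for subset, w in zip(models, weights):
    print(f"predictors {subset}: weight {w:.3f}")
```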
Semi-Parametric Extensions
of Regression
 Extends generalized linear
models by estimating
unknown (nonlinear)
functions of an outcome on
intervals of a predictor.
 Use of “knots” to break
function into intervals.
 Similar to a downhill skier.
◦ Slope as a function of outcome.
◦ Skier as spline.
◦ Flags as knots anchoring the
skier as he travels down the
slope.
www.dearsportsfan.com
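A spline-regression sketch using scikit-learn's SplineTransformer (version 1.0 or later assumed); the sine-shaped "slope" is simulated:

```python
# Cubic splines with a handful of knots: the flags anchoring the skier as the
# fitted curve travels down a nonlinear slope.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 200)).reshape(-1, 1)
y = np.sin(x).ravel() + rng.normal(scale=0.2, size=200)   # nonlinear "ski slope"

model = make_pipeline(SplineTransformer(n_knots=5, degree=3), LinearRegression())
model.fit(x, y)
print("training R^2:", round(model.score(x, y), 3))
```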
 Multivariate adaptive regression splines as an
extension of spline models to multivariate
data
◦ Knots placed according to multivariate structure
and splines fit between knots
◦ Much like fixing a rope to the peaks and valleys of a
mountain range, where the rope has enough slack
to hug the terrain between its fixed points
 Extends spline models
to the relationships of
many predictors to an
outcome.
◦ Also allows for a “silly-
putty” transformation of
the outcome variable.
 Like multiple skiers on
many courses and
mountains to estimate
many relationships in
the dataset.
www.filigreeinn.com
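A hedged GAM sketch assuming the pygam package is installed; each s() term smooths one predictor, like one skier per course:

```python
# Generalized additive model: one smooth function fit per predictor, with an
# additive combination predicting the outcome (simulated data).
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * np.log1p(X[:, 1]) + rng.normal(scale=0.2, size=300)

gam = LinearGAM(s(0) + s(1)).fit(X, y)   # one smooth "skier" per predictor
gam.summary()
```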
 Chop data into
partitions and then
fit multiple
regression models
to each partition.
 Divide-and-conquer
approach.
 Examples:
◦ Multivariate adaptive
regression splines
◦ Regression trees
◦ Morse-Smale
regression
www.pinterest.com
 Based on a branch of math
called topology.
◦ Study of changes in function
behavior on shapes.
◦ Used to classify similarities/
differences between shapes.
◦ Data clouds turned into discrete
shape combinations (simplices).
 Use these principles to
partition data and fit elastic net
models to each piece.
◦ Break data into multiple toy sets.
◦ Analyze sets for underlying
properties of each toy.
 Useful visual output for
additional data mining.
 Linear or non-linear
support beams used to
separate groups of data
for classification or
map data for kernel-
based regression.
 Much like scaffolding
and support beams
separating and holding
up parts of a high rise.
http://en.wikipedia.org/wiki/Support_vector_machine
leadertom.en.ec21.com
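A minimal support vector machine sketch with scikit-learn; the two-moons data and RBF kernel settings are illustrative only:

```python
# An RBF kernel acts as nonlinear "scaffolding" separating two interleaved classes.
from sklearn.datasets import make_moons
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("test accuracy:", round(clf.score(X_test, y_test), 3))
```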
 Based on processing
complex, nonlinear
information the way
the human brain
does via a series of
feature mappings.
colah.github.io
www.alz.org
Arrows denote mapping
functions, which take one
topological space to another
 Type of shallow, wide neural network.
 Reduces framework to a penalized linear algebra
problem, rather than iterative training (much faster to
solve).
 Based on random mappings.
 Shown to converge to correct
classification/regression (universal approximation
property—may require unreasonably wide networks).
 Semi-supervised learning extensions.
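A from-scratch extreme-learning-machine sketch under the assumptions above: a fixed random feature mapping followed by a single penalized linear solve for the output weights (no iterative training of the hidden layer):

```python
# Extreme learning machine sketch: random hidden layer + ridge readout.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

rng = np.random.default_rng(0)
n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))      # random input-to-hidden weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                           # random nonlinear feature mapping

readout = Ridge(alpha=1.0).fit(H, y)             # penalized linear algebra solves the output layer
print("training R^2:", round(readout.score(H, y), 3))
```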
 Added layers in neural
network to solve width
problem in single-layer
networks for universal
approximation.
 More effective in
learning features of the
data.
 Like sifting data with
multiple sifters to distill
finer and finer pieces of
the data.
 Computationally
intensive and requires
architecture design and
optimization.
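A small multi-layer network sketch with scikit-learn's MLPClassifier standing in for a full deep-learning framework; layer sizes and data are illustrative:

```python
# Several hidden layers act as successive "sifters" of the input features.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", round(net.score(X_test, y_test), 3))
```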
 Recent extension of
deep learning
framework from
spreadsheet data to
multiple simultaneous
spreadsheets
◦ Spreadsheets may be of
the same dimension or
different dimensions
◦ Could process multiple
or hierarchical networks
via adjacency matrices
 Like sifting through
data with multiple
inputs of varying sizes
and textures
CHAPTER 3: Nonparametric Regression
 Classifies data according
to optimal partitioning of
data (visualized as a high-
dimensional cube).
 Slices data into squares,
cubes, and hypercubes (4+
dimensional cubes).
◦ Like finding the best place to
cut through the data with a
sword.
 Option to prune tree for
better model (glue some of
the pieces back together).
 Notoriously unstable.
 Optimization possible to
arrive at best possible
trees (genetic algorithms).
tvtropes.org
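A decision-tree sketch in scikit-learn, with cost-complexity pruning as one way to "glue pieces back together"; the ccp_alpha value is arbitrary:

```python
# Fit a full tree, then a pruned tree, and compare held-out accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned_tree = DecisionTreeClassifier(random_state=0, ccp_alpha=0.01).fit(X_train, y_train)

print("unpruned test accuracy:", round(full_tree.score(X_test, y_test), 3))
print("pruned test accuracy:  ", round(pruned_tree.score(X_test, y_test), 3))
```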
 Optimization technique.
◦ Minimize a loss function in
regression.
 Accomplished by choosing, at each
step, the variable that achieves the
largest reduction in the loss.
 Climber trying to descend
a mountain.
◦ Minimize height above sea
level per step.
 Directions (N, S, E, W) as a
collection of variables.
 Climber rappels down cliff
according to these rules.
www.chinatravelca.com
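A bare-bones greedy coordinate-descent sketch on simulated least-squares data: at each step the single variable offering the largest loss reduction is updated, like the climber picking one direction per step (everything here is illustrative):

```python
# Greedy coordinate descent for least squares on toy data.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 4))
true_beta = np.array([1.5, -2.0, 0.5, 0.0])
y = X @ true_beta + rng.normal(scale=0.1, size=200)

beta = np.zeros(4)
for step in range(100):
    residual = y - X @ beta
    # Loss reduction available from an exact update of each single variable.
    scores = (X.T @ residual) ** 2 / np.sum(X ** 2, axis=0)
    j = int(np.argmax(scores))                             # steepest single direction
    beta[j] += X[:, j] @ residual / (X[:, j] @ X[:, j])    # exact one-variable step downhill

print("estimated coefficients:", np.round(beta, 3))
```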
 Optimization helpers.
◦ Find global maximum or
minimum by searching
many areas
simultaneously.
◦ Can impose restrictions.
 Teams of mountain
climbers trying to find
the summit.
 Restrictions as climber
supplies, hours of
daylight left to find
summit…
 Genetic algorithm as
population of mountain
climbers each climbing
on his/her own.
wallpapers.brothersoft.com
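A toy genetic-algorithm sketch (the fitness function, mutation rate, and population size are all made up) showing selection, crossover, and mutation on bit-string "climbers":

```python
# A population of bit strings searches for a hidden target configuration.
import numpy as np

rng = np.random.default_rng(5)
n_features, pop_size, n_generations = 12, 30, 40
target = rng.integers(0, 2, n_features)                 # hidden "summit"

def fitness(individual):
    return int(np.sum(individual == target))             # closeness to the summit

population = rng.integers(0, 2, size=(pop_size, n_features))
for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]   # keep the fittest climbers
    children = []
    for _ in range(pop_size):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])               # crossover
        mutate = rng.random(n_features) < 0.05                   # occasional mutation
        child[mutate] = 1 - child[mutate]
        children.append(child)
    population = np.array(children)

best = max(population, key=fitness)
print("best fitness found:", fitness(best), "of", n_features)
```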
 One climber exploring multiple routes
simultaneously.
 Individual optimized through a series of
gates at each update until superposition
converges on one state
◦ Say wave 2 is the optimal combination of
predictors
◦ Resultant combination slowly flattens to wave 2,
with the states of wave 1 disappearing
http://philschatz.com/physics
-book/contents/m42249.html
 Combine multiple models
of the same type (ex. trees)
for better prediction.
◦ Single models unstable.
 Equally-good estimates from
different models.
◦ Use bootstrapping.
 Multiple “marble draws”
followed by model creation.
◦ Creates diversity of features.
◦ Creates models with different
biases and error.
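A bagging sketch with scikit-learn's BaggingClassifier, whose default base learner is a decision tree; each bootstrap "marble draw" trains one tree and predictions are pooled:

```python
# Bagged decision trees: bootstrap samples create diverse, pooled models.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bagger = BaggingClassifier(n_estimators=100, bootstrap=True, random_state=0)
bagger.fit(X_train, y_train)
print("test accuracy:", round(bagger.score(X_test, y_test), 3))
```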
 Uses gradient descent
algorithm to model an
outcome based on
predictors (linear, tree,
spline…).
◦ Proceeds in steps, adding
variables and adjusting weights
according to the algorithm.
 Like putting together a
puzzle.
◦ Algorithm first focuses on most
important parts of the picture
(such as the Mona Lisa’s eyes).
◦ Then adds nuances that help
identify other missing pieces
(hands, landscape…).
play.google.com
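A gradient-boosting sketch with scikit-learn; the number of stages and learning rate are illustrative:

```python
# Each boosting stage adds a small tree that fills in what the "puzzle picture"
# is still missing.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

booster = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                    max_depth=3, random_state=0)
booster.fit(X_train, y_train)
print("test R^2:", round(booster.score(X_test, y_test), 3))
```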
 Adds a penalty term to
boosted regression
◦ Can be formulated as
LASSO/ridge
regression/elastic net with
constraints
◦ Also leverages hardware to
further speed up boosting
ensemble
play.google.com
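A hedged XGBoost sketch (the xgboost package is assumed installed); reg_lambda and reg_alpha supply the ridge- and LASSO-style penalties mentioned above, and all settings are illustrative:

```python
# Penalized boosting with xgboost's scikit-learn interface.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

booster = XGBRegressor(n_estimators=300, learning_rate=0.05,
                       reg_lambda=1.0, reg_alpha=0.1)   # ridge- and LASSO-style penalties
booster.fit(X_train, y_train)
print("test R^2:", round(booster.score(X_test, y_test), 3))
```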
 Single tree models are poor predictors.
 Grow a forest of them for a strong predictor (another
ensemble method).
 Algorithm:
◦ Takes random sample of data.
◦ Builds one tree.
◦ Repeats until forest grown.
◦ Averages across forest to identify strong predictors.
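A random-forest sketch with scikit-learn; averaging feature importances across the forest points to the stronger predictors (data simulated):

```python
# A forest of trees built on bootstrap samples, averaged for prediction and
# for identifying strong predictors.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=10, n_informative=4,
                       noise=5.0, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("feature importances:", forest.feature_importances_.round(2))
```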
CHAPTER 4: Supervised with Unsupervised Methods
 Principal Component Analysis
(PCA)
◦ Variance partitioning to combine
pieces of variables into new
linear subspace.
◦ Smashing cookies by kind and
combining into a flat hybrid
cookie in previous regression
model.
 Manifold Learning
◦ PCA-like algorithms that
combine pieces into a new
nonlinear subspace.
◦ Non-flat cookie combining.
 Useful as pre-processing step
for prediction models.
◦ Reduce dimension.
◦ Obtain uncorrelated, non-
overlapping variables (bases).
marmaladeandmileposts.com
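A quick contrast of linear and nonlinear dimension reduction in scikit-learn (PCA versus Isomap) on a small slice of the bundled digits data:

```python
# PCA builds a flat (linear) subspace; Isomap builds a curved (manifold) one.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = load_digits(return_X_y=True)
X = X[:500]                                            # small slice for speed

flat = PCA(n_components=10).fit(X)                     # linear "flat cookie" subspace
curved = Isomap(n_components=10).fit_transform(X)      # nonlinear manifold embedding

print("variance kept by PCA:", round(flat.explained_variance_ratio_.sum(), 3))
print("Isomap embedding shape:", curved.shape)
```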
 Balanced sampling for low-frequency
predictors.
◦ Stratified samples (i.e. sample from bag of mostly
white marbles and few red marbles with constraint
that 1/5th of draws must be red marbles).
 Dimension reduction/mapping pre-processing
◦ Principal component, manifold learning…
◦ Hybrid of neural network methods and tree models.
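A stratified-sampling sketch with scikit-learn's train_test_split: the rare "red marble" class keeps its original share in both splits (class balance below is arbitrary):

```python
# Stratified splitting preserves the minority-class proportion in each split.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

print("minority share overall:       ", round(float(np.mean(y)), 3))
print("minority share in train split:", round(float(np.mean(y_train)), 3))
```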
 Aggregation of multiple
types of models.
◦ Like a small town election.
◦ Different people have different
views of the politics and care
about different issues.
 Different modeling
methods capture different
pieces of the data and vote
in different pieces.
◦ Leverage strengths, minimize
weaknesses
◦ Diversity of methods to better
explore underlying data
geometry
 Avoids multiple testing
issues.
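A superlearner-style sketch using scikit-learn's StackingRegressor as a stand-in: several different model types each "vote" and a final model pools the votes (base learners and data are illustrative):

```python
# Stacking several model types, with a final estimator pooling their votes.
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = StackingRegressor(
    estimators=[("ridge", Ridge()),
                ("forest", RandomForestRegressor(random_state=0)),
                ("svr", SVR())],
    final_estimator=Ridge())          # the "election" that pools each model's votes
ensemble.fit(X_train, y_train)
print("test R^2:", round(ensemble.score(X_test, y_test), 3))
```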
 Subsembles
◦ Partition data into training sets
 Randomly selected or through
partition algorithm
◦ Train a model on each data
partition
◦ Combine into final weighted
prediction model
 This is similar to national
elections.
◦ Each elector in the electoral
college learns for whom his
constituents voted.
◦ The final electoral college pools
these individual votes.
Diagram: full training dataset split into model training datasets 1, 2, 3, 4, then combined into the final subsemble model.
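A hand-rolled subsemble sketch under the simplifying assumption of an unweighted combination: split the training data into disjoint partitions, fit one model per partition, and average their predictions:

```python
# Subsemble sketch: one model per data partition, predictions pooled.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = []
for _, part_idx in KFold(n_splits=4, shuffle=True, random_state=0).split(X_train):
    models.append(Ridge().fit(X_train[part_idx], y_train[part_idx]))   # one model per partition

prediction = np.mean([m.predict(X_test) for m in models], axis=0)      # pool the "electors"
rmse = np.sqrt(np.mean((prediction - y_test) ** 2))
print("subsemble test RMSE:", round(rmse, 2))
```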
 Combines
superlearning and
subsembles for
better prediction
◦ Diversity improves
subsemble method
(better able to explore
data geometry)
◦ Bootstrapping
improves superlearner
pieces (more diversity
within each method)
◦ Preliminary empirical
evidence shows
efficacy of
combination.
CHAPTER 5: Unsupervised Methods
 Classification of a data
point based on the
classification of its
nearest neighboring
points.
 Like a junior high
lunchroom and
student cliques.
◦ Students at the same
table tend to be more
similar to each other
than to a distant table. www.wrestlecrap.com
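A k-nearest-neighbors sketch with scikit-learn on the bundled iris data; k = 5 is an arbitrary "lunch table" size:

```python
# Classify each point by the labels of its nearest neighbors.
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", round(knn.score(X_test, y_test), 3))
```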
 Iteratively separating
groups within a
dataset based on
similarity.
 Like untangling
tangled balls of yarn
into multiple, single-
colored balls (ideally).
◦ Each step untangles the
mess a little further.
◦ Few stopping guidelines
(when it looks
separated).
krazymommakreations.blogspot.com
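A k-means sketch with scikit-learn on simulated blobs; inertia measures how much "tangle" remains within clusters:

```python
# Iteratively separate the data into k groups based on similarity to centroids.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print("inertia (within-cluster tangle remaining):", round(km.inertia_, 1))
```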
 Hybrid of supervised and
unsupervised learning
(groupings and prediction).
 Uses graphical
representation of data and
its relationships.
 Algorithms with connections
to topology, differential
geometry, and Markov
chains.
 Useful in combination with
other machine learning
methods to provide extra
insight (ex. spectral
clustering).
 K-means algorithm with
weighting and
dimension reduction
components of
similarity measure.
◦ Simplify balls of string to
warm colors and cool
colors before untangling.
 Can be reformulated as
a graph clustering
problem.
◦ Partition subcomponents
of a graph based on flow
equations.
www.simplepastimes.com
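A spectral-clustering sketch with scikit-learn using a nearest-neighbor affinity graph; the two-moons data are a stand-in for graph-structured data:

```python
# Partition a similarity graph built from the data, rather than raw coordinates.
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            random_state=0).fit_predict(X)
print("cluster sizes:", [int((labels == k).sum()) for k in set(labels)])
```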
 Multivariate technique
similar to mode or
density clustering.
◦ Find peaks and valleys in
data according to an
input function on the
data (level set slices)—
much like a watershed
on mountains.
◦ Separate data based on
shared peaks and valleys
across slices (shared
multivariate
density/gradient).
◦ Many nice theoretical
developments on validity
and convergence.
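As a loosely related stand-in for mode/density clustering, scikit-learn's MeanShift climbs the estimated density to its peaks; the bandwidth below is arbitrary:

```python
# Mode-seeking clustering: points are grouped by the density peak they climb to.
from sklearn.cluster import MeanShift
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)
labels = MeanShift(bandwidth=1.5).fit_predict(X)
print("clusters (density peaks) found:", len(set(labels)))
```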
 Topological clustering.
◦ Define distance metric.
◦ Slice multidimensional dataset
with Morse function.
◦ Examine function behavior
across slice.
◦ Cluster function behavior.
◦ Iterate through multiple slices
to obtain hierarchy of function
behavior.
 Much like examining the
behavior of multiple
objects across a flip book.
◦ Nesting
◦ Cluster overlap
Filtered functions then used to
create various resolutions of a
modified Reeb graph summary
of topology.
Time Series Forecasting
 Similar to
decomposing
superposed states
◦ Seasonal trends
◦ Yearly trends
◦ Trend averages
◦ Dependencies on
previous time point
 Knit individual
forecasted pieces into
a complete forecast by
superposing these
individual forecasts
 Several extensions to
neural networks, time-
lagged machine
learning models…
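A classical decomposition sketch with statsmodels' seasonal_decompose on a simulated monthly-style series (trend plus seasonality plus noise):

```python
# Split a series into trend, seasonal, and residual pieces, which can then be
# forecast separately and superposed back together.
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(7)
t = np.arange(240)
series = 10 + 0.05 * t + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(size=240)

result = seasonal_decompose(series, period=12, model="additive")
print("first valid trend values:", np.round(result.trend[6:10], 2))
print("one seasonal cycle:      ", np.round(result.seasonal[:12], 2))
```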
 A time-series method
incorporating predictors
◦ Constant predictors at initial
time point
◦ Varying predictors at multiple
time points
 Creates a sort of correlation
web between predictors
and time points
◦ Can handle multiple time lags
and multivariate outcomes
◦ Can handle any GLM outcome
links
 Related to partial
differential equations of
dynamic systems
 Data-based mining for
SEM
relationships/time-lag
components
◦ Leverages conditional
probability between
predictors to find
dependencies
◦ Does not require a priori
model formulation like
SEM
 Peeking at data to
create a dependency
web over time or
predictors/outcome
 Can be validated by a
follow-up SEM based
on network structure
Diagram: dependency web linking predictors and outcomes across Time 1, Time 2, and Time 3.
 Technically:
◦ Matrix decomposition
(similar to PCA/manifold
learning)
◦ Followed by spectral
methods
◦ Cleaning of time-lagged
covariance matrix
◦ Reconstruction with simple
forecast
 Kind of like
deconstructing, cleaning,
and rebuilding a car engine
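A compact singular-spectrum-analysis sketch in plain numpy, following the deconstruct/clean/rebuild steps above; the window length and rank kept are illustrative:

```python
# SSA sketch: embed the series in a trajectory matrix, SVD it, keep the
# leading components, and average anti-diagonals to rebuild a cleaned series.
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(200)
series = np.sin(2 * np.pi * t / 24) + 0.05 * t + rng.normal(scale=0.3, size=200)

window = 40                                           # embedding window length
K = len(series) - window + 1
trajectory = np.column_stack([series[i:i + window] for i in range(K)])

# "Deconstruct the engine": SVD of the trajectory matrix.
U, s, Vt = np.linalg.svd(trajectory, full_matrices=False)

# "Clean": keep only the two leading components.
rank = 2
approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# "Rebuild": average the anti-diagonals back into a single series.
reconstructed = np.zeros(len(series))
counts = np.zeros(len(series))
for col in range(K):
    reconstructed[col:col + window] += approx[:, col]
    counts[col:col + window] += 1
reconstructed /= counts
print("residual std after reconstruction:", round(float(np.std(series - reconstructed)), 3))
```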
 Combines k-nearest
neighbors-based
clustering with
memoryless state
changes converging to
a transition distribution
(weighted directed
graph)
◦ Reduce an observation
to a pattern
◦ Remember patterns seen
(across time or space)
◦ Match new observations
to this set of patterns
◦ Computationally more
feasible than k-means
clustering
Like remembering classes of chess
board configurations across
games played
 Many machine learning methods exist today, and
many more are being developed every day.
 Methods come with certain assumptions that
must be met.
◦ Breaking assumptions can lead to poor model fit or
incorrect inference.
◦ Matching a method to a problem not only can help with
better prediction and inference; it can also lead to faster
computation times and better presentations to clients.
 Development depends on problem-matching and
deep understanding of the mathematics behind
the methods used.
 Parametric Regression:
◦ Draper, N. R., Smith, H., & Pownell, E. (1966). Applied regression
analysis (Vol. 3). New York: Wiley.
◦ McCullagh, P. (1984). Generalized linear models. European Journal
of Operational Research, 16(3), 285-292.
◦ Zou, H., & Hastie, T. (2005). Regularization and variable selection
via the elastic net. Journal of the Royal Statistical Society: Series B
(Statistical Methodology), 67(2), 301-320.
◦ Augugliaro, L., Mineo, A. M., & Wit, E. C. (2013). Differential
geometric least angle regression: a differential geometric approach
to sparse generalized linear models. Journal of the Royal Statistical
Society: Series B (Statistical Methodology), 75(3), 471-498.
◦ Raftery, A. E., Madigan, D., & Hoeting, J. A. (1997). Bayesian model
averaging for linear regression models. Journal of the American
Statistical Association, 92(437), 179-191.
◦ Osborne, M. R., & Turlach, B. A. (2011). A homotopy algorithm for
the quantile regression lasso and related piecewise linear
problems. Journal of Computational and Graphical Statistics, 20(4),
972-987.
◦ Drori, I., & Donoho, D. L. (2006, May). Solution of l 1 minimization
problems by LARS/homotopy methods. In Acoustics, Speech and
Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE
International Conference on (Vol. 3, pp. III-III). IEEE.
 Semi-Parametric Regression:
◦ Marsh, L. C., & Cormier, D. R. (2001). Spline regression models (Vol. 137). Sage.
◦ Hastie, T., & Tibshirani, R. (1986). Generalized additive models. Statistical science,
297-310.
◦ McZgee, V. E., & Carleton, W. T. (1970). Piecewise regression. Journal of the
American Statistical Association, 65(331), 1109-1124.
◦ Gerber, S., Rübel, O., Bremer, P. T., Pascucci, V., & Whitaker, R. T. (2013). Morse–
Smale Regression. Journal of Computational and Graphical Statistics, 22(1), 193-
214.
◦ Hearst, M. A., Dumais, S. T., Osman, E., Platt, J., & Scholkopf, B. (1998). Support
vector machines. Intelligent Systems and their Applications, IEEE, 13(4), 18-28.
◦ Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks
are universal approximators. Neural networks, 2(5), 359-366.
◦ Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with
deep convolutional neural networks. In Advances in neural information processing
systems (pp. 1097-1105).
◦ Huang, G. B., Wang, D. H., & Lan, Y. (2011). Extreme learning machines: a survey.
International Journal of Machine Learning and Cybernetics, 2(2), 107-122.
◦ Friedman, J. H. (1991). Multivariate adaptive regression splines. The annals of
statistics, 1-67.
◦ Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat,
S. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed
systems. arXiv preprint arXiv:1603.04467.
 Nonparametric Regression:
◦ Buntine, W. (1992). Learning classification trees. Statistics and
computing, 2(2), 63-73.
◦ Friedman, J. H. (2002). Stochastic gradient boosting. Computational
Statistics & Data Analysis, 38(4), 367-378.
◦ Bäck, T. (1996). Evolutionary algorithms in theory and practice:
evolution strategies, evolutionary programming, genetic algorithms.
Oxford university press.
◦ Zhang, G. (2011). Quantum-inspired evolutionary algorithms: a survey
and empirical study. Journal of Heuristics, 17(3), 303-351.
◦ Dietterich, T. G. (2000). Ensemble methods in machine learning. In
Multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg.
◦ Breiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.
◦ Chen, T., & Guestrin, C. (2016, August). Xgboost: A scalable tree
boosting system. In Proceedings of the 22nd acm sigkdd international
conference on knowledge discovery and data mining (pp. 785-794).
ACM.
 Supervised with Unsupervised Methods:
◦ van der Maaten, L. J., Postma, E. O., & van den
Herik, H. J. (2009). Dimensionality reduction: A
comparative review. Journal of Machine
Learning Research, 10(1-41), 66-71.
◦ Kuncheva, L. I., & Rodríguez, J. J. (2007). An
experimental study on rotation forest
ensembles. In Multiple Classifier Systems (pp.
459-468). Springer Berlin Heidelberg.
◦ van der Laan, M. J., Polley, E. C., & Hubbard, A.
E. (2007). Super learner. Statistical applications
in genetics and molecular biology, 6(1).
◦ Sapp, S. K. (2014). Subsemble: A Flexible
Subset Ensemble Prediction Method (Doctoral
dissertation, University of California, Berkeley).
 Unsupervised Methods:
◦ Fukunaga, K., & Narendra, P. M. (1975). A branch and
bound algorithm for computing k-nearest neighbors.
Computers, IEEE Transactions on, 100(7), 750-753.
◦ MacQueen, J. (1967, June). Some methods for
classification and analysis of multivariate observations. In
Proceedings of the fifth Berkeley symposium on
mathematical statistics and probability (Vol. 1, No. 14, pp.
281-297).
◦ Chartrand, G., & Oellermann, O. R. (1993). Applied and
algorithmic graph theory.
◦ Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On spectral
clustering: Analysis and an algorithm. Advances in neural
information processing systems, 2, 849-856.
◦ Singh, G., Mémoli, F., & Carlsson, G. E. (2007, September).
Topological Methods for the Analysis of High Dimensional
Data Sets and 3D Object Recognition. In SPBG (pp. 91-
100).
◦ Chen, Y. C., Genovese, C. R., & Wasserman, L. (2016). A
comprehensive approach to mode clustering. Electronic
Journal of Statistics, 10(1), 210-241.
 Time Series
◦ Wu, J. P., & Wei, S. (1989). Time series analysis. Hunan
Science and Technology Press, ChangSha.
◦ Vautard, R., Yiou, P., & Ghil, M. (1992). Singular-spectrum
analysis: A toolkit for short, noisy chaotic signals. Physica
D: Nonlinear Phenomena, 58(1), 95-126.
◦ Dunham, M. H., Meng, Y., & Huang, J. (2004, November).
Extensible markov model. In Data Mining, 2004. ICDM'04.
Fourth IEEE International Conference on (pp. 371-374).
IEEE.
◦ Ullman, J. B., & Bentler, P. M. (2003). Structural equation
modeling. John Wiley & Sons, Inc..
◦ Heckerman, D., Geiger, D., & Chickering, D. M. (1994, July).
Learning Bayesian networks: The combination of knowledge
and statistical data. In Proceedings of the Tenth
international conference on Uncertainty in artificial
intelligence (pp. 293-301). Morgan Kaufmann Publishers
Inc..

More Related Content

What's hot

An Introduction to Anomaly Detection
An Introduction to Anomaly DetectionAn Introduction to Anomaly Detection
An Introduction to Anomaly DetectionKenneth Graham
 
Feature Engineering in Machine Learning
Feature Engineering in Machine LearningFeature Engineering in Machine Learning
Feature Engineering in Machine LearningKnoldus Inc.
 
Deep vs diverse architectures for classification problems
Deep vs diverse architectures for classification problemsDeep vs diverse architectures for classification problems
Deep vs diverse architectures for classification problemsColleen Farrelly
 
Data warehouse and data mining
Data warehouse and data miningData warehouse and data mining
Data warehouse and data miningPradnya Saval
 
Inductive bias
Inductive biasInductive bias
Inductive biasswapnac12
 
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...Sri Ambati
 
Machine Learning in Big Data
Machine Learning in Big DataMachine Learning in Big Data
Machine Learning in Big DataDataWorks Summit
 
Confusion Matrix
Confusion MatrixConfusion Matrix
Confusion MatrixRajat Gupta
 
Anomaly detection with machine learning at scale
Anomaly detection with machine learning at scaleAnomaly detection with machine learning at scale
Anomaly detection with machine learning at scaleImpetus Technologies
 
Concept Drift: Monitoring Model Quality In Streaming ML Applications
Concept Drift: Monitoring Model Quality In Streaming ML ApplicationsConcept Drift: Monitoring Model Quality In Streaming ML Applications
Concept Drift: Monitoring Model Quality In Streaming ML ApplicationsLightbend
 
Feature Engineering
Feature EngineeringFeature Engineering
Feature EngineeringHJ van Veen
 
Supervised learning and unsupervised learning
Supervised learning and unsupervised learningSupervised learning and unsupervised learning
Supervised learning and unsupervised learningArunakumariAkula1
 

What's hot (20)

An Introduction to Anomaly Detection
An Introduction to Anomaly DetectionAn Introduction to Anomaly Detection
An Introduction to Anomaly Detection
 
Feature Engineering in Machine Learning
Feature Engineering in Machine LearningFeature Engineering in Machine Learning
Feature Engineering in Machine Learning
 
Deep vs diverse architectures for classification problems
Deep vs diverse architectures for classification problemsDeep vs diverse architectures for classification problems
Deep vs diverse architectures for classification problems
 
Data warehouse and data mining
Data warehouse and data miningData warehouse and data mining
Data warehouse and data mining
 
Morse-Smale Regression
Morse-Smale RegressionMorse-Smale Regression
Morse-Smale Regression
 
Inductive bias
Inductive biasInductive bias
Inductive bias
 
Applications of Machine Learning
Applications of Machine LearningApplications of Machine Learning
Applications of Machine Learning
 
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...
Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data...
 
Clustering
ClusteringClustering
Clustering
 
Machine Learning in Big Data
Machine Learning in Big DataMachine Learning in Big Data
Machine Learning in Big Data
 
Deep learning
Deep learning Deep learning
Deep learning
 
Confusion Matrix
Confusion MatrixConfusion Matrix
Confusion Matrix
 
Anomaly detection with machine learning at scale
Anomaly detection with machine learning at scaleAnomaly detection with machine learning at scale
Anomaly detection with machine learning at scale
 
Concept Drift: Monitoring Model Quality In Streaming ML Applications
Concept Drift: Monitoring Model Quality In Streaming ML ApplicationsConcept Drift: Monitoring Model Quality In Streaming ML Applications
Concept Drift: Monitoring Model Quality In Streaming ML Applications
 
Data Mining Overview
Data Mining OverviewData Mining Overview
Data Mining Overview
 
Missing Data imputation
Missing Data imputationMissing Data imputation
Missing Data imputation
 
Data cubes
Data cubesData cubes
Data cubes
 
Ensemble learning
Ensemble learningEnsemble learning
Ensemble learning
 
Feature Engineering
Feature EngineeringFeature Engineering
Feature Engineering
 
Supervised learning and unsupervised learning
Supervised learning and unsupervised learningSupervised learning and unsupervised learning
Supervised learning and unsupervised learning
 

Similar to Machine Learning by Analogy

Introduction to MARS (1999)
Introduction to MARS (1999)Introduction to MARS (1999)
Introduction to MARS (1999)Salford Systems
 
Multiscale Mapper Networks
Multiscale Mapper NetworksMultiscale Mapper Networks
Multiscale Mapper NetworksColleen Farrelly
 
Introduction to RandomForests 2004
Introduction to RandomForests 2004Introduction to RandomForests 2004
Introduction to RandomForests 2004Salford Systems
 
20211229120253D6323_PERT 06_ Ensemble Learning.pptx
20211229120253D6323_PERT 06_ Ensemble Learning.pptx20211229120253D6323_PERT 06_ Ensemble Learning.pptx
20211229120253D6323_PERT 06_ Ensemble Learning.pptxRaflyRizky2
 
Informs presentation new ppt
Informs presentation new pptInforms presentation new ppt
Informs presentation new pptSalford Systems
 
Social Network Analysis Introduction including Data Structure Graph overview.
Social Network Analysis Introduction including Data Structure Graph overview. Social Network Analysis Introduction including Data Structure Graph overview.
Social Network Analysis Introduction including Data Structure Graph overview. Doug Needham
 
Machine Learning in the Financial Industry
Machine Learning in the Financial IndustryMachine Learning in the Financial Industry
Machine Learning in the Financial IndustrySubrat Panda, PhD
 
Supervised and unsupervised learning
Supervised and unsupervised learningSupervised and unsupervised learning
Supervised and unsupervised learningAmAn Singh
 
Model Evaluation in the land of Deep Learning
Model Evaluation in the land of Deep LearningModel Evaluation in the land of Deep Learning
Model Evaluation in the land of Deep LearningPramit Choudhary
 
Ensemble learning Techniques
Ensemble learning TechniquesEnsemble learning Techniques
Ensemble learning TechniquesBabu Priyavrat
 
Data Structure.pptx
Data Structure.pptxData Structure.pptx
Data Structure.pptxSajalFayyaz
 
Humanmetrics Jung Typology Test™You haven’t answered 1 que
Humanmetrics Jung Typology Test™You haven’t answered 1 queHumanmetrics Jung Typology Test™You haven’t answered 1 que
Humanmetrics Jung Typology Test™You haven’t answered 1 queNarcisaBrandenburg70
 
Machine Learning statistical model using Transportation data
Machine Learning statistical model using Transportation dataMachine Learning statistical model using Transportation data
Machine Learning statistical model using Transportation datajagan477830
 
Data Mining In Market Research
Data Mining In Market ResearchData Mining In Market Research
Data Mining In Market Researchjim
 
Data Mining in Market Research
Data Mining in Market ResearchData Mining in Market Research
Data Mining in Market Researchbutest
 

Similar to Machine Learning by Analogy (20)

Introduction to MARS (1999)
Introduction to MARS (1999)Introduction to MARS (1999)
Introduction to MARS (1999)
 
Multiscale Mapper Networks
Multiscale Mapper NetworksMultiscale Mapper Networks
Multiscale Mapper Networks
 
Introduction to RandomForests 2004
Introduction to RandomForests 2004Introduction to RandomForests 2004
Introduction to RandomForests 2004
 
20211229120253D6323_PERT 06_ Ensemble Learning.pptx
20211229120253D6323_PERT 06_ Ensemble Learning.pptx20211229120253D6323_PERT 06_ Ensemble Learning.pptx
20211229120253D6323_PERT 06_ Ensemble Learning.pptx
 
Phylogenetics
PhylogeneticsPhylogenetics
Phylogenetics
 
Bank loan purchase modeling
Bank loan purchase modelingBank loan purchase modeling
Bank loan purchase modeling
 
Informs presentation new ppt
Informs presentation new pptInforms presentation new ppt
Informs presentation new ppt
 
Mon c-5-hartig-2493
Mon c-5-hartig-2493Mon c-5-hartig-2493
Mon c-5-hartig-2493
 
Social Network Analysis Introduction including Data Structure Graph overview.
Social Network Analysis Introduction including Data Structure Graph overview. Social Network Analysis Introduction including Data Structure Graph overview.
Social Network Analysis Introduction including Data Structure Graph overview.
 
Data analysis01 singlevariable
Data analysis01 singlevariableData analysis01 singlevariable
Data analysis01 singlevariable
 
Machine Learning in the Financial Industry
Machine Learning in the Financial IndustryMachine Learning in the Financial Industry
Machine Learning in the Financial Industry
 
Supervised and unsupervised learning
Supervised and unsupervised learningSupervised and unsupervised learning
Supervised and unsupervised learning
 
Summary2 (1)
Summary2 (1)Summary2 (1)
Summary2 (1)
 
Model Evaluation in the land of Deep Learning
Model Evaluation in the land of Deep LearningModel Evaluation in the land of Deep Learning
Model Evaluation in the land of Deep Learning
 
Ensemble learning Techniques
Ensemble learning TechniquesEnsemble learning Techniques
Ensemble learning Techniques
 
Data Structure.pptx
Data Structure.pptxData Structure.pptx
Data Structure.pptx
 
Humanmetrics Jung Typology Test™You haven’t answered 1 que
Humanmetrics Jung Typology Test™You haven’t answered 1 queHumanmetrics Jung Typology Test™You haven’t answered 1 que
Humanmetrics Jung Typology Test™You haven’t answered 1 que
 
Machine Learning statistical model using Transportation data
Machine Learning statistical model using Transportation dataMachine Learning statistical model using Transportation data
Machine Learning statistical model using Transportation data
 
Data Mining In Market Research
Data Mining In Market ResearchData Mining In Market Research
Data Mining In Market Research
 
Data Mining in Market Research
Data Mining in Market ResearchData Mining in Market Research
Data Mining in Market Research
 

More from Colleen Farrelly

Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Colleen Farrelly
 
Hands-On Network Science, PyData Global 2023
Hands-On Network Science, PyData Global 2023Hands-On Network Science, PyData Global 2023
Hands-On Network Science, PyData Global 2023Colleen Farrelly
 
Modeling Climate Change.pptx
Modeling Climate Change.pptxModeling Climate Change.pptx
Modeling Climate Change.pptxColleen Farrelly
 
Natural Language Processing for Beginners.pptx
Natural Language Processing for Beginners.pptxNatural Language Processing for Beginners.pptx
Natural Language Processing for Beginners.pptxColleen Farrelly
 
The Shape of Data--ODSC.pptx
The Shape of Data--ODSC.pptxThe Shape of Data--ODSC.pptx
The Shape of Data--ODSC.pptxColleen Farrelly
 
Generative AI, WiDS 2023.pptx
Generative AI, WiDS 2023.pptxGenerative AI, WiDS 2023.pptx
Generative AI, WiDS 2023.pptxColleen Farrelly
 
Emerging Technologies for Public Health in Remote Locations.pptx
Emerging Technologies for Public Health in Remote Locations.pptxEmerging Technologies for Public Health in Remote Locations.pptx
Emerging Technologies for Public Health in Remote Locations.pptxColleen Farrelly
 
Applications of Forman-Ricci Curvature.pptx
Applications of Forman-Ricci Curvature.pptxApplications of Forman-Ricci Curvature.pptx
Applications of Forman-Ricci Curvature.pptxColleen Farrelly
 
Geometry for Social Good.pptx
Geometry for Social Good.pptxGeometry for Social Good.pptx
Geometry for Social Good.pptxColleen Farrelly
 
Topology for Time Series.pptx
Topology for Time Series.pptxTopology for Time Series.pptx
Topology for Time Series.pptxColleen Farrelly
 
Time Series Applications AMLD.pptx
Time Series Applications AMLD.pptxTime Series Applications AMLD.pptx
Time Series Applications AMLD.pptxColleen Farrelly
 
An introduction to quantum machine learning.pptx
An introduction to quantum machine learning.pptxAn introduction to quantum machine learning.pptx
An introduction to quantum machine learning.pptxColleen Farrelly
 
An introduction to time series data with R.pptx
An introduction to time series data with R.pptxAn introduction to time series data with R.pptx
An introduction to time series data with R.pptxColleen Farrelly
 
NLP: Challenges and Opportunities in Underserved Areas
NLP: Challenges and Opportunities in Underserved AreasNLP: Challenges and Opportunities in Underserved Areas
NLP: Challenges and Opportunities in Underserved AreasColleen Farrelly
 
Geometry, Data, and One Path Into Data Science.pptx
Geometry, Data, and One Path Into Data Science.pptxGeometry, Data, and One Path Into Data Science.pptx
Geometry, Data, and One Path Into Data Science.pptxColleen Farrelly
 
Topological Data Analysis.pptx
Topological Data Analysis.pptxTopological Data Analysis.pptx
Topological Data Analysis.pptxColleen Farrelly
 
Transforming Text Data to Matrix Data via Embeddings.pptx
Transforming Text Data to Matrix Data via Embeddings.pptxTransforming Text Data to Matrix Data via Embeddings.pptx
Transforming Text Data to Matrix Data via Embeddings.pptxColleen Farrelly
 
Natural Language Processing in the Wild.pptx
Natural Language Processing in the Wild.pptxNatural Language Processing in the Wild.pptx
Natural Language Processing in the Wild.pptxColleen Farrelly
 
SAS Global 2021 Introduction to Natural Language Processing
SAS Global 2021 Introduction to Natural Language Processing SAS Global 2021 Introduction to Natural Language Processing
SAS Global 2021 Introduction to Natural Language Processing Colleen Farrelly
 
2021 American Mathematical Society Data Science Talk
2021 American Mathematical Society Data Science Talk2021 American Mathematical Society Data Science Talk
2021 American Mathematical Society Data Science TalkColleen Farrelly
 

More from Colleen Farrelly (20)

Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024
 
Hands-On Network Science, PyData Global 2023
Hands-On Network Science, PyData Global 2023Hands-On Network Science, PyData Global 2023
Hands-On Network Science, PyData Global 2023
 
Modeling Climate Change.pptx
Modeling Climate Change.pptxModeling Climate Change.pptx
Modeling Climate Change.pptx
 
Natural Language Processing for Beginners.pptx
Natural Language Processing for Beginners.pptxNatural Language Processing for Beginners.pptx
Natural Language Processing for Beginners.pptx
 
The Shape of Data--ODSC.pptx
The Shape of Data--ODSC.pptxThe Shape of Data--ODSC.pptx
The Shape of Data--ODSC.pptx
 
Generative AI, WiDS 2023.pptx
Generative AI, WiDS 2023.pptxGenerative AI, WiDS 2023.pptx
Generative AI, WiDS 2023.pptx
 
Emerging Technologies for Public Health in Remote Locations.pptx
Emerging Technologies for Public Health in Remote Locations.pptxEmerging Technologies for Public Health in Remote Locations.pptx
Emerging Technologies for Public Health in Remote Locations.pptx
 
Applications of Forman-Ricci Curvature.pptx
Applications of Forman-Ricci Curvature.pptxApplications of Forman-Ricci Curvature.pptx
Applications of Forman-Ricci Curvature.pptx
 
Geometry for Social Good.pptx
Geometry for Social Good.pptxGeometry for Social Good.pptx
Geometry for Social Good.pptx
 
Topology for Time Series.pptx
Topology for Time Series.pptxTopology for Time Series.pptx
Topology for Time Series.pptx
 
Time Series Applications AMLD.pptx
Time Series Applications AMLD.pptxTime Series Applications AMLD.pptx
Time Series Applications AMLD.pptx
 
An introduction to quantum machine learning.pptx
An introduction to quantum machine learning.pptxAn introduction to quantum machine learning.pptx
An introduction to quantum machine learning.pptx
 
An introduction to time series data with R.pptx
An introduction to time series data with R.pptxAn introduction to time series data with R.pptx
An introduction to time series data with R.pptx
 
NLP: Challenges and Opportunities in Underserved Areas
NLP: Challenges and Opportunities in Underserved AreasNLP: Challenges and Opportunities in Underserved Areas
NLP: Challenges and Opportunities in Underserved Areas
 
Geometry, Data, and One Path Into Data Science.pptx
Geometry, Data, and One Path Into Data Science.pptxGeometry, Data, and One Path Into Data Science.pptx
Geometry, Data, and One Path Into Data Science.pptx
 
Topological Data Analysis.pptx
Topological Data Analysis.pptxTopological Data Analysis.pptx
Topological Data Analysis.pptx
 
Transforming Text Data to Matrix Data via Embeddings.pptx
Transforming Text Data to Matrix Data via Embeddings.pptxTransforming Text Data to Matrix Data via Embeddings.pptx
Transforming Text Data to Matrix Data via Embeddings.pptx
 
Natural Language Processing in the Wild.pptx
Natural Language Processing in the Wild.pptxNatural Language Processing in the Wild.pptx
Natural Language Processing in the Wild.pptx
 
SAS Global 2021 Introduction to Natural Language Processing
SAS Global 2021 Introduction to Natural Language Processing SAS Global 2021 Introduction to Natural Language Processing
SAS Global 2021 Introduction to Natural Language Processing
 
2021 American Mathematical Society Data Science Talk
2021 American Mathematical Society Data Science Talk2021 American Mathematical Society Data Science Talk
2021 American Mathematical Society Data Science Talk
 

Recently uploaded

DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfJohn Sterrett
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024thyngster
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxMike Bennett
 
detection and classification of knee osteoarthritis.pptx
detection and classification of knee osteoarthritis.pptxdetection and classification of knee osteoarthritis.pptx
detection and classification of knee osteoarthritis.pptxAleenaJamil4
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our WorldEduminds Learning
 
Top 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In QueensTop 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In Queensdataanalyticsqueen03
 
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理e4aez8ss
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPTBoston Institute of Analytics
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)jennyeacort
 
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degreeyuu sss
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...limedy534
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFAAndrei Kaleshka
 
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...Boston Institute of Analytics
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.natarajan8993
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Seán Kennedy
 
Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 217djon017
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectBoston Institute of Analytics
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...Boston Institute of Analytics
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort servicejennyeacort
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhijennyeacort
 

Recently uploaded (20)

DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdf
 
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
Consent & Privacy Signals on Google *Pixels* - MeasureCamp Amsterdam 2024
 
Semantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptxSemantic Shed - Squashing and Squeezing.pptx
Semantic Shed - Squashing and Squeezing.pptx
 
detection and classification of knee osteoarthritis.pptx
detection and classification of knee osteoarthritis.pptxdetection and classification of knee osteoarthritis.pptx
detection and classification of knee osteoarthritis.pptx
 
Learn How Data Science Changes Our World
Learn How Data Science Changes Our WorldLearn How Data Science Changes Our World
Learn How Data Science Changes Our World
 
Top 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In QueensTop 5 Best Data Analytics Courses In Queens
Top 5 Best Data Analytics Courses In Queens
 
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
 
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default  Presentation : Data Analysis Project PPTPredictive Analysis for Loan Default  Presentation : Data Analysis Project PPT
Predictive Analysis for Loan Default Presentation : Data Analysis Project PPT
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
毕业文凭制作#回国入职#diploma#degree澳洲中央昆士兰大学毕业证成绩单pdf电子版制作修改#毕业文凭制作#回国入职#diploma#degree
 
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
Effects of Smartphone Addiction on the Academic Performances of Grades 9 to 1...
 
How we prevented account sharing with MFA
How we prevented account sharing with MFAHow we prevented account sharing with MFA
How we prevented account sharing with MFA
 
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...
Decoding the Heart: Student Presentation on Heart Attack Prediction with Data...
 
RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.RABBIT: A CLI tool for identifying bots based on their GitHub events.
RABBIT: A CLI tool for identifying bots based on their GitHub events.
 
Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...Student Profile Sample report on improving academic performance by uniting gr...
Student Profile Sample report on improving academic performance by uniting gr...
 
Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2Easter Eggs From Star Wars and in cars 1 and 2
Easter Eggs From Star Wars and in cars 1 and 2
 
Heart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis ProjectHeart Disease Classification Report: A Data Analysis Project
Heart Disease Classification Report: A Data Analysis Project
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
 
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝DelhiRS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
RS 9000 Call In girls Dwarka Mor (DELHI)⇛9711147426🔝Delhi
 

Machine Learning by Analogy

  • 2.  Many machine learning methods exist in the literature and in industry. ◦ What works well for one problem may not work well for the next problem. ◦ In addition to poor model fit, an incorrect application of methods can lead to incorrect inference.  Implications for data-driven business decisions.  Low future confidence in data science and its results.  Lower quality software products.  Understanding the intuition and mathematics behind these methods can ameliorate these problems. ◦ This talk focuses on building intuition. ◦ Links to theoretical papers underlying each method.
  • 3.
  • 4.  Total variance of a normally- distributed outcome as cookie jar.  Error as empty space.  Predictors accounting for pieces of the total variance as cookies. ◦ Based on relationship to predictor and to each other.  Cookies accounting for the same piece of variance as those smooshed together.  Many statistical assumptions need to be met. www.zelcoviacookies.com
  • 5.  Extends multiple regression. ◦ Many types of outcome distributions. ◦ Transform an outcome variable to create a linear relationship with predictors.  Sort of like silly putty stretching the outcome variable in the data space.  Does not work on high- dimensional data where predictors > observations.  Only certain outcome distributions. ◦ Exponential family as example. tomstock.photoshelter.com
  • 6.  Impose size and/or variable overlap constraints (penalties) on generalized linear models. ◦ Elastic net as hybrid of these constraints. ◦ Can handle large numbers of predictors.  Reduce the number of predictors. ◦ Shrink some predictor estimates to 0. ◦ Examine sets of similar predictors.  Similar to eating irrelevant cookies in the regression cookie jar or a cowboy at the origin roping coefficients that get too close
  • 7.  Homotopy arrow example ◦ Red and blue arrows can be deformed into each other by wiggling and stretching the line path with anchors at start and finish of line ◦ Yellow arrow crosses holes and would need to backtrack or break to the surface to freely wiggle into the blue or red line  Homotopy method in LASSO/LARS wiggles an easy regression path into an optimal regression path ◦ Avoids obstacles that can trap other regression estimators (peaks, valleys, saddles…)  Homotopy as path equivalence ◦ Intrinsic property of topological spaces (such as data manifolds)
  • 8.  Instead of fitting model to data, fit model to tangent space (what isn’t the data). ◦ Deals with collinearity, as parallel vectors share a tangent space (only one selected of collinear group). ◦ LASSO and LARS extensions. ◦ Rao scoring for selection.  Effect estimates (angles).  Model selection criteria.  Information criteria.  Deviance scoring.
  • 9.  Fit all possible models and figure out likelihood of each given observed data. ◦ Based on Bayes’ Theorem and conditional probability. ◦ Instead of giving likelihood that data came from a specific parameterized population (univariate), figure out likelihood of set of data coming from sets of population (multivariate). ◦ Can select naïve prior (no assumptions on model or population) or make an informed guess (assumptions about population or important factors in model). ◦ Combine multiple models according to their likelihoods into a blended model given data.
  • 11.  Extends a generalized linear models by estimating unknown (nonlinear) functions of an outcome on intervals of a predictor.  Use of “knots” to break function into intervals.  Similar to a downhill skier. ◦ Slope as a function of outcome. ◦ Skier as spline. ◦ Flags as knots anchoring the skier as he travels down the slope. www.dearsportsfan.com
  • 12.  Multivariate adaptive regression splines as an extension of spline models to multivariate data ◦ Knots placed according to multivariate structure and splines fit between knots ◦ Much like fixing a rope to the peaks and valleys of a mountain range, where the rope has enough slack to hug the terrain between its fixed points
  • 13.  Extends spline models to the relationships of many predictors to an outcome. ◦ Also allows for a “silly- putty” transformation of the outcome variable.  Like multiple skiers on many courses and mountains to estimate many relationships in the dataset. www.filigreeinn.com
  • 14.  Chop data into partitions and then fit multiple regression models to each partition.  Divide-and-conquer approach.  Examples: ◦ Multivariate adaptive regression splines ◦ Regression trees ◦ Morse-Smale regression www.pinterest.com
  • 15.  Based on a branch of math called topology. ◦ Study of changes in function behavior on shapes. ◦ Used to classify similarities/ differences between shapes. ◦ Data clouds turned into discrete shape combinations (simplices).  Use these principles to partition data and fit elastic net models to each piece. ◦ Break data into multiple toy sets. ◦ Analyze sets for underlying properties of each toy.  Useful visual output for additional data mining.
  • 16.  Linear or non-linear support beams used to separate group data for classification or map data for kernel- based regression.  Much like scaffolding and support beams separating and holding up parts of a high rise. http://en.wikipedia.org/wiki/Support_vector_machine leadertom.en.ec21.com
  • 17.  Based on processing complex, nonlinear information the way the human brain does via a series of feature mappings. colah.github.io www.alz.org Arrows denote mapping functions, which take one topological space to another
  • 18.  Type of shallow, wide neural network.  Reduces framework to a penalized linear algebra problem, rather than iterative training (much faster to solve).  Based on random mappings.  Shown to converge to correct classification/regression (universal approximation property—may require unreasonably wide networks).  Semi-supervised learning extensions.
  • 19.  Added layers in neural network to solve width problem in single-layer networks for universal approximation.  More effective in learning features of the data.  Like sifting data with multiple sifters to distill finer and finer pieces of the data.  Computationally intensive and requires architecture design and optimization.
  • 20.  Recent extension of deep learning framework from spreadsheet data to multiple simultaneous spreadsheets ◦ Spreadsheets may be of the same dimension or different dimensions ◦ Could process multiple or hierarchical networks via adjacency matrices  Like sifting through data with multiple inputs of varying sizes and textures
  • 22.  Classifies data according to optimal partitioning of data (visualized as a high- dimensional cube).  Slices data into squares, cubes, and hypercubes (4+ dimensional cubes). ◦ Like finding the best place to cut through the data with a sword.  Option to prune tree for better model (glue some of the pieces back together).  Notoriously unstable.  Optimization possible to arrive at best possible trees (genetic algorithms). tvtropes.org
  • 23.  Optimization technique. ◦ Minimize a loss function in regression.  Accomplished by choosing, at each step, the variable that achieves the largest reduction in the loss.  Climber trying to descend a mountain. ◦ Minimize height above sea level per step.  Directions (N, S, E, W) as a collection of variables.  Climber rappels down cliff according to these rules. www.chinatravelca.com
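A gradient descent sketch in numpy, minimizing squared-error loss for a linear model; the learning rate and step count are arbitrary. Each step moves "downhill" along the negative gradient, like the descending climber.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 5
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ true_beta + rng.normal(0, 0.5, n)

beta = np.zeros(p)                                  # start at the origin
lr = 0.01                                           # step size down the slope
for step in range(2000):
    grad = -2 * X.T @ (y - X @ beta) / n            # gradient of mean squared error
    beta -= lr * grad                               # take a downhill step

print("estimated coefficients:", np.round(beta, 2))
```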
  • 24.  Optimization helpers. ◦ Find global maximum or minimum by searching many areas simultaneously. ◦ Can impose restrictions.  Teams of mountain climbers trying to find the summit.  Restrictions as climber supplies, hours of daylight left to find summit…  Genetic algorithm as population of mountain climbers each climbing on his/her own. wallpapers.brothersoft.com
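A tiny genetic algorithm sketch in numpy: a "population of climbers" searches for the maximum of a bumpy one-dimensional function through selection, crossover, and mutation. The objective function and settings are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(5)

def fitness(x):
    # Multimodal "mountain range": global peak near x = 2.
    return np.sin(3 * x) + np.exp(-((x - 2) ** 2))

pop = rng.uniform(-5, 5, size=50)                   # initial climbers
for generation in range(100):
    scores = fitness(pop)
    # Tournament selection: keep the fitter of two randomly chosen climbers.
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where(scores[i] > scores[j], pop[i], pop[j])
    # Crossover: average random pairs of parents.
    mates = rng.permutation(parents)
    children = (parents + mates) / 2
    # Mutation: small random perturbations keep the search exploring.
    children += rng.normal(0, 0.1, size=len(children))
    pop = np.clip(children, -5, 5)

best = pop[np.argmax(fitness(pop))]
print("best solution found:", round(best, 3))
```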
  • 25.  One climber exploring multiple routes simultaneously.  Individual optimized through a series of gates at each update until superposition converges on one state ◦ Say wave 2 is the optimal combination of predictors ◦ Resultant combination slowly flattens to wave 2, with the states of wave 1 disappearing http://philschatz.com/physics-book/contents/m42249.html
  • 26.  Combine multiple models of the same type (ex. trees) for better prediction. ◦ Single models unstable.  Equally-good estimates from different models. ◦ Use bootstrapping.  Multiple “marble draws” followed by model creation. ◦ Creates diversity of features. ◦ Creates models with different biases and error.
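A bootstrap-aggregation (bagging) sketch with scikit-learn: many trees, each grown on a different bootstrap "marble draw" of the data and averaged into one prediction. BaggingRegressor bags decision trees by default; settings are illustrative.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

bagger = BaggingRegressor(n_estimators=100, random_state=0)  # 100 bootstrap models
bagger.fit(X_train, y_train)
print("test R^2:", bagger.score(X_test, y_test))
```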
  • 27.  Uses gradient descent algorithm to model an outcome based on predictors (linear, tree, spline…). ◦ Proceeds in steps, adding variables and adjusting weights according to the algorithm.  Like putting together a puzzle. ◦ Algorithm first focuses on most important parts of the picture (such as the Mona Lisa’s eyes). ◦ Then adds nuances that help identify other missing pieces (hands, landscape…). play.google.com
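A gradient boosting sketch in scikit-learn: trees are added in stages, each new tree correcting the residual errors of the current ensemble, like filling in the remaining puzzle pieces. Settings are illustrative, not tuned.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

booster = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                    max_depth=3, random_state=0)
booster.fit(X_train, y_train)
print("test R^2:", booster.score(X_test, y_test))
```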
  • 28.  Adds a penalty term to boosted regression ◦ Can be formulated as LASSO/ridge regression/elastic net with constraints ◦ Also leverages hardware to further speed up boosting ensemble play.google.com
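A penalized-boosting sketch, assuming the third-party xgboost package is installed; reg_alpha and reg_lambda add LASSO- and ridge-style penalties to the boosted trees. Parameter values are illustrative only.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBRegressor(n_estimators=300, learning_rate=0.05, max_depth=3,
                     reg_alpha=0.1,      # L1 (LASSO-style) penalty
                     reg_lambda=1.0,     # L2 (ridge-style) penalty
                     random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))
```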
  • 29.  Single tree models are poor predictors.  Grow a forest of them for a strong predictor (another ensemble method).  Algorithm: ◦ Takes random sample of data. ◦ Builds one tree. ◦ Repeats until forest grown. ◦ Averages across forest to identify strong predictors.
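A random forest sketch in scikit-learn: many trees grown on bootstrap samples with random feature subsets, averaged into a forest; the feature importances hint at the strong predictors. Settings are illustrative.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestRegressor(n_estimators=500, max_features="sqrt", random_state=0)
forest.fit(X_train, y_train)
print("test R^2:", forest.score(X_test, y_test))
print("feature importances:", forest.feature_importances_.round(2))
```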
  • 31.  Principal Component Analysis (PCA) ◦ Variance partitioning to combine pieces of variables into a new linear subspace. ◦ Smashing cookies by kind and combining into a flat hybrid cookie in previous regression model.  Manifold Learning ◦ PCA-like algorithms that combine pieces into a new nonlinear subspace. ◦ Non-flat cookie combining.  Useful as pre-processing step for prediction models. ◦ Reduce dimension. ◦ Obtain uncorrelated, non-overlapping variables (bases). marmaladeandmileposts.com
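A dimension-reduction sketch on synthetic data: PCA finds a flat (linear) subspace, while Isomap, one of several manifold-learning algorithms, finds a curved (nonlinear) one. Either reduced representation can feed a downstream prediction model; component and neighbor counts are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap

X, _ = make_classification(n_samples=600, n_features=20, random_state=0)

X_pca = PCA(n_components=2).fit_transform(X)                      # flat cookie combining
X_iso = Isomap(n_components=2, n_neighbors=10).fit_transform(X)   # non-flat combining
print(X_pca.shape, X_iso.shape)
```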
  • 32.  Balanced sampling for low-frequency predictors. ◦ Stratified samples (i.e. sample from bag of mostly white marbles and few red marbles with constraint that 1/5th of draws must be red marbles).  Dimension reduction/mapping pre-processing ◦ Principal component, manifold learning… ◦ Hybrid of neural network methods and tree models.
  • 33.  Aggregation of multiple types of models. ◦ Like a small town election. ◦ Different people have different views of the politics and care about different issues.  Different modeling methods capture different pieces of the data and cast votes based on those pieces. ◦ Leverage strengths, minimize weaknesses ◦ Diversity of methods to better explore underlying data geometry  Avoids multiple testing issues.
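A superlearner-style sketch using scikit-learn's stacking: several different base models "vote," and a meta-model learns how to weight their votes via cross-validated predictions. The base learners and settings here are illustrative choices, not the method's prescribed library.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingRegressor(
    estimators=[("forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                ("ridge", Ridge(alpha=1.0)),
                ("knn", KNeighborsRegressor(n_neighbors=10))],
    final_estimator=Ridge(alpha=1.0))               # meta-model pools the "votes"
stack.fit(X_train, y_train)
print("test R^2:", stack.score(X_test, y_test))
```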
  • 34.  Subsembles ◦ Partition data into training sets  Randomly selected or through partition algorithm ◦ Train a model on each data partition ◦ Combine into final weighted prediction model  This is similar to national elections. ◦ Each elector in the electoral college learns for whom his constituents voted. ◦ The final electoral college pools these individual votes. Full Training Dataset Model Training Datasets 1, 2, 3, 4 Final Subsemble Model
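A rough subsemble sketch: split the rows into disjoint partitions, train one model per partition, and pool their predictions, like electors pooling local votes. A real subsemble learns the combination weights by cross-validation; a simple average stands in here, and the data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=1200, n_features=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

k = 4
partitions = np.array_split(np.random.default_rng(0).permutation(len(X_train)), k)
models = []
for idx in partitions:                              # one learner per data partition
    m = DecisionTreeRegressor(max_depth=6, random_state=0)
    m.fit(X_train[idx], y_train[idx])
    models.append(m)

y_hat = np.mean([m.predict(X_test) for m in models], axis=0)  # pooled prediction
print("first pooled predictions:", np.round(y_hat[:3], 1))
```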
  • 35.  Combines superlearning and subsembles for better prediction ◦ Diversity improves subsemble method (better able to explore data geometry) ◦ Bootstrapping improves superlearner pieces (more diversity within each method) ◦ Preliminary empirical evidence shows efficacy of combination.
  • 37.  Classification of a data point based on the classification of its nearest neighboring points.  Like a junior high lunchroom and student cliques. ◦ Students at the same table tend to be more similar to each other than to a distant table. www.wrestlecrap.com
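A k-nearest neighbors sketch on synthetic data: each new point is labeled by a vote of its k closest "lunch-table" neighbors. The value of k is an illustrative choice.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=7)           # 7 nearest "tablemates" vote
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```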
  • 38.  Iteratively separating groups within a dataset based on similarity.  Like untangling tangled balls of yarn into multiple, single-colored balls (ideally). ◦ Each step untangles the mess a little further. ◦ Few stopping guidelines (when it looks separated). krazymommakreations.blogspot.com
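A k-means sketch on synthetic blobs: points are iteratively reassigned to the nearest centroid and centroids move to the mean of their points, untangling the data into k groups. The number of clusters is an illustrative choice.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=3, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)                          # cluster assignment per point
print("cluster sizes:", [int((labels == c).sum()) for c in range(3)])
```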
  • 39.  Hybrid of supervised and unsupervised learning (groupings and prediction).  Uses graphical representation of data and its relationships.  Algorithms with connections to topology, differential geometry, and Markov chains.  Useful in combination with other machine learning methods to provide extra insight (ex. spectral clustering).
  • 40.  K-means algorithm with weighting and dimension reduction components of similarity measure. ◦ Simplify balls of string to warm colors and cool colors before untangling.  Can be reformulated as a graph clustering problem. ◦ Partition subcomponents of a graph based on flow equations. www.simplepastimes.com
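A spectral clustering sketch: build a nearest-neighbor graph over the points and partition the graph based on its spectrum (flow structure) rather than raw distances, which handles non-convex shapes like the two moons below. Settings are illustrative.

```python
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=400, noise=0.05, random_state=0)   # non-convex clusters

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, random_state=0)
labels = sc.fit_predict(X)
print("cluster sizes:", [int((labels == c).sum()) for c in range(2)])
```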
  • 41.  Multivariate technique similar to mode or density clustering. ◦ Find peaks and valleys in data according to an input function on the data (level set slices), much like a watershed on mountains. ◦ Separate data based on shared peaks and valleys across slices (shared multivariate density/gradient). ◦ Many nice theoretical developments on validity and convergence.
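A mode-seeking sketch using mean shift as a stand-in for density/mode clustering (not the exact level-set construction described above): points climb the estimated density surface and are grouped by the peak they reach, much like watersheds on a mountain range. Bandwidth and data are illustrative.

```python
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=0)

bandwidth = estimate_bandwidth(X, quantile=0.2, random_state=0)  # kernel width
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)                           # one label per density mode
print("modes found:", len(set(labels)))
```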
  • 42.  Topological clustering. ◦ Define distance metric. ◦ Slice multidimensional dataset with Morse function. ◦ Examine function behavior across slice. ◦ Cluster function behavior. ◦ Iterate through multiple slices to obtain hierarchy of function behavior.  Much like examining the behavior of multiple objects across a flip book. ◦ Nesting ◦ Cluster overlap  Filtered functions are then used to create various resolutions of a modified Reeb graph summary of the topology.
  • 44.  Similar to decomposing superposed states ◦ Seasonal trends ◦ Yearly trends ◦ Trend averages ◦ Dependencies on previous time point  Knit individual forecasted pieces into a complete forecast by superposing these individual forecasts  Several extensions to neural networks, time- lagged machine learning models…
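A classical time-series decomposition sketch with statsmodels: split a synthetic monthly series into trend, seasonal, and residual pieces that can be modeled (and later recombined) separately. The series and period are made up for the example.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(6)
t = np.arange(120)                                   # ten years of monthly data
values = 0.3 * t + 5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, len(t))
series = pd.Series(values,
                   index=pd.date_range("2010-01-01", periods=len(t), freq="MS"))

parts = seasonal_decompose(series, model="additive", period=12)
print(parts.trend.dropna().head())                   # yearly trend component
print(parts.seasonal.head())                         # seasonal component
```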
  • 45.  A time-series method incorporating predictors ◦ Constant predictors at initial time point ◦ Varying predictors at multiple time points  Creates a sort of correlation web between predictors and time points ◦ Can handle multiple time lags and multivariate outcomes ◦ Can handle any GLM outcome links  Related to partial differential equations of dynamic systems
  • 46.  Data-based mining for SEM relationships/time-lag components ◦ Leverages conditional probability between predictors to find dependencies ◦ Does not require a priori model formulation like SEM  Peeking at data to create a dependency web over time or predictors/outcome  Can be validated by a follow-up SEM based on network structure Time 1 Outcome Predictor 2 Time 2 Outcome Time 3 Outcome
  • 47.  Technically: ◦ Matrix decomposition (similar to PCA/manifold learning) ◦ Followed by spectral methods ◦ Cleaning of time-lagged covariance matrix ◦ Reconstruction with simple forecast  Kind of like deconstructing, cleaning, and rebuilding a car engine
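A bare-bones singular spectrum analysis sketch in numpy: embed the series into a trajectory (Hankel) matrix, decompose it with an SVD, and rebuild a smoothed series from the leading components by anti-diagonal averaging. Component grouping and forecasting are omitted, and the series is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
n, L = 200, 40                                       # series length, window length
t = np.arange(n)
series = np.sin(2 * np.pi * t / 25) + 0.05 * t + rng.normal(0, 0.3, n)

K = n - L + 1
X = np.column_stack([series[i:i + L] for i in range(K)])   # trajectory matrix (L x K)
U, s, Vt = np.linalg.svd(X, full_matrices=False)            # "deconstruct the engine"

r = 3                                                # keep the leading components
X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]                 # cleaned, low-rank reconstruction

# Anti-diagonal (Hankel) averaging back to a one-dimensional series.
recon = np.zeros(n)
counts = np.zeros(n)
for j in range(K):
    recon[j:j + L] += X_r[:, j]
    counts[j:j + L] += 1
recon /= counts                                      # smoothed trend-plus-cycle estimate
```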
  • 48.  Combines k-nearest neighbors-based clustering with memoryless state changes converging to a transition distribution (weighted directed graph) ◦ Reduce an observation to a pattern ◦ Remember patterns seen (across time or space) ◦ Match new observations to this set of patterns ◦ Computationally more feasible than k-means clustering  Like remembering classes of chess board configurations across games played
  • 50.  Many machine learning methods exist today, and many more are being developed every day.  Methods come with certain assumptions that must be met. ◦ Breaking assumptions can lead to poor model fit or incorrect inference. ◦ Matching a method to a problem not only can help with better prediction and inference; it can also lead to faster computation times and better presentations to clients.  Development depends on problem-matching and deep understanding of the mathematics behind the methods used.
  • 52.  Parametric Regression: ◦ Draper, N. R., Smith, H., & Pownell, E. (1966). Applied regression analysis (Vol. 3). New York: Wiley. ◦ McCullagh, P. (1984). Generalized linear models. European Journal of Operational Research, 16(3), 285-292. ◦ Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320. ◦ Augugliaro, L., Mineo, A. M., & Wit, E. C. (2013). Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3), 471-498. ◦ Raftery, A. E., Madigan, D., & Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. Journal of the American Statistical Association, 92(437), 179-191. ◦ Osborne, M. R., & Turlach, B. A. (2011). A homotopy algorithm for the quantile regression lasso and related piecewise linear problems. Journal of Computational and Graphical Statistics, 20(4), 972-987. ◦ Drori, I., & Donoho, D. L. (2006, May). Solution of l 1 minimization problems by LARS/homotopy methods. In Acoustics, Speech and Signal Processing, 2006. ICASSP 2006 Proceedings. 2006 IEEE International Conference on (Vol. 3, pp. III-III). IEEE.
  • 53.  Semi-Parametric Regression: ◦ Marsh, L. C., & Cormier, D. R. (2001). Spline regression models (Vol. 137). Sage. ◦ Hastie, T., & Tibshirani, R. (1986). Generalized additive models. Statistical Science, 297-310. ◦ McGee, V. E., & Carleton, W. T. (1970). Piecewise regression. Journal of the American Statistical Association, 65(331), 1109-1124. ◦ Gerber, S., Rübel, O., Bremer, P. T., Pascucci, V., & Whitaker, R. T. (2013). Morse–Smale Regression. Journal of Computational and Graphical Statistics, 22(1), 193-214. ◦ Hearst, M. A., Dumais, S. T., Osman, E., Platt, J., & Scholkopf, B. (1998). Support vector machines. Intelligent Systems and their Applications, IEEE, 13(4), 18-28. ◦ Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359-366. ◦ Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). ◦ Huang, G. B., Wang, D. H., & Lan, Y. (2011). Extreme learning machines: a survey. International Journal of Machine Learning and Cybernetics, 2(2), 107-122. ◦ Friedman, J. H. (1991). Multivariate adaptive regression splines. The Annals of Statistics, 1-67. ◦ Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat, S. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
  • 54.  Nonparametric Regression: ◦ Buntine, W. (1992). Learning classification trees. Statistics and Computing, 2(2), 63-73. ◦ Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4), 367-378. ◦ Bäck, T. (1996). Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford University Press. ◦ Zhang, G. (2011). Quantum-inspired evolutionary algorithms: a survey and empirical study. Journal of Heuristics, 17(3), 303-351. ◦ Dietterich, T. G. (2000). Ensemble methods in machine learning. In Multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg. ◦ Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5-32. ◦ Chen, T., & Guestrin, C. (2016, August). XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 785-794). ACM.
  • 55.  Supervised with Unsupervised Methods: ◦ van der Maaten, L. J., Postma, E. O., & van den Herik, H. J. (2009). Dimensionality reduction: A comparative review. Journal of Machine Learning Research, 10(1-41), 66-71. ◦ Kuncheva, L. I., & Rodríguez, J. J. (2007). An experimental study on rotation forest ensembles. In Multiple Classifier Systems (pp. 459-468). Springer Berlin Heidelberg. ◦ van der Laan, M. J., Polley, E. C., & Hubbard, A. E. (2007). Super learner. Statistical applications in genetics and molecular biology, 6(1). ◦ Sapp, S. K. (2014). Subsemble: A Flexible Subset Ensemble Prediction Method (Doctoral dissertation, University of California, Berkeley).
  • 56.  Unsupervised Methods: ◦ Fukunaga, K., & Narendra, P. M. (1975). A branch and bound algorithm for computing k-nearest neighbors. Computers, IEEE Transactions on, 100(7), 750-753. ◦ MacQueen, J. (1967, June). Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability (Vol. 1, No. 14, pp. 281-297). ◦ Chartrand, G., & Oellermann, O. R. (1993). Applied and algorithmic graph theory. ◦ Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2, 849-856. ◦ Singh, G., Mémoli, F., & Carlsson, G. E. (2007, September). Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. In SPBG (pp. 91- 100). ◦ Chen, Y. C., Genovese, C. R., & Wasserman, L. (2016). A comprehensive approach to mode clustering. Electronic Journal of Statistics, 10(1), 210-241.
  • 57.  Time Series ◦ Wu, J. P., & Wei, S. (1989). Time series analysis. Hunan Science and Technology Press, ChangSha. ◦ Vautard, R., Yiou, P., & Ghil, M. (1992). Singular-spectrum analysis: A toolkit for short, noisy chaotic signals. Physica D: Nonlinear Phenomena, 58(1), 95-126. ◦ Dunham, M. H., Meng, Y., & Huang, J. (2004, November). Extensible markov model. In Data Mining, 2004. ICDM'04. Fourth IEEE International Conference on (pp. 371-374). IEEE. ◦ Ullman, J. B., & Bentler, P. M. (2003). Structural equation modeling. John Wiley & Sons, Inc.. ◦ Heckerman, D., Geiger, D., & Chickering, D. M. (1994, July). Learning Bayesian networks: The combination of knowledge and statistical data. In Proceedings of the Tenth international conference on Uncertainty in artificial intelligence (pp. 293-301). Morgan Kaufmann Publishers Inc..

Editor's Notes

  1. Assumes predictors and observations are independent. Assumes errors are normally distributed and independent. Assumes linear relationship between predictors and outcome. Assumes normally-distributed outcome. Draper, N. R., Smith, H., & Pownell, E. (1966). Applied regression analysis (Vol. 3). New York: Wiley.
  2. Same assumptions as multiple regression, minus outcome’s normal distribution (link function extends to non-normal distributions). McCullagh, P. (1984). Generalized linear models. European Journal of Operational Research, 16(3), 285-292.
  3. Relaxes predictor independence requirement and adds penalty term. Adds a penalty to reduce generalized linear model’s model size. Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2), 301-320.
  4. Exists for several types of models, including survival, binomial, and Poisson regression models Augugliaro, L., Mineo, A. M., & Wit, E. C. (2013). Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(3), 471-498. Augugliaro, L., & Mineo, A. M. (2015). Using the dglars Package to Estimate a Sparse Generalized Linear Model. In Advances in Statistical Models for Data Analysis (pp. 1-8). Springer International Publishing.
  5. Joint model likelihood given multivariate data vs. parameter likelihood given univariate data Raftery, A. E., Madigan, D., & Hoeting, J. A. (1997). Bayesian model averaging for linear regression models. Journal of the American Statistical Association, 92(437), 179-191. Raftery, A. E., Gneiting, T., Balabdaoui, F., & Polakowski, M. (2005). Using Bayesian model averaging to calibrate forecast ensembles. Monthly Weather Review, 133(5), 1155-1174. Madigan, D., Raftery, A. E., Volinsky, C., & Hoeting, J. (1996). Bayesian model averaging. In Proceedings of the AAAI Workshop on Integrating Multiple Learned Models, Portland, OR (pp. 77-83).
  6. Relaxes linear relationship of predictors to outcome requirement. Marsh, L. C., & Cormier, D. R. (2001). Spline regression models (Vol. 137). Sage.
  7. Generally flexible and able to handle high-dimensional data compared to other spline methods. Friedman, J. H. (1991). Multivariate adaptive regression splines. The annals of statistics, 1-67.
  8. Relaxes linear relationship of predictors to outcome requirement and outcome normality assumption. Hastie, T., & Tibshirani, R. (1986). Generalized additive models. Statistical science, 297-310.
  9. Regression by partition. McGee, V. E., & Carleton, W. T. (1970). Piecewise regression. Journal of the American Statistical Association, 65(331), 1109-1124.
  10. Regression by data partitions defined by multivariate function behavior. Gerber, S., Rübel, O., Bremer, P. T., Pascucci, V., & Whitaker, R. T. (2013). Morse–Smale Regression. Journal of Computational and Graphical Statistics, 22(1), 193-214.
  11. Classification tool by combined data splits. Singularity issues. Computationally taxing, particularly as number of groups increases. Can be used in semi-parametric regression as nonlinear mapping of regression kernels. Hearst, M. A., Dumais, S. T., Osman, E., Platt, J., & Scholkopf, B. (1998). Support vector machines. Intelligent Systems and their Applications, IEEE, 13(4), 18-28.
  12. Computationally expensive in traditional algorithms and rooted in topological maps. Cannot handle lots of variables compared to number of observations. Cannot handle non-independent data. Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural networks, 2(5), 359-366.
  13. Random mappings to reduce MLP to linear system of equations. Huang, G. B., Wang, D. H., & Lan, Y. (2011). Extreme learning machines: a survey. International Journal of Machine Learning and Cybernetics, 2(2), 107-122.
  14. Computationally expensive neural network extension. Still suffers from singularities which hinder performance. Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
  15. More flexible than deep learning architectures (different types of simultaneous input data—graphs, spreadsheets, images…). Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat, S. (2016). Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.
  16. Does not assume anything about data and regression parameters. Can be unstable (many optimal solutions). Can be used to classify or regress. Each additional cut adds another piece to an interaction term. Buntine, W. (1992). Learning classification trees. Statistics and computing, 2(2), 63-73.
  17. A type of optimization that works by finding the quickest route to a solution. Can be very accurate, but takes an inordinate amount of time for some problems (big data, neural network training…). Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4), 367-378.
  18. Optimization technique. Computationally expensive but guaranteed to converge. Bäck, T. (1996). Evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms. Oxford university press.
  19. Generally much faster convergence than genetic algorithms. Zhang, G. (2011). Quantum-inspired evolutionary algorithms: a survey and empirical study. Journal of Heuristics, 17(3), 303-351.
  20. Exploit weak learners. Dietterich, T. G. (2000). Ensemble methods in machine learning. In Multiple classifier systems (pp. 1-15). Springer Berlin Heidelberg.
  21. Useful when the previous types of regression fail, when the data is “noisy” with little known about which predictors are important, and when outliers are likely to exist in the data (piece-wise modeling method of sorts). Friedman, J. H. (2002). Stochastic gradient boosting. Computational Statistics & Data Analysis, 38(4), 367-378.
  22. Speeds up boosting but requires extensive tuning and does not give an interpretable model (only prediction). Has been used a lot in recent Kaggle competitions and on particle physics datasets. Chen, T., & Guestrin, C. (2016, August). Xgboost: A scalable tree boosting system. In Proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining (pp. 785-794). ACM.
  23. Useful in modeling complex, nonlinear regression/classification problems. Also distance-based metric. Handles noisy data well. Breiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.
  24. Variance combined. van der Maaten, L. J., Postma, E. O., & van den Herik, H. J. (2009). Dimensionality reduction: A comparative review. Journal of Machine Learning Research, 10(1-41), 66-71.
  25. Deep connections to sampling theory and topology. Kuncheva, L. I., & Rodríguez, J. J. (2007). An experimental study on rotation forest ensembles. In Multiple Classifier Systems (pp. 459-468). Springer Berlin Heidelberg.
  26. Bagging of different base models (same bootstrap or different bootstrap). van der Laan, M. J., Polley, E. C., & Hubbard, A. E. (2007). Super learner. Statistical applications in genetics and molecular biology, 6(1).
  27. Good prediction with ability to parallelize for distributed computing. Tends to be faster than superlearners (smaller training samples). Sapp, S. K. (2014). Subsemble: A Flexible Subset Ensemble Prediction Method (Doctoral dissertation, University of California, Berkeley).
  28. New formulation by author (Colleen M. Farrelly). Requires a minimum sample size of ~30 per bootstrap within method.
  29. Local classification. Forms basis of many other machine learning algorithms (manifold learning, clustering, certain Markov chains…). Fukunaga, K., & Narendra, P. M. (1975). A branch and bound algorithm for computing k-nearest neighbors. Computers, IEEE Transactions on, 100(7), 750-753.
  30. Creates groupings of like objects in the dataset based on different linear/nonlinear metrics. Assumes groups are separable (no overlap). Issues with computation as number of observations increases. MacQueen, J. (1967, June). Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability (Vol. 1, No. 14, pp. 281-297).
  31. Data by pictures. Chartrand, G., & Oellermann, O. R. (1993). Applied and algorithmic graph theory.
  32. Weighted kernel k-means (weighted subcomponent clustering). Ng, A. Y., Jordan, M. I., & Weiss, Y. (2002). On spectral clustering: Analysis and an algorithm. Advances in neural information processing systems, 2, 849-856.
  33. Related to density clustering but relies on Morse critical points and filter functions to define complexes (clusters) Gerber, S., Bremer, P. T., Pascucci, V., & Whitaker, R. (2010). Visual exploration of high dimensional scalar functions. IEEE Transactions on Visualization and Computer Graphics, 16(6), 1271-1280. Gerber, S., & Potter, K. (2011). Morse-Smale approximation, regression and visualization. Chen, Y. C., Genovese, C. R., & Wasserman, L. (2016). A comprehensive approach to mode clustering. Electronic Journal of Statistics, 10(1), 210-241.
  34. Clustering based on multiple levels of function behavior—recent Nature paper and analytics company. Singh, G., Mémoli, F., & Carlsson, G. E. (2007, September). Topological Methods for the Analysis of High Dimensional Data Sets and 3D Object Recognition. In SPBG (pp. 91-100).
  35. Typical method, requires stationarity assumptions, constant error over time, and no predictors/confounders. Wu, J. P., & Wei, S. (1989). Time series analysis. Hunan Science and Technology Press, ChangSha.
  36. Stationarity and error assumptions; assumptions about outcome distribution (must be GLM in current software). Ullman, J. B., & Bentler, P. M. (2003). Structural equation modeling. John Wiley & Sons, Inc..
  37. Some restrictions on predictor/outcome probability distributions (binomial/multinomial). Heckerman, D., Geiger, D., & Chickering, D. M. (1994, July). Learning Bayesian networks: The combination of knowledge and statistical data. In Proceedings of the Tenth international conference on Uncertainty in artificial intelligence (pp. 293-301). Morgan Kaufmann Publishers Inc..
  38. Can handle multivariate time series. Based on nonlinear dynamics decomposition/spectral analysis methods. Vautard, R., Yiou, P., & Ghil, M. (1992). Singular-spectrum analysis: A toolkit for short, noisy chaotic signals. Physica D: Nonlinear Phenomena, 58(1), 95-126.
  39. Clustering memories. Dunham, M. H., Meng, Y., & Huang, J. (2004, November). Extensible markov model. In Data Mining, 2004. ICDM'04. Fourth IEEE International Conference on (pp. 371-374). IEEE.