9. $ spark-shell
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.0-SNAPSHOT
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_67)
Type in expressions to have them evaluated.
Type :help for more information.
15/04/15 11:31:28 INFO SparkILoop: Created spark context..
Spark context available as sc.

scala> val a = sc.parallelize(1 to 100)
a: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[0] at parallelize at <console>:12

scala> a.collect.foreach(x => println(x))
1
2
3
4
14. IPYTHON/JUPYTER
IPython will continue to exist as a Python kernel for Jupyter, but
the notebook and other language-agnostic parts of IPython will
move to new projects under the Jupyter name. IPython 3.0 will
be the last monolithic release of IPython.
“IPython” http://ipython.org/
• interactive shell
• browser-based notebook
• kernels (pluggable language back ends)
• great support for visualization libraries (e.g. matplotlib)
• built on pyzmq, tornado
15. IPYTHON NOTEBOOK
NOTEBOOK == BROWSER-BASED REPL
IPython Notebook is a web-based interactive
computational environment for creating IPython
notebooks. An IPython notebook is a JSON
document containing an ordered list of input/output
cells which can contain code, text, mathematics,
plots and rich media.
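For concreteness, here is a minimal sketch of what that JSON document looks like on disk (nbformat 4, the format introduced with IPython 3.0; fields abbreviated, cell contents are placeholders):

```json
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {},
  "cells": [
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {},
      "source": ["print(1 + 1)"],
      "outputs": []
    }
  ]
}
```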
16. MATPLOTLIB
matplotlib tries to make easy things easy and hard things
possible. You can generate plots, histograms, power
spectra, bar charts, error charts, scatter plots, etc., with just a
few lines of code, using a familiar MATLAB-style API.

plt.barh(y_pos, performance, xerr=error, align='center', alpha=0.4)
plt.yticks(y_pos, people)
plt.xlabel('Performance')
plt.title('How fast do you want to go today?')
plt.show()
17. PYSPARK
• Spark on Python; it serves as the kernel,
integrating with IPython
• Each notebook spins up a new instance of the
kernel (i.e. PySpark running as the Spark Driver, in
any of the deploy modes Spark/PySpark supports)
23. WORD2VEC EXAMPLE
Word2Vec computes distributed vector
representations of words. Distributed vector
representations have been shown to be useful in many
natural language processing applications such as
named entity recognition, disambiguation, parsing,
tagging and machine translation.
https://code.google.com/p/word2vec/
Spark MLlib implements the Skip-gram approach.
With Skip-gram we want to predict a window of
words given a single word.
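The Skip-gram setup above can be illustrated with a toy training-pair generator, written in plain Python (this is just the windowing idea, not MLlib's implementation; the window size is an arbitrary choice):

```python
# Generate (center, context) training pairs for Skip-gram:
# for each word, the model is asked to predict the words
# inside a window around it.
def skipgram_pairs(tokens, window=2):
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"], window=1))
# [('the', 'quick'), ('quick', 'the'), ('quick', 'brown'),
#  ('brown', 'quick'), ('brown', 'fox'), ('fox', 'brown')]
```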
24. WORD2VEC DATASET
Wikipedia dump http://mattmahoney.net/dc/textdata
grep -o -E '\w+(\W+\w+){0,15}' text8 > text8_lines
then randomly sampled to ~200k lines
28. SETUP
To set up IPython:
• Python 2.7.9 (separate from CentOS default 2.6.6), on all
nodes
• matplotlib, on the host running IPython
To run IPython with the PySpark Kernel, set these in the environment
(Please check out my handy script on github)
• PYSPARK_PYTHON: command to run Python, e.g. "python2.7"
• PYSPARK_DRIVER_PYTHON: command to run IPython
• PYSPARK_DRIVER_PYTHON_OPTS: IPython options, e.g. "notebook --profile"
• PYSPARK_SUBMIT_ARGS: pyspark command-line arguments, e.g. --master, --deploy-mode
• YARN_CONF_DIR: if running in YARN mode
• LD_LIBRARY_PATH: for matplotlib
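Put together, the environment might look like the following (all values here are illustrative placeholders; the profile name, master, and paths must be adjusted for your cluster):

```shell
# Illustrative environment for launching IPython as the PySpark driver.
# Profile name, master, and paths below are placeholders.
export PYSPARK_PYTHON=python2.7
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --profile=pyspark"
export PYSPARK_SUBMIT_ARGS="--master yarn --deploy-mode client"
export YARN_CONF_DIR=/etc/hadoop/conf
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
```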
29. IPYTHON/JUPYTER KERNELS
• IPython
• IGo
• Bash
• IR
• IHaskell
• IMatlab
• ICSharp
• IScala
• IRuby
• IJulia
.. and more: https://github.com/ipython/ipython/wiki/IPython-kernels-for-other-languages
31. Apache Zeppelin (incubating) is an interactive data analytics environment
for distributed data processing systems. It provides a beautiful interactive
web-based interface, data visualization, a collaborative work
environment and many other nice features to make your data analytics
more fun and enjoyable.
Zeppelin has been incubating since Dec 2014.
https://zeppelin.incubator.apache.org/
38. CLUSTERING
• Clustering tries to find natural groupings in
data: it puts objects into groups such that
those within a group are more similar to each
other than to those in other groups.
• Unsupervised learning
39. K-MEANS
• First, given an initial set of k cluster centers,
we find which cluster each data point is
closest to
• Then, we compute the average of each of the
new clusters and use the result to update our
cluster centers
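Those two steps (assign each point to its nearest center, then recompute centers as cluster means) can be sketched in a few lines of plain Python. This is a toy 1-D Lloyd's iteration for intuition, not MLlib code:

```python
# One k-means (Lloyd's) iteration on 1-D points:
# 1) assign each point to its closest center,
# 2) recompute each center as the mean of its assigned points.
def kmeans_step(points, centers):
    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
        clusters[nearest].append(p)
    # keep a center unchanged if its cluster came up empty
    return [sum(c) / len(c) if c else centers[i]
            for i, c in enumerate(clusters)]

centers = [0.0, 10.0]
for _ in range(5):  # repeat until centers stop moving
    centers = kmeans_step([1, 2, 3, 11, 12, 13], centers)
print(centers)  # [2.0, 12.0]
```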
40.
41. K-MEANS|| IN MLLIB
• a parallelized variant of k-means++ initialization
http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf
Parameters:
• k is the number of desired clusters.
• maxIterations is the maximum number of iterations to run.
• initializationMode specifies either random initialization or initialization via
k-means||.
• runs is the number of times to run the k-means algorithm (k-means is not
guaranteed to find a globally optimal solution, and when run multiple
times on a given dataset, the algorithm returns the best clustering result).
• initializationSteps determines the number of steps in the k-means||
algorithm.
• epsilon determines the distance threshold within which we consider k-means to have converged.
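For intuition, here is the sequential k-means++ seeding idea that k-means|| parallelizes: each new center is sampled with probability proportional to its squared distance from the centers chosen so far. A toy 1-D sketch in plain Python, not the MLlib implementation:

```python
import random

# Sequential k-means++ seeding (toy, 1-D). k-means|| achieves a
# similar spread of initial centers, but samples many candidates
# per pass so the work parallelizes across a cluster.
def kmeanspp_init(points, k, seed=42):
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance of each point to its nearest chosen center
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        centers.append(rng.choices(points, weights=d2)[0])
    return centers

print(kmeanspp_init([1.0, 1.1, 1.2, 9.0, 9.1, 9.2], k=2))
```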
43. Details on github at: http://bit.ly/1JWOPh8
ANOMALY DETECTION WITH K-MEANS
Using the Spark DataFrame API with a CSV data source to process KDD Cup '99 data
Scoring with different k values
51. STREAMING K-MEANS
• key concept: forgetfulness
• balances the relative importance of new
data versus past history
• half-life
• time it takes before past data contributes to
only one half of the current model
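The documented streaming k-means update expresses forgetfulness as a decay factor: a half-life of hl time units corresponds to decay a = 0.5^(1/hl), and each center becomes a weighted mean of its old value and the new batch. A plain-Python sketch of that update rule (assumed from the documentation, not MLlib code):

```python
# Forgetful update of one cluster center.
#   c, n : current center and its accumulated point weight
#   x, m : mean of the new batch's points and their count
#   hl   : half-life in batches (old data halves in weight every hl batches)
def update_center(c, n, x, m, hl):
    a = 0.5 ** (1.0 / hl)                      # per-batch decay factor
    c_new = (c * n * a + x * m) / (n * a + m)  # weighted mean, old history decayed
    return c_new, n * a + m

# old center 0.0 with weight 100; new batch of 100 points with mean 10.0
c, n = update_center(0.0, 100, 10.0, 100, hl=1.0)
print(round(c, 3))  # 6.667 -- new data dominates when the half-life is short
```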
52. STREAMING K-MEANS
• time unit
• batches (which have a fixed duration in
time), or points
• eliminate dying clusters
55. • Lightning - data visualization server
http://lightning-viz.org
• provides API-based access to reproducible, web-based, interactive visualizations. It includes a core set
of visualization types, but is built for extendability
and customization. Lightning supports modern
libraries like d3.js and three.js, and is designed for
interactivity over large data sets and continuously
updating data streams.
VISUALIZING STREAMING K-MEANS ON IPYTHON + LIGHTNING
57. The Freeman Lab at Janelia Research Campus uses Lightning to visualize
large-scale neural recordings from zebrafish, in collaboration with the
Ahrens Lab
58. SPARK STREAMING K-MEANS
DEMO
Environment
• requires: numpy, scipy, scikit-learn
• IPython/Python requires: lightning-python package
Demo consists of 3 parts:
https://github.com/felixcheung/spark-ml-streaming
• Python driver script, data generator
• Scala job - Spark Streaming & Streaming k-means
• IPython notebook to process result, visualize with Lightning
Originally this was part of the Python driver script - it has
been modified for this talk to run within IPython
62. YOU CAN RUN THIS TOO!
• Notebooks available at http://bit.ly/1JWOPh8
• Everything is heavily scripted and automated
Vagrant config for local, virtual environment
available at http://bit.ly/1DB3OLw