Spark Architecture
A. Grishchenko
About me
Enterprise Architect @ Pivotal
 7 years in data processing
 5 years with MPP
 4 years with Hadoop
 Spark contributor
 http://0x0fff.com
Outline
 Spark Motivation
 Spark Pillars
 Spark Architecture
 Spark Shuffle
 Spark DataFrame
Spark Motivation
 Difficulty of programming directly in Hadoop MapReduce
 Performance bottlenecks, or batch processing not fitting the use case
 Better support for iterative jobs, typical for machine learning
Difficulty of Programming in MR
Word Count implementations
 Hadoop MR – 61 lines in Java
 Spark – 1 line in interactive shell
(sc.textFile('...')
   .flatMap(lambda x: x.split())
   .map(lambda x: (x, 1))
   .reduceByKey(lambda x, y: x + y)
   .saveAsTextFile('...'))
Performance Bottlenecks
How many times is the data put to the HDD during a single
MapReduce job?
 One
 Two
 Three
 More
Performance Bottlenecks
Consider Hive as the main SQL tool
 A typical Hive query is translated into 3-5 MR jobs
 Each MR job puts data to the HDD 3+ times
 Each put to the HDD is a write followed by a read
 This sums up to 18-30 scans of the data during a single
Hive query (3-5 jobs × 3 puts × write+read)
Performance Bottlenecks
Spark offers you
 Lazy computations
– Optimize the job before executing it
 In-memory data caching
– Scan the HDD only once, then scan your RAM
 Efficient pipelining
– Avoids putting data on the HDD whenever possible
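A minimal PySpark sketch of the first two points (the path and filter predicate are illustrative assumptions; a running SparkContext sc is assumed):

lines  = sc.textFile('hdfs:///data/events')         # nothing is read yet - transformations are lazy
errors = lines.filter(lambda l: 'ERROR' in l)       # still only metadata

errors.cache()                                      # mark the RDD for in-memory caching

print(errors.count())                               # first action: scans the HDD once and fills the cache
print(errors.filter(lambda l: 'disk' in l).count()) # second action: served from RAM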
Spark Pillars
Two main abstractions of Spark
 RDD – Resilient Distributed Dataset
 DAG – Directed Acyclic Graph
RDD
 Simple view
– RDD is a collection of data items split into partitions and
stored in memory on worker nodes of the cluster
 Complex view
– RDD is an interface for data transformation
– RDD refers to data stored either in a persistent store
(HDFS, Cassandra, HBase, etc.), in cache (memory,
memory+disk, disk only, etc.), or in another RDD
RDD
 Complex view (cont’d)
– Partitions are recomputed on failure or cache eviction
– Metadata stored for the interface
▪ Partitions – set of data splits associated with this RDD
▪ Dependencies – list of parent RDDs involved in the computation
▪ Compute – function to compute a partition of the RDD given the
parent partitions from the Dependencies
▪ Preferred Locations – where is the best place to run the
computation on this partition (data locality)
▪ Partitioner – how the data is split into partitions
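Some of this metadata can be inspected from PySpark; a small sketch assuming a SparkContext sc:

pairs = sc.parallelize(range(100)).map(lambda x: (x % 10, x)).reduceByKey(lambda a, b: a + b)

print(pairs.getNumPartitions())   # Partitions: how many data splits this RDD has
print(pairs.partitioner)          # Partitioner: how keys map to partitions (set by reduceByKey)
print(pairs.toDebugString())      # Dependencies: the chain of parent RDDs (lineage)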
RDD
 RDD is the main and only tool for data
manipulation in Spark
 Two classes of operations
– Transformations
– Actions
RDD
Lazy computation model
 Transformations cause only metadata changes
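A two-line sketch of the laziness (assuming a SparkContext sc):

rdd = sc.parallelize(range(10 ** 6))
mapped = rdd.map(lambda x: x * 2)   # transformation: returns instantly, only the lineage is recorded
print(mapped.sum())                 # action: triggers the actual computation on the executors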
DAG
Directed Acyclic Graph – sequence of computations
performed on data
 Node – RDD partition
 Edge – transformation on top of the data
 Acyclic – the graph cannot return to an older partition
 Directed – a transformation is an action that transitions a
data partition from one state to another (from A to B)
DAG
WordCount example (diagram): HDFS input splits feed the RDD partitions;
the chain of RDDs is sc.textFile(‘hdfs://…’) → flatMap → map →
reduceByKey → foreach
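A runnable version of this pipeline, with an illustrative input path (a SparkContext sc is assumed):

counts = (sc.textFile('hdfs:///tmp/input.txt')   # assumption: example path
            .flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

print(counts.toDebugString())   # prints the lineage - the DAG of RDDs behind 'counts'
counts.count()                  # any action (foreach, saveAsTextFile, count, ...) executes the DAG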
Spark Cluster
(diagram) A Driver Node runs the Driver process holding the Spark Context.
It is connected to a number of Worker Nodes, each running one or more
Executors; every Executor holds a Cache and executes a set of Tasks.
Spark Cluster
 Driver
– Entry point of the Spark Shell (Scala, Python, R)
– The place where the SparkContext is created
– Translates the RDD into the execution graph
– Splits the graph into stages
– Schedules tasks and controls their execution
– Stores metadata about all the RDDs and their partitions
– Brings up the Spark WebUI with job information
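The driver is simply the process that owns the SparkContext; a minimal sketch of creating one outside the shell (app name and master are illustrative assumptions):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName('spark-architecture-demo')   # assumption: example application name
        .setMaster('local[*]'))                  # assumption: example master URL
sc = SparkContext(conf=conf)                     # the driver process now holds the SparkContext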
Spark Cluster
 Executor
– Stores the data in cache in the JVM heap or on HDDs
– Reads data from external sources
– Writes data to external sources
– Performs all the data processing
Executor Memory
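A rough sketch of the pre-1.6 ("legacy") executor memory layout, using the default fractions of that era (an assumption; check the documentation of your Spark version):

heap = 4 * 1024   # MB, example executor heap size (spark.executor.memory)

storage = heap * 0.6 * 0.9   # spark.storage.memoryFraction * spark.storage.safetyFraction
shuffle = heap * 0.2 * 0.8   # spark.shuffle.memoryFraction * spark.shuffle.safetyFraction
other   = heap - storage - shuffle

print(storage, shuffle, other)   # ~2212 MB cache, ~655 MB shuffle, the rest for user code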
Spark Cluster – Detailed
Spark Cluster – PySpark
Application Decomposition
 Application
– A single instance of SparkContext that stores some data
processing logic and can schedule a series of jobs,
sequentially or in parallel (SparkContext is thread-safe)
 Job
– A complete set of transformations on an RDD that finishes
with an action or data saving, triggered by the driver
application
Application Decomposition
 Stage
– A set of transformations that can be pipelined and
executed by a single independent worker. Usually it is
all the transformations between “read”, “shuffle”,
“action” and “save”
 Task
– Execution of a stage on a single data partition. The basic
unit of scheduling
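A sketch of how this decomposition can be observed (the path is an illustrative assumption; a SparkContext sc is assumed):

words  = sc.textFile('hdfs:///tmp/input.txt', 4)    # assumption: example path, 4 input splits
pairs  = words.flatMap(lambda l: l.split()).map(lambda w: (w, 1))
counts = pairs.reduceByKey(lambda a, b: a + b, 2)   # shuffle into 2 partitions

print(pairs.getNumPartitions())    # 4 -> Stage 1 runs 4 tasks (read + flatMap + map, pipelined)
print(counts.getNumPartitions())   # 2 -> Stage 2 runs 2 tasks (the reduceByKey side)
counts.count()                     # one action = one job, visible with its stages in the WebUI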
WordCount Example
(diagram) Stage 1 covers the pipelined transformations sc.textFile(‘hdfs://…’) →
flatMap → map, executed as Tasks 1-4, one task per HDFS input split / RDD partition.
A partition shuffle separates it from Stage 2, which covers reduceByKey → foreach
and is executed as Tasks 5-8, one task per shuffled partition.
Persistence in Spark
 MEMORY_ONLY – Store the RDD as deserialized Java objects in the JVM. If the RDD does not
fit in memory, some partitions will not be cached and will be recomputed on the fly each
time they're needed. This is the default level.
 MEMORY_AND_DISK – Store the RDD as deserialized Java objects in the JVM. If the RDD does
not fit in memory, store the partitions that don't fit on disk, and read them from there when
they're needed.
 MEMORY_ONLY_SER – Store the RDD as serialized Java objects (one byte array per partition).
This is generally more space-efficient than deserialized objects, especially when using a
fast serializer, but more CPU-intensive to read.
 MEMORY_AND_DISK_SER – Similar to MEMORY_ONLY_SER, but spill partitions that don't fit in
memory to disk instead of recomputing them on the fly each time they're needed.
 DISK_ONLY – Store the RDD partitions only on disk.
 MEMORY_ONLY_2, DISK_ONLY_2, etc. – Same as the levels above, but replicate each partition
on two cluster nodes.
Persistence in Spark
 Spark considers memory as a cache with LRU eviction rules
 If “Disk” is involved, data is evicted to disks

from pyspark import StorageLevel

rdd = sc.parallelize(xrange(1000))
rdd.cache().count()                                       # MEMORY_ONLY
rdd.unpersist()                                           # a persisted RDD must be unpersisted before its level can change
rdd.persist(StorageLevel.MEMORY_AND_DISK_SER).count()
rdd.unpersist()
Shuffles in Spark
 Hash Shuffle – the default prior to 1.2.0
 Sort Shuffle – the default now
 Tungsten Sort – the new optimized one!
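A sketch of choosing the shuffle implementation explicitly; spark.shuffle.manager and the 'tungsten-sort' value are 1.x-era settings (an assumption here), and later Spark versions keep only the sort-based shuffle:

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName('shuffle-demo')
        .set('spark.shuffle.manager', 'sort'))   # 'hash', 'sort' or 'tungsten-sort'
sc = SparkContext(conf=conf)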
Hash Shuffle
(diagram) Inside each Executor JVM, up to spark.executor.cores / spark.task.cpus
“map” tasks run concurrently over the input partitions. Every “map” task writes a
separate output file per “reducer” into the local directory (spark.local.dir), so the
total number of shuffle files is (number of “map” tasks executed by this executor) ×
(number of “reducers”).
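A back-of-the-envelope sketch of why this hurts: the file count grows multiplicatively (the numbers are made up):

map_tasks = 1000      # "map" tasks executed over the whole job
reducers  = 1000      # number of "reduce" partitions

print(map_tasks * reducers)   # 1,000,000 shuffle files written into spark.local.dir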
Hash Shuffle With Consolidation
(diagram) The executor keeps one group of output files (one file per “reducer”) for
each concurrently running task slot, i.e. spark.executor.cores / spark.task.cpus
groups in spark.local.dir. “Map” tasks executed later in the same slot reuse the
same group of files, so the number of files no longer grows with the number of
“map” tasks.
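With consolidation the file count depends on the number of concurrent task slots rather than on the number of "map" tasks; a sketch (spark.shuffle.consolidateFiles is a 1.x-era flag, an assumption here):

# conf.set('spark.shuffle.consolidateFiles', 'true')

cores_per_executor = 8
cpus_per_task      = 1
reducers           = 1000

files_per_executor = (cores_per_executor // cpus_per_task) * reducers
print(files_per_executor)   # 8,000 files per executor, independent of the number of "map" tasks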
Sort Shuffle
(diagram) Each of the spark.executor.cores / spark.task.cpus concurrent “map” tasks
buffers its output in an AppendOnlyMap held in the shuffle memory region of the heap
(spark.shuffle.[safetyFraction * memoryFraction]; the storage region is
spark.storage.[safetyFraction * memoryFraction]). When the map fills up, the data is
sorted and spilled to the local directory (spark.local.dir) as an output file plus an
index file; repeated “sort & spill” rounds produce several such files. The spilled
files are then merged with a MinHeap Merge and served to the “reduce” tasks.
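The "MinHeap Merge" step is a k-way merge of spill files that are already sorted; a toy illustration with Python's heapq, not Spark code:

import heapq

# Each spill is already sorted by (partition_id, key); a min-heap merges them lazily
spill_1 = [(0, 'a'), (1, 'c'), (2, 'x')]
spill_2 = [(0, 'b'), (2, 'd')]
spill_3 = [(1, 'e'), (2, 'f')]

for partition_id, key in heapq.merge(spill_1, spill_2, spill_3):
    print(partition_id, key)   # records stream out ordered by partition, ready for the reducers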
Tungsten Sort Shuffle
(diagram) Each “map” task stores its output as serialized data in a
LinkedList<MemoryBlock> inside the shuffle memory region
(spark.shuffle.[safetyFraction * memoryFraction]), together with an array of data
pointers and Partition IDs (long[]). Sorting operates on this pointer array instead
of on the records themselves; when memory fills up, the data is sorted and spilled
to spark.local.dir as an output file (split into per-partition sections) plus an
index. The spill files of a “map” task are finally merged into a single output file
with its index.
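A toy, non-Spark sketch of the pointer-array idea: the partition ID and the record's location are packed into one long, so the sort moves 8-byte values around instead of the serialized records (the 40-bit split is an illustrative assumption):

records = ['rec-a', 'rec-b', 'rec-c', 'rec-d']   # stand-ins for serialized records
partition_of = [2, 0, 1, 0]                      # shuffle partition of each record

# Pack (partition_id, pointer) into a single long: partition in the high bits,
# record offset in the low bits - mirroring the long[] array from the slide.
pointers = [(partition_of[i] << 40) | i for i in range(len(records))]

pointers.sort()                                  # sorts by partition; records stay where they are

for p in pointers:
    print(p >> 40, records[p & ((1 << 40) - 1)]) # partition id, record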
DataFrame Idea
DataFrame Implementation
 Interface
– DataFrame is an RDD with a schema – field names, field
data types and statistics
– Unified transformation interface across all languages; all the
transformations are passed to the JVM
– Can be accessed as an RDD; in this case it is transformed into
an RDD of Row objects
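A minimal PySpark sketch of this interface (Spark 1.x style, assuming a SparkContext sc):

from pyspark.sql import SQLContext, Row

sqlContext = SQLContext(sc)

df = sqlContext.createDataFrame([Row(name='alice', age=34),
                                 Row(name='bob',   age=36)])

df.printSchema()                # field names and data types
df.filter(df.age > 35).show()   # transformations are executed on the JVM side
rows = df.rdd.take(2)           # accessed as an RDD it becomes an RDD of Row objects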
DataFrame Implementation
 Internals
– Internally it is the same RDD
– Data is stored in a row-columnar format; the row chunk size
is set by spark.sql.inMemoryColumnarStorage.batchSize
– Each column in each partition stores min/max values
for partition pruning
– Allows a better compression ratio than a standard RDD
– Delivers faster performance for small subsets of
columns
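A sketch of tuning the row-chunk size and materializing the columnar cache (assuming the sqlContext and df from the previous sketch; setConf is a 1.x-era SQLContext call):

sqlContext.setConf('spark.sql.inMemoryColumnarStorage.batchSize', '10000')

df.cache()    # stored column-by-column within 10,000-row chunks
df.count()    # materializes the in-memory columnar cache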
Questions?
Editor's notes

def printfunc(x):
    print 'Word "%s" occurs %d times' % (x[0], x[1])

infile = sc.textFile('hdfs://sparkdemo:8020/sparkdemo/textfiles/README.md', 4)
rdd1 = infile.flatMap(lambda x: x.split())
rdd2 = rdd1.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x+y)
print rdd2.toDebugString()
rdd2.foreach(printfunc)