Shark: SQL and Rich Analytics at Scale
Reynold Xin, Josh Rosen, Matei Zaharia, Michael Franklin, Scott
Shenker, Ion Stoica
AMPLab, UC Berkeley
June 25 @ SIGMOD 2013
Challenges
Data size growing
Processing has to scale out over large
clusters
Faults and stragglers complicate DB design
Complexity of analysis increasing
Massive ETL (web crawling)
Machine learning, graph processing
Leads to long running jobs
The Rise of MapReduce
What's good about MapReduce?
1. Scales out to thousands of nodes in a fault-
tolerant manner
2. Good for analyzing semi-structured data and
complex analytics
3. Elasticity (cloud computing)
4. Dynamic, multi-tenant resource sharing
"parallel relational database systems are
significantly faster than those that rely on the
use of MapReduce for their query engines"
I totally agree.
This Research
1. Shows MapReduce model can be extended to
support SQL efficiently
Started from a powerful MR-like engine (Spark)
Extended the engine in various ways
2. The artifact: Shark, a fast engine on top of MR
Performant SQL
Complex analytics in the same engine
Maintains MR benefits, e.g. fault-tolerance
MapReduce Fundamental Properties?
Data-parallel operations
Apply the same operations on a defined set of data
Fine-grained, deterministic tasks
Enables fault-tolerance & straggler mitigation
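A minimal sketch of why determinism matters (plain Python, not Shark/Spark code; the fault probability and helper names are illustrative assumptions): because each task is a pure, deterministic function of its input partition, a lost or straggling task can simply be re-executed elsewhere and is guaranteed to produce the same output.

```python
import random

random.seed(0)  # make the simulated faults deterministic for this sketch

def run_task(task, partition, max_attempts=3):
    """Run a deterministic task, retrying on (simulated) worker failure."""
    for _ in range(max_attempts):
        try:
            if random.random() < 0.3:       # simulated worker fault
                raise RuntimeError("worker lost")
            return task(partition)          # same input -> same output
        except RuntimeError:
            continue                        # safe to retry: task is pure
    raise RuntimeError("task failed after retries")

# Example: a word-count map task over one input partition.
partition = ["the quick fox", "the lazy dog"]
counts = run_task(
    lambda part: sum(len(line.split()) for line in part),
    partition,
)
print(counts)  # 6
```

The same property enables straggler mitigation: a slow task can be speculatively re-launched on another node, and whichever copy finishes first wins, since both must produce identical output.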
Why Were Databases Faster?
Data representation
Schema-aware, column-oriented, etc
Co-partition & co-location of data
Execution strategies
Scheduling/task launching overhead (~20s in Hadoop)
Cost-based optimization
Indexing
Lack of mid-query fault tolerance
MR's pull model is costly compared to the DBMS push model
See Pavlo 2009, Xin 2013.
Most of these factors are not fundamental to the MapReduce model itself, and some, such as scheduling and task-launching overhead, can be made surprisingly cheap.
Introducing Shark
MapReduce-based architecture
Uses Spark as the underlying execution engine
Scales out and tolerates worker failures
Performant
Low-latency, interactive queries
(Optionally) in-memory query processing
Expressive and flexible
Supports both SQL and complex analytics
Hive compatible (storage, UDFs, types, metadata, etc)
Spark Engine
Fast MapReduce-like engine
In-memory storage for fast iterative computations
General execution graphs
Designed for low latency (~100ms jobs)
Compatible with Hadoop storage APIs
Read/write to any Hadoop-supported system, including
HDFS, HBase, SequenceFiles, etc.
Growing open source platform
17 companies contributing code
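A conceptual sketch of the in-memory reuse claim above (plain Python, not the Spark API; `CachedDataset` and `load_and_parse` are made-up names): iterative computations touch the same dataset many times, so materializing it in memory once avoids repeated scanning and parsing.

```python
def load_and_parse():
    """Stand-in for an expensive scan of on-disk input."""
    raw = ["1,2.0", "2,4.0", "3,6.0"]
    return [tuple(map(float, line.split(","))) for line in raw]

class CachedDataset:
    """Materialize a dataset on first access; serve from memory after."""
    def __init__(self, compute):
        self._compute = compute
        self._cache = None

    def get(self):
        if self._cache is None:      # first access: run the expensive scan
            self._cache = self._compute()
        return self._cache           # later iterations: memory only

data = CachedDataset(load_and_parse)

# An "iterative" job touching the same data repeatedly, e.g. gradient steps.
total = 0.0
for _ in range(10):
    total += sum(y for _, y in data.get())
print(total)  # 120.0
```

Only the first of the ten iterations pays the load cost; the other nine read the cached list directly, which is the effect Spark's in-memory storage provides across a cluster.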
More Powerful MR Engine
General task DAG
Pipelines functions
within a stage
Cache-aware data
locality & reuse
Partitioning-aware
to avoid shufes
[Figure: example task DAG with partitions A through G grouped into Stage 1, Stage 2, and Stage 3; legend: shaded box = previously computed partition]
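The "pipelines functions within a stage" point can be sketched as follows (conceptual Python, not the engine's implementation; the `None`-as-filter convention is an assumption of this sketch): within one stage, a chain of narrow per-record operations is applied record by record, never materializing a full intermediate dataset between operators.

```python
def pipeline(partition, ops):
    """Apply a list of per-record functions without intermediate lists."""
    for record in partition:
        out = record
        for op in ops:
            out = op(out)
            if out is None:       # None acts like a filtered-out record
                break
        if out is not None:
            yield out

partition = range(1, 6)
ops = [
    lambda x: x * 10,                     # map
    lambda x: x if x > 20 else None,      # filter
]
result = list(pipeline(partition, ops))
print(result)  # [30, 40, 50]
```

Shuffle boundaries (wide dependencies) are what force materialization and define stage edges; everything between them can be fused this way.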
Hive Architecture

Client: CLI, JDBC
Driver: SQL Parser → Query Optimizer → Physical Plan → Execution
Metastore
Execution engine: MapReduce
Hadoop Storage (HDFS, S3, …)
Shark Architecture

Client: CLI, JDBC
Driver: SQL Parser → Query Optimizer → Physical Plan → Execution
Metastore
Cache Mgr.
Execution engine: Spark
Hadoop Storage (HDFS, S3, …)
Extending Spark for SQL
Columnar memory store
Dynamic query optimization
Miscellaneous other optimizations (distributed
top-K, partition statistics & pruning a.k.a. coarse-
grained indexes, co-partitioned joins, …)
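A sketch of the partition statistics & pruning idea (conceptual Python; the per-partition `min`/`max` dictionary layout is an assumption of this sketch, not Shark's data structure): keeping min/max statistics per partition acts as a coarse-grained index, letting the scan skip any partition whose value range cannot satisfy the predicate.

```python
partitions = [
    {"min": 0,  "max": 9,  "rows": [3, 7, 9]},
    {"min": 10, "max": 19, "rows": [12, 15]},
    {"min": 20, "max": 29, "rows": [21, 28]},
]

def scan_where_greater(partitions, threshold):
    """Evaluate `value > threshold`, pruning partitions by their stats."""
    scanned = 0
    hits = []
    for p in partitions:
        if p["max"] <= threshold:   # whole partition pruned by statistics
            continue
        scanned += 1
        hits.extend(r for r in p["rows"] if r > threshold)
    return hits, scanned

hits, scanned = scan_where_greater(partitions, 15)
print(hits, scanned)  # [21, 28] 2
```

With `threshold=15` the first partition is skipped without touching its rows; the statistics are cheap to maintain because they are written once per partition at load time.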
Columnar Memory Store
Simply caching records as JVM objects is inefficient
(huge overhead in MR's record-oriented model)
Shark employs column-oriented storage: a
partition of columns is one MapReduce record.
Row Storage          Column Storage
1  john   4.1        1     2     3
2  mike   3.5        john  mike  sally
3  sally  6.4        4.1   3.5   6.4
Benefit: compact representation, CPU-efficient
compression, cache locality.
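The row-to-column transposition above can be sketched in a few lines (plain Python, not Shark's store; the dictionary-encoding step is one illustrative compression choice, not necessarily what Shark applies here): rows become per-column arrays, strings compress via a small dictionary, and a column scan touches one contiguous array rather than whole row objects.

```python
rows = [(1, "john", 4.1), (2, "mike", 3.5), (3, "sally", 6.4)]

# Transpose row records into per-column arrays.
ids, names, scores = map(list, zip(*rows))

# Dictionary-encode the string column: small ints instead of objects.
dictionary = sorted(set(names))
encoded = [dictionary.index(n) for n in names]

# A column scan (e.g. SUM over scores) reads one contiguous array.
total = sum(scores)
print(ids, encoded, total)  # [1, 2, 3] [0, 1, 2] 14.0
```

The CPU-efficiency claim follows from the layout: tight loops over primitive arrays vectorize and stay in cache, while per-record JVM objects scatter memory accesses and add header overhead.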
How do we optimize: