An Adaptive Execution Engine for Apache Spark SQL
Carson Wang (carson.wang@intel.com)
Yucai Yu (yucai.yu@intel.com)
Hao Cheng (hao.cheng@intel.com)
2
Agenda
• Challenges in achieving high performance in Spark SQL*
• Adaptive Execution Background
• Adaptive Execution Architecture
• Benchmark Result
*Other names and brands may be claimed as the property of others.
3
Challenges in Tuning Shuffle Partition Number
• Partition Num P = spark.sql.shuffle.partitions (200 by default)
• Total Core Num C = Executor Num * Executor Core Num
• Each Reduce Stage runs the tasks in (P / C) rounds
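The relationship between the partition number P, the total core count C, and the number of task rounds can be sketched as follows; `reduce_rounds` is a hypothetical helper, not a Spark API:

```python
import math

def reduce_rounds(partition_num: int, executor_num: int, executor_cores: int) -> int:
    """Hypothetical helper: how many rounds of tasks a reduce stage runs,
    given P shuffle partitions and C = executors * cores (the slides' P / C)."""
    total_cores = executor_num * executor_cores
    return math.ceil(partition_num / total_cores)

# e.g. the default 200 partitions on 10 executors with 8 cores each
print(reduce_rounds(200, 10, 8))  # 3 rounds (200 / 80, rounded up)
```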
4
Shuffle Partition Challenge 1
• Partition Num too small: spill, OOM
• Partition Num too large: scheduling overhead, more IO requests, too many
small output files
• Tuning method: increase the partition number starting from C, 2C, … until
performance begins to drop
Impractical to repeat for each query
in production.
5
Shuffle Partition Challenge 2
• The same Shuffle Partition number doesn’t fit for all Stages
• Shuffle data size usually decreases during the execution of the SQL
query
Question: Can we set the shuffle partition number for each stage
automatically?
6
Spark SQL* Execution Plan
• The execution plan is fixed after the planning phase.
Image from: https://databricks.com/blog/2015/03/24/spark-sql-graduates-from-alpha-in-spark-1-3.html
7
Spark SQL* Join Selection
SELECT xxx
FROM A
JOIN B
ON A.Key1 = B.Key2
8
Broadcast Hash Join
[Diagram: Table B is broadcast to every executor; task i joins partition i of table A (A1 … An) with its full local copy of B.]
9
Shuffle Hash Join / Sort Merge Join
[Diagram: MAP tasks write outputs for tables A and B; the SHUFFLE redistributes rows so that REDUCE task i joins partition i of A (Ai) with partition i of B (Bi).]
10
Spark SQL* Join Selection
• spark.sql.autoBroadcastJoinThreshold is 10 MB by default
• For complex queries, a join may take intermediate results as inputs.
At planning time, Spark SQL* doesn’t know their exact size and plans
the join as a SortMergeJoin.
Question: Can we optimize the execution plan at runtime based on the runtime statistics?
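The planning-time rule described above can be sketched in a few lines; this is an illustrative model, not Spark's actual planner code. The key point is that the rule compares an *estimated* size, so an intermediate result whose size is unknown at planning time falls back to SortMergeJoin even if it turns out to be tiny:

```python
# Default of spark.sql.autoBroadcastJoinThreshold (10 MB, per the slide)
BROADCAST_THRESHOLD = 10 * 1024 * 1024

def choose_join(estimated_size_bytes):
    """Illustrative planning-time rule: broadcast only when the estimated
    size is known and below the threshold; unknown sizes fall back."""
    if estimated_size_bytes is not None and estimated_size_bytes <= BROADCAST_THRESHOLD:
        return "BroadcastHashJoin"
    return "SortMergeJoin"

print(choose_join(5 * 1024 * 1024))  # small known table -> BroadcastHashJoin
print(choose_join(None))             # unknown intermediate -> SortMergeJoin
```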
11
Data Skew in Join
• Data in some partitions is much larger than in other partitions.
• Data skew is a common source of slowness for shuffle joins.
12
Ways to Handle a Skewed Join Today
• Increase the shuffle partition number
• Increase the BroadcastJoin threshold to turn a Shuffle Join into a
Broadcast Join
• Add a prefix to the skewed keys
• ……
Question: These approaches involve manual effort and are limited. Can
we handle skewed joins at runtime automatically?
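The "add a prefix to the skewed keys" workaround (key salting) can be sketched as follows; the helper names and the salt count N are illustrative, not Spark API:

```python
import random

N = 4  # number of salts to spread one hot key over (illustrative)

def salt_left(key):
    # skewed side: each row gets one random salt prefix,
    # so rows with the same hot key hash to N different partitions
    return f"{random.randrange(N)}_{key}"

def explode_right(key):
    # other side: replicate each row once per salt,
    # so every salted left row still finds its matching right row
    return [f"{i}_{key}" for i in range(N)]

# the hot key "k1" now spreads over 4 partitions, each with a matching copy
print(sorted(explode_right("k1")))  # ['0_k1', '1_k1', '2_k1', '3_k1']
```

The cost of this workaround is the N-fold replication of the other side's rows, which is why the slides ask whether skew can be handled automatically instead.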
13
Adaptive Execution Background
• SPARK-9850: Adaptive execution in Spark*
• SPARK-9851: Support submitting map stages individually in
DAGScheduler
• SPARK-9858: Introduce an ExchangeCoordinator to estimate the
number of post-shuffle partitions.
14
A New Adaptive Execution Engine in Spark SQL*
15
Adaptive Execution Architecture
• Divide the execution plan into multiple QueryStages: each Exchange becomes a QueryStage boundary, and its inputs become QueryStageInputs fed by child stages.
• For each QueryStage: (a) execute its child stages; (b) optimize the plan with the observed statistics — in the diagram, a child output of only 5 MB lets the SortMergeJoin (with Sort + Exchange) become a BroadcastJoin with a BroadcastExchange, while the 100 GB side stays as is; (c) determine the reducer number.
• Execute the stage as a DAG of RDDs (e.g. FileScanRDD -> ShuffledRowRDD, or a LocalShuffledRDD after a broadcast conversion).
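The per-stage loop on this slide — execute child stages, optimize with observed statistics, pick a reducer count, then run — can be illustrated with toy classes. None of these names are Spark's actual classes; this is a minimal sketch of the control flow:

```python
from dataclasses import dataclass, field

BROADCAST_THRESHOLD = 10 * 1024 * 1024        # 10 MB, per the earlier slide
TARGET_BYTES_PER_REDUCER = 64 * 1024 * 1024   # 64 MB, per the next slide

@dataclass
class QueryStage:
    name: str
    join: str = "SortMergeJoin"
    children: list = field(default_factory=list)
    output_bytes: int = 0   # known only after execution
    reducer_num: int = 1

def execute(stage: QueryStage) -> int:
    # (a) execute child stages first and observe their real output sizes
    child_sizes = [execute(c) for c in stage.children]
    # (b) optimize the plan at runtime: a small child enables broadcast
    if child_sizes and min(child_sizes) <= BROADCAST_THRESHOLD:
        stage.join = "BroadcastJoin"
    # (c) determine the reducer number from the observed map output
    stage.reducer_num = max(1, sum(child_sizes) // TARGET_BYTES_PER_REDUCER)
    return stage.output_bytes

small = QueryStage("scan T1", output_bytes=5 * 1024 * 1024)     # 5 MB
big = QueryStage("scan T2", output_bytes=100 * 1024 ** 3)       # 100 GB
root = QueryStage("join", children=[small, big])
execute(root)
print(root.join)  # BroadcastJoin: T1 turned out to be only 5 MB at runtime
```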
16
Auto Setting the Number of Reducers
[Diagram: two map tasks each produce partitions 0–4. Reduce task 1 reads partition 0 (70 MB); reduce task 2 reads partitions 1–3 (30 MB + 20 MB + 10 MB); reduce task 3 reads partition 4 (50 MB).]
• 5 initial reducer partitions with size
[70 MB, 30 MB, 20 MB, 10 MB, 50 MB]
• Set target size per reducer = 64 MB. At runtime, we use 3 actual reducers.
• Setting a target row count per reducer is also supported.
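The packing rule described above can be sketched as a greedy merge: contiguous shuffle partitions are combined until adding the next one would exceed the target size. This is a minimal illustration, not the patch's actual code:

```python
MB = 1024 * 1024

def coalesce_partitions(sizes, target=64 * MB):
    """Greedily merge contiguous partitions; each group becomes one reducer."""
    reducers, current = [], []
    for size in sizes:
        if current and sum(current) + size > target:
            reducers.append(current)  # adding this partition would overflow
            current = []
        current.append(size)
    if current:
        reducers.append(current)
    return reducers

# the slide's example: [70, 30, 20, 10, 50] MB with a 64 MB target
sizes = [70 * MB, 30 * MB, 20 * MB, 10 * MB, 50 * MB]
print([len(r) for r in coalesce_partitions(sizes)])  # [1, 3, 1] -> 3 reducers
```

Note that a single partition larger than the target (the 70 MB one) still becomes its own reducer; only neighbors small enough to fit together are merged.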
17
Shuffle Join => Broadcast Join
Example 1
• T1 < broadcast threshold
• T2 and T3 > broadcast threshold
• In this case, neither Join1 nor Join2 is changed to a broadcast join:
converting Join1 alone would still leave SortMergeJoin2 needing its
input shuffled, introducing an extra exchange
[Diagram: SortMergeJoin1 joins T1 and T2; SortMergeJoin2 joins Join1's output with T3; all three inputs are QueryStageInputs fed by child stages.]
18
Shuffle Join => Broadcast Join
Example 2
• T1 and T3 < broadcast threshold
• T2 > broadcast threshold
• In this case, both Join1 and Join2 are changed to broadcast joins
[Diagram: the same plan as Example 1, with both SortMergeJoins replaced by broadcast joins since T1 and T3 are both below the threshold.]
19
Remote Shuffle Read => Local Shuffle Read
[Diagram: map outputs A0/B0 on Node 1 and A1/B1 on Node 2. With remote shuffle read, reduce tasks 1–5 each fetch blocks from both nodes; with local shuffle read, the reduce task on each node reads only the map output stored locally on that node.]
20
Skewed Partition Detection at Runtime
• After executing the child stages, we calculate the data size and
row count of each partition from the MapStatus.
• A partition is skewed if its data size or row count is N times
larger than the median, and also larger than a pre-defined
threshold.
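The detection rule above can be sketched as below. The values of N and the minimum threshold here are illustrative (both are configurable in the patch), and the real check applies the same test to row counts as well as sizes:

```python
import statistics

MB = 1024 * 1024

def is_skewed(partition_size, all_sizes, n=5, min_bytes=64 * MB):
    """A partition is skewed if it is more than n times the median
    AND above an absolute minimum threshold (illustrative values)."""
    median = statistics.median(all_sizes)
    return partition_size > n * median and partition_size > min_bytes

sizes = [800 * MB, 60 * MB, 50 * MB, 70 * MB, 55 * MB]
print([is_skewed(s, sizes) for s in sizes])  # only the 800 MB partition
```

The absolute threshold matters: in a stage where every partition is tiny, one partition being 5x the median is not worth the extra handling.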
21
Handling Skewed Join
[Diagram: partition 0 of table A is skewed. Its map outputs from Map 0–2 are split into chunks A0-0, A0-1, …, A0-N, each shuffle-read by a separate join task; each of those tasks also does a full shuffle read of B0 from table B's map outputs.]
Use N tasks instead of 1 task to join the data in partition 0. The join result =
Union(A0-0 Join B0, A0-1 Join B0, …, A0-N Join B0)
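The union-of-sub-joins idea can be verified with toy data: splitting the skewed partition into chunks and joining each chunk against the full other side yields exactly the same rows as one big join. The names and the chunking scheme here are illustrative:

```python
def join(a_rows, b_rows):
    """Toy equi-join on the first tuple element (the key)."""
    return [(a, b) for a in a_rows for b in b_rows if a[0] == b[0]]

def skewed_join(a0_rows, b0_rows, n=3):
    """Split the skewed partition A0 into n chunks (A0-0 .. A0-(n-1)),
    join each chunk with the full B0, and union the results."""
    chunks = [a0_rows[i::n] for i in range(n)]
    result = []
    for chunk in chunks:           # n tasks instead of 1
        result.extend(join(chunk, b0_rows))
    return result

a0 = [("k", i) for i in range(6)]  # skewed partition of table A
b0 = [("k", "x")]                  # corresponding partition of table B
print(len(skewed_join(a0, b0)))    # 6 — identical to one single big join
```

Correctness relies on each sub-task seeing all of B0, which is why the diagram shows every join task doing a full shuffle read of B0.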
22
Benchmark Result
23
Cluster Setup
Hardware (BDW):
  Slave Node #: 98
  CPU: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz (88 cores)
  Memory: 256 GB
  Disk: 7× 400 GB SSD
  Network: 10 Gigabit Ethernet
Master:
  CPU: Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz (88 cores)
  Memory: 256 GB
  Disk: 7× 400 GB SSD
  Network: 10 Gigabit Ethernet
Software:
  OS: CentOS* Linux release 6.9
  Kernel: 2.6.32-573.22.1.el6.x86_64
  Spark*: Spark* master (2.3) / Spark* master (2.3) with adaptive execution patch
  Hadoop*/HDFS*: hadoop-2.7.3
  JDK: 1.8.0_40 (Oracle* Corporation)
For more complete information about performance and benchmark results, visit www.intel.com/benchmarks
24
TPC-DS* 100TB Benchmark
[Bar chart: query duration in seconds (0–500) of Spark SQL vs. Adaptive Execution for 15 TPC-DS queries. Speedups in order: q8 3.2X, q81 1.9X, q30 1.8X, q51 1.7X, q61 1.6X, q60 1.5X, q90/q37/q82/q56/q31/q19/q41 1.3X each, q74 and q91 1.2X each.]
25
Auto Setting the Shuffle Partition Number
Partition number: 10976 (q30)
Partition number changed to 1084 and 1079 at runtime (q30)
• Less scheduling overhead and task startup time.
• Fewer disk IO requests.
• Less data written to disk, because more data is aggregated.
26
SortMergeJoin -> BroadcastJoin at Runtime
SortMergeJoin (q8):
BroadcastJoin (q8 with Adaptive Execution):
• Eliminates the data skew and stragglers in SortMergeJoin
• Remote shuffle read -> local shuffle read
• Random IO reads -> sequential IO reads
27
Scheduling Difference
Original Spark: [timeline screenshot showing a ~50 second gap before stages start]
Adaptive Execution: [timeline screenshot without the gap]
• Spark SQL* has to wait for the completion of all broadcasts
before scheduling the stages. Adaptive Execution can start a
stage earlier, as soon as its dependencies are complete.
Thank You
29
Legal Disclaimer
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as
well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel
representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are
available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting
www.intel.com/design/literature.htm.
Intel does not control or audit third-party benchmark data or the web sites referenced in this document. You should visit the referenced web site and confirm whether referenced
data are accurate.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
Copyright © 2017 Intel Corporation.