1. Big Data Beyond Hadoop*:
Research Directions for the Future
Jason Dai, Engineering Director and Principal
Engineer, Software and Solutions Group
Michael Wrinn, PhD, Research Program Director,
University Research Office, Intel Labs
ACAS002
2. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
The PDF for this Session presentation is available from our
Technical Session Catalog at the end of the day at:
intel.com/go/idfsessionsBJ
URL is on top of Session Agenda Pages in Pocket Guide
3. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
4. What is Big Data?
Big Data is data that is too big, too fast or too hard for
existing systems and algorithms to handle
• Too Big
– Terabytes going on petabytes
– Smart (not brute force) massive parallelism required
• Too Fast
– Sensor tagging everything creates a firehose
– Ingest problem
• Too Hard
– Complex analytics are required (e.g., to find patterns, trends
and relationships)
– Need to combine diverse data types (No Schema, Uncurated,
Inconsistent Syntax and Semantics)
Data should be a resource, not a load
Existing data processing tools are not a good fit
Samuel Madden, ISTC Director and Professor of EECS, MIT
5. Example: Web Analytics
Large web enterprises:
thousands of servers,
millions of users, and
terabytes per day of “click data”
Not just simple reporting:
e.g., in real time, determine what users are likely to do next, or
what ad to serve them, or
which user they are most similar to
Existing analytics systems either:
do not scale to required volumes, or
do not provide required sophistication
6. Example: Sensor Analytics
Smartphone providers
tolling agencies
municipalities
insurance companies
doctors
businesses
Capturing massive streams of video,
position, acceleration, and other
data from phones and other devices
This data needs to be stored, processed,
and mined, e.g., to measure traffic, driving risk,
or medical prognosis
7. Hadoop* in the Big Data Ecosystem
[Diagram: the era of data exchange. Cost-effective vertical solutions for eCommerce, healthcare, manufacturing, energy/scientific, and FSI sit atop a big data compute platform (EP/EX topology, MIC, fabric). Traditional business solutions (business processing innovation) connect to new analytics models (in-memory databases, integrated analytics systems and appliances such as Exalytics) for real-time value opportunities.]
8. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
9. Intel Activity Landscape on Big Data
[Diagram: Intel's big data activities, by layer:
• Apps and services: corporate data solution programs for big data and analytics (healthcare, telco, …); trust broker (McAfee*); location-based service (Telmap)
• Data usage: visualization and end-user tools; HiTune* and other tools for Hadoop*; market sizing and segmentation (with Bain); Internet of Things / M2M (Intel Labs and university collaborators); video analytics
• Analytics: distributed machine learning (university collaborators); business intelligence
• Data management and processing: Hadoop* distribution and service; Hadoop performance and architecture; distributed video analytics architecture (Guavus)
• Data delivery and storage: distributed computing architecture and storage platform; microservers; end-to-end data security; compression and decompression IPs; federated storage; large object storage; device architecture
Contributors span Intel Architecture, Intel Software, Intel Labs, Intel IT, and others.]
10. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
11. Algorithms, Machines, People (AMPLab)
[Diagram: AMPLab brings together adaptive/active machine learning and analytics (Algorithms), cloud computing (Machines), and crowdsourcing/human computation (People) to make sense of massive and diverse data.]
All software released as BSD Open Source
12. Berkeley Data Analysis System
• Mesos*: resource management platform
• SCADS: scale-independent storage systems
• PIQL, Spark: processing frameworks
[Diagram: the BDAS software stack, mixing 3rd-party and AMPLab components:
• Query languages: Shark, Hive*, Pig*, PIQL, …
• Processing frameworks: Spark, Hadoop*, MPI, …
• Resource management: Mesos*
• Storage: SCADS, HDFS]
13. Data Center Programming: Spark
• In-memory cluster computing framework for
applications that reuse working sets of data
– Iterative algorithms: machine learning, graph
processing, optimization
– Interactive data mining: order of magnitude faster
than disk-based tools
• Key idea: RDDs “resilient distributed datasets”
that can automatically be rebuilt on failure
– Keep large working sets in memory
– Fault tolerance mechanism based on “lineage”
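The lineage idea can be illustrated with a toy sketch (plain Python, not Spark's actual API): each dataset records its parent and the transformation that produced it, so a lost in-memory copy can be recomputed from the lineage rather than restored from replicated disk storage.

```python
# Toy lineage sketch (illustration only, not Spark's API): a dataset
# remembers its parent and the transformation applied to it, so a lost
# in-memory partition can be rebuilt by replaying the lineage.

class ToyRDD:
    def __init__(self, source=None, parent=None, fn=None):
        self.source = source   # base data (root dataset only)
        self.parent = parent   # parent dataset in the lineage chain
        self.fn = fn           # transformation applied to the parent
        self.cache = None      # in-memory materialization

    def map(self, fn):
        # Transformations are recorded, not executed, until compute()
        return ToyRDD(parent=self, fn=fn)

    def compute(self):
        if self.cache is not None:
            return self.cache
        if self.parent is None:
            data = list(self.source)
        else:
            data = [self.fn(x) for x in self.parent.compute()]
        self.cache = data      # keep the working set in memory
        return data

    def lose_cache(self):
        self.cache = None      # simulate losing a node's memory

base = ToyRDD(source=[1, 2, 3])
squared = base.map(lambda x: x * x)
print(squared.compute())       # [1, 4, 9], materialized in memory
squared.lose_cache()
print(squared.compute())       # [1, 4, 9], rebuilt from lineage
```

Real RDDs generalize this to partitioned datasets and a DAG of coarse-grained transformations, but the recovery principle is the same.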
14. Spark: Motivation
Complex jobs, interactive queries and online
processing all need one thing that Hadoop* MR
lacks:
• Efficient primitives for data sharing
[Diagram: an iterative job (a chain of stages), interactive mining (several queries over the same data), and stream processing (a series of jobs) all hinge on sharing data efficiently between steps.]
15. Xfer and Sharing in Hadoop*
[Diagram: in Hadoop* MR, each iteration writes its output to HDFS and the next iteration reads it back, and every interactive query re-reads the input from HDFS; all sharing between jobs goes through slow, on-disk storage.]
17. Introducing Shark
• Spark + Hive* (the SQL in NoSQL)
• Utilizes Spark’s in-memory RDD caching
and flexible language capabilities: result
reuse, and low latency
• Scalable, fault-tolerant, fast
• Query-compatible with Hive
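The caching and result-reuse behavior can be illustrated with a hypothetical stdlib sketch (sqlite3 as a stand-in, not Shark's actual engine): keep the table in an in-memory database and memoize results, so repeating the same interactive query is nearly free.

```python
import sqlite3

# Hypothetical illustration (not Shark's engine): an in-memory table plus
# a memoized result cache gives the "result reuse, low latency" behavior
# described above for repeated interactive queries.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rankings (pageURL TEXT, pagerank INTEGER)")
conn.executemany("INSERT INTO rankings VALUES (?, ?)",
                 [("a.com", 25), ("b.com", 3), ("c.com", 12)])

result_cache = {}

def cached_query(sql):
    if sql not in result_cache:              # compute only on first use
        result_cache[sql] = conn.execute(sql).fetchall()
    return result_cache[sql]                 # later calls reuse the result

q = "SELECT pagerank, pageURL FROM rankings WHERE pagerank > 10"
print(cached_query(q))   # rows with pagerank > 10, computed once
print(cached_query(q))   # served from the cache
```

Shark does this at cluster scale, caching RDDs across a distributed memory pool rather than in a single process.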
18. Benchmarks: Query 1
30GB input table
SELECT * FROM grep WHERE field LIKE '%XYZ%';
18
19. Benchmark: Query 2
5 GB input table
SELECT pagerank, pageURL FROM rankings WHERE
pagerank > 10;
20. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
21. Data Parallelism (MapReduce)
[Diagram: input records are partitioned across CPU 1 through CPU 4, and each CPU processes its own partition independently.]
Solve a huge number of independent subproblems
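The pattern can be sketched in a few lines of plain Python (a single-process illustration, not Hadoop's implementation): map tasks run independently over partitions, a shuffle groups intermediate pairs by key, and reduce tasks run independently per key.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce pattern: independent map
# tasks over partitions, a shuffle grouping by key, independent reduces.

def map_task(partition):
    # Each map task sees only its own partition of the input
    return [(word, 1) for line in partition for word in line.split()]

def shuffle(mapped):
    # Group all intermediate (key, value) pairs by key
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_task(key, values):
    # Each reduce task sees all values for one key
    return key, sum(values)

partitions = [["big data big"], ["data beyond hadoop"]]
mapped = [pair for p in partitions for pair in map_task(p)]  # data-parallel
counts = dict(reduce_task(k, v) for k, v in shuffle(mapped).items())
print(counts)   # {'big': 2, 'data': 2, 'beyond': 1, 'hadoop': 1}
```

In a real cluster the map and reduce calls run on different machines and the shuffle moves data over the network, but the dataflow is exactly this.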
22. MapReduce for Data-Parallel ML
• Excellent for large data-parallel tasks!
[Diagram: MapReduce covers data-parallel workloads such as feature extraction, cross validation, and computing sufficient statistics; is there more to machine learning on the graph-parallel side?]
23. Machine Learning Pipeline
[Diagram: the pipeline is Data → Extract Features → Graph Formation → Structured Machine Learning Algorithm → Value from Data. Examples: images → faces → graph of similar faces → belief propagation → face labels; docs → words → graph of shared words → LDA → important topics; movie ratings + side info → graph of rated movies → collaborative filtering → movie recommendations.]
24. Parallelizing Machine Learning
[Diagram: in the same pipeline, feature extraction and graph formation (graph ingress) are mostly data-parallel, while the structured machine learning algorithm is graph-parallel, graph-structured computation.]
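A minimal sketch of what graph-parallel means (toy single-process Python, not GraphLab* or Pregel): each vertex repeatedly updates its value from its neighbors' values. Here the vertex program is a small PageRank on a hypothetical three-node graph.

```python
# Toy graph-parallel sketch: every vertex iteratively recomputes its value
# from its neighbors' values, the pattern that systems like GraphLab* and
# Pregel execute in parallel across a cluster.

def pagerank(edges, num_iters=50, d=0.85):
    nodes = {n for e in edges for n in e}
    # Outgoing neighbors per node (toy graph: every node has out-edges)
    out = {n: [v for u, v in edges if u == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(num_iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for u in nodes:
            for v in out[u]:                    # scatter rank along edges
                new[v] += d * rank[u] / len(out[u])
        rank = new
    return rank

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))   # 'c' collects the most incoming rank
```

The per-vertex update depends only on a vertex's neighborhood, which is what lets a graph-parallel engine partition the graph and run millions of these updates concurrently.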
28. Agenda
• Big data and the Hadoop* ecosystem
• Intel university collaborations on big data research
• Efficient in-memory implementations of map reduce
• Efficient graph algorithms for analytics
• Intel’s efforts moving research to production
29. Intel’s Efforts on Hadoop*
• Intel® Distribution for Apache Hadoop*
– Performance, security and management
– Downloadable from http://hadoop.intel.com/
• Intel’s open source initiatives for Hadoop
– HiBench: comprehensive Hadoop benchmark suite
https://github.com/intel-hadoop/hibench
– Project Panthera: efficient support of standard SQL features
on Hadoop
https://github.com/intel-hadoop/project-panthera
– Project Rhino: enhanced data protection for the Apache
Hadoop ecosystem
https://github.com/intel-hadoop/project-rhino
– Graph Builder: scalable graph construction using Hadoop
http://graphlab.org/intel-graphbuilder/
30. Using Spark/Shark for In-memory, Real-time
Data Analysis
• Use case 1: ad-hoc & interactive queries
– Interactive queries (exploratory ad-hoc queries, BI charting &
mining)
– Similar projects: Google* Dremel, Facebook* Peregrine,
Cloudera* Impala, Apache* Drill, etc. (several seconds latency)
– Use Shark/Spark to achieve close to sub-second latency for
interactive queries
• Use case 2: in-memory, real-time analysis
– Iterative data mining, online analysis (e.g., loading table into
memory for online analysis, caching intermediate results for
iterative machine learning)
– Similar projects: Google PowerDrill
– Use Shark/Spark to reliably load data in distributed memory for
online analysis
31. Using Spark/Shark for In-memory, Real-time
Data Analysis
• Use case 3: stream processing
– Streaming analysis, CEP (e.g., intrusion detection, real-time
statistics, etc.)
– Similar projects: Twitter* Storm, Apache* S4, Facebook* Puma
– Use Spark streaming for stream processing
Better reliability
Unified framework and applications for offline, online, and streaming analysis
• Use case 4: graph-parallel analysis & machine learning
– Use case: graph algorithms, machine learning (e.g., social
network analysis, recommendation engine)
– Similar projects: Google* Pregel, CMU GraphLab*
– Use Bagel (Pregel on Spark) for graph parallel analysis &
machine learning on Spark
32. Summary
• MapReduce as implemented in Hadoop* is extremely
useful, but:
– In-memory implementations show serious advantages
– Graph algorithms may be more suitable for the problem at hand
• Intel continues to work with university researchers
• Intel works to move research results into production environments
33. Call to Action
• Use Intel Research results in your own big
data efforts!
• Work with us on next-gen, in-memory, real-time analysis using Spark/Shark
35. Legal Disclaimer
Intel's compilers may or may not optimize to the same degree for non-Intel
microprocessors for optimizations that are not unique to Intel microprocessors.
These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other
optimizations. Intel does not guarantee the availability, functionality, or
effectiveness of any optimization on microprocessors not manufactured by Intel.
Microprocessor-dependent optimizations in this product are intended for use with
Intel microprocessors. Certain optimizations not specific to Intel
microarchitecture are reserved for Intel microprocessors. Please refer to the
applicable product User and Reference Guides for more information regarding the
specific instruction sets covered by this notice.
Notice revision #20110804
36. Risk Factors
The above statements and any others in this document that refer to plans and expectations for the first quarter, the year and the
future are forward-looking statements that involve a number of risks and uncertainties. Words such as “anticipates,” “expects,”
“intends,” “plans,” “believes,” “seeks,” “estimates,” “may,” “will,” “should” and their variations identify forward-looking
statements. Statements that refer to or are based on projections, uncertain events or assumptions also identify forward-looking
statements. Many factors could affect Intel’s actual results, and variances from Intel’s current expectations regarding such factors
could cause actual results to differ materially from those expressed in these forward-looking statements. Intel presently considers the
following to be the important factors that could cause actual results to differ materially from the company’s expectations. Demand
could be different from Intel's expectations due to factors including changes in business and economic conditions; customer acceptance
of Intel’s and competitors’ products; supply constraints and other disruptions affecting customers; changes in customer order patterns
including order cancellations; and changes in the level of inventory at customers. Uncertainty in global economic and financial
conditions poses a risk that consumers and businesses may defer purchases in response to negative financial events, which could
negatively affect product demand and other related matters. Intel operates in intensely competitive industries that are characterized by
a high percentage of costs that are fixed or difficult to reduce in the short term and product demand that is highly variable and difficult
to forecast. Revenue and the gross margin percentage are affected by the timing of Intel product introductions and the demand for and
market acceptance of Intel's products; actions taken by Intel's competitors, including product offerings and introductions, marketing
programs and pricing pressures and Intel’s response to such actions; and Intel’s ability to respond quickly to technological
developments and to incorporate new features into its products. The gross margin percentage could vary significantly from
expectations based on capacity utilization; variations in inventory valuation, including variations related to the timing of qualifying
products for sale; changes in revenue levels; segment product mix; the timing and execution of the manufacturing ramp and
associated costs; start-up costs; excess or obsolete inventory; changes in unit costs; defects or disruptions in the supply of materials
or resources; product manufacturing quality/yields; and impairments of long-lived assets, including manufacturing, assembly/test and
intangible assets. Intel's results could be affected by adverse economic, social, political and physical/infrastructure conditions in
countries where Intel, its customers or its suppliers operate, including military conflict and other security risks, natural disasters,
infrastructure disruptions, health concerns and fluctuations in currency exchange rates. Expenses, particularly certain marketing and
compensation expenses, as well as restructuring and asset impairment charges, vary depending on the level of demand for Intel's
products and the level of revenue and profits. Intel’s results could be affected by the timing of closing of acquisitions and divestitures.
Intel’s current chief executive officer plans to retire in May 2013 and the Board of Directors is working to choose a successor. The
succession and transition process may have a direct and/or indirect effect on the business and operations of the company. In
connection with the appointment of the new CEO, the company will seek to retain our executive management team (some of whom are
being considered for the CEO position), and keep employees focused on achieving the company’s strategic goals and objectives. Intel's
results could be affected by adverse effects associated with product defects and errata (deviations from published specifications), and
by litigation or regulatory matters involving intellectual property, stockholder, consumer, antitrust, disclosure and other issues, such as
the litigation and regulatory matters described in Intel's SEC reports. An unfavorable ruling could include monetary damages or an
injunction prohibiting Intel from manufacturing or selling one or more products, precluding particular business practices, impacting
Intel’s ability to design its products, or requiring other remedies such as compulsory licensing of intellectual property. A detailed
discussion of these and other factors that could affect Intel’s results is included in Intel’s SEC filings, including the company’s most
recent Form 10-Q, report on Form 10-K and earnings release.
Rev. 1/17/13