Big Data Processing meets Non-volatile Memory:
Opportunities and Challenges
Talk at DataWorks Summit San Jose 2018
by
Shashank Gugnani, Dhabaleswar K. (DK) Panda, Xiaoyi Lu
The Ohio State University
E-mail: gugnani.2@osu.edu, panda@cse.ohio-state.edu, luxi@cse.ohio-state.edu
Big Data Management and Processing on Modern Clusters
• Substantial impact on designing data management and processing systems in multiple tiers
– Front-end data access and serving (online): Memcached + DB (e.g., MySQL), HBase
– Back-end data analytics (offline): HDFS, MapReduce, Spark
Big Data Processing with Apache Big Data Analytics Stacks
• Major components:
– MapReduce (Batch)
– Spark (Iterative and Interactive)
– HBase (Query)
– HDFS (Storage)
– RPC (Inter-process communication)
• The underlying Hadoop Distributed File System (HDFS) is used by MapReduce, Spark, HBase, and many others
• The model scales, but the large amount of communication and I/O can be further optimized
[Figure: Apache Big Data Analytics Stacks: user applications over MapReduce, Spark, and HBase, with Hadoop Common (RPC) and HDFS underneath]
Drivers of Modern HPC Cluster and Data Center Architecture
• Multi-core/many-core technologies
• Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
– Single Root I/O Virtualization (SR-IOV)
• Solid State Drives (SSDs), NVM, parallel filesystems, object storage clusters
• Accelerators (NVIDIA GPGPUs and FPGAs)
[Figure: high-performance interconnects such as InfiniBand with SR-IOV (<1 usec latency, 200 Gbps bandwidth); multi-/many-core processors; accelerators/coprocessors (high compute density, high performance/watt, >1 TFlop DP on a chip); SSD, NVMe-SSD, NVRAM; example systems: SDSC Comet, TACC Stampede, cloud]
The High-Performance Big Data (HiBD) Project
• RDMA for Apache Spark
• RDMA for Apache Hadoop 2.x
– Plugins available for Cloudera (CDH) and Hortonworks (HDP) distributions
• RDMA for Apache HBase
• RDMA for Memcached
• Available for InfiniBand, RoCE, and Ethernet
• Support for Singularity and Docker
• User base: 285 organizations from 34 countries
• More than 26,650 downloads from the project site
• http://hibd.cse.ohio-state.edu
• Significant performance improvement with ‘RDMA+DRAM’. What about RDMA + NVM?
HiBD Release Timeline and Downloads
[Chart: cumulative number of downloads (0 to ~30,000) over the release timeline, from RDMA-Hadoop 1.x 0.9.0 and RDMA-Memcached 0.9.1 & OHB-0.7.1 through RDMA-Hadoop 2.x 1.3.0, RDMA-Spark 0.9.5, and RDMA-Memcached 0.9.6 & OHB-0.9.3]
Non-Volatile Memory (NVM) and NVMe-SSD
• Non-Volatile Memory (NVM) provides byte-addressability with persistence
• The huge explosion of data in diverse fields requires fast analysis and storage
• NVMs provide the opportunity to build high-throughput storage systems for data-intensive applications
• Storage technology is moving rapidly towards NVM
[Images: 3D XPoint from Intel & Micron; Samsung NVMe SSD; performance of PMC Flashtec NVRAM [*]]
[*] http://www.enterprisetech.com/2014/08/06/flashtec-nvram-15-million-iops-sub-microsecond-latency/
NVRAM Emulation based on DRAM
• Popular methods employed by recent works emulate an NVRAM performance model over DRAM
• Two approaches:
– Emulate a byte-addressable NVRAM device over DRAM: the application issues loads/stores through a persistent memory library (e.g., pmem_memcpy_persist over DAX), with clflush plus an injected delay on top of DRAM (a minimal sketch of this approach follows below)
– Emulate a block-based NVM device over DRAM: the application goes through open/read/write/close and the virtual file system to a PCMDisk block device (RAM-disk plus delay), accessed via mmap/memcpy/msync
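The following is a minimal sketch of the byte-addressable emulation approach, assuming an x86 machine where a plain DRAM region stands in for NVRAM: data is copied, each cache line is flushed with clflush, and an assumed extra write latency is charged per line. The delay value and function names are illustrative and not taken from the actual emulator used in this work.

```c
/* Minimal sketch of byte-addressable NVRAM emulation over DRAM
 * (clflush + injected delay), assuming an x86 CPU and a DRAM region
 * standing in for persistent memory. The 300 ns write penalty is an
 * illustrative assumption, not a measured value. */
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <immintrin.h>

#define CACHE_LINE 64
#define EMULATED_WRITE_DELAY_NS 300   /* assumed NVRAM write penalty */

static void busy_wait_ns(uint64_t ns)
{
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((uint64_t)(now.tv_sec - start.tv_sec) * 1000000000ULL +
             (uint64_t)(now.tv_nsec - start.tv_nsec) < ns);
}

/* Copy data into the emulated NVRAM region, flush each cache line,
 * then charge the emulated write latency once per line. */
void nvram_memcpy_persist(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);
    for (uintptr_t p = (uintptr_t)dst & ~(uintptr_t)(CACHE_LINE - 1);
         p < (uintptr_t)dst + len; p += CACHE_LINE) {
        _mm_clflush((const void *)p);
        busy_wait_ns(EMULATED_WRITE_DELAY_NS);
    }
    _mm_sfence();   /* order the flushes before returning */
}
```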
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
Design Scope (NVRAM)
• Use as a block device or as memory?
– Block device: software overhead of I/O system calls
– Memory: usage requires application re-design
• Slower than DRAM
– Write performance ~10x worse
– Read performance ~1.5x worse
• Cost
– Costs more than DRAM
• How can existing systems be modified to take advantage of the non-volatility of NVRAM?
Design Scope (NVM for RDMA)
• D-to-D over RDMA: communication buffers for both client and server are allocated in DRAM (the common case)
• D-to-N over RDMA: communication buffers for the client are allocated in DRAM; the server uses NVM
• N-to-D over RDMA: communication buffers for the client are allocated in NVM; the server uses DRAM
• N-to-N over RDMA: communication buffers for both client and server are allocated in NVM
[Diagrams: D-to-N, N-to-D, and N-to-N buffer placements for HDFS-RDMA (RDMADFSClient/RDMADFSServer), showing client and server CPUs, PCIe-attached NICs, and DRAM/NVM buffers]
A sketch of placing an RDMA-registered buffer in NVM is shown below.
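To make the buffer-placement choices concrete, here is a sketch (not the NRCIO code) of allocating an RDMA communication buffer in NVM: the buffer is a file mapped from a DAX-mounted NVM device via libpmem and then registered with the RNIC through the standard verbs API. The file path and access flags are illustrative assumptions.

```c
/* Sketch: allocating an RDMA communication buffer in NVM instead of
 * DRAM (the "N-to-D" / "N-to-N" cases above). Assumes libpmem and
 * libibverbs are available and that a protection domain (pd) has
 * already been created; the file path is hypothetical. */
#include <stddef.h>
#include <libpmem.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_nvm_buffer(struct ibv_pd *pd, size_t len)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted NVM device to serve as the buffer. */
    void *buf = pmem_map_file("/mnt/pmem/rdma_buf", len,
                              PMEM_FILE_CREATE, 0600,
                              &mapped_len, &is_pmem);
    if (buf == NULL)
        return NULL;

    /* Register the NVM region with the RNIC exactly as a DRAM buffer
     * would be registered; RDMA reads/writes then move data directly
     * between the network and persistent memory. */
    return ibv_reg_mr(pd, buf, mapped_len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}
```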
NVRAM-aware Communication in NRCIO
[Diagrams: NRCIO send/recv over NVRAM; NRCIO RDMA_Read over NVRAM]
NRCIO Send/Recv Emulation over PCM
• Comparison of communication latency using NRCIO send/receive semantics over an InfiniBand QDR network and PCM memory
• High communication latencies due to slower writes to non-volatile persistent memory
– NVRAM-to-Remote-NVRAM (NVRAM-NVRAM): ~10x overhead vs. DRAM-DRAM
– DRAM-to-Remote-NVRAM (DRAM-NVRAM): ~8x overhead vs. DRAM-DRAM
– NVRAM-to-Remote-DRAM (NVRAM-DRAM): ~4x overhead vs. DRAM-DRAM
[Chart: IB Send/Recv emulation over PCM, latency (us) vs. message size (1 byte to 2 MB), for DRAM-DRAM, NVRAM-DRAM, DRAM-NVRAM, NVRAM-NVRAM, and DRAM-SSD with msync per line / per 4K page / per 8K page]
NRCIO RDMA-Read Emulation over PCM
• Communication latency with NRCIO RDMA-Read over InfiniBand QDR + PCM memory
• Communication overheads for large messages due to slower writes into NVRAM from remote memory, similar to send/receive
• RDMA-Read outperforms send/receive for large messages, as observed for DRAM-DRAM
[Chart: IB RDMA_READ emulation over PCM, latency (us) vs. message size (1 byte to 2 MB), for DRAM-DRAM, NVRAM-DRAM, DRAM-NVRAM, NVRAM-NVRAM, and DRAM-SSD with msync per line / per 4K page / per 8K page]
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
Opportunities of Using NVRAM+RDMA in HDFS
• Files are divided into fixed-size blocks
– Blocks are divided into packets
• NameNode: stores the file system namespace
• DataNode: stores data blocks in local storage devices
• Uses block replication for fault tolerance
– Replication enhances data locality and read throughput
• Communication and I/O intensive
• Java sockets based communication
• Data needs to be persistent, typically on SSD/HDD
[Figure: HDFS architecture with client, NameNode, and DataNodes]
Design Overview of NVM and RDMA-aware HDFS (NVFS)
• Design features
– RDMA over NVM
– HDFS I/O with NVM: block access and memory access
– Hybrid design: NVM with SSD as hybrid storage for HDFS I/O
– Co-design with Spark and HBase (cost-effectiveness, use-case)
[Figure: NVM and RDMA-aware HDFS (NVFS) DataNode: RDMA sender/receiver/replicator paths from the DFSClient into NVFS-BlkIO (writer/reader) and NVFS-MemIO over NVM, with SSDs as hybrid storage; co-designed with Hadoop MapReduce, Spark, and HBase applications and benchmarks]
N. S. Islam, M. W. Rahman, X. Lu, and D. K. Panda, High Performance Design for HDFS with Byte-Addressability of NVM and RDMA, 24th International Conference on Supercomputing (ICS), June 2016.
Proposed Design (NVFS-BlkIO)
• RDMA communication buffers are allocated in NVRAM via the D-to-N approach
– Buffers are used in a round-robin manner
• Data received in the NVRAM buffers is encapsulated in Java I/O streams
– Written to NVM through HDFS file I/O APIs
– Files are mmap-ed during HDFS I/O (a minimal sketch of this write path follows below)
[Figure: RDMADFSClients write/read blocks Blk1..BlkN through NVM communication buffers at the RDMADFSServer (DataNode); block access and memory access paths]
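The sketch below illustrates the mmap-based write path suggested by NVFS-BlkIO, under the assumption that block files live on a DAX/NVM mount: the DataNode maps the block file, copies packet data received in the NVRAM communication buffers into it, and flushes the mapping. The helper name, paths, and error handling are hypothetical.

```c
/* Sketch of an mmap-based block write, as in the NVFS-BlkIO idea:
 * map the block file (e.g., on a DAX/NVM mount), copy a received
 * packet into it, and persist the mapping. Illustrative only. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

int write_block_mmap(const char *block_path, const void *pkt, size_t len,
                     off_t offset, size_t block_size)
{
    int fd = open(block_path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, block_size) != 0) {
        close(fd);
        return -1;
    }

    char *blk = mmap(NULL, block_size, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (blk == MAP_FAILED) {
        close(fd);
        return -1;
    }

    memcpy(blk + offset, pkt, len);    /* place packet data in the block */
    msync(blk, block_size, MS_SYNC);   /* flush the mapping to persistence */

    munmap(blk, block_size);
    close(fd);
    return 0;
}
```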
Proposed Design (NVFS-MemIO): Flat Architecture
• Shared buffers for communication and storage
• Buffers allocated in NVM and managed in a linked list
– Data packets are added to the tail (first free buffer)
• Does not maintain HDFS block structures
– Fast write, but suboptimal read for a large number of blocks
• Needs all the buffers to be registered with the communication library
(A sketch of this flat bookkeeping follows below.)
[Figure: Flat Architecture: RDMADFSClients write/read blocks through a single list of free and used NVM buffers holding (block, packet) entries at the RDMADFSServer (DataNode)]
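A minimal sketch of the flat bookkeeping described above, with names that are illustrative rather than taken from NVFS: a single linked list of NVM buffers serves as both communication and storage buffers, so writes append in O(1) but reading a block requires scanning the whole list.

```c
/* Sketch of the Flat Architecture bookkeeping: one shared linked list
 * of NVM buffers serving as both communication and storage buffers.
 * Writes append at the tail in O(1); reading a block requires scanning
 * the whole list, which is why reads degrade with many blocks. */
#include <stdint.h>
#include <stddef.h>

struct nvm_buffer {
    uint64_t block_id;          /* which HDFS block this packet belongs to */
    uint32_t packet_id;
    void *data;                 /* packet payload placed in NVM */
    size_t len;
    struct nvm_buffer *next;
};

static struct nvm_buffer *head, *tail;

/* Fast write path: append the received packet at the tail. */
void flat_append(struct nvm_buffer *buf)
{
    buf->next = NULL;
    if (tail)
        tail->next = buf;
    else
        head = buf;
    tail = buf;
}

/* Read path: O(n) scan to find the packets of one block. */
struct nvm_buffer *flat_find_first(uint64_t block_id)
{
    for (struct nvm_buffer *b = head; b != NULL; b = b->next)
        if (b->block_id == block_id)
            return b;
    return NULL;
}
```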
Proposed Design (NVFS-MemIO): Block Architecture
• Re-usable communication buffers
• Separate storage buffers in NVM
• Maintains HDFS block structures
• Overlapping between communication and storage
– Efficient write
• Constant-time (amortized) retrieval of blocks
– Faster read than the Flat Architecture
• Modularity
(A sketch of the per-block bookkeeping follows below.)
[Figure: Block Architecture: RDMADFSClients write/read through NVM communication buffers; a hash table maps Blk1..BlkN to per-block linked lists of packet storage buffers at the RDMADFSServer (DataNode)]
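The corresponding sketch for the Block Architecture, again with hypothetical names rather than the NVFS source: a hash table keyed by HDFS block ID points to a per-block linked list of packet buffers in NVM, which is what gives amortized constant-time block retrieval.

```c
/* Sketch of the Block Architecture bookkeeping described above:
 * a hash table keyed by block ID, each entry holding a linked list
 * of packet-sized NVM buffers. Names and sizes are illustrative. */
#include <stdint.h>
#include <stdlib.h>

#define NUM_BUCKETS 4096

struct packet_buf {
    void *nvm_addr;            /* packet data placed in NVM */
    size_t len;
    struct packet_buf *next;   /* next packet of the same block */
};

struct block_entry {
    uint64_t block_id;
    struct packet_buf *head, *tail;   /* linked list of packets */
    struct block_entry *next;         /* hash-bucket chaining */
};

static struct block_entry *buckets[NUM_BUCKETS];

/* Amortized O(1) lookup of a block, creating it on first packet. */
struct block_entry *get_block(uint64_t block_id)
{
    struct block_entry *e = buckets[block_id % NUM_BUCKETS];
    for (; e != NULL; e = e->next)
        if (e->block_id == block_id)
            return e;

    e = calloc(1, sizeof(*e));
    e->block_id = block_id;
    e->next = buckets[block_id % NUM_BUCKETS];
    buckets[block_id % NUM_BUCKETS] = e;
    return e;
}

/* Append a received packet to the block's list (fast write path). */
void append_packet(struct block_entry *e, struct packet_buf *p)
{
    p->next = NULL;
    if (e->tail)
        e->tail->next = p;
    else
        e->head = p;
    e->tail = p;
}
```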
Evaluation with Hadoop MapReduce
• TestDFSIO on SDSC Comet (32 nodes)
– Write: NVFS-MemIO gains by 4x over HDFS
– Read: NVFS-MemIO gains by 1.2x over HDFS
• TestDFSIO on OSU Nowlab (4 nodes)
– Write: NVFS-MemIO gains by 4x over HDFS
– Read: NVFS-MemIO gains by 2x over HDFS
[Charts: TestDFSIO average throughput (MBps) for write and read with HDFS (56 Gbps), NVFS-BlkIO (56 Gbps), and NVFS-MemIO (56 Gbps) on SDSC Comet (32 nodes) and OSU Nowlab (4 nodes)]
Evaluation with HBase
• YCSB 100% insert on SDSC Comet (32 nodes)
– NVFS-BlkIO gains by 21% by storing only WALs to NVM
• YCSB 50% read, 50% update on SDSC Comet (32 nodes)
– NVFS-BlkIO gains by 20% by storing only WALs to NVM
[Charts: throughput (ops/s) vs. cluster size : number of records (8:800K, 16:1600K, 32:3200K) for HDFS (56 Gbps) and NVFS (56 Gbps), for the 100% insert and 50% read / 50% update workloads]
Opportunities to Use NVRAM+RDMA in MapReduce
• Map and Reduce tasks carry out the total job execution
– Map tasks read from HDFS, operate on the data, and write the intermediate data to local disk (persistent)
– Reduce tasks fetch this data via shuffle from the NodeManagers, operate on it, and write the output to HDFS (persistent)
• Communication and I/O intensive: the shuffle phase uses HTTP over Java sockets, and I/O operations typically take place on SSD/HDD
[Figure: MapReduce data flow, highlighting disk operations and bulk data transfer]
Opportunities to Use NVRAM in MapReduce-RDMA Design
• All operations are in-memory
• Opportunities exist to improve the performance with NVRAM
[Figure: MapReduce-RDMA data flow: map tasks read input files, map, spill, and merge; reduce tasks shuffle over RDMA, perform in-memory merge, reduce, and write output files]
NVRAM-Assisted Map Spilling in MapReduce-RDMA
• Minimizes the disk operations in the spill phase (a sketch of spilling to NVRAM appears below)
[Figure: MapReduce-RDMA data flow with the map-side spill redirected to NVRAM; shuffle over RDMA and in-memory merge on the reduce side]
M. W. Rahman, N. S. Islam, X. Lu, and D. K. Panda, Can Non-Volatile Memory Benefit MapReduce Applications on HPC Clusters?, PDSW-DISCS, held with SC 2016.
M. W. Rahman, N. S. Islam, X. Lu, and D. K. Panda, NVMD: Non-Volatile Memory Assisted Design for Accelerating MapReduce and DAG Execution Frameworks on HPC Systems, IEEE BigData 2017.
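A minimal sketch of the NVRAM-assisted spill idea, assuming libpmem and a DAX-mounted NVM device at a hypothetical path: the sorted map output is copied into a persistent-memory file instead of being written to a local disk file. This is an illustration of the technique, not the code used in RMR-NVM/NVMD.

```c
/* Sketch: spill a map-output buffer to a file on a DAX/NVM mount via
 * libpmem rather than write()-ing it to local disk. Paths and names
 * are assumptions for illustration. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

int spill_to_nvram(const void *spill_buf, size_t len, int map_id, int spill_id)
{
    char path[256];
    size_t mapped_len;
    int is_pmem;

    snprintf(path, sizeof(path), "/mnt/pmem/spill_m%d_s%d", map_id, spill_id);

    void *dst = pmem_map_file(path, len, PMEM_FILE_CREATE, 0600,
                              &mapped_len, &is_pmem);
    if (dst == NULL)
        return -1;

    if (is_pmem) {
        /* Copy and flush directly through CPU stores (no page cache). */
        pmem_memcpy_persist(dst, spill_buf, len);
    } else {
        /* Fallback when the mount is not real persistent memory. */
        memcpy(dst, spill_buf, len);
        pmem_msync(dst, len);
    }

    pmem_unmap(dst, mapped_len);
    return 0;
}
```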
Comparison with Sort and TeraSort
• RMR-NVM achieves a 2.37x benefit for the Map phase compared to RMR and MR-IPoIB; overall benefit of 55% compared to MR-IPoIB and 28% compared to RMR
• RMR-NVM achieves a 2.48x benefit for the Map phase compared to RMR and MR-IPoIB; overall benefit of 51% compared to MR-IPoIB and 31% compared to RMR
[Charts: Sort and TeraSort results for MR-IPoIB, RMR, and RMR-NVM]
Evaluation of Intel HiBench Workloads
• We evaluate different HiBench workloads with Huge data sets on 8 nodes
• Performance benefits for shuffle-intensive workloads compared to MR-IPoIB:
– Sort: 42% (25 GB)
– TeraSort: 39% (32 GB)
– PageRank: 21% (5 million pages)
• Other workloads:
– WordCount: 18% (25 GB)
– KMeans: 11% (100 million samples)
Evaluation of PUMA Workloads
• We evaluate different PUMA workloads on 8 nodes with a 30 GB data size
• Performance benefits for shuffle-intensive workloads compared to MR-IPoIB:
– AdjList: 39%
– SelfJoin: 58%
– RankedInvIndex: 39%
• Other workloads:
– SeqCount: 32%
– InvIndex: 18%
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
Overview of NVMe Standard
• NVMe is the standardized interface for PCIe SSDs
• Built on ‘RDMA’ principles
– Submission and completion I/O queues
– Similar semantics to RDMA send/recv queues
– Asynchronous command processing (a schematic of the queue-pair model follows below)
• Up to 64K I/O queues, with up to 64K commands per queue
• Efficient small random I/O operations
• MSI/MSI-X and interrupt aggregation
[Figure: NVMe command processing (source: NVMExpress.org)]
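To make the queue-pair model concrete, the schematic below (simplified types, not a real NVMe driver or register layout) shows the submit-then-poll pattern: commands go into a submission queue, a doorbell write hands them to the device, and completions are consumed from the paired completion queue using the phase bit, mirroring RDMA completion-queue semantics.

```c
/* Schematic (not a real driver) of the NVMe submission/completion
 * queue pattern: the host places commands in a submission queue,
 * rings a doorbell, and the device posts results to a paired
 * completion queue that the host polls. All types are simplified. */
#include <stdint.h>
#include <stdbool.h>

#define QD 64   /* queue depth (real NVMe allows up to 64K entries) */

struct sq_entry { uint16_t cid; uint8_t opcode; uint64_t lba; uint32_t nlb; };
struct cq_entry { uint16_t cid; uint16_t status; bool phase; };

struct io_queue_pair {
    struct sq_entry sq[QD];
    struct cq_entry cq[QD];
    uint32_t sq_tail, cq_head;
    bool cq_phase;
};

/* Submit a command: fill the next SQ slot and "ring the doorbell"
 * (in a real driver this is an MMIO write to the SQ tail register). */
void submit(struct io_queue_pair *qp, struct sq_entry cmd)
{
    qp->sq[qp->sq_tail] = cmd;
    qp->sq_tail = (qp->sq_tail + 1) % QD;
    /* write_doorbell(qp->sq_tail);  -- device now owns the command */
}

/* Poll completions: consume CQ entries whose phase bit matches,
 * the same asynchronous model RDMA completion queues use. */
int poll_completions(struct io_queue_pair *qp, int max)
{
    int n = 0;
    while (n < max && qp->cq[qp->cq_head].phase == qp->cq_phase) {
        /* handle_completion(&qp->cq[qp->cq_head]); */
        qp->cq_head = (qp->cq_head + 1) % QD;
        if (qp->cq_head == 0)
            qp->cq_phase = !qp->cq_phase;
        n++;
    }
    return n;
}
```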
Overview of NVMe-over-Fabric
• Remote access to flash with NVMe over the network
• The RDMA fabric is of most importance
– Low latency makes remote access feasible
– 1-to-1 mapping of NVMe I/O queues to RDMA send/recv queues (a sketch of this mapping follows below)
• Low latency overhead compared to local I/O
[Figure: NVMf architecture: NVMe I/O submission and completion queues bridged over an RDMA fabric (SQ/RQ) to the remote NVMe device]
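The sketch below illustrates the 1-to-1 queue mapping in spirit: an NVMe command is packed into a simplified, non-standard capsule and posted on the RDMA queue pair bound to that NVMe I/O queue with a standard verbs send. The capsule layout is an illustrative assumption, not the NVMf wire format.

```c
/* Schematic of the 1:1 mapping between an NVMe I/O queue and an RDMA
 * queue pair in NVMe-over-Fabrics: the host encodes the NVMe command
 * as a capsule and posts it with an RDMA send; completions arrive as
 * capsules on the paired receive queue. The capsule struct here is a
 * simplified illustration. */
#include <infiniband/verbs.h>
#include <stdint.h>

struct nvmf_capsule {            /* simplified command capsule */
    uint16_t command_id;
    uint8_t  opcode;             /* e.g., read or write */
    uint64_t slba;               /* starting LBA */
    uint32_t nlb;                /* number of logical blocks */
};

/* Post one command capsule on the RDMA QP bound to this NVMe queue.
 * 'capsule_mr' is the registered memory region holding the capsule. */
int nvmf_submit(struct ibv_qp *qp, struct ibv_mr *capsule_mr,
                struct nvmf_capsule *cap)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)cap,
        .length = sizeof(*cap),
        .lkey   = capsule_mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = cap->command_id,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_SEND,
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr = NULL;

    return ibv_post_send(qp, &wr, &bad_wr);
}
```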
Overview of SPDK
• High-performance, user-space storage library over NVMe
• Significant performance improvement over the POSIX path through the kernel NVMe driver (a minimal I/O sketch follows below)
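Below is a minimal sketch of SPDK's polled, asynchronous I/O path, assuming the SPDK environment has been initialized and a namespace (ns) and I/O queue pair (qpair) have already been obtained via spdk_nvme_probe() and spdk_nvme_ctrlr_alloc_io_qpair(); exact signatures can differ slightly across SPDK releases.

```c
/* Minimal sketch of SPDK's polled, asynchronous I/O path. Assumes a
 * controller, namespace (ns), and I/O queue pair (qpair) are already
 * set up; this is an illustration, not a complete SPDK application. */
#include <spdk/nvme.h>
#include <spdk/env.h>
#include <stdbool.h>
#include <stdint.h>

static bool io_done;

static void write_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    io_done = true;   /* completion delivered by the poll loop below */
}

int write_one_block(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                    uint64_t lba)
{
    uint32_t sector = spdk_nvme_ns_get_sector_size(ns);

    /* DMA-able buffer from SPDK's hugepage-backed allocator. */
    void *buf = spdk_dma_zmalloc(sector, sector, NULL);
    if (buf == NULL)
        return -1;

    /* Queue the write; the call returns immediately (no syscall). */
    if (spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, 1,
                               write_complete, NULL, 0) != 0) {
        spdk_dma_free(buf);
        return -1;
    }

    /* User-level polling replaces interrupts and kernel context switches. */
    io_done = false;
    while (!io_done)
        spdk_nvme_qpair_process_completions(qpair, 0);

    spdk_dma_free(buf);
    return 0;
}
```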
Design Challenges with NVMe-SSD
• QoS
– Hardware-assisted QoS
• Persistence
– Flushing buffered data
• Performance
– Consider flash-related design aspects
– Read/write performance skew
– Garbage collection
• Virtualization
– SR-IOV hardware support
– Namespace isolation
• New software systems
– Disaggregated storage with NVMf
– Persistent caches
• All of these call for co-design with the upper-layer frameworks
Evaluation with RocksDB (Latency)
• 20%, 33%, and 61% improvement for Insert, Write Sync, and Read Write
• Overwrite: compaction and flushing happen in the background
– Low potential for improvement
• Read: performance is much worse; additional tuning/optimization required
[Charts: latency (us) for Insert, Overwrite, Random Read and for Write Sync, Read Write with POSIX vs. SPDK]
Evaluation with RocksDB (Throughput)
• 25%, 50%, and 160% improvement for Insert, Write Sync, and Read Write
• Overwrite: compaction and flushing happen in the background
– Low potential for improvement
• Read: performance is much worse; additional tuning/optimization required
[Charts: throughput (ops/sec) for Insert, Overwrite, Random Read and for Write Sync, Read Write with POSIX vs. SPDK]
QoS-aware SPDK Design
• Synthetic application scenarios with different QoS requirements
– Comparison against SPDK with weighted round-robin (WRR) NVMe arbitration
• Near-desired job bandwidth ratios
• Stable and consistent bandwidth
[Charts: per-job bandwidth (MB/s) over time for high- and medium-priority jobs under SPDK-WRR vs. the OSU design (Scenario 1), and job bandwidth ratios vs. the desired ratio for Scenarios 2-5]
S. Gugnani, X. Lu, and D. K. Panda, Analyzing, Modeling, and Provisioning QoS for NVMe SSDs (under review).
Conclusion and Future Work
• Exploring NVM-aware RDMA-based Communication and I/O Schemes
for Big Data Analytics
• Proposed a new library, NRCIO (work-in-progress)
• Re-design HDFS storage architecture with NVRAM
• Re-design RDMA-MapReduce with NVRAM
• Design Big Data analytics stacks with NVMe and NVMf protocols
• Results are promising
• Further optimizations in NRCIO
• Co-design with more Big Data analytics frameworks
Thank You!
{panda, luxi}@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
http://www.cse.ohio-state.edu/~luxi
Network-Based Computing Laboratory
http://nowlab.cse.ohio-state.edu/
The High-Performance Big Data Project
http://hibd.cse.ohio-state.edu/
More Related Content

What's hot

Hadoop 3.0 - Revolution or evolution?
Hadoop 3.0 - Revolution or evolution?Hadoop 3.0 - Revolution or evolution?
Hadoop 3.0 - Revolution or evolution?Uwe Printz
 
Architecting a Fraud Detection Application with Hadoop
Architecting a Fraud Detection Application with HadoopArchitecting a Fraud Detection Application with Hadoop
Architecting a Fraud Detection Application with HadoopDataWorks Summit
 
Securing data in hybrid environments using Apache Ranger
Securing data in hybrid environments using Apache RangerSecuring data in hybrid environments using Apache Ranger
Securing data in hybrid environments using Apache RangerDataWorks Summit
 
Flexible and Real-Time Stream Processing with Apache Flink
Flexible and Real-Time Stream Processing with Apache FlinkFlexible and Real-Time Stream Processing with Apache Flink
Flexible and Real-Time Stream Processing with Apache FlinkDataWorks Summit
 
Difference between hadoop 2 vs hadoop 3
Difference between hadoop 2 vs hadoop 3Difference between hadoop 2 vs hadoop 3
Difference between hadoop 2 vs hadoop 3Manish Chopra
 
High Availability for HBase Tables - Past, Present, and Future
High Availability for HBase Tables - Past, Present, and FutureHigh Availability for HBase Tables - Past, Present, and Future
High Availability for HBase Tables - Past, Present, and FutureDataWorks Summit
 
Performance tuning your Hadoop/Spark clusters to use cloud storage
Performance tuning your Hadoop/Spark clusters to use cloud storagePerformance tuning your Hadoop/Spark clusters to use cloud storage
Performance tuning your Hadoop/Spark clusters to use cloud storageDataWorks Summit
 
Running secured Spark job in Kubernetes compute cluster and integrating with ...
Running secured Spark job in Kubernetes compute cluster and integrating with ...Running secured Spark job in Kubernetes compute cluster and integrating with ...
Running secured Spark job in Kubernetes compute cluster and integrating with ...DataWorks Summit
 
Hadoop Backup and Disaster Recovery
Hadoop Backup and Disaster RecoveryHadoop Backup and Disaster Recovery
Hadoop Backup and Disaster RecoveryCloudera, Inc.
 
Hadoop Operations - Best Practices from the Field
Hadoop Operations - Best Practices from the FieldHadoop Operations - Best Practices from the Field
Hadoop Operations - Best Practices from the FieldDataWorks Summit
 
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...DataWorks Summit/Hadoop Summit
 
Hadoop 2 - Beyond MapReduce
Hadoop 2 - Beyond MapReduceHadoop 2 - Beyond MapReduce
Hadoop 2 - Beyond MapReduceUwe Printz
 
Hortonworks.Cluster Config Guide
Hortonworks.Cluster Config GuideHortonworks.Cluster Config Guide
Hortonworks.Cluster Config GuideDouglas Bernardini
 
Hadoop 3 (2017 hadoop taiwan workshop)
Hadoop 3 (2017 hadoop taiwan workshop)Hadoop 3 (2017 hadoop taiwan workshop)
Hadoop 3 (2017 hadoop taiwan workshop)Wei-Chiu Chuang
 
Improving HDFS Availability with IPC Quality of Service
Improving HDFS Availability with IPC Quality of ServiceImproving HDFS Availability with IPC Quality of Service
Improving HDFS Availability with IPC Quality of ServiceDataWorks Summit
 
The columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowDataWorks Summit
 
Tales from the Cloudera Field
Tales from the Cloudera FieldTales from the Cloudera Field
Tales from the Cloudera FieldHBaseCon
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on DockerDataWorks Summit
 
Advanced Security In Hadoop Cluster
Advanced Security In Hadoop ClusterAdvanced Security In Hadoop Cluster
Advanced Security In Hadoop ClusterEdureka!
 

What's hot (20)

Hadoop 3.0 - Revolution or evolution?
Hadoop 3.0 - Revolution or evolution?Hadoop 3.0 - Revolution or evolution?
Hadoop 3.0 - Revolution or evolution?
 
Architecting a Fraud Detection Application with Hadoop
Architecting a Fraud Detection Application with HadoopArchitecting a Fraud Detection Application with Hadoop
Architecting a Fraud Detection Application with Hadoop
 
Securing data in hybrid environments using Apache Ranger
Securing data in hybrid environments using Apache RangerSecuring data in hybrid environments using Apache Ranger
Securing data in hybrid environments using Apache Ranger
 
Flexible and Real-Time Stream Processing with Apache Flink
Flexible and Real-Time Stream Processing with Apache FlinkFlexible and Real-Time Stream Processing with Apache Flink
Flexible and Real-Time Stream Processing with Apache Flink
 
Difference between hadoop 2 vs hadoop 3
Difference between hadoop 2 vs hadoop 3Difference between hadoop 2 vs hadoop 3
Difference between hadoop 2 vs hadoop 3
 
High Availability for HBase Tables - Past, Present, and Future
High Availability for HBase Tables - Past, Present, and FutureHigh Availability for HBase Tables - Past, Present, and Future
High Availability for HBase Tables - Past, Present, and Future
 
Performance tuning your Hadoop/Spark clusters to use cloud storage
Performance tuning your Hadoop/Spark clusters to use cloud storagePerformance tuning your Hadoop/Spark clusters to use cloud storage
Performance tuning your Hadoop/Spark clusters to use cloud storage
 
Running secured Spark job in Kubernetes compute cluster and integrating with ...
Running secured Spark job in Kubernetes compute cluster and integrating with ...Running secured Spark job in Kubernetes compute cluster and integrating with ...
Running secured Spark job in Kubernetes compute cluster and integrating with ...
 
Hadoop Backup and Disaster Recovery
Hadoop Backup and Disaster RecoveryHadoop Backup and Disaster Recovery
Hadoop Backup and Disaster Recovery
 
Hadoop Operations - Best Practices from the Field
Hadoop Operations - Best Practices from the FieldHadoop Operations - Best Practices from the Field
Hadoop Operations - Best Practices from the Field
 
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...
How to overcome mysterious problems caused by large and multi-tenancy Hadoop ...
 
Hadoop 2 - Beyond MapReduce
Hadoop 2 - Beyond MapReduceHadoop 2 - Beyond MapReduce
Hadoop 2 - Beyond MapReduce
 
Hortonworks.Cluster Config Guide
Hortonworks.Cluster Config GuideHortonworks.Cluster Config Guide
Hortonworks.Cluster Config Guide
 
Hadoop 3 (2017 hadoop taiwan workshop)
Hadoop 3 (2017 hadoop taiwan workshop)Hadoop 3 (2017 hadoop taiwan workshop)
Hadoop 3 (2017 hadoop taiwan workshop)
 
Improving HDFS Availability with IPC Quality of Service
Improving HDFS Availability with IPC Quality of ServiceImproving HDFS Availability with IPC Quality of Service
Improving HDFS Availability with IPC Quality of Service
 
To The Cloud and Back: A Look At Hybrid Analytics
To The Cloud and Back: A Look At Hybrid AnalyticsTo The Cloud and Back: A Look At Hybrid Analytics
To The Cloud and Back: A Look At Hybrid Analytics
 
The columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache ArrowThe columnar roadmap: Apache Parquet and Apache Arrow
The columnar roadmap: Apache Parquet and Apache Arrow
 
Tales from the Cloudera Field
Tales from the Cloudera FieldTales from the Cloudera Field
Tales from the Cloudera Field
 
Lessons learned from running Spark on Docker
Lessons learned from running Spark on DockerLessons learned from running Spark on Docker
Lessons learned from running Spark on Docker
 
Advanced Security In Hadoop Cluster
Advanced Security In Hadoop ClusterAdvanced Security In Hadoop Cluster
Advanced Security In Hadoop Cluster
 

Similar to Big data processing meets non-volatile memory: opportunities and challenges

Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...DataWorks Summit
 
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSDHigh-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSDinside-BigData.com
 
Accelerating Hadoop, Spark, and Memcached with HPC Technologies
Accelerating Hadoop, Spark, and Memcached with HPC TechnologiesAccelerating Hadoop, Spark, and Memcached with HPC Technologies
Accelerating Hadoop, Spark, and Memcached with HPC Technologiesinside-BigData.com
 
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...inside-BigData.com
 
Spark Summit EU talk by Ahsan Javed Awan
Spark Summit EU talk by Ahsan Javed AwanSpark Summit EU talk by Ahsan Javed Awan
Spark Summit EU talk by Ahsan Javed AwanSpark Summit
 
NoSQL Options Compared
NoSQL Options ComparedNoSQL Options Compared
NoSQL Options ComparedSergey Bushik
 
Building a High Performance Analytics Platform
Building a High Performance Analytics PlatformBuilding a High Performance Analytics Platform
Building a High Performance Analytics PlatformSantanu Dey
 
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...Debraj GuhaThakurta
 
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...Debraj GuhaThakurta
 
Accelerate Big Data Processing with High-Performance Computing Technologies
Accelerate Big Data Processing with High-Performance Computing TechnologiesAccelerate Big Data Processing with High-Performance Computing Technologies
Accelerate Big Data Processing with High-Performance Computing TechnologiesIntel® Software
 
Hadoop on Azure, Blue elephants
Hadoop on Azure,  Blue elephantsHadoop on Azure,  Blue elephants
Hadoop on Azure, Blue elephantsOvidiu Dimulescu
 
TriHUG - Beyond Batch
TriHUG - Beyond BatchTriHUG - Beyond Batch
TriHUG - Beyond Batchboorad
 
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...DataWorks Summit/Hadoop Summit
 
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-PremiseTackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-PremiseDatabricks
 
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...StampedeCon
 
DUG'20: 13 - HPE’s DAOS Solution Plans
DUG'20: 13 - HPE’s DAOS Solution PlansDUG'20: 13 - HPE’s DAOS Solution Plans
DUG'20: 13 - HPE’s DAOS Solution PlansAndrey Kudryavtsev
 
Critical Attributes for a High-Performance, Low-Latency Database
Critical Attributes for a High-Performance, Low-Latency DatabaseCritical Attributes for a High-Performance, Low-Latency Database
Critical Attributes for a High-Performance, Low-Latency DatabaseScyllaDB
 
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...Amazon Web Services
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of HadoopDatabricks
 
1. beyond mission critical virtualizing big data and hadoop
1. beyond mission critical   virtualizing big data and hadoop1. beyond mission critical   virtualizing big data and hadoop
1. beyond mission critical virtualizing big data and hadoopChiou-Nan Chen
 

Similar to Big data processing meets non-volatile memory: opportunities and challenges (20)

Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory...
 
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSDHigh-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD
High-Performance Big Data Analytics with RDMA over NVM and NVMe-SSD
 
Accelerating Hadoop, Spark, and Memcached with HPC Technologies
Accelerating Hadoop, Spark, and Memcached with HPC TechnologiesAccelerating Hadoop, Spark, and Memcached with HPC Technologies
Accelerating Hadoop, Spark, and Memcached with HPC Technologies
 
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
Big Data Meets HPC - Exploiting HPC Technologies for Accelerating Big Data Pr...
 
Spark Summit EU talk by Ahsan Javed Awan
Spark Summit EU talk by Ahsan Javed AwanSpark Summit EU talk by Ahsan Javed Awan
Spark Summit EU talk by Ahsan Javed Awan
 
NoSQL Options Compared
NoSQL Options ComparedNoSQL Options Compared
NoSQL Options Compared
 
Building a High Performance Analytics Platform
Building a High Performance Analytics PlatformBuilding a High Performance Analytics Platform
Building a High Performance Analytics Platform
 
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...
TWDI Accelerate Seattle, Oct 16, 2017: Distributed and In-Database Analytics ...
 
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...
TDWI Accelerate, Seattle, Oct 16, 2017: Distributed and In-Database Analytics...
 
Accelerate Big Data Processing with High-Performance Computing Technologies
Accelerate Big Data Processing with High-Performance Computing TechnologiesAccelerate Big Data Processing with High-Performance Computing Technologies
Accelerate Big Data Processing with High-Performance Computing Technologies
 
Hadoop on Azure, Blue elephants
Hadoop on Azure,  Blue elephantsHadoop on Azure,  Blue elephants
Hadoop on Azure, Blue elephants
 
TriHUG - Beyond Batch
TriHUG - Beyond BatchTriHUG - Beyond Batch
TriHUG - Beyond Batch
 
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...
Accelerating Apache Hadoop through High-Performance Networking and I/O Techno...
 
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-PremiseTackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise
Tackling Network Bottlenecks with Hardware Accelerations: Cloud vs. On-Premise
 
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...
Analytics, Big Data and Nonvolatile Memory Architectures – Why you Should Car...
 
DUG'20: 13 - HPE’s DAOS Solution Plans
DUG'20: 13 - HPE’s DAOS Solution PlansDUG'20: 13 - HPE’s DAOS Solution Plans
DUG'20: 13 - HPE’s DAOS Solution Plans
 
Critical Attributes for a High-Performance, Low-Latency Database
Critical Attributes for a High-Performance, Low-Latency DatabaseCritical Attributes for a High-Performance, Low-Latency Database
Critical Attributes for a High-Performance, Low-Latency Database
 
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
AWS Partner Webcast - Hadoop in the Cloud: Unlocking the Potential of Big Dat...
 
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
 
1. beyond mission critical virtualizing big data and hadoop
1. beyond mission critical   virtualizing big data and hadoop1. beyond mission critical   virtualizing big data and hadoop
1. beyond mission critical virtualizing big data and hadoop
 

More from DataWorks Summit

Floating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache RatisFloating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache RatisDataWorks Summit
 
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFiTracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFiDataWorks Summit
 
HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...DataWorks Summit
 
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...DataWorks Summit
 
Managing the Dewey Decimal System
Managing the Dewey Decimal SystemManaging the Dewey Decimal System
Managing the Dewey Decimal SystemDataWorks Summit
 
Practical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist ExamplePractical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist ExampleDataWorks Summit
 
HBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberHBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberDataWorks Summit
 
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixScaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixDataWorks Summit
 
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiBuilding the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiDataWorks Summit
 
Supporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsSupporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsDataWorks Summit
 
Security Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureSecurity Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureDataWorks Summit
 
Presto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EnginePresto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EngineDataWorks Summit
 
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...DataWorks Summit
 
Extending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudExtending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudDataWorks Summit
 
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiEvent-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiDataWorks Summit
 
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerSecuring Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerDataWorks Summit
 
Computer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouComputer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouDataWorks Summit
 
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkBig Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkDataWorks Summit
 
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...DataWorks Summit
 

More from DataWorks Summit (20)

Data Science Crash Course
Data Science Crash CourseData Science Crash Course
Data Science Crash Course
 
Floating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache RatisFloating on a RAFT: HBase Durability with Apache Ratis
Floating on a RAFT: HBase Durability with Apache Ratis
 
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFiTracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi
 
HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...HBase Tales From the Trenches - Short stories about most common HBase operati...
HBase Tales From the Trenches - Short stories about most common HBase operati...
 
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac...
 
Managing the Dewey Decimal System
Managing the Dewey Decimal SystemManaging the Dewey Decimal System
Managing the Dewey Decimal System
 
Practical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist ExamplePractical NoSQL: Accumulo's dirlist Example
Practical NoSQL: Accumulo's dirlist Example
 
HBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at UberHBase Global Indexing to support large-scale data ingestion at Uber
HBase Global Indexing to support large-scale data ingestion at Uber
 
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and PhoenixScaling Cloud-Scale Translytics Workloads with Omid and Phoenix
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix
 
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFiBuilding the High Speed Cybersecurity Data Pipeline Using Apache NiFi
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi
 
Supporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability ImprovementsSupporting Apache HBase : Troubleshooting and Supportability Improvements
Supporting Apache HBase : Troubleshooting and Supportability Improvements
 
Security Framework for Multitenant Architecture
Security Framework for Multitenant ArchitectureSecurity Framework for Multitenant Architecture
Security Framework for Multitenant Architecture
 
Presto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything EnginePresto: Optimizing Performance of SQL-on-Anything Engine
Presto: Optimizing Performance of SQL-on-Anything Engine
 
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl...
 
Extending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google CloudExtending Twitter's Data Platform to Google Cloud
Extending Twitter's Data Platform to Google Cloud
 
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFiEvent-Driven Messaging and Actions using Apache Flink and Apache NiFi
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi
 
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache RangerSecuring Data in Hybrid on-premise and Cloud Environments using Apache Ranger
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger
 
Computer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near YouComputer Vision: Coming to a Store Near You
Computer Vision: Coming to a Store Near You
 
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache SparkBig Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark
 
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...
Transforming and Scaling Large Scale Data Analytics: Moving to a Cloud-based ...
 

Recently uploaded

SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxLoriGlavin3
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Commit University
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 

Recently uploaded (20)

SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Unraveling Multimodality with Large Language Models.pdf
DataWorks Summit San Jose‘18 8Network Based Computing Laboratory
NVRAM Emulation based on DRAM
• Popular methods employed by recent works to emulate the NVRAM performance model over DRAM
• Two ways:
– Emulate byte-addressable NVRAM over DRAM: Application -> Persistent Memory Library -> DRAM, using load/store and pmem_memcpy_persist (DAX), with clflush + delay (see the sketch below)
– Emulate a block-based NVM device over DRAM: Application -> Virtual File System -> Block Device -> PCMDisk (RAM-disk + delay), using open/read/write/close and mmap/memcpy/msync
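To make the first emulation path above concrete, here is a minimal C sketch of byte-addressable NVRAM emulated over DRAM with the "clflush + delay" method. The anonymous DRAM mapping, the 300 ns per-cache-line write delay, and the emu_/alloc_ function names are illustrative assumptions, not the emulator used in this work.

/* Minimal sketch: emulate byte-addressable NVRAM over DRAM by flushing
 * each written cache line and injecting an artificial write delay. */
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>
#include <emmintrin.h>   /* _mm_clflush, _mm_sfence */

#define CACHELINE 64
#define EMULATED_WRITE_DELAY_NS 300   /* assumption: tune to the NVM model */

static void busy_wait_ns(long ns) {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000000000L +
             (now.tv_nsec - start.tv_nsec) < ns);
}

/* "Persistent" copy into the emulated NVRAM region: copy, flush each
 * touched cache line, fence, then add the modeled extra write latency. */
void emu_pmem_memcpy_persist(void *dst, const void *src, size_t len) {
    memcpy(dst, src, len);
    for (uintptr_t p = (uintptr_t)dst & ~(uintptr_t)(CACHELINE - 1);
         p < (uintptr_t)dst + len; p += CACHELINE)
        _mm_clflush((void *)p);
    _mm_sfence();
    busy_wait_ns(EMULATED_WRITE_DELAY_NS *
                 (long)((len + CACHELINE - 1) / CACHELINE));
}

void *alloc_emulated_nvram(size_t bytes) {
    /* Plain anonymous DRAM mapping standing in for an NVDIMM region. */
    return mmap(NULL, bytes, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}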
DataWorks Summit San Jose‘18 9Network Based Computing Laboratory
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
DataWorks Summit San Jose‘18 10Network Based Computing Laboratory
Design Scope (NVRAM)
• Use as block device or memory?
– Software overhead with I/O system calls
– Usage as memory requires application re-design
• Slower than DRAM
– Write performance ~10x worse
– Read performance ~1.5x worse
• Cost
– Higher cost than DRAM
• How can existing systems be modified to take advantage of the non-volatility of NVRAM?
DataWorks Summit San Jose‘18 11Network Based Computing Laboratory
Design Scope (NVM for RDMA)
• D-to-D over RDMA: Communication buffers for client and server are allocated in DRAM (common case)
• D-to-N over RDMA: Communication buffers for the client are allocated in DRAM; the server uses NVM
• N-to-D over RDMA: Communication buffers for the client are allocated in NVM; the server uses DRAM
• N-to-N over RDMA: Communication buffers for client and server are allocated in NVM (see the registration sketch below)
(Figures: HDFS-RDMA RDMADFSClient/RDMADFSServer pairs with DRAM or NVM communication buffers, connected through the CPU, PCIe, and NIC on each side)
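As a concrete illustration of placing communication buffers in NVM (the N-to-D and N-to-N cases), the C sketch below mmap()s a file from a DAX-mounted NVM device and registers it with the verbs library so the NIC can DMA directly into it. The path /mnt/pmem0/rdma_buf and the function name are assumptions for illustration; error handling and buffer bookkeeping are omitted.

/* Sketch of the "N-to-*" cases: put the RDMA communication buffer in NVM
 * by mmap()-ing a DAX file and registering it with libibverbs. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <infiniband/verbs.h>

struct ibv_mr *register_nvm_buffer(struct ibv_pd *pd, size_t len) {
    /* /mnt/pmem0/rdma_buf is an assumed DAX-mounted NVM file. */
    int fd = open("/mnt/pmem0/rdma_buf", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, len);
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);

    /* Register the NVM-backed buffer so the NIC can DMA into/out of it. */
    return ibv_reg_mr(pd, buf, len,
                      IBV_ACCESS_LOCAL_WRITE |
                      IBV_ACCESS_REMOTE_READ |
                      IBV_ACCESS_REMOTE_WRITE);
}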
DataWorks Summit San Jose‘18 12Network Based Computing Laboratory
NVRAM-aware Communication in NRCIO
• NRCIO Send/Recv over NVRAM (protocol figure)
• NRCIO RDMA_Read over NVRAM (protocol figure; see the sketch below)
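The sketch below shows, in standard verbs terms, what an NRCIO-style RDMA_Read into an NVM-resident buffer looks like: the registered NVM region from the previous sketch is used as the local landing buffer. The queue pair, remote address, and rkey are assumed to have been exchanged out of band; this is generic verbs usage, not the NRCIO implementation itself.

/* Pull remote data directly into the local NVM-registered buffer. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int nvm_rdma_read(struct ibv_qp *qp, struct ibv_mr *nvm_mr,
                  uint64_t remote_addr, uint32_t rkey, uint32_t len) {
    struct ibv_sge sge = {
        .addr   = (uintptr_t)nvm_mr->addr,  /* landing buffer lives in NVM */
        .length = len,
        .lkey   = nvm_mr->lkey,
    };
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_READ;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;
    return ibv_post_send(qp, &wr, &bad_wr);  /* completion polled via the CQ */
}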
DataWorks Summit San Jose‘18 13Network Based Computing Laboratory
NRCIO Send/Recv Emulation over PCM
• Comparison of communication latency using NRCIO send/receive semantics over an InfiniBand QDR network and PCM memory
• High communication latencies due to slower writes to non-volatile persistent memory
• NVRAM-to-Remote-NVRAM (NVRAM-NVRAM) => ~10x overhead vs. DRAM-DRAM
• DRAM-to-Remote-NVRAM (DRAM-NVRAM) => ~8x overhead vs. DRAM-DRAM
• NVRAM-to-Remote-DRAM (NVRAM-DRAM) => ~4x overhead vs. DRAM-DRAM
(Chart: IB Send/Recv emulation over PCM; latency in us vs. message size from 1 byte to 2 MB for DRAM-DRAM, NVRAM-DRAM, DRAM-NVRAM, NVRAM-NVRAM, and DRAM-SSD with msync per line / 8K page / 4K page)
DataWorks Summit San Jose‘18 14Network Based Computing Laboratory
NRCIO RDMA-Read Emulation over PCM
• Communication latency with NRCIO RDMA-Read over InfiniBand QDR + PCM memory
• Communication overheads for large messages due to slower writes into NVRAM from remote memory; similar to Send/Receive
• RDMA-Read outperforms Send/Receive for large messages, as observed for DRAM-DRAM
(Chart: IB RDMA_Read emulation over PCM; latency in us vs. message size from 1 byte to 2 MB for the same DRAM/NVRAM/SSD combinations as the Send/Recv experiment)
DataWorks Summit San Jose‘18 15Network Based Computing Laboratory
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
DataWorks Summit San Jose‘18 16Network Based Computing Laboratory
Opportunities of Using NVRAM+RDMA in HDFS
• Files are divided into fixed-sized blocks
– Blocks are divided into packets
• NameNode: stores the file system namespace
• DataNode: stores data blocks on local storage devices
• Uses block replication for fault tolerance
– Replication enhances data locality and read throughput
• Communication and I/O intensive
– Java Sockets based communication
– Data needs to be persistent, typically on SSD/HDD
(Figure: Client communicating with the NameNode and DataNodes)
DataWorks Summit San Jose‘18 17Network Based Computing Laboratory
Design Overview of NVM and RDMA-aware HDFS (NVFS)
• Design features
– RDMA over NVM
– HDFS I/O with NVM: Block Access and Memory Access
– Hybrid design: NVM with SSD as hybrid storage for HDFS I/O
– Co-design with Spark and HBase: cost-effectiveness and use-cases
(Figure: Applications and benchmarks (Hadoop MapReduce, Spark, HBase) over the NVM- and RDMA-aware HDFS (NVFS) DataNode, with RDMA sender/receiver/replicator, NVFS-BlkIO writer/reader, NVFS-MemIO, and NVM plus SSDs)
• N. S. Islam, M. W. Rahman, X. Lu, and D. K. Panda, High Performance Design for HDFS with Byte-Addressability of NVM and RDMA, 24th International Conference on Supercomputing (ICS), June 2016
DataWorks Summit San Jose‘18 18Network Based Computing Laboratory
Proposed Design (NVFS-BlkIO)
• RDMA communication buffers allocated in NVRAM via the D-to-N approach
– Buffers are used in a round-robin manner
• Data received in NVRAM buffers is encapsulated in Java I/O streams
– Written to NVM through HDFS file I/O APIs
– Files are mmap-ed during HDFS I/O (see the sketch below)
(Figure: RDMADFSClients writing/reading blocks to/from NVM communication buffers on the RDMADFSServer (DataNode), which supports both Block Access and Memory Access)
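A minimal sketch of the BlkIO write path described above, assuming the block file lives on an NVM-backed (DAX) mount: data that arrived in an NVRAM communication buffer is copied into an mmap()-ed block file and flushed with msync. The function name and path handling are illustrative, not the NVFS code.

/* Persist one received block from an NVRAM communication buffer into an
 * mmap()-ed HDFS block file on the NVM mount. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int persist_block(const char *blk_path, const void *nvram_buf, size_t len) {
    int fd = open(blk_path, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, len) != 0)
        return -1;
    void *dst = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (dst == MAP_FAILED)
        return -1;
    memcpy(dst, nvram_buf, len);   /* copy out of the RDMA buffer */
    msync(dst, len, MS_SYNC);      /* flush to the NVM-backed file */
    munmap(dst, len);
    return 0;
}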
DataWorks Summit San Jose‘18 19Network Based Computing Laboratory
Proposed Design (NVFS-MemIO): Flat Architecture
• Shared buffers for communication and storage
– Buffers allocated in NVM and managed in a linked list
– Data packets added at the tail (first free buffer)
• Does not maintain HDFS block structures
– Fast write, but suboptimal read for a large number of blocks
• Needs all the buffers to be registered with the communication library
(Figure: RDMADFSClients writing blocks into a linked list of free/used NVM buffers holding per-packet data on the RDMADFSServer (DataNode))
DataWorks Summit San Jose‘18 20Network Based Computing Laboratory
Proposed Design (NVFS-MemIO): Block Architecture
• Re-usable communication buffers; separate storage buffers in NVM
• Maintains HDFS block structures
• Overlapping between communication and storage
– Efficient write
• Constant-time (amortized) retrieval of blocks
– Faster read than the Flat Architecture (see the data-structure sketch below)
• Modularity
(Figure: RDMADFSClients writing/reading blocks through NVM communication buffers; a hash table maps each block to a linked list of per-packet storage buffers)
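The Block Architecture above can be illustrated with the following C sketch of its bookkeeping: a hash table keyed by block id, where each entry holds a linked list of packet-sized NVM storage buffers, giving amortized constant-time retrieval of a block. Structure and function names, and the bucket count, are assumptions for illustration rather than the NVFS-MemIO implementation.

/* Block-id hash table, each entry chaining the packets of that block. */
#include <stdlib.h>

#define NUM_BUCKETS 1024

struct packet_buf {
    void *data;               /* points into an NVM storage buffer */
    size_t len;
    struct packet_buf *next;  /* next packet of the same block */
};

struct block_entry {
    long block_id;
    struct packet_buf *head, *tail;
    struct block_entry *next; /* hash-bucket chaining */
};

static struct block_entry *buckets[NUM_BUCKETS];

static struct block_entry *lookup_or_add(long block_id) {
    unsigned b = (unsigned long)block_id % NUM_BUCKETS;
    for (struct block_entry *e = buckets[b]; e; e = e->next)
        if (e->block_id == block_id)
            return e;
    struct block_entry *e = calloc(1, sizeof(*e));
    e->block_id = block_id;
    e->next = buckets[b];
    buckets[b] = e;
    return e;
}

/* Append an incoming packet to its block's list (write path). */
void block_append(long block_id, void *nvm_data, size_t len) {
    struct block_entry *e = lookup_or_add(block_id);
    struct packet_buf *p = calloc(1, sizeof(*p));
    p->data = nvm_data;
    p->len = len;
    if (e->tail) e->tail->next = p; else e->head = p;
    e->tail = p;
}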
DataWorks Summit San Jose‘18 21Network Based Computing Laboratory
Evaluation with Hadoop MapReduce (TestDFSIO)
• TestDFSIO on SDSC Comet (32 nodes)
– Write: NVFS-MemIO gains 4x over HDFS
– Read: NVFS-MemIO gains 1.2x over HDFS
• TestDFSIO on OSU Nowlab (4 nodes)
– Write: NVFS-MemIO gains 4x over HDFS
– Read: NVFS-MemIO gains 2x over HDFS
(Charts: average write/read throughput in MBps for HDFS, NVFS-BlkIO, and NVFS-MemIO at 56 Gbps on SDSC Comet and OSU Nowlab)
DataWorks Summit San Jose‘18 22Network Based Computing Laboratory
Evaluation with HBase (YCSB)
• YCSB 100% Insert on SDSC Comet (32 nodes)
– NVFS-BlkIO gains 21% by storing only WALs in NVM
• YCSB 50% Read, 50% Update on SDSC Comet (32 nodes)
– NVFS-BlkIO gains 20% by storing only WALs in NVM
(Charts: throughput in ops/s vs. cluster size and number of records (8:800K, 16:1600K, 32:3200K) for HDFS and NVFS at 56 Gbps, for the 100% insert and 50% read / 50% update workloads)
DataWorks Summit San Jose‘18 23Network Based Computing Laboratory
Opportunities to Use NVRAM+RDMA in MapReduce
• Map and Reduce tasks carry out the total job execution
– Map tasks read from HDFS, operate on the data, and write the intermediate data to local disk (persistent)
– Reduce tasks fetch these data via shuffle from the NodeManagers, operate on them, and write the output to HDFS (persistent)
• Communication and I/O intensive: the shuffle phase uses HTTP over Java Sockets, and I/O operations typically take place on SSD/HDD
(Figure: disk operations and bulk data transfer in the MapReduce execution flow)
DataWorks Summit San Jose‘18 24Network Based Computing Laboratory
Opportunities to Use NVRAM in MapReduce-RDMA Design
• In the RDMA-based design, all operations are in-memory: Map tasks read the input files, map, spill, and merge; Reduce tasks shuffle over RDMA, merge in memory, reduce, and write the output files
• Opportunities exist to improve the performance further with NVRAM
(Figure: MapReduce-RDMA data flow from input files through intermediate data to output files)
DataWorks Summit San Jose‘18 25Network Based Computing Laboratory
NVRAM-Assisted Map Spilling in MapReduce-RDMA
• Map output is spilled to NVRAM instead of local disk, minimizing the disk operations in the Spill phase (see the sketch below)
(Figure: MapReduce-RDMA data flow with the Spill and Merge steps redirected to NVRAM)
• M. W. Rahman, N. S. Islam, X. Lu, and D. K. Panda, Can Non-Volatile Memory Benefit MapReduce Applications on HPC Clusters?, PDSW-DISCS, held with SC 2016
• M. W. Rahman, N. S. Islam, X. Lu, and D. K. Panda, NVMD: Non-Volatile Memory Assisted Design for Accelerating MapReduce and DAG Execution Frameworks on HPC Systems, IEEE BigData 2017
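A hedged sketch of the idea on this slide: spill a map task's serialized intermediate data into an NVRAM region through PMDK's libpmem (pmem_map_file / pmem_memcpy_persist) instead of the local disk. The file path, region size, and spill_* function names are assumptions for illustration; the RMR-NVM design itself is described in the papers cited above.

/* Spill intermediate map output to a persistent-memory region. */
#include <libpmem.h>
#include <stddef.h>

#define SPILL_FILE "/mnt/pmem0/map_spill_0"   /* assumed DAX mount */
#define SPILL_SIZE (64UL << 20)               /* 64 MB spill region */

static char *spill_base;
static size_t spill_off;

int spill_open(void) {
    size_t mapped_len;
    int is_pmem;
    spill_base = pmem_map_file(SPILL_FILE, SPILL_SIZE, PMEM_FILE_CREATE,
                               0600, &mapped_len, &is_pmem);
    return spill_base ? 0 : -1;
}

/* Persist one serialized key/value chunk of intermediate data. */
void spill_append(const void *kv_buf, size_t len) {
    pmem_memcpy_persist(spill_base + spill_off, kv_buf, len);
    spill_off += len;
}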
DataWorks Summit San Jose‘18 26Network Based Computing Laboratory
Comparison with Sort and TeraSort
• RMR-NVM achieves a 2.37x benefit in the Map phase compared to RMR and MR-IPoIB; overall benefit of 55% compared to MR-IPoIB and 28% compared to RMR
• RMR-NVM achieves a 2.48x benefit in the Map phase compared to RMR and MR-IPoIB; overall benefit of 51% compared to MR-IPoIB and 31% compared to RMR
(Charts: Sort and TeraSort execution times for MR-IPoIB, RMR, and RMR-NVM)
DataWorks Summit San Jose‘18 27Network Based Computing Laboratory
Evaluation of Intel HiBench Workloads
• We evaluate different HiBench workloads with Huge data sets on 8 nodes
• Performance benefits for shuffle-intensive workloads compared to MR-IPoIB:
– Sort: 42% (25 GB)
– TeraSort: 39% (32 GB)
– PageRank: 21% (5 million pages)
• Other workloads:
– WordCount: 18% (25 GB)
– KMeans: 11% (100 million samples)
DataWorks Summit San Jose‘18 28Network Based Computing Laboratory
Evaluation of PUMA Workloads
• We evaluate different PUMA workloads on 8 nodes with a 30 GB data size
• Performance benefits for shuffle-intensive workloads compared to MR-IPoIB:
– AdjList: 39%
– SelfJoin: 58%
– RankedInvIndex: 39%
• Other workloads:
– SeqCount: 32%
– InvIndex: 18%
DataWorks Summit San Jose‘18 29Network Based Computing Laboratory
Presentation Outline
• NRCIO: NVM-aware RDMA-based Communication and I/O Schemes
• NRCIO for Big Data Analytics
• NVMe-SSD based Big Data Analytics
• Conclusion and Q&A
DataWorks Summit San Jose‘18 30Network Based Computing Laboratory
Overview of the NVMe Standard
• NVMe is the standardized interface for PCIe SSDs
• Built on ‘RDMA’ principles
– Submission and completion I/O queues
– Similar semantics to RDMA send/recv queues
– Asynchronous command processing
• Up to 64K I/O queues, with up to 64K commands per queue
• Efficient small random I/O operations
• MSI/MSI-X and interrupt aggregation
(Figure: NVMe command processing; source: NVMExpress.org)
DataWorks Summit San Jose‘18 31Network Based Computing Laboratory
Overview of NVMe-over-Fabrics (NVMf)
• Remote access to flash with NVMe over the network
• The RDMA fabric is of most importance
– Low latency makes remote access feasible
– 1-to-1 mapping of NVMe I/O queues to RDMA send/recv queues
– Low latency overhead compared to local I/O
(Figure: NVMf architecture mapping I/O submission/completion queues onto RDMA send/recv queues across the fabric to the NVMe device)
DataWorks Summit San Jose‘18 32Network Based Computing Laboratory
Overview of SPDK
• High-performance user-space storage library over NVMe (see the sketch below)
• Significant performance improvement over the POSIX path through the kernel NVMe driver
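To show the user-space I/O path SPDK provides, here is a condensed C sketch in the style of SPDK's hello_world example: probe the local NVMe controller, create an I/O queue pair, submit one asynchronous 4 KB write, and poll for its completion instead of taking an interrupt. Namespace selection, error handling, and cleanup are omitted, and exact APIs vary somewhat across SPDK releases.

/* Minimal SPDK-style polled write to namespace 1 of the first controller. */
#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;
static struct spdk_nvme_ns *g_ns;
static volatile int g_done;

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts) {
    return true;  /* attach to the first controller found */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts) {
    g_ctrlr = ctrlr;
    g_ns = spdk_nvme_ctrlr_get_ns(ctrlr, 1);   /* namespace 1 */
}

static void write_done(void *arg, const struct spdk_nvme_cpl *cpl) {
    g_done = 1;
}

int main(void) {
    struct spdk_env_opts opts;
    spdk_env_opts_init(&opts);
    opts.name = "nvme_sketch";
    spdk_env_init(&opts);

    spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);

    struct spdk_nvme_qpair *qp =
        spdk_nvme_ctrlr_alloc_io_qpair(g_ctrlr, NULL, 0);

    /* DMA-able payload buffer. */
    void *buf = spdk_zmalloc(4096, 4096, NULL,
                             SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);

    /* Queue one write of one logical block at LBA 0 and poll until done. */
    spdk_nvme_ns_cmd_write(g_ns, qp, buf, 0, 1, write_done, NULL, 0);
    while (!g_done)
        spdk_nvme_qpair_process_completions(qp, 0);
    return 0;
}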
DataWorks Summit San Jose‘18 33Network Based Computing Laboratory
Design Challenges with NVMe-SSD (Co-design)
• QoS
– Hardware-assisted QoS
• Persistence
– Flushing buffered data
• Performance
– Consider flash-related design aspects
– Read/write performance skew
– Garbage collection
• Virtualization
– SR-IOV hardware support
– Namespace isolation
• New software systems
– Disaggregated storage with NVMf
– Persistent caches
DataWorks Summit San Jose‘18 34Network Based Computing Laboratory
Evaluation with RocksDB (Latency)
• 20%, 33%, and 61% latency improvement with SPDK for Insert, Write Sync, and Read Write
• Overwrite: compaction and flushing in the background
– Low potential for improvement
• Read: performance much worse; additional tuning/optimization required
(Charts: latency in us for Insert, Overwrite, Random Read and for Write Sync, Read Write, comparing POSIX and SPDK)
DataWorks Summit San Jose‘18 35Network Based Computing Laboratory
Evaluation with RocksDB (Throughput)
• 25%, 50%, and 160% throughput improvement with SPDK for Insert, Write Sync, and Read Write
• Overwrite: compaction and flushing in the background
– Low potential for improvement
• Read: performance much worse; additional tuning/optimization required
(Charts: throughput in ops/sec for Insert, Overwrite, Random Read and for Write Sync, Read Write, comparing POSIX and SPDK)
DataWorks Summit San Jose‘18 36Network Based Computing Laboratory
QoS-aware SPDK Design
• Synthetic application scenarios with different QoS requirements
– Comparison against SPDK with Weighted Round Robin (WRR) NVMe arbitration (see the sketch below)
• Near-desired job bandwidth ratios
• Stable and consistent bandwidth
(Charts: per-job bandwidth over time for high- and medium-priority jobs under WRR and the OSU design, and job bandwidth ratios across scenarios for SPDK-WRR, OSU-Design, and the desired ratio)
• S. Gugnani, X. Lu, and D. K. Panda, Analyzing, Modeling, and Provisioning QoS for NVMe SSDs (Under Review)
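The SPDK-WRR baseline referenced above can be sketched as follows: request the NVMe Weighted Round Robin arbitration mechanism at controller attach time and give each job an I/O queue pair with a different priority class. This follows the pattern of SPDK's arbitration example; option names may differ across SPDK versions, and the OSU QoS design itself is not shown.

/* Request WRR arbitration when attaching to the controller (probe callback). */
#include "spdk/nvme.h"

static bool probe_cb_wrr(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts) {
    opts->arb_mechanism = SPDK_NVME_CC_AMS_WRR;
    return true;
}

/* Allocate an I/O queue pair with a given WRR priority class for a job. */
static struct spdk_nvme_qpair *
alloc_prioritized_qpair(struct spdk_nvme_ctrlr *ctrlr,
                        enum spdk_nvme_qprio prio) {
    struct spdk_nvme_io_qpair_opts qpopts;
    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &qpopts, sizeof(qpopts));
    qpopts.qprio = prio;  /* e.g., SPDK_NVME_QPRIO_HIGH or SPDK_NVME_QPRIO_MEDIUM */
    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &qpopts, sizeof(qpopts));
}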
DataWorks Summit San Jose‘18 37Network Based Computing Laboratory
Conclusion and Future Work
• Explored NVM-aware RDMA-based communication and I/O schemes for Big Data analytics
– Proposed a new library, NRCIO (work in progress)
– Re-designed the HDFS storage architecture with NVRAM
– Re-designed RDMA-MapReduce with NVRAM
– Designed Big Data analytics stacks with NVMe and NVMf protocols
• Results are promising
• Future work: further optimizations in NRCIO and co-design with more Big Data analytics frameworks
DataWorks Summit San Jose‘18 38Network Based Computing Laboratory
Thank You!
{panda, luxi}@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
http://www.cse.ohio-state.edu/~luxi
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The High-Performance Big Data Project: http://hibd.cse.ohio-state.edu/