Cognizant 20-20 Insights | November 2013

Harnessing Hadoop: Understanding the Big
Data Processing Options for Optimizing
Analytical Workloads
As Hadoop’s ecosystem evolves, new open source tools and capabilities are emerging that can help IT leaders build more capable and business-relevant big data analytics environments.
Executive Summary
Asserting that data is vital to business is an understatement. Organizations have generated more and more data for years, but struggle to use it effectively. Clearly, data has more important uses than ensuring compliance with regulatory requirements. In addition, data is being generated with greater velocity, due to the advent of new pervasive devices (e.g., smartphones, tablets, etc.), social Web sites (e.g., Facebook, Twitter, LinkedIn, etc.) and other sources such as GPS, Google Maps and heat/pressure sensors.

Moreover, a plethora of bits and bytes is being created by data warehouses, repositories, backups and archives. IT leaders need to work with the lines of business to take advantage of the wealth of insight that can be mined from these repositories. This search for data patterns is similar to the searches the oil and gas industry conducts for precious resources, and similar business value can be uncovered if organizations can improve how “dirty” data is processed to refine key business decisions.


According to IDC1 and CenturyLink,2 our planet now holds 2.75 zettabytes (ZB) of data, and this number is likely to reach 8ZB by 2015. This situation has given rise to the new buzzword known as “big data.”
Today, many organizations are applying big data
analytics to challenges such as fraud detection,
risk/threat analysis, sentiment analysis, equities
analysis, weather forecasting, product recommendations, classifications, etc. All of these
activities require a platform that can process this
data and deliver useful (often predictive) insights.
This white paper discusses how to leverage a big
data analytics platform, dissecting its capabilities,
limitations and vendor choices.

Big Data’s Ground Zero
Apache Hadoop3 is a Java-based open source platform that makes it possible to process large data sets across thousands of distributed nodes. Hadoop’s development resulted from the publication of two Google-authored white papers (on the Google File System and Google MapReduce). There are numerous vendor-specific distributions available based on Apache Hadoop, from companies such as Cloudera (CDH3/4), Hortonworks (1.x/2.x) and MapR (M3/M5). In addition, appliance-based solutions are offered by many vendors, such as IBM, EMC and Oracle.

Figure 1 illustrates Apache Hadoop’s two base components: MapReduce (the distributed programming framework) and HDFS (the Hadoop distributed file system).
Hadoop distributed file system (HDFS) is the base component of the Hadoop framework that manages data storage. It stores data in the form of data blocks (default size: 64MB) on the local hard disk. The input data size determines the block size to be used in the cluster; for a large file set, a block size of 128MB is a good choice. Keeping large block sizes means fewer blocks need to be stored, thereby minimizing the memory required on the master node (commonly known as the NameNode) for storing the metadata. Block size also impacts job execution time, and thereby cluster performance. A file can be stored in HDFS with a different block size during the upload process. For file sizes smaller than the block size, the Hadoop cluster may not perform optimally.
Example: A 512MB file requires eight blocks at the default 64MB block size, so the NameNode needs memory to track 25 items (1 filename + 8 blocks × 3 replicas). The same file with a 128MB block size needs only four blocks, or memory for just 13 items.
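To make this concrete, the block size can be overridden per upload through the Hadoop FileSystem API. The following is a minimal sketch, assuming Hadoop 2.x, where the client-side property is dfs.blocksize (older releases used dfs.block.size); the paths are hypothetical placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class UploadWithBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Request 128MB blocks for files written by this client
            // (Hadoop 2.x property name; older releases use "dfs.block.size").
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
            FileSystem fs = FileSystem.get(conf);
            // Copy a local file into HDFS; both paths are illustrative.
            fs.copyFromLocalFile(new Path("/tmp/large-input.csv"),
                                 new Path("/user/demo/large-input.csv"));
            fs.close();
        }
    }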
HDFS has built-in fault tolerance, thereby eliminating the need to set up a redundant array of inexpensive disks (RAID) on the data nodes. The data blocks stored in HDFS are automatically replicated across multiple data nodes based on the replication factor (default: 3), which determines the number of copies of a data block to be created and distributed across other data nodes. The larger the replication factor, the better the read performance of the cluster; on the other hand, it takes more time to replicate the data blocks and more storage to hold the replicated copies.

For example, storing a 1GB file requires around 4GB of storage space, due to the replication factor of 3 plus the additional 20% to 30% of space required during data processing.
Both block size and replication factor impact Hadoop cluster performance. The values for both parameters depend on the input data type and the application workload to run on the cluster. In a development environment, the replication factor can be set to a smaller number — say, 1 or 2 — but it is advisable to set a higher factor if the data on the cluster is crucial. Keeping the default replication factor of 3 is a good choice from a data redundancy and cluster performance perspective.
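As a sketch of the same idea in code, the replication factor of an existing file can be changed through the FileSystem API (the file path here is hypothetical); the equivalent shell command is hadoop fs -setrep.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Keep only two copies of this file, e.g., on a development cluster.
            fs.setReplication(new Path("/user/demo/large-input.csv"), (short) 2);
            fs.close();
        }
    }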
Figure 2 depicts the NameNode and DataNodes that constitute the HDFS service.

MapReduce is the second base component of the Hadoop framework. It manages data processing on the Hadoop cluster. A cluster runs MapReduce jobs written in Java, or in any other language (Python, Ruby, R, etc.) via Hadoop Streaming. A single job may consist of multiple tasks. The number of tasks that can run on a data node is governed by the amount of memory installed, so installing more memory on each data node improves cluster performance. Based on the application workload that will run on the cluster, a data node may require high compute power, high disk I/O or additional memory.
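For readers new to the programming model, below is a minimal word-count sketch written against the standard org.apache.hadoop.mapreduce API; the class names and input/output paths are illustrative, not drawn from this paper. The mapper emits a (word, 1) pair for every token, and the reducer sums the counts per word.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(Object key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                // Emit (token, 1) for every whitespace-separated token in the line.
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        ctx.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                // Sum all the counts emitted for this word.
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                ctx.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class); // safe here: summing is associative
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g., /user/demo/input
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // e.g., /user/demo/output
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }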

Figure 2: Hadoop Services Schematic — the JobTracker and TaskTrackers make up the MapReduce (processing) layer, while the NameNode, Secondary NameNode and DataNodes make up the HDFS (storage) layer.

The job execution process is handled by the JobTracker and TaskTracker services (see Figure 2). The type of job scheduler chosen for a cluster depends on the application workload. If the cluster will be running short jobs, the FIFO scheduler will serve the purpose; for a more balanced cluster, one can choose the Fair Scheduler. In MRv1.0, there is no high availability (HA) for the JobTracker service. If the node running the JobTracker service fails, all jobs submitted to the cluster are discontinued and must be resubmitted once the node is recovered or another node is made available. This issue has been fixed in the newer release — i.e., MRv2.0, also known as Yet Another Resource Negotiator (YARN).4
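As an illustration, switching an MRv1 cluster from the default FIFO scheduler to the Fair Scheduler is a one-property change in mapred-site.xml. The snippet below is a sketch; verify the exact property name against your distribution’s documentation.

    <!-- mapred-site.xml: replace the default FIFO scheduler with the Fair Scheduler (MRv1). -->
    <property>
        <name>mapred.jobtracker.taskScheduler</name>
        <value>org.apache.hadoop.mapred.FairScheduler</value>
    </property>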

Figure 2 also depicts the interaction among these five service components.

The Secondary NameNode plays the vital role of check-pointing. It is this node that off-loads the work of creating and regularly updating the metadata image. It is recommended to keep its machine configuration the same as the NameNode’s, so that in an emergency this node can be promoted to NameNode.

The Hadoop Ecosystem
Hadoop’s base components are surrounded by numerous ecosystem5 projects. Many of them are already top-level Apache (open source) projects; some, however, remain in the incubation stage. Figure 3 highlights a few of the ecosystem components. The most common ones include:

• Pig provides a high-level procedural language known as Pig Latin. Programs written in Pig Latin are automatically converted into map-reduce jobs. This is very suitable where developers are not familiar with languages such as Java but have strong scripting knowledge and expertise; they can also create their own customized functions. Pig is predominantly recommended for processing large amounts of semi-structured data (see the sketch after this list).

• Hive provides a SQL-like language known as HiveQL (Hive query language). Analysts can run SQL-like queries against the data stored in HDFS that are automatically converted into map-reduce jobs. Hive is recommended where there are large amounts of unstructured data and particular details must be fetched without writing map-reduce jobs (see the sketch after this list). Companies like Revolution Analytics6 can leverage Hadoop clusters running Hive via RHadoop, which provides connectors for HDFS, MapReduce and HBase.


• HBase is a type of NoSQL database and offers a scalable and distributed database service. However, it is not optimized for transactional applications or relational analytics. Be aware: HBase is not a replacement for a relational database. A cluster with HBase installed requires an additional 24GB to 32GB of memory on each data node, and it is recommended that data nodes running the HBase RegionServer run a smaller number of MapReduce tasks.
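To give a flavor of the two languages described above, here is the same hypothetical report — the ten most frequently requested URLs returning HTTP 404 in a tab-delimited web log — expressed first in Pig Latin and then in HiveQL. The table, field and path names are illustrative, not drawn from this paper.

    -- Pig Latin sketch (paths and fields are hypothetical).
    logs    = LOAD '/user/demo/weblogs' USING PigStorage('\t')
              AS (ip:chararray, ts:chararray, url:chararray, status:int);
    bad     = FILTER logs BY status == 404;
    grouped = GROUP bad BY url;
    counts  = FOREACH grouped GENERATE group AS url, COUNT(bad) AS hits;
    ranked  = ORDER counts BY hits DESC;
    top10   = LIMIT ranked 10;
    STORE top10 INTO '/user/demo/top404';

    -- HiveQL sketch (table and columns are hypothetical).
    CREATE EXTERNAL TABLE weblogs (ip STRING, ts STRING, url STRING, status INT)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/user/demo/weblogs';

    SELECT url, COUNT(*) AS hits
    FROM weblogs
    WHERE status = 404
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10;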

Figure 3: Scanning the Hadoop Project Ecosystem — MapReduce (distributed programming framework) and HDFS (Hadoop distributed file system) sit at the core, surrounded by ecosystem components such as Avro (serialization), Ambari (system management), ZooKeeper (coordination), Sqoop (CLI for data movement), HBase (NoSQL store), Hive (SQL), HCatalog (table/schema management) and Pig (data flow). The stack runs atop an OS layer and an infrastructure layer of physical machines, virtual machines, storage and network.

Hadoop clusters have mainly been used for R&D-related activities, but many enterprises are now running Hadoop with production workloads. The type of ecosystem components required depends upon the enterprise’s needs. Enterprises with large data warehouses normally go for production-ready tools from proven vendors such as IBM, Oracle, EMC, etc. Also, not every enterprise requires a Hadoop cluster. A cluster is mainly required by business units that are developing Hadoop-based solutions or running analytics workloads. Business units that are just entering into analytics may start small by building a 5-10 node cluster.

As shown in Figure 4, a Hadoop cluster can scale from just a few data nodes to thousands of data nodes as the need arises or data grows. Hadoop clusters show linear scalability, both in terms of performance and storage capacity, as the count of nodes increases. Apache Hadoop runs primarily on the Intel x86/x86_64-based architecture.
Clusters must be designed with a particular workload in mind (e.g., data-intensive loads, processor-intensive loads, mixed loads, real-time clusters, etc.). As there are many Hadoop distributions available, implementation selection depends upon feature availability, licensing cost, performance reports, hardware requirements, skills availability within the enterprise and the workload requirements.
Per a comScore report,7 MapR performs better than Cloudera and works out to be less costly. Hortonworks, on the other hand, has a strong relationship with Microsoft. Its distribution is also available for the Microsoft platform (HDP v1.3 for Windows), which makes it popular among the Microsoft community.
Hadoop Appliance
An alternate approach to building a Hadoop cluster is to use a Hadoop appliance. These are customizable, ready-to-use complete stacks from vendors such as IBM (IBM Netezza), Oracle (Big Data Appliance), EMC (Data Computing Appliance), HP (AppSystem), Teradata (Aster), etc. Using an appliance-based approach speeds go-to-market and results in an integrated solution from a single (and accountable) vendor like IBM, Teradata, HP or Oracle.
These appliances come in different sizes and configurations, and every vendor uses a different Hadoop distribution. For example, Oracle Big Data is based on a Cloudera distribution; EMC uses MapR; and HP AppSystem can be configured with Cloudera, MapR or Hortonworks. These appliances are designed to solve a particular type of problem or with a particular workload in mind.

Figure 4: Hadoop Cluster: An Anatomical View — client nodes (edge nodes), NameNodes (master nodes, with manual/automatic failover) and DataNodes (slave nodes) connected by a network switch, supported by Kerberos (security), a cluster manager (management and monitoring) and the JobTracker/ResourceManager (scheduler).

• IBM Netezza 1000 is designed for high-performance business intelligence and advanced analytics. It’s a data warehouse appliance that can run complex and mixed workloads. In addition, it supports multiple languages (C/C++, Java, Python, FORTRAN), frameworks (Hadoop) and tools (IBM SPSS, R and SAS).

• The Teradata Aster appliance comes with 50 pre-built MapReduce functions optimized for graph, text and behavior/marketing analysis. It also comes with the Aster Database and Apache Hadoop to process structured, unstructured and semi-structured data. A full Hadoop cabinet includes two masters and 16 data nodes, which can deliver up to 152TB of user space.

• The HP AppSystem appliance comes with preconfigured HP ProLiant DL380e Gen8 servers running Red Hat Enterprise Linux 6.1 and a Cloudera distribution. Enterprises can use this appliance with HP Vertica Community Edition to run real-time analysis of structured data.


HDFS and Hadoop
HDFS is not the only platform for performing big data tasks; vendors have emerged with solutions that offer a replacement for HDFS. For example, DataStax8 uses Cassandra; Amazon Web Services uses S3; IBM has the General Parallel File System (GPFS); EMC relies on Isilon; MapR uses Direct NFS; etc. (To learn more, read about alternate technologies.)9 The EMC appliance uses Isilon scale-out NAS technology along with HDFS, a combination that eliminates the need to export/import the data into HDFS (see Figure 5). IBM uses GPFS instead of HDFS in its solution after finding that GPFS performed better than HDFS (see Figure 6).
Many IT shops are scrambling to build Hadoop clusters in their enterprises. The question is: What are they going to do with these clusters? Not every enterprise needs its own cluster; it all depends on the nature of the business and whether the cluster serves a legitimate and compelling business use case. For example, enterprises engaged in financial transactions would definitely require an analytics platform for fraud detection, portfolio analysis, churn analysis, etc. Similarly, enterprises engaged in research work require large clusters for faster processing and large-scale data analysis.

Figure 5: EMC Subs Isilon for HDFS — the Hadoop software stack (R/RHIPE, Pig, Mahout, Hive and HBase over the JobTracker/NameNode and TaskTracker/DataNodes) connects over Ethernet to Isilon storage through HCFS.

Hadoop-as-a-Service
There are a couple of ways in which Hadoop clusters can be leveraged beyond building a new cluster or taking an appliance-based approach. A more cost-effective approach is to use the Amazon10 Elastic MapReduce (EMR) service. Enterprises can build a cluster on demand and pay only for the duration they use it. Amazon offers compute instances running open source Apache Hadoop and the MapR (M3/M5/M7) distributions. Other vendors that offer “Hadoop-as-a-Service” include:

• Joyent, which offers Hadoop services based on the Hortonworks distribution.

• Skytap, which provides Hadoop services based on the Cloudera (CDH4) distribution.

• Mortar, which delivers Hadoop services based on the MapR distribution.

All of these vendors charge on a pay-per-use
basis. New vendors will eventually emerge as
market demand increases and more enterprises
require additional analytics, business intelligence
(BI) and data warehousing options. As the competitive stakes rise, we recommend comparing
prices and capabilities to get the best price/performance for your big data environment.

Figure 6: Benchmarking HDFS vs. GPFS — execution times for the Postmark, Teragen, Grep, CacheTest and Terasort benchmarks, comparing HDFS against GPFS.
The Hadoop platform by itself can’t do anything; enterprises need applications that can leverage it. Developing an analytics application that offers business value requires data scientists and data analysts. These applications can perform many different types of jobs, from pure analytical and BI jobs through data processing. Hadoop can run all of these workload types, so it is becoming increasingly important to design an effective and efficient cluster using the latest technologies and frameworks.
Hadoop Competitors
The Hadoop framework provides a batch processing environment that works well with historical data in data warehouses and archives. But what if there is a need for real-time processing, as in the case of fraud detection, bank transactions, etc.? Enterprises have begun assessing alternative approaches and technologies to solve this issue.

Storm is an open source project that helps in setting up an environment for real-time (stream) processing. There are also other options to consider, such as massively parallel processing (MPP), the high performance computing cluster (HPCC) and Hadapt. However, all of these technologies require a strong understanding of the workload that will be run. If the workload is more processing intensive, it is generally recommended to go with HPCC. Moreover, HPCC uses a declarative programming language (ECL) that is extensible, homogeneous and complete, as well as an Open Data Model that is not constrained by key-value pairs; data processing can happen in parallel.

To learn more, read HPCC vs. Hadoop.11 Massively parallel processing databases have been around since even before MapReduce hit the market. They use SQL as the interface for interacting with the system, and SQL is a language commonly used by data scientists and analysts. MPP systems are mainly deployed in high-end data warehouse implementations.

Looking Ahead
Hadoop is a strong framework for running data-intensive jobs, but it is still evolving as the ecosystem projects around it mature and expand. Market demand is spurring new technological developments daily. With these innovations come various options to choose from, such as building a new cluster, leveraging an appliance or making use of Hadoop-as-a-Service offerings.

Moreover, the market is filling with new companies, which is creating more competitive technologies and options to consider. For starters, enterprises need to make the right decision in choosing their Hadoop distributions. Enterprises can start small by picking a distribution and building a 5-10 node cluster to test its features, functionality and performance. Most reputable vendors provide a free version of their distribution for evaluation purposes. We advise enterprises to test two or three vendor distributions against their business requirements before making any final decisions. IT decision-makers should also pay close attention to emerging competitors to position their organizations for impending technology enhancements.

Footnotes
1. IDC report: Worldwide Big Data Technology and Services 2012-2015 Forecast.
2. www.centurylink.com/business/artifacts/pdf/resources/big-data-defining-the-digital-deluge.pdf.
3. http://en.wikipedia.org/wiki/Hadoop.
4. http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html.
5. http://pig.apache.org/; http://hive.apache.org/; http://hbase.apache.org; “Comparing Pig Latin and SQL for Constructing Data Processing Pipelines,” developer.yahoo.com/blogs/hadoop/comparing-pig-latin-sql-constructing-data-processing-pipelines-444.html; “Process a Million Songs with Apache Pig,” blog.cloudera.com/blog/2012/08/process-a-million-songs-with-apache-pig/.
6. www.revolutionanalytics.com/products/revolution-enterprise.php.
7. comScore report: Switching to MapR for Reliable Insights into Global Internet Consumer Behavior.
8. www.datastax.com/wp-content/uploads/2012/09/WP-DataStax-HDFSvsCFS.pdf.
9. gigaom.com/2012/07/11/because-hadoop-isnt-perfect-8-ways-to-replace-hdfs/.
10. aws.amazon.com/elasticmapreduce/pricing/.
11. hpccsystems.com/Why-HPCC/HPCC-vs-Hadoop.

About the Authors
Harish Chauhan is an Associate Director and leads new services development around big data and emerging technologies for Cognizant’s General Technology Office. He has over 21 years of experience and holds a bachelor of engineering degree in computer science. His areas of expertise include virtualization, cloud computing and distributed computing (Hadoop/HPC). He has contributed technical articles, co-authored an IBM Redbook on the virtualization engine and filed two patents in the area of virtualization. He can be reached at Harish.Chauhan@cognizant.com.
Jerald Murphy is Cognizant’s Worldwide Practice Leader for Cognizant Business Consulting’s IT Infrastructure Services Practice. He also leads Cognizant’s Technology Office for Enterprise Computing,
Cloud Services and Emerging Technology. Jerry has spent over 20 years in the computer industry, in
various technical, marketing, sales, senior management and industry analyst positions. He is one of the
industry’s leading authorities on enterprise IT infrastructure design, security and management, and has
spent 20-plus years conducting engineering research and helping to develop infrastructure and security
strategies for many of the largest global companies and service providers. Jerry’s areas of expertise
include data center infrastructure and enterprise systems management. Additional focus areas include
managed services and IT outsourcing. Jerry has a B.S.E.E. from The United States Military Academy and a
master’s degree in electrical engineering from Princeton University. He can be reached at Jerald.Murphy@cognizant.com.

About Cognizant
Cognizant (NASDAQ: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services, dedicated to helping the world’s leading companies build stronger businesses. Headquartered in
Teaneck, New Jersey (U.S.), Cognizant combines a passion for client satisfaction, technology innovation, deep industry
and business process expertise, and a global, collaborative workforce that embodies the future of work. With over 50
delivery centers worldwide and approximately 164,300 employees as of June 30, 2013, Cognizant is a member of the
NASDAQ-100, the S&P 500, the Forbes Global 2000, and the Fortune 500 and is ranked among the top performing
and fastest growing companies in the world. Visit us online at www.cognizant.com or follow us on Twitter: Cognizant.

World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277
Email: inquiry@cognizant.com

European Headquarters
1 Kingdom Street
Paddington Central
London W2 6BD
Phone: +44 (0) 20 7297 7600
Fax: +44 (0) 20 7121 0102
Email: infouk@cognizant.com

India Operations Headquarters
#5/535, Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060
Email: inquiryindia@cognizant.com

© Copyright 2013, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the express written permission of Cognizant. The information contained herein is subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.

Más contenido relacionado

La actualidad más candente

Survey of Parallel Data Processing in Context with MapReduce
Survey of Parallel Data Processing in Context with MapReduce Survey of Parallel Data Processing in Context with MapReduce
Survey of Parallel Data Processing in Context with MapReduce
cscpconf
 
Hadoop technology
Hadoop technologyHadoop technology
Hadoop technology
tipanagiriharika
 

La actualidad más candente (18)

hadoop
hadoophadoop
hadoop
 
Understanding hdfs
Understanding hdfsUnderstanding hdfs
Understanding hdfs
 
IJET-V3I2P14
IJET-V3I2P14IJET-V3I2P14
IJET-V3I2P14
 
Apache Hadoop - Big Data Engineering
Apache Hadoop - Big Data EngineeringApache Hadoop - Big Data Engineering
Apache Hadoop - Big Data Engineering
 
Hadoop
HadoopHadoop
Hadoop
 
Big data
Big dataBig data
Big data
 
Hadoop technology doc
Hadoop technology docHadoop technology doc
Hadoop technology doc
 
Report Hadoop Map Reduce
Report Hadoop Map ReduceReport Hadoop Map Reduce
Report Hadoop Map Reduce
 
Hadoop MapReduce Framework
Hadoop MapReduce FrameworkHadoop MapReduce Framework
Hadoop MapReduce Framework
 
Introduction to Hadoop part1
Introduction to Hadoop part1Introduction to Hadoop part1
Introduction to Hadoop part1
 
Big data
Big dataBig data
Big data
 
002 Introduction to hadoop v3
002   Introduction to hadoop v3002   Introduction to hadoop v3
002 Introduction to hadoop v3
 
Survey of Parallel Data Processing in Context with MapReduce
Survey of Parallel Data Processing in Context with MapReduce Survey of Parallel Data Processing in Context with MapReduce
Survey of Parallel Data Processing in Context with MapReduce
 
Hadoop Seminar Report
Hadoop Seminar ReportHadoop Seminar Report
Hadoop Seminar Report
 
Introduction to Big Data & Hadoop
Introduction to Big Data & HadoopIntroduction to Big Data & Hadoop
Introduction to Big Data & Hadoop
 
Managing Big data with Hadoop
Managing Big data with HadoopManaging Big data with Hadoop
Managing Big data with Hadoop
 
Introduction to Hadoop and Hadoop component
Introduction to Hadoop and Hadoop component Introduction to Hadoop and Hadoop component
Introduction to Hadoop and Hadoop component
 
Hadoop technology
Hadoop technologyHadoop technology
Hadoop technology
 

Destacado

Destacado (6)

Tracking What Matters: Best Practices and Common Pitfalls in Social Media Mea...
Tracking What Matters: Best Practices and Common Pitfalls in Social Media Mea...Tracking What Matters: Best Practices and Common Pitfalls in Social Media Mea...
Tracking What Matters: Best Practices and Common Pitfalls in Social Media Mea...
 
How Advanced Analytics Will Inform and Transform U.S. Retail
How Advanced Analytics Will Inform and Transform U.S. RetailHow Advanced Analytics Will Inform and Transform U.S. Retail
How Advanced Analytics Will Inform and Transform U.S. Retail
 
Mobile Payments: How U.S. Banks Can Deal with Disruptive Change
Mobile Payments: How U.S. Banks Can Deal with Disruptive ChangeMobile Payments: How U.S. Banks Can Deal with Disruptive Change
Mobile Payments: How U.S. Banks Can Deal with Disruptive Change
 
Travel Planning 2020: The Journey Toward Market Prosperity
Travel Planning 2020: The Journey Toward Market ProsperityTravel Planning 2020: The Journey Toward Market Prosperity
Travel Planning 2020: The Journey Toward Market Prosperity
 
The Collateral Conundrum: A US$11 Trillion Opportunity?
The Collateral Conundrum: A US$11 Trillion Opportunity?The Collateral Conundrum: A US$11 Trillion Opportunity?
The Collateral Conundrum: A US$11 Trillion Opportunity?
 
Dealing with Big Data: Planning for and Surviving the Petabyte Age
Dealing with Big Data: Planning for and Surviving the Petabyte AgeDealing with Big Data: Planning for and Surviving the Petabyte Age
Dealing with Big Data: Planning for and Surviving the Petabyte Age
 

Similar a Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizing Analytical Workloads

Survey on Performance of Hadoop Map reduce Optimization Methods
Survey on Performance of Hadoop Map reduce Optimization MethodsSurvey on Performance of Hadoop Map reduce Optimization Methods
Survey on Performance of Hadoop Map reduce Optimization Methods
paperpublications3
 
Overview of big data & hadoop v1
Overview of big data & hadoop   v1Overview of big data & hadoop   v1
Overview of big data & hadoop v1
Thanh Nguyen
 
Introduction to Big Data and Hadoop using Local Standalone Mode
Introduction to Big Data and Hadoop using Local Standalone ModeIntroduction to Big Data and Hadoop using Local Standalone Mode
Introduction to Big Data and Hadoop using Local Standalone Mode
inventionjournals
 

Similar a Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizing Analytical Workloads (20)

Hadoop
HadoopHadoop
Hadoop
 
2.1-HADOOP.pdf
2.1-HADOOP.pdf2.1-HADOOP.pdf
2.1-HADOOP.pdf
 
Hadoop a Natural Choice for Data Intensive Log Processing
Hadoop a Natural Choice for Data Intensive Log ProcessingHadoop a Natural Choice for Data Intensive Log Processing
Hadoop a Natural Choice for Data Intensive Log Processing
 
Big Data Analysis and Its Scheduling Policy – Hadoop
Big Data Analysis and Its Scheduling Policy – HadoopBig Data Analysis and Its Scheduling Policy – Hadoop
Big Data Analysis and Its Scheduling Policy – Hadoop
 
G017143640
G017143640G017143640
G017143640
 
Bigdata and Hadoop Introduction
Bigdata and Hadoop IntroductionBigdata and Hadoop Introduction
Bigdata and Hadoop Introduction
 
Hadoop and Mapreduce Introduction
Hadoop and Mapreduce IntroductionHadoop and Mapreduce Introduction
Hadoop and Mapreduce Introduction
 
hadoop
hadoophadoop
hadoop
 
Survey on Performance of Hadoop Map reduce Optimization Methods
Survey on Performance of Hadoop Map reduce Optimization MethodsSurvey on Performance of Hadoop Map reduce Optimization Methods
Survey on Performance of Hadoop Map reduce Optimization Methods
 
project report on hadoop
project report on hadoopproject report on hadoop
project report on hadoop
 
Design Issues and Challenges of Peer-to-Peer Video on Demand System
Design Issues and Challenges of Peer-to-Peer Video on Demand System Design Issues and Challenges of Peer-to-Peer Video on Demand System
Design Issues and Challenges of Peer-to-Peer Video on Demand System
 
Big data overview of apache hadoop
Big data overview of apache hadoopBig data overview of apache hadoop
Big data overview of apache hadoop
 
Big data overview of apache hadoop
Big data overview of apache hadoopBig data overview of apache hadoop
Big data overview of apache hadoop
 
Big Data and Hadoop Guide
Big Data and Hadoop GuideBig Data and Hadoop Guide
Big Data and Hadoop Guide
 
Big data
Big dataBig data
Big data
 
Hadoop and BigData - July 2016
Hadoop and BigData - July 2016Hadoop and BigData - July 2016
Hadoop and BigData - July 2016
 
Overview of big data & hadoop v1
Overview of big data & hadoop   v1Overview of big data & hadoop   v1
Overview of big data & hadoop v1
 
Hadoop by kamran khan
Hadoop by kamran khanHadoop by kamran khan
Hadoop by kamran khan
 
Introduction to Big Data and Hadoop using Local Standalone Mode
Introduction to Big Data and Hadoop using Local Standalone ModeIntroduction to Big Data and Hadoop using Local Standalone Mode
Introduction to Big Data and Hadoop using Local Standalone Mode
 
Infrastructure Considerations for Analytical Workloads
Infrastructure Considerations for Analytical WorkloadsInfrastructure Considerations for Analytical Workloads
Infrastructure Considerations for Analytical Workloads
 

Más de Cognizant

Más de Cognizant (20)

Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
Using Adaptive Scrum to Tame Process Reverse Engineering in Data Analytics Pr...
 
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-makingData Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
Data Modernization: Breaking the AI Vicious Cycle for Superior Decision-making
 
It Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
It Takes an Ecosystem: How Technology Companies Deliver Exceptional ExperiencesIt Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
It Takes an Ecosystem: How Technology Companies Deliver Exceptional Experiences
 
Intuition Engineered
Intuition EngineeredIntuition Engineered
Intuition Engineered
 
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
The Work Ahead: Transportation and Logistics Delivering on the Digital-Physic...
 
Enhancing Desirability: Five Considerations for Winning Digital Initiatives
Enhancing Desirability: Five Considerations for Winning Digital InitiativesEnhancing Desirability: Five Considerations for Winning Digital Initiatives
Enhancing Desirability: Five Considerations for Winning Digital Initiatives
 
The Work Ahead in Manufacturing: Fulfilling the Agility Mandate
The Work Ahead in Manufacturing: Fulfilling the Agility MandateThe Work Ahead in Manufacturing: Fulfilling the Agility Mandate
The Work Ahead in Manufacturing: Fulfilling the Agility Mandate
 
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
The Work Ahead in Higher Education: Repaving the Road for the Employees of To...
 
Engineering the Next-Gen Digital Claims Organisation for Australian General I...
Engineering the Next-Gen Digital Claims Organisation for Australian General I...Engineering the Next-Gen Digital Claims Organisation for Australian General I...
Engineering the Next-Gen Digital Claims Organisation for Australian General I...
 
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
Profitability in the Direct-to-Consumer Marketplace: A Playbook for Media and...
 
Green Rush: The Economic Imperative for Sustainability
Green Rush: The Economic Imperative for SustainabilityGreen Rush: The Economic Imperative for Sustainability
Green Rush: The Economic Imperative for Sustainability
 
Policy Administration Modernization: Four Paths for Insurers
Policy Administration Modernization: Four Paths for InsurersPolicy Administration Modernization: Four Paths for Insurers
Policy Administration Modernization: Four Paths for Insurers
 
The Work Ahead in Utilities: Powering a Sustainable Future with Digital
The Work Ahead in Utilities: Powering a Sustainable Future with DigitalThe Work Ahead in Utilities: Powering a Sustainable Future with Digital
The Work Ahead in Utilities: Powering a Sustainable Future with Digital
 
AI in Media & Entertainment: Starting the Journey to Value
AI in Media & Entertainment: Starting the Journey to ValueAI in Media & Entertainment: Starting the Journey to Value
AI in Media & Entertainment: Starting the Journey to Value
 
Operations Workforce Management: A Data-Informed, Digital-First Approach
Operations Workforce Management: A Data-Informed, Digital-First ApproachOperations Workforce Management: A Data-Informed, Digital-First Approach
Operations Workforce Management: A Data-Informed, Digital-First Approach
 
Five Priorities for Quality Engineering When Taking Banking to the Cloud
Five Priorities for Quality Engineering When Taking Banking to the CloudFive Priorities for Quality Engineering When Taking Banking to the Cloud
Five Priorities for Quality Engineering When Taking Banking to the Cloud
 
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining FocusedGetting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
Getting Ahead With AI: How APAC Companies Replicate Success by Remaining Focused
 
Crafting the Utility of the Future
Crafting the Utility of the FutureCrafting the Utility of the Future
Crafting the Utility of the Future
 
Utilities Can Ramp Up CX with a Customer Data Platform
Utilities Can Ramp Up CX with a Customer Data PlatformUtilities Can Ramp Up CX with a Customer Data Platform
Utilities Can Ramp Up CX with a Customer Data Platform
 
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
The Work Ahead in Intelligent Automation: Coping with Complexity in a Post-Pa...
 

Último

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
vu2urc
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
Earley Information Science
 

Último (20)

Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
Data Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt RobisonData Cloud, More than a CDP by Matt Robison
Data Cloud, More than a CDP by Matt Robison
 
Evaluating the top large language models.pdf
Evaluating the top large language models.pdfEvaluating the top large language models.pdf
Evaluating the top large language models.pdf
 
Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...Workshop - Best of Both Worlds_ Combine  KG and Vector search for  enhanced R...
Workshop - Best of Both Worlds_ Combine KG and Vector search for enhanced R...
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?What Are The Drone Anti-jamming Systems Technology?
What Are The Drone Anti-jamming Systems Technology?
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdfThe Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
The Role of Taxonomy and Ontology in Semantic Layers - Heather Hedden.pdf
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 

Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizing Analytical Workloads

  • 1. • Cognizant 20-20 Insights Harnessing Hadoop: Understanding the Big Data Processing Options for Optimizing Analytical Workloads As Hadoop’s ecosystem evolves, new open source tools and capabilities are emerging that can help IT leaders build more capable and businessrelevant big data analytics environments. Executive Summary Asserting that data is vital to business is an understatement. Organizations have generated more and more data for years, but struggle to use it effectively. Clearly data has more important uses than ensuring compliance with regulatory requirements. In addition, data is being generated with greater velocity, due to the advent of new pervasive devices (e.g., smartphones, tablets, etc.), social Web sites (e.g., Facebook, Twitter, LinkedIn, etc.) and other sources like GPS, Google Maps, heat/pressure sensors, etc. Moreover, a plethora of bits and bytes is being created by data warehouses, repositories, backups and archives. IT leaders need to work with the lines of business to take advantage of the wealth of insight that can be mined from these repositories. This search for data patterns will be similar to the searches the oil and gas industry conduct for precious resources. Similar business value can be uncovered if organizations can improve how “dirty” data is processed to refine key business decisions. cognizant 20-20 insights | november 2013 According to IDC1 and Century Link,2 our planet now has 2.75 zettabytes (ZB) worth of data; this number is likely to reach 8ZB by 2015. This situation has given rise to the new buzzword known as “big data.” Today, many organizations are applying big data analytics to challenges such as fraud detection, risk/threat analysis, sentiment analysis, equities analysis, weather forecasting, product recommendations, classifications, etc. All of these activities require a platform that can process this data and deliver useful (often predictive) insights. This white paper discusses how to leverage a big data analytics platform, dissecting its capabilities, limitations and vendor choices. Big Data’s Ground Zero Apache Hadoop3 is a Java-based open source platform that makes processing of large data possible over thousands of distributed nodes. Hadoop’s development resulted from publication of two Google authored whitepapers (i.e., Google File System and Google MapReduce). There are numerous vendor-specific distributions available
  • 2. Hadoop Base Components MapReduce (Distributed Programming Framework) HDFS (Hadoop Distributed File System) Figure 1 based on Apache Hadoop, from companies such as Cloudera (CDH3/4), Hortonworks (1.x/2.x) and MapR (M3/M5). In addition, appliance-based solutions are offered by many vendors, such as IBM, EMC, Oracle, etc. Figure 1 illustrates Apache Hadoop’s two base components. Hadoop distributed file system (HDFS) is a base component of the Hadoop framework that manages the data storage. It stores data in the form of data blocks (default size: 64MB) on the local hard disk. The input data size defines the block size to be used in the cluster. A block size of 128MB for a large file set is a good choice. Keeping large block sizes means a small number of blocks can be stored, thereby minimizing the memory requirement on the master node (commonly known as NameNode) for storing the metadata information. Block size also impacts the job execution time, and thereby cluster performance. A file can be stored in HDFS with a different block size during the upload process. For file sizes smaller than the block size, the Hadoop cluster may not perform optimally. Example: A file size of 512MB requires eight blocks (default: 64MB), so it will need memory to store information of 25 items (1 filename + 8 * 3 copies). The same file with a block size of 128MB will require memory for just 13 items. HDFS has built-in fault tolerance capability, thereby eliminating the need for setting up a redundant array of inexpensive disks (RAID) on the data nodes. The data blocks stored in HDFS automatically get replicated across multiple data nodes based on the replication factor (default: 3). The replication factor determines the number of cognizant 20-20 insights copies of a data block to be created and distributed across other data nodes. The larger the replication factor, the better the read performance of the cluster. On the other hand, it requires more time to replicate the data blocks and more storage for replicated copies. For example, storing a file size of 1GB requires around 4GB of storage space, due to a replication factor of 3 and an additional 20% to 30% of space required during data processing. Both block size and replication factor impact Hadoop cluster performance. The value for both the parameters depends on the input data type and the application workload to run on this cluster. In a development environment, the replication factor can be set to a smaller number — say, 1 or 2 — but it is advisable to set this value to a higher factor, if the data available on the cluster is crucial. Keeping a default replication factor of 3 is a good choice from a data redundancy and cluster performance perspective. Figure 2 (next page) depicts the NameNode and DataNode that constitute the HDFS service. MapReduce is the second base component of a Hadoop framework. It manages the data processing part on the Hadoop cluster. A cluster runs MapReduce jobs written in Java or any other language (Python, Ruby, R, etc.) via streaming. A single job may consist of multiple tasks. The number of tasks that can run on a data node is governed by the amount of memory installed. It is recommended to have more memory installed on each data node for better cluster performance. Based on the application workload that is going to run on the cluster, a data node may require high compute power, high disk I/O or additional memory. 2
  • 3. Hadoop Services Schematic MapReduce (Processing) Job Tracker Name Node Task Tracker Task Tracker Data Node Data Node ... . ... . Task Tracker Data Node Secondary Name Node HDFS (Storage) Figure 2 The job execution process is handled by the JobTracker and TaskTracker services (see HDFS – Hadoop Figure 2). The typeDistributed File System of job scheduler chosen for a cluster depends on the application workload. If the cluster will be running short running jobs, then FIFO scheduler will serve the purpose. For a much more balanced cluster, one can choose to use Fair Scheduler. In MRv1.0, there’s no high availability (HA) for the JobTracker service. If the node running JobTracker service fails, all jobs submitted in the cluster will be discontinued and must be resubmitted once the node is recovered or another node is made available. This issue has been fixed in the newer release — i.e., MRv2.0, also known as yet another resource negotiator (YARN).4 • Pig provides a high-level procedural language known as PigLatin. Programs written in PigLatin are automatically converted into map-reduce jobs. This is very suitable in situations where developers are not familiar with high-level languages such as Java, but have strong scripting knowledge and expertise. They can create their own customized functions. It is predominantly recommended for processing large amounts of semi-structured data. • Hive provides a SQL-like language known as HiveQL (query language). Analysts can run queries against the data stored in HDFS using SQL-like queries that automatically get converted into map-reduce jobs. It is recommended where we have large amounts of unstructured data and we want to fetch particular details without writing map-reduce jobs. Companies like RevolutionAnalytics6 can leverage Hadoop clusters running Hive via RHadoop connectors. RHadoop provides a connector for HDFS, MapReduce and HBase connectivity. Figure 2 also depicts the interaction among these five service components. The Secondary Name Node plays the vital role of check-pointing. It is this node that off-loads the work of metadata image creation and updating on a regular basis. It is recommended to keep the machine configuration the same as Name Node so that in the case of an emergency this node can be promoted as a Name Node. • HBase is a type of NOSQL database and offers scalable and distributed database service. However, it is not optimized for transactional applications and relational analytics. Be aware: HBase is not a replacement for a relational database. A cluster with HBase installed requires an additional 24GB to 32GB of memory on each data node. It is recommended that the data nodes running HBase Region Server should run a smaller number of MapReduce tasks. The Hadoop Ecosystem Hadoop base components are surrounded by numerous ecosystem5 projects. Many of them are already top-level Apache (open source) projects; some, however, remain in the incubation stage. Figure 3 (next page) highlights a few of the ecosystem components. The most common ones include: cognizant 20-20 insights 3
  • 4. Avro (Serialization) Ambari (System Management) (Coordination) Zookeeper Sqoop (CLI for Data Movement) MapReduce (Distributed Programming Framework) HBase (SQL) (NoSQL Store) Hive HCatalog Hadoop Ecosystem Pig (Data Flow) (Table/Schema Mgmt.) Scanning the Hadoop Project Ecosystem HDFS Infrastructure Layer OS Layer (Hadoop Distributed File System) Physical Machines Virtual Machines Storage Network Figure 3 Hadoop clusters are mainly used in R&D-related activities. Many enterprises are now using Hadoop with production workloads. The type of ecosystem components required depends upon the enterprise needs. Enterprises with large data warehouses normally go for production-ready tools from proven vendors such as IBM, Oracle, EMC, etc. Also, not every enterprise requires a Hadoop cluster. A cluster is mainly required by business units that are developing Hadoopbased solutions or running analytics workloads. Business units that are just entering into analytics may start small by building a 5-10 node cluster. As shown in Figure 4 (next page), a Hadoop cluster can scale from just a few data nodes to thousands of data nodes as the need arises or data grows. Hadoop clusters show linear scalability, both in terms of performance and storage capacity, as the count of nodes increase. Apache Hadoop runs primarily on the Intel x86/x86_64based architecture. Clusters must be designed with a particular workload in mind (e.g., data-intensive loads, processor-intensive loads, mixed loads, real-time clusters, etc.). As there are many Hadoop distributions available, implementation selection depends upon feature availability, licensing cost, cognizant 20-20 insights performance reports, hardware requirements, skills availability within the enterprise and the workload requirements. Per a comStore report,7 MapR performs better than Cloudera and works out to be less costly. Hortonworks, on the other hand, has a strong relationship with Microsoft. Its distribution is also available for a Microsoft platform (HDP v1.3 for Windows), which makes this distribution popular among the Microsoft community. Hadoop Appliance An alternate approach to building a Hadoop cluster is to use a Hadoop appliance. These are customizable and ready-to-use complete stacks from vendors such as IBM (IBM Netezza), Oracle (Big Data Appliance), EMC (Data Computing Appliance), HP (AppSystem), Teradata Aster, etc. Using an appliance-based approach speeds goto-market and results in an integrated solution from a single (and accountable) vendor like IBM, Teradata, HP, Oracle, etc. These appliances come in different sizes and configurations. Every vendor uses a different Hadoop distribution. For example: Oracle Big Data is based on a Cloudera distribution; EMC uses MapR and HP AppSystem can be configured with Cloudera, 4
  • 5. Hadoop Cluster: An Anatomical View Network Switch Kerberos (Security) Cluster Manager (Management & Monitoring) Job Tracker/ Resource Manager (Scheduler) Manual/Automatic Failover Client Nodes (Edge Nodes) Name Nodes (Master Node) Data Nodes (Slave Nodes) Figure 4 MapR or Hortonworks. These appliances are designed to solve a particular type of problem or for a particular workload in mind. • IBM Netezza 1000 is designed for high-per- formance business intelligence and advanced analytics. It’s a data warehouse appliance that can run complex and mixed workloads. In addition, it supports multiple languages (C/ C++, Java, Python, FORTRAN), frameworks (Hadoop) and tools (IBM SPSS, R and SAS). • The Teradata Aster appliance comes with 50 pre-built MapReduce functions optimized for graph, text and behavior/marketing analysis. It also comes with Aster Database and Apache Hadoop to process structured, unstructured and semi-structured data. A full Hadoop cabinet includes two masters and 16 data nodes which can deliver up to 152TB of user space. • The HP AppSystem appliance comes with pre- configured HP ProLiant DL380e Gen8 servers running Red Hat Enterprise Linux 6.1 and a Cloudera distribution. Enterprises can use this appliance with HP Vertica Community Edition to run real-time analysis of structured data. cognizant 20-20 insights HDFS and Hadoop HDFS is not the only platform for performing big data tasks. Vendors have emerged with solutions that offer a replacement for HDFS. For example, DataStax8 uses Cassandra; Amazon Web Services uses S3; IBM has General Parallel File System (GPFS); EMC relies on Isilon; MapR uses Direct NFS; etc. (To learn more, read about alternate technologies.)9 An EMC appliance uses Isilon scale-out NAS technology, along with HDFS. This combination eliminates the need to export/import the data into HDFS (see Figure 5, next page). IBM uses GPFS instead of HDFS in its solution after finding that GPFS performed better than HDFS (see Figure 6, also next page). Many IT shops are scrambling to build Hadoop clusters in their enterprise. The question is: What are they going to do with these clusters? Not every enterprise needs to have its own cluster; it all depends on the nature of the business and whether the cluster serves a legitimate and compelling business use case. For example, enterprises into financial transactions would definitely require an analytic platform for fraud detection, 5
  • 6. EMC Subs Isilon for HDFS R (RHIPE) PIG Mahout Job Tracker HIVE HBASE Name Node Task Tracker Data Node Ethernet HCFS Figure 5 portfolio analysis, churn analysis, etc. Similarly, enterprises into research work require large-size clusters for faster processing and large data analysis. • Joyent, which offers Hadoop services based on Hadoop-as-a-Service • Mortar, which delivers Hadoop services based There are couple of ways in which Hadoop clusters can be leveraged in addition to building a new cluster or taking an appliance-based approach. A more cost-effective approach is to use Amazon10 Elastic Map-Reduce (EMR) service. Enterprises can build a cluster on demand and pay for the duration they used it. Amazon offers compute instances running open-source Apache Hadoop and MapR (M3/M5/M7) distribution. Other vendors that offer “Hadoop-as-a-Service” include: Hortonworks distribution. • Skytap, which provides Hadoop services based on Cloudera (CDH4) distribution. on MapR distribution. All of these vendors charge on a pay-per-use basis. New vendors will eventually emerge as market demand increases and more enterprises require additional analytics, business intelligence (BI) and data warehousing options. As the competitive stakes rise, we recommend comparing prices and capabilities to get the best price/performance for your big data environment. Benchmarking HDFS vs. GPFS Execution Time 2000 1500 1000 500 0 Postmark HDFS Teragen Grep GPFS Figure 6 cognizant 20-20 insights 6 CacheTest Terasort
The Hadoop platform by itself can't do anything; enterprises need applications that leverage it. Developing an analytics application that delivers business value requires data scientists and data analysts. These applications can perform many different types of jobs, from pure analytical and BI workloads through general data processing. Because Hadoop can run all of these workload types, it is becoming increasingly important to design an effective and efficient cluster using the latest technologies and frameworks.
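To make this concrete, the canonical starter application is word count. The condensed sketch below, modeled on the standard Hadoop MapReduce example and written against the Hadoop 2.x API, shows the shape of the code such applications are built from:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in the input split.
    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts emitted for each word.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class); // local pre-aggregation on each mapper
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

A job like this is packaged as a JAR and submitted to the cluster with the hadoop jar command; real analytics applications follow the same map/shuffle/reduce pattern with far richer logic in the mapper and reducer.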
Hadoop Competitors

The Hadoop framework provides a batch-processing environment that works well with historical data in data warehouses and archives. But what if there is a need for real-time processing, as with fraud detection or bank transactions? Enterprises have begun assessing alternative approaches and technologies to address this need. Storm is an open source project that helps in setting up an environment for real-time (stream) processing. There are also other options to consider, such as massively parallel processing (MPP) databases, the high-performance computing cluster (HPCC) platform and Hadapt. All of these technologies, however, require a strong understanding of the workload that will be run.
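To make the streaming model concrete, here is a minimal Storm topology sketch. The spout and bolt names, the fraud threshold and the dummy data source are all hypothetical, and the pre-Apache backtype.storm package names reflect Storm as it existed at this writing:

```java
import java.util.Map;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

public class TransactionTopology {

    // Hypothetical spout: in production this would read from a message
    // queue; here it simply emits a random transaction amount.
    public static class TransactionSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
        }

        public void nextTuple() {
            Utils.sleep(100);
            collector.emit(new Values(Math.random() * 10000)); // fake amount
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("amount"));
        }
    }

    // Hypothetical bolt: flags large transactions as each tuple streams
    // through -- no batch job, no round trip to HDFS.
    public static class FraudCheckBolt extends BaseBasicBolt {
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            double amount = tuple.getDouble(0);
            if (amount > 9000) {
                System.out.println("Suspicious transaction: " + amount);
            }
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // terminal bolt: emits nothing downstream
        }
    }

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("transactions", new TransactionSpout());
        builder.setBolt("fraud-check", new FraudCheckBolt(), 4)
               .shuffleGrouping("transactions");

        // In-process cluster for local testing; a real deployment would
        // submit to a Storm cluster via StormSubmitter instead.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("fraud-demo", new Config(), builder.createTopology());
    }
}
```

Unlike a MapReduce job, a topology like this runs continuously, processing each event as it arrives, which is what makes the model suitable for fraud detection and similar low-latency use cases.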
If the workload is more processing-intensive, HPCC is generally the recommended choice. HPCC uses a declarative programming language (ECL) that is extensible, homogeneous and complete, along with an open data model that is not constrained by key-value pairs; data processing can happen in parallel. To learn more, read HPCC vs. Hadoop.11

Massively parallel processing databases have been around since before MapReduce hit the market. They use SQL, a language familiar to most data scientists and analysts, as the interface for interacting with the system. MPP systems are mainly used in high-end data warehouse implementations.

Looking Ahead

Hadoop is a strong framework for running data-intensive jobs, but it is still evolving as the ecosystem projects around it mature and expand. Market demand is spurring new technological developments daily. With these innovations come various options to choose from: building a new cluster, leveraging an appliance or making use of Hadoop-as-a-Service offerings. Moreover, the market is filling with new companies, which is creating more competitive technologies and options to consider.

For starters, enterprises need to make the right decision in choosing their Hadoop distributions. Enterprises can start small by picking a distribution and building a five- to 10-node cluster to test its features, functionality and performance. Most reputable vendors provide a free version of their distribution for evaluation purposes. We advise enterprises to test two to three vendor distributions against their business requirements before making any final decisions. IT decision-makers should also pay close attention to emerging competitors, to position their organizations for impending technology enhancements.

Footnotes

1 IDC report: Worldwide Big Data Technology and Services 2012-2015 Forecast.
2 www.centurylink.com/business/artifacts/pdf/resources/big-data-defining-the-digital-deluge.pdf.
3 http://en.wikipedia.org/wiki/Hadoop.
4 http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/YARN.html.
5 http://pig.apache.org/; http://hive.apache.org/; http://hbase.apache.org; "Comparing Pig Latin and SQL for Constructing Data Processing Pipelines," developer.yahoo.com/blogs/hadoop/comparing-pig-latinsql-constructing-data-processing-pipelines-444.html; "Process a Million Songs with Apache Pig," blog.cloudera.com/blog/2012/08/process-a-million-songs-with-apache-pig/.
6 www.revolutionanalytics.com/products/revolution-enterprise.php.
7 comScore report: Switching to MapR for Reliable Insights into Global Internet Consumer Behavior.
8 www.datastax.com/wp-content/uploads/2012/09/WP-DataStax-HDFSvsCFS.pdf.
9 gigaom.com/2012/07/11/because-hadoop-isnt-perfect-8-ways-to-replace-hdfs/.
10 aws.amazon.com/elasticmapreduce/pricing/.
11 hpccsystems.com/Why-HPCC/HPCC-vs-Hadoop.

About the Authors

Harish Chauhan is an Associate Director who leads new services development around big data and other emerging technologies for Cognizant's General Technology Office. He has over 21 years of experience and holds a bachelor of engineering degree in computer science. His areas of expertise include virtualization, cloud computing and distributed computing (Hadoop/HPC). He has contributed technical articles, co-authored an IBM Redbook on the virtualization engine and filed two patents in the area of virtualization. He can be reached at Harish.Chauhan@cognizant.com.

Jerald Murphy is Cognizant's Worldwide Practice Leader for Cognizant Business Consulting's IT Infrastructure Services Practice. He also leads Cognizant's Technology Office for Enterprise Computing, Cloud Services and Emerging Technology. Jerry has spent over 20 years in the computer industry in various technical, marketing, sales, senior management and industry analyst positions. He is one of the industry's leading authorities on enterprise IT infrastructure design, security and management, and has spent 20-plus years conducting engineering research and helping to develop infrastructure and security strategies for many of the largest global companies and service providers. Jerry's areas of expertise include data center infrastructure and enterprise systems management, with additional focus on managed services and IT outsourcing. He holds a B.S.E.E. from the United States Military Academy and a master's degree in electrical engineering from Princeton University. He can be reached at Jerald.Murphy@cognizant.com.

About Cognizant

Cognizant (NASDAQ: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services, dedicated to helping the world's leading companies build stronger businesses. Headquartered in Teaneck, New Jersey (U.S.), Cognizant combines a passion for client satisfaction, technology innovation, deep industry and business process expertise, and a global, collaborative workforce that embodies the future of work. With over 50 delivery centers worldwide and approximately 164,300 employees as of June 30, 2013, Cognizant is a member of the NASDAQ-100, the S&P 500, the Forbes Global 2000, and the Fortune 500 and is ranked among the top performing and fastest growing companies in the world. Visit us online at www.cognizant.com or follow us on Twitter: Cognizant.

World Headquarters
500 Frank W. Burr Blvd.
Teaneck, NJ 07666 USA
Phone: +1 201 801 0233
Fax: +1 201 801 0243
Toll Free: +1 888 937 3277
Email: inquiry@cognizant.com

European Headquarters
1 Kingdom Street
Paddington Central
London W2 6BD
Phone: +44 (0) 20 7297 7600
Fax: +44 (0) 20 7121 0102
Email: infouk@cognizant.com

India Operations Headquarters
#5/535, Old Mahabalipuram Road
Okkiyam Pettai, Thoraipakkam
Chennai, 600 096 India
Phone: +91 (0) 44 4209 6000
Fax: +91 (0) 44 4209 6060
Email: inquiryindia@cognizant.com

© Copyright 2013, Cognizant. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the express written permission of Cognizant. The information contained herein is subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.