Big Data Components
Flume, Pig and Sqoop
Data Management Without Hadoop
Data Management With Hadoop
Components in Hadoop Architecture
• The gray components are pure open source; the blue components are open source but contributed by other companies
HDFS Components
• Node – a computer (commodity hardware)
• Rack – a collection of nodes (30 to 40) on the same network; bandwidth within a rack and between racks varies
• Cluster – a collection of racks
• Distributed File System
• Hadoop Distributed File System
• MapReduce Engine
• Built-in Resource Manager and Scheduler
Hadoop Cluster
Flume and Sqoop
• Both are frameworks for transferring data to and from the Hadoop Distributed File System (HDFS)
• The main difference is that Flume captures streams of moving data, whereas Sqoop loads data from relational databases into HDFS
Flume
• This is an event-driven framework used to capture data that continuously flows into the system
• Flume runs as one or more agents, and each agent has three components
• Source
• Channels
• Sinks
Flume Agent
• Source – retrieves the data from a particular application, e.g. a web server
• Channel – acts as a pipe that temporarily stores the data when the output rate is lower than the input rate
• Sink – processes the data and stores it in a specific destination, most often HDFS
[Diagram: Web Server → Source → Channel → Sink → HDFS, all inside one agent. A single agent can have multiple sources, channels, and sinks.]
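To make the agent structure concrete, below is a minimal sketch of how such an agent could be defined in a Flume properties file; the names agent1, src1, ch1, and sink1 and all paths are illustrative assumptions:

agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: tail a web server log
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/httpd/access_log
agent1.sources.src1.channels = ch1

# Channel: in-memory pipe between source and sink
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 1000

# Sink: write the events into HDFS
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events
agent1.sinks.sink1.channel = ch1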
Use of a Channel
• The source writes events into a channel
• The channel holds these events and removes an event only after the sink has finished processing it
• There are two types of channel
• In-memory – processes events faster, but is volatile
• File-based – processes events more slowly, but is durable
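In a Flume configuration file the choice is a single type property; a file channel additionally needs checkpoint and data directories (the paths below are assumptions):

# In-memory: fast but volatile
agent1.channels.ch1.type = memory

# File-based: slower but durable
agent1.channels.ch2.type = file
agent1.channels.ch2.checkpointDir = /var/flume/checkpoint
agent1.channels.ch2.dataDirs = /var/flume/data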
Multiplexing and Serialization
• Output from one agent can serve as input to another agent
• Avro is a remote procedure call and serialization framework from Apache that does this efficiently
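As a sketch of such chaining, the sending agent uses an Avro sink pointing at the host and port where the receiving agent's Avro source listens (host name and port are assumptions):

# Sending agent: Avro sink
agent1.sinks.sink1.type = avro
agent1.sinks.sink1.hostname = collector-host
agent1.sinks.sink1.port = 4141

# Receiving agent: Avro source
agent2.sources.src1.type = avro
agent2.sources.src1.bind = 0.0.0.0
agent2.sources.src1.port = 4141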
Fan out flow
• If the events from a single source are distributed to multiple channels, this is called fanning out the flow; there are two variants, shown below
[Diagrams: one source feeding Channels 1–3 (Replicating Fan Out), and one source feeding Channels 1–3 (Multiplexing Fan Out)]
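Fan out is configured through the source's channel selector; replicating is the default, while multiplexing routes each event by the value of a header (the header name 'datatype' and its values are assumptions):

# Replicating fan out: every event goes to all three channels
agent1.sources.src1.channels = ch1 ch2 ch3
agent1.sources.src1.selector.type = replicating

# Multiplexing fan out: route events by a header value
agent1.sources.src1.selector.type = multiplexing
agent1.sources.src1.selector.header = datatype
agent1.sources.src1.selector.mapping.logs = ch1
agent1.sources.src1.selector.mapping.metrics = ch2
agent1.sources.src1.selector.default = ch3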
Flume Commands
• These are the commands entered at the terminal
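For illustration, an agent defined in a properties file is typically started with the flume-ng command; the file name and agent name here are assumptions:

flume-ng agent --conf ./conf --conf-file agent1.conf --name agent1 -Dflume.root.logger=INFO,console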
Why the name Pig?
• According to the Apache Pig philosophy, pigs eat anything, live anywhere, and are domesticated
• In Hadoop, Pig is used for processing any kind of data (structured, unstructured, and semi-structured)
What’s so great about Pig
• Java is a low-level language: users must be aware of both what the program does and how it does it
• Pig is a high-level language: users only need to know what the program does, not how it is done
• It's extensible – Java classes can be defined separately and called within a Pig program
Components of Pig
• Pig consists of two components:
• The language, Pig Latin
• The compiler, which translates Pig Latin into MapReduce jobs
Data Flow Language
• Pig is called a data flow language
• Users define a data stream
• Throughout the stream, several transformations are applied to the data
• Transformations include mathematical operations, grouping, filtering, etc.
(Programs in languages like C are called control flow programs, since they are built around loops and if statements)
Steps involved in Data Flow
• Load – users can specify a single file or an entire directory
• Transform – filter, join, group, order, etc.
• Dump/Save – dump the results to the screen or save them in a file
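A minimal Pig Latin sketch of the three steps, assuming a tab-delimited input file users.txt with name and age columns:

users   = LOAD 'users.txt' AS (name:chararray, age:int);   -- Load
adults  = FILTER users BY age >= 18;                       -- Transform: filter
grouped = GROUP adults BY age;                             -- Transform: group
counts  = FOREACH grouped GENERATE group AS age, COUNT(adults) AS total;
DUMP counts;                                               -- Dump (or: STORE counts INTO 'out';)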
Pig – Data Types
Pig has four different data types
• Atom – a string or a number; similar to int, long, or char in other programming languages
• Tuple – a record consisting of a series of fields; each field can contain a string or a number
• Bag – a collection of non-unique tuples; each tuple can have a different number of fields
• Map – a collection of key–value pairs; any type can be stored in the value, and keys must be unique
If a value is unknown, the keyword “null” can be used as a placeholder in the program
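To illustrate, this is how the four types look in Pig Latin; the field names and values are made up:

-- Atoms: scalar fields declared in a schema
A = LOAD 'data.txt' AS (name:chararray, age:int);
-- Tuple:  (field, field, ...)        e.g. ('alice', 21)
-- Bag:    {(tuple), (tuple), ...}    e.g. {('alice', 21), ('bob', 35)}
-- Map:    [key#value, ...]           e.g. ['city'#'Chennai', 'zip'#600001]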
Pig - Operators
These are all the operators used at various levels
Pig – Debug and Troubleshoot
• There are a few commands that can be used for debugging
Modes of Execution
Pig scripts can be executed in two different environments
Local Mode:
Pig executes on a single node (a Linux machine) and does not require Hadoop or HDFS.
This mode is used for testing Pig logic.
pig -x local programname.pig
MapReduce Mode:
This is an actual Hadoop environment deployed along with HDFS.
pig -x mapreduce programname.pig
Packaging Pigs
Pig scripts can be packaged in three different ways
Script: This method is nothing more than a file containing Pig Latin commands, identified by the .pig suffix
(FlightData.pig, for example). Ending your Pig program with the .pig extension is a convention, not a requirement.
Grunt: Grunt acts as a command interpreter where you can interactively enter Pig Latin at the Grunt command
line and immediately see the response. This method is helpful for prototyping during initial development and
with what-if scenarios.
Embedded: Pig Latin statements can be executed within Java, Python, or JavaScript programs.
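As a quick illustration of the Grunt method, the session below starts Pig in local mode and runs statements interactively (the file name is an assumption):

$ pig -x local
grunt> users = LOAD 'users.txt' AS (name:chararray, age:int);
grunt> DUMP users;
grunt> quit;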
User Defined Functions
• There are a lot of User Defined Functions (UDFs) available for Pig
• UDFs can be written in several languages and used with Pig
• Open source community members have already posted many useful UDFs online
• Pig can be embedded in host languages like Java, Python, and JavaScript to integrate existing applications with Pig
• We can even make Pig behave like a control flow language by placing a Pig Latin script inside an “if” statement or loop in the host language, so a MapReduce job runs until the condition is met
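A hedged sketch of registering and calling a Java UDF from Pig Latin; the jar, package, and class names are illustrative:

REGISTER myudfs.jar;                                -- make the UDF jar visible to Pig
DEFINE ToUpper com.example.pig.ToUpper();           -- short alias for the UDF class
names = LOAD 'users.txt' AS (name:chararray);
upper = FOREACH names GENERATE ToUpper(name);       -- call the UDF per record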
Sqoop
• Its name comes from “SQL-to-Hadoop” – it bridges SQL databases and Hadoop
• The main use of Sqoop is to load data from external data sources into the Hadoop Distributed File System (HDFS)
• Those data sources can be structured, semi-structured, or even unstructured
Need for Sqoop
• Organizations have stored data in relational databases for many years
• There are several RDBMS products, such as MySQL, Oracle, and Microsoft SQL Server
Need for Sqoop
• That data has to be fed into HDFS for distributed processing
• Sqoop is the leading command-line (and now also web-based) tool for performing import/export operations to and from HDFS
• Similar to agents in Flume, Sqoop is built around different connectors
Sqoop Architecture
• Users and administrators can control Sqoop
Sqoop job types
• Sqoop performs two important operations
[Diagram:
• Sqoop Import – data moves from another data source (RDBMS, Cassandra, etc.) into the Hadoop Distributed File System, where data processing and analysis are performed
• Sqoop Export – results move from the Hadoop Distributed File System back to the other data source]
• Because it moves data in both directions, Sqoop is called a bidirectional tool
How Sqoop Works?
• Sqoop communicates with the MapReduce engine, which does the actual copying of data from other data sources into HDFS
• MapReduce allocates mappers, which perform the copy operation
• Types of operations
• Import one table
• Import complete database
• Import selected tables
• Import selected columns from a particular table
• Filter out certain rows from a particular table, etc.
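Hedged examples of these operations on the Sqoop command line; the host, database, table, and column names are placeholders:

# Import one table
sqoop import --connect jdbc:mysql://dbhost/shop --username dbuser -P --table orders

# Import a complete database
sqoop import-all-tables --connect jdbc:mysql://dbhost/shop --username dbuser -P

# Import selected columns and filter out rows
sqoop import --connect jdbc:mysql://dbhost/shop --username dbuser -P \
  --table orders --columns "id,total" --where "total > 100"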
2 important features
Import Data in Compressed Format
While Sqoop imports data and stores on HDFS file system, it can be set to
compress the data and store it to reduce the overall utilization of the disk.
Well know compressed file formats are GZIP, BZ2 etc.
Parallelism
By default, four mappers are allocated to copy data from the source database into
HDFS. Users can increase the number of mappers to 8, 16, or more.
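Both features are single flags on the import command (connection details are placeholders):

# Store the imported data compressed (gzip by default)
sqoop import --connect jdbc:mysql://dbhost/shop --username dbuser -P \
  --table orders --compress

# Raise parallelism from the default 4 mappers to 8
sqoop import --connect jdbc:mysql://dbhost/shop --username dbuser -P \
  --table orders -m 8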
JDBC Drivers
• JDBC acts as an interface between an application and its database
• An application can send data into the database or retrieve it whenever needed
• Sqoop connectors work along with the JDBC drivers
Sqoop Latest Version
• This is what is inside the latest version of Sqoop
Sqoop Latest Version
• REST (Representational State Transfer) – a software architecture style
• UI – user interface
• Connectors – interfaces that communicate with other data sources
JDBC drivers
• MySQL – http://www.mysql.com/downloads/connector/j/5.1.html
• Oracle – http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-112010-090769.html
• Microsoft SQL Server – http://www.microsoft.com/en-us/download/details.aspx?displaylang=en&id=11774
Difference between Flume and Sqoop
• Sqoop is used for importing data from structured data sources such as RDBMS; Flume is used for moving bulk streaming data into HDFS.
• Sqoop has a connector-based architecture: connectors know how to connect to the respective data source and fetch the data. Flume has an agent-based architecture: code written as an 'agent' takes care of fetching the data.
• With Sqoop, HDFS is the destination of the data import; with Flume, data flows to HDFS through zero or more channels.
• Sqoop data loads are not event driven; Flume data loads can be event driven.
• To import data from structured data sources, use Sqoop: its connectors know how to interact with structured sources and fetch data from them. To load streaming data, such as tweets generated on Twitter or web server log files, use Flume: its agents are built for fetching streaming data.