AWS Big Data Demystified #1.2
Big Data Architecture
Lessons Learned
Omid Vahdaty, Big Data Ninja
Disclaimer
● I am not the best, I simply love
what I do VERY much.
● You are more than welcome to
challenge me or anything I have
to say as I could be wrong.
● This lecture has evolved over time; this is the 3rd iteration.
● Feel free to send me comments
In the past (web, API, ops DB, data warehouse)
Then came Big Data...
Then came the cloud...
Then came the invoice ...
It keeps
growing….
TODAY’S BIG DATA
APPLICATION STACK
PaaS and DC...
MY BIG DATA
APPLICATION STACK
“demystified”...
MY AWS BIG DATA
APPLICATION STACK
With some “Myst”...
Jargon
Big data?
Architecture?
Considerations?
Challenges?
How to get started?
Big Jargon & basic concepts you should know
https://amazon-aws-big-data-demystified.ninja/2019/02/18/big-data-jargon-faqs-and-everything-you-wanted-to-know-and-didnt-ask-about-big-data/
● What is Big Data?
● scale out / up ?
● structured/semi structured/unstructured data ?
● ACID?
● OLAP VS OLTP? == Analytics VS operational
● DIY = Do it Yourself
● PaaS = Platform as a service
Big Data =
When your data outgrows
your infrastructure's ability to process it
● Volume (x TB processing per day)
● Velocity ( x GB/s)
● Variety (JSON, CSV, events etc)
● Veracity (how much of the data is accurate?)
Challenges creating big data architecture?
● What is the business use case? How fast do you need the insights?
○ 15 min - 24 hours delay and above → use batch
○ Less than 15 min?
■ Might be batch - depends on whether the data source is files or events.
■ Streaming?
● Sub seconds delay?
● Sub minute delay?
■ Streaming with in flight analytics ?
○ How complex are the compute jobs? Aggregations? Joins?
Challenges creating big data architecture?
● What is the Velocity?
○ Under 100K events per second? Not a problem
○ Over 1M events per second? Costly. But doable.
○ Over 1B events per second? Not trivial at all.
● Volume ?
○ ~1TB a day ? Not a problem
○ Over that? It depends.
○ Over a petabyte? Well…. It depends.
● Variety (how are you going to handle different data sources?)
○ Structured (CSV)
○ Semi structured (JSON,XML)
○ Unstructured (pictures, movies etc)
Challenges creating big data architecture?
● Performance targets?
● Costs targets?
● Security restrictions?
● Regulation restriction? privacy?
● Which technology to choose?
● Datacenter or cloud?
● Latency?
● Throughput?
● Concurrency?
● Security Access patterns?
● PaaS? Max 7 technologies
● IaaS? Max 4 technologies
Cloud Architecture rules of thumb...
● Decouple :
○ Store
○ Process
○ Store
○ Process
○ insight...
● Rule of thumb: max 3 technologies in a DC, max 7 technologies in the cloud
○ Don't use more, because of: maintenance
○ Training time
○ Complexity/simplicity
How to get started on big data architecture?
Get answers to the below:
1. What is the business use case?
a. Volume? Velocity? Variety? Veracity?
b. Did you map all of the data sources?
2. Where should we build a data platform?
a. What is the product? → requirements.
b. Cloud? datacenter?
3. Architecture?
a. DIY? PaaS? Pay as you go or fixed? Decoupled?
b. Fast? Cheap? simple?
4. Did you communicate your plans?
5. Did you map all known challenges?
Business Use Case?
Use Case 1: Analyzing browsing history
● Data Collection: browsing history from an ISP
● Product - derives user intent and interest for marketing purposes.
● Challenges
○ Velocity: 1 TB per day
○ History of: 3M
○ Remote DC
○ Enterprise grade security
○ Privacy
Use Case 2: Insights from location based data
● Data collection: from a Mobile operator
● Products:
○ derives user intent and interest for marketing purposes.
○ derive location based intent for marketing purposes.
● Challenges
○ Velocity: 4GB/s …
○ Scalability: the rate was expected to double every year...
○ Remote DC
○ Enterprise grade security
○ Privacy
○ Edge analytics
Use Case 3: Analyzing location based events.
● Data collection: streaming
● Product: building location based audiences
● Challenges: minimizing DevOps work on maintenance
Getting started
(notice the markings on upper right
corner)
I didn't choose
this technology
I chose this
technology
So what is the product?
● Big data platform that
○ ingests data from multiple sources (cloud and DC)
○ Analyzes the data
○ Generates insights :
■ Smart Segments (online marketing)
■ Smart reports (for marketer)
■ Audience analysis (for agencies)
● Customers?
○ Marketers
○ Publishers
○ Agencies
Where to build the data platform?
We chose AWS because
● After a long competitive analysis, we chose AWS because it seemed to
have all the relevant features for all our big data products and wallas
publisher products.
● The project was challenging enough without adding the complexity of a
learning curve (learning a new cloud). We already knew how to work with
AWS.
● Of course, there were business aspects as well.
My Big Data product does:
● Data Ingestion
○ Online
■ messaging
■ Streaming
○ Offline
■ Batch
■ Performance aspects
● Data Transformation (Hive)
○ JSON, CSV, TXT, PARQUET, Binary
● Data Modeling - (R, ML, AI, DEEP, SPARK)
● Data Visualization (choose your poison)
● PII regulation + GDPR regulation
● And: Performance... Cost… Security… Simple... Cloud best practices...
Big Data Generic Architecture
Data Ingestion
(file based ETL from remote DC)
Data Transformation (row to columnar + cleansing)
Data Modeling ( joins/agg/ML/R)
Data Presentation
Text,
RAW
Data Ingestion
A layer in your big data architecture
designed to do one thing: ingest
data via batch or streaming, i.e.
move (only) data from point A to
point B, from the source data to the
next layer in the architecture
(decoupled).
Big Data Generic Architecture | Data Ingestion
Data Ingestion
Data Transformation
Data Modeling
Data Presentation
Batch Data collection considerations
● Every hour, about a 30GB compressed CSV file
● Why s3
○ Multi part upload
○ S3 CLI
○ S3 SDK
○ (tip : gzip! )
● Why ETL Client - needs to run at remote DC
● Why NOT your own ETL client
○ Involves code →
■ Bugs?
■ maintenance
○ Don't analyze data at the edge; you can't go back in time.
● Why Not Streaming?
○ less accurate
○ Expensive
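A minimal sketch of the hourly batch upload described above, assuming boto3 and a local CSV dump; the bucket, prefix, and file names are hypothetical. It gzips the file and lets boto3 handle multipart upload with SSE-S3.

```python
import gzip
import shutil

import boto3
from boto3.s3.transfer import TransferConfig

# Hypothetical names -- replace with your own file, bucket and prefix.
SRC = "/data/exports/browsing_2019-02-18_13.csv"
BUCKET = "my-ingestion-bucket"
KEY = "raw/browsing/date=2019-02-18/hour=13/browsing.csv.gz"

# Compress the hourly CSV before upload (the "tip: gzip!" above).
with open(SRC, "rb") as f_in, gzip.open(SRC + ".gz", "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)

s3 = boto3.client("s3")

# upload_file switches to multipart upload automatically above the threshold.
config = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                        multipart_chunksize=64 * 1024 * 1024)

s3.upload_file(
    SRC + ".gz", BUCKET, KEY,
    ExtraArgs={"ServerSideEncryption": "AES256"},  # SSE-S3 at rest
    Config=config,
)
```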
S3 Considerations
● Security
○ at rest: server side S3-Managed Keys (SSE-S3)
○ in transit: SSL / VPN
○ Hardening: user, IP ACL, write permission only.
● Upload
○ AWS s3 cli
○ Multi part upload
○ Aborting Incomplete Multipart Uploads Using a
Bucket Lifecycle Policy
○ Consider S3 CLI Sync command instead of CP
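A sketch of the lifecycle rule mentioned above, assuming boto3; it aborts incomplete multipart uploads after 7 days so interrupted uploads don't keep accruing storage costs. The bucket name is hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Abort multipart uploads that never completed (e.g. an interrupted ETL client).
s3.put_bucket_lifecycle_configuration(
    Bucket="my-ingestion-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```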
Sqoop - ETL
● Open source , part of EMR
● HDFS to RDBMS and back, via JDBC.
● E.g. bidirectional ETL between RDS and HDFS
● Unlikely use case: ETL from customer source operational DB.
Flume & Kafka
● Open source projects for streaming & messaging
● Popular
● Generic
● Good practice for many use cases (a meetup by itself)
● Highly durable, scalable, extensible, etc.
● Downside: DIY, non-trivial to get started
Data Transfer Options
● Direct Connect (4GB/s?)
● For all other use cases
○ S3 multipart upload
○ Compression
○ Security
■ Data in motion
■ Data at rest
Quick intro to Stream ingestion
● Kinesis Client Library (code)
● AWS lambda (code)
● EMR (managed hadoop)
● Third party (DIY)
○ Spark Streaming (min latency = 1 sec), near real time, with lots of libraries.
○ Storm - most real time (sub-millisecond), Java code based.
○ Flink (similar to spark)
Kinesis family of products
● Kinesis Stream - collect@source and near real time processing
○ Near real time
○ High throughput
○ Low cost
○ Easy administration - set desired level of capacity
○ Delivery to : s3,redshift, Dynamo, ...
○ Ingress 1 MB/s, egress 2 MB/s, up to 1000 records per second (per shard).
○ Not managed!
● Kinesis Analytics - in flight analytics.
● Kinesis Firehose - park your data @ destination.
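A minimal producer sketch for Kinesis Streams, assuming boto3; the stream name and event fields are hypothetical. Each record carries a partition key, which determines the shard it lands on.

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Hypothetical location event; the stream "location-events" must already exist.
event = {"user_id": "u-123", "lat": 32.08, "lon": 34.78, "ts": 1550000000}

kinesis.put_record(
    StreamName="location-events",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # same key -> same shard, preserves ordering per user
)
```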
Kinesis Firehose - for Data parking
● Not for fast lane - no in flight analytics
● Ingest , transform and load to:
○ Kinesis
○ S3
○ Redshift
○ Elasticsearch
● Managed Service
Comparison of Kinesis products
● Streams
○ Sub 1 sec processing latency
○ Choice of stream processor (generic)
○ For smaller events
● Firehose
○ Zero admin
○ 4 targets built in (redshift, s3, search, etc)
○ Buffering 60 sec minimum.
○ For larger “events”
Data
Transformation
A layer in your big data architecture
designed to transform and
cleanse data (row data to columnar
data, convert data types, fix
bugs in data).
Big Data Generic Architecture | Transformation
Data Ingestion
S3
Data Transformation
Data Modeling
Data Presentation
EMR ecosystem
● Hive
● Pig
● Hue
● Spark
● Oozie
● Presto
● Ganglia
● Zookeeper (hbase)
● zeppelin
EMR Architecture
● Master node
● Core nodes - like data nodes (with storage: HDFS)
● Task nodes - (extends compute)
● Does Not have Standby Master node
● Best for transient cluster (goes up and down every night)
EMR lesson learned...
● Bigger instance types are good architecture
● Use spot instances - for the tasks only.
● Don't always use TEZ (MR? Spark?)
● Make sure you choose network-optimized instances
● Resizing a cluster is not recommended
● Bootstrap to automate cluster upon provisioning
● Use Steps to automate steps on running cluster
● Use Glue to share Hive MetaStore
● Good Cost reduction article on EMR
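A sketch of a transient EMR cluster that follows several of the lessons above (bootstrap action, spot instances for task nodes only, Glue as a shared Hive metastore), assuming boto3; the release label, roles, bucket paths, and instance types are illustrative assumptions, not recommendations.

```python
import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="nightly-transform",
    ReleaseLabel="emr-5.20.0",                      # assumed release
    Applications=[{"Name": "Hive"}, {"Name": "Spark"}],
    # Share the Hive metastore via the Glue Data Catalog.
    Configurations=[{
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory"
        },
    }],
    # Bootstrap action runs on every node at provisioning time.
    BootstrapActions=[{
        "Name": "install-deps",
        "ScriptBootstrapAction": {"Path": "s3://my-bucket/bootstrap/install.sh"},
    }],
    Instances={
        # Keep alive here; pass Steps and set this to False for a fully
        # transient nightly cluster that terminates when the steps finish.
        "KeepJobFlowAliveWhenNoSteps": True,
        "InstanceGroups": [
            {"Name": "master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "InstanceType": "r5.2xlarge", "InstanceCount": 2},
            # Spot instances only for task nodes, as suggested above.
            {"Name": "task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "r5.2xlarge", "InstanceCount": 4},
        ],
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```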
So use EMR for ...
● Most dominant
○ Hive
○ Spark
○ Presto
● And many more….
● Good for:
○ Data transformation
○ Data modeling
○ Batch
○ Machine learning
Hive
● SQL over hadoop.
● Engine: spark, tez, MR
● JDBC / ODBC
● Not good when you need to shuffle.
● Not peta scale.
● SerDes: JSON, Parquet, regex, text, etc.
● Dynamic partitions
● Insert overwrite
● Data Transformation
● Convert to Columnar
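As an illustration of the dynamic-partition and INSERT OVERWRITE bullets above, here is a hedged sketch that submits a row-to-columnar HiveQL conversion as a step on a running EMR cluster via boto3. Table names and the cluster id are hypothetical, and running `hive -e` through command-runner.jar is just one common way to submit the script.

```python
import boto3

# HiveQL for the row-to-columnar conversion: read the raw CSV table, write a
# Parquet table, using dynamic partitions and INSERT OVERWRITE.
HQL = """
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE events_parquet PARTITION (dt)
SELECT user_id, event_type, event_value, dt
FROM events_raw_csv;
"""

emr = boto3.client("emr")
emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",  # hypothetical running cluster id
    Steps=[{
        "Name": "csv-to-parquet",
        "ActionOnFailure": "CONTINUE",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            "Args": ["hive", "-e", HQL],
        },
    }],
)
```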
Presto
● SQL over hadoop
● Not always good for joins of two large tables.
● Limited by memory
● Not fault tolerant like hive.
● Optimized for ad hoc queries
● No insert overwrite
● No dynamic partitions.
● Has some connectors : redshift and more
● https://amazon-aws-big-data-demystified.ninja/2018/07/02/aws-emr-presto-demystified-everything-you-wanted-to-know-about-presto/
Pig
● Distributed Shell scripting
● Generating SQL like operations.
● Engine: MR, Tez
● S3, DynamoDB access
● Use case: for data scientists who don't know
SQL, for system people, for those who want
to avoid Java/Scala
● A fair fight compared to Hive in terms of
performance only
● Good for unstructured file ETL: file to file,
and use Sqoop.
Hue
● Hadoop user experience
● Logs and failures in real time.
● Multiple users
● Native access to S3.
● File browser to HDFS.
● Manipulate the metastore
● Job Browser
● Query editor
● Hbase browser
● Sqoop editor, Oozie editor, Pig editor
Orchestration
● EMR Oozie
○ Open source workflow engine
■ Workflow: a graph of actions
■ Coordinator: schedules jobs
○ Supports: Hive, Sqoop, Spark, etc.
● Other options: Airflow, Knime, Luigi, Azkaban, AWS Data Pipeline
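Since Airflow is listed as an option above, here is a minimal DAG sketch (assuming Airflow 2.x); the task names and callables are hypothetical placeholders for the nightly transformation and modeling jobs.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_transformation():
    # Placeholder: e.g. submit the Hive step shown earlier.
    print("transform")


def run_modeling():
    # Placeholder: e.g. run the nightly Spark aggregation job.
    print("model")


with DAG(
    dag_id="nightly_pipeline",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=run_transformation)
    model = PythonOperator(task_id="model", python_callable=run_modeling)

    transform >> model  # modeling runs only after the transformation succeeds
```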
Big Data Generic Architecture | Transformation
Data Ingestion
S3
Data Transformation
Data Modeling
Data Visualization
Data Modeling
A layer in your big data architecture
designed to Model data: Joins,
Aggregations, nightly jobs, Machine
learning
Big Data Generic Architecture | Modeling
Data Ingestion
S3
Data Transformation
Data Modeling
Data Presentation
Spark
● In memory
● 10x to 100x faster than Hive
● Good optimizer for distribution
● Rich API
● Spark SQL
● Spark Streaming
● Spark ML (ML lib)
● Spark GraphX (DB graphs)
● SparkR
Spark Streaming
● Near real time (1 sec latency)
● like batch of 1sec windows
● Streaming jobs with API
● DIY = Not relevant to us...
Spark ML
● Classification
● Regression
● Collaborative filtering
● Clustering
● Decomposition
● Code: java, scala, python, sparkR
Spark flavours
● Standalone
● With yarn
● With mesos
Spark Downside
● Compute intensive
● Performance gain over mapreduce is not guaranteed.
● Streaming processing is actually batch with very small window.
Spark SQL
● Same syntax as hive
● Optional JDBC via thrift
● Non trivial learning curve
● Up to 10x faster than Hive.
● Works well with Zeppelin (out of the box)
● Does not replace Hive
● Spark is not always faster than Hive
● Insert overwrite is supported
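A small PySpark sketch of the point above that Spark SQL reuses Hive syntax, assuming an EMR cluster (or Zeppelin notebook) with Hive support enabled; table and bucket names are hypothetical.

```python
from pyspark.sql import SparkSession

# On EMR/Zeppelin a session usually already exists; this is how you'd build one.
spark = (SparkSession.builder
         .appName("spark-sql-demo")
         .enableHiveSupport()
         .getOrCreate())

# Same HiveQL-style syntax, executed by the Spark engine.
daily = spark.sql("""
    SELECT dt, user_id, COUNT(*) AS events
    FROM events_parquet
    GROUP BY dt, user_id
""")

# Write the result back to S3 as partitioned Parquet for the next layer.
daily.write.mode("overwrite").partitionBy("dt").parquet(
    "s3://my-bucket/model/daily_events/")
```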
Apache Zeppelin
● Notebook - visualizer
● Built in spark integration
● Interactive data analytics
● Easy collaboration.
● Uses SQL
● works on top of Hive/ Spark SQL
● Inside EMR.
● Uses in the background:
○ Shiro
○ Livy
R + spark R
● Open source package for statistical computing.
● Works with EMR
● “Matlab” equivalent
● Works with spark
● Not for developers :) for statisticians
● R is single threaded - use SparkR to distribute.
● Not everything works perfectly.
Redshift
● OLAP, not OLTP→ analytics , not transaction
● Fully SQL
● Fully ACID
● No indexing
● Fully managed
● Petabyte Scale
● MPP
● Can create a slow queue for long-running queries.
● DO NOT USE FOR transformation.
● Good for : DW, Complex Joins.
Redshift spectrum
● Extension of Redshift; uses external tables on S3.
● Requires a Redshift cluster.
● Not possible: CTAS to S3, complex data structures, joins.
● Good for
○ Read-only queries
○ Aggregations at exabyte scale.
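A hedged sketch of defining a Spectrum external table over the S3 data and aggregating it from Redshift; the schema, IAM role, cluster, and bucket names are hypothetical, and the Redshift Data API call is just one way to submit the SQL.

```python
import boto3

STATEMENTS = [
    # External schema backed by the Glue Data Catalog (hypothetical role/db).
    """CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
       FROM DATA CATALOG DATABASE 'mydb'
       IAM_ROLE 'arn:aws:iam::123456789012:role/my-spectrum-role'""",
    # External table pointing at the transformed Parquet data on S3.
    """CREATE EXTERNAL TABLE spectrum.events (
           user_id     VARCHAR(64),
           event_type  VARCHAR(32),
           event_value DOUBLE PRECISION
       )
       STORED AS PARQUET
       LOCATION 's3://my-bucket/transformed/events/'""",
    # Read-only aggregation straight off S3.
    "SELECT event_type, COUNT(*) FROM spectrum.events GROUP BY event_type",
]

rsd = boto3.client("redshift-data")
for sql in STATEMENTS:
    # The Data API is asynchronous; a real script would poll
    # describe_statement() between calls before issuing the next one.
    rsd.execute_statement(
        ClusterIdentifier="my-redshift-cluster",  # hypothetical cluster
        Database="analytics",
        DbUser="admin",
        Sql=sql,
    )
```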
EMR vs Redshift
● How much data is loaded and unloaded?
● Which operations need to be performed?
● Recycling data? → EMR
● History to be analyzed again and again? → EMR
● Where does the data need to end up? BI?
● Use Spectrum in some use cases (aggregations)?
● Raw data? S3.
● When to use EMR and when Redshift?
Hive VS. Redshift
● Amount of concurrency? Low → Hive, high → Redshift
● Access to customers? Redshift? Athena?
● Transformation, unstructured, batch, ETL → Hive
● Peta scale? Redshift
● Complex joins → Redshift
SageMaker
● Web notebook (Jupyter based) for data science
● Connects to all your data sources (S3, Athena, etc.)
● Helps you manage the entire machine learning lifecycle
● Managed service
● Used to create an ML model to predict cookie gender
AWS Glue
Shared metastore
Helps with some data transformations (managed service)
Automatic Schema discovery
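A sketch of the automatic schema discovery mentioned above, assuming boto3: a Glue crawler scans an S3 prefix and registers tables in the shared Glue/Hive metastore. The crawler, role, database, and path names are hypothetical.

```python
import boto3

glue = boto3.client("glue")

# Crawl the transformed Parquet data and register it in the shared catalog.
glue.create_crawler(
    Name="events-crawler",
    Role="arn:aws:iam::123456789012:role/my-glue-role",  # hypothetical role
    DatabaseName="analytics",
    Targets={"S3Targets": [{"Path": "s3://my-bucket/transformed/events/"}]},
)
glue.start_crawler(Name="events-crawler")
```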
AWS RDS (Aurora, Postgres, MySQL)
● We used RDS Aurora as our operational DB
● We did not use it for big data analytics, although it supports up to 64 TB
● It is row based.
● The syntax is missing analytical functions
Big Data Generic Architecture | Modeling
Data Ingestion
S3
Data Transformation
Data Modeling
Data Presentation
Data
Presentation Used ONLY for presenting data to
operational applications or BI. Use a
managed service to ensure HA.
Big Data Generic Architecture | Presentation
Data Collection
S3
Data Transformation
Data Modeling
Data Visualization
Athena
● Presto SQL
● In memory
● Hive metastore for DDL functionality
○ Complex data types
○ Multiple formats
○ Partitions
● Now supports CTAS (No inserts are supported)
● Good for:
○ Read only SQL,
○ Ad hoc query,
○ low cost,
○ Managed
● Good cost reduction article on athena
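A hedged sketch of an Athena CTAS, which materializes a query result back to S3 as Parquet; the database, table, and bucket names are hypothetical.

```python
import boto3

athena = boto3.client("athena")

# CTAS: materialize an aggregation as a new Parquet table on S3.
CTAS = """
CREATE TABLE analytics.daily_events
WITH (format = 'PARQUET',
      external_location = 's3://my-bucket/presentation/daily_events/') AS
SELECT dt, user_id, COUNT(*) AS events
FROM analytics.events_parquet
GROUP BY dt, user_id
"""

athena.start_query_execution(
    QueryString=CTAS,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
```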
Visualize
● QuickSight
● Managed Visualizer, simple, cheap
Summary: Take Away Message &
Action Items
Big Data Generic Architecture | Summary
Data Ingestion
S3
Data Transformation
Data Modeling
Data Presentation
Summary: Lesson learned
● Decouple, Decouple, Decouple
● Productivity of Data Science and Data engineering
○ Common language of both teams IS SQL!
○ Minimize the life cycle from dev to production of ETL and ML jobs
● Minimize the number of DBs used
○ Different syntax (Presto/Hive/Redshift)
○ Different data types
○ Minimize ETLs via external tables + Glue!
● Streaming is not always justified (what is the business use case? PaaS?)
● Spark SQL
○ Sometimes faster than redshift
○ Sometimes slower than hive
○ Learning curve is non trivial
Summary: Common Q&A
1. Can this architecture be done on another cloud?
2. Redshift VS EMR ?
3. Athena VS Redshift?
4. Cost reduction on EMR?
5. Cost Reduction on Athena?
6. Exporting data from Google Analytics into AWS?
Lesson learned: Big Data Architecture ?
Faster! Cheaper! Simpler!
How to get started | Call for Action
Lectures: AWS Big Data Demystified lectures #1 through #4
AWS Big Data Demystified Meetup Big Data Demystified meetup
Stay in touch...
● Omid Vahdaty
● +972-54-2384178
● https://big-data-demystified.ninja/
● Join our meetups and subscribe to our YouTube channels
○ https://www.meetup.com/AWS-Big-Data-Demystified/
○ https://www.meetup.com/Big-Data-Demystified/
○ Big Data Demystified YouTube
○ AWS Big Data Demystified YouTube
○ WhatsApp group
