© 2016, Amazon Web Services, Inc. or its Affiliates. All rights reserved.
Shree Kenghe and Dario Rivera, Solutions Architects, AWS
Building Your First Big Data Application on AWS
Your First Big Data Application on AWS
PROCESS
STORE
ANALYZE & VISUALIZE
COLLECT
Your First Big Data Application on AWS
PROCESS:
Amazon EMR with Spark & Hive
STORE:
Amazon S3
ANALYZE & VISUALIZE:
Amazon Redshift and Amazon QuickSight
COLLECT:
Amazon Kinesis Firehose
http://aws.amazon.com/big-data/use-cases/
A Modern Take on the Classic Data Warehouse
Setting up the environment
Data Storage with Amazon S3
Download all the CLI steps: http://bit.ly/aws-big-data-steps
Create an Amazon S3 bucket to store the data collected
with Amazon Kinesis Firehose
aws s3 mb s3://YOUR-S3-BUCKET-NAME
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
firehose-policy.json:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {"Service": "firehose.amazonaws.com"},
"Action": "sts:AssumeRole"
}
}
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
s3-rw-policy.json:
{ "Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::YOUR-S3-BUCKET-NAME",
"arn:aws:s3:::YOUR-S3-BUCKET-NAME/*"
]
} }
Access Control with IAM
Create an IAM role to allow Firehose to write to the S3 bucket
aws iam create-role --role-name firehose-demo \
  --assume-role-policy-document file://firehose-policy.json
Copy the “Arn” value from the output,
e.g., arn:aws:iam::123456789:role/firehose-demo
aws iam put-role-policy --role-name firehose-demo \
  --policy-name firehose-s3-rw \
  --policy-document file://s3-rw-policy.json
Data Collection with Amazon Kinesis Firehose
Create a Firehose stream for incoming log data
aws firehose create-delivery-stream \
  --delivery-stream-name demo-firehose-stream \
  --s3-destination-configuration \
  'RoleARN=YOUR-FIREHOSE-ARN,BucketARN="arn:aws:s3:::YOUR-S3-BUCKET-NAME",Prefix=firehose/,BufferingHints={IntervalInSeconds=60},CompressionFormat=GZIP'
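The stream takes a minute or two to become ACTIVE before it accepts records. A minimal Python sketch to check its status with boto3 (assuming the same stream name as above):

import boto3

firehose = boto3.client('firehose')
resp = firehose.describe_delivery_stream(DeliveryStreamName='demo-firehose-stream')
# Status is CREATING until the stream is ready to accept records
print(resp['DeliveryStreamDescription']['DeliveryStreamStatus'])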
Data Processing with Amazon EMR
Launch an Amazon EMR cluster with Spark and Hive
aws emr create-cluster \
  --name "demo" \
  --release-label emr-4.5.0 \
  --instance-type m3.xlarge \
  --instance-count 2 \
  --ec2-attributes KeyName=YOUR-AWS-SSH-KEY \
  --use-default-roles \
  --applications Name=Hive Name=Spark Name=Zeppelin-Sandbox
Record your ClusterId from the output.
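If you prefer to wait for the cluster from a script instead of polling the console, here is a minimal boto3 sketch (assuming the ClusterId recorded above; the 'cluster_running' waiter is part of the standard EMR client):

import boto3

emr = boto3.client('emr')
# Blocks until the cluster reaches the RUNNING/WAITING state (several minutes)
emr.get_waiter('cluster_running').wait(ClusterId='YOUR-EMR-CLUSTER-ID')
cluster = emr.describe_cluster(ClusterId='YOUR-EMR-CLUSTER-ID')['Cluster']
print(cluster['Status']['State'], cluster['MasterPublicDnsName'])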
Access Control with IAM
Create an IAM role to allow Redshift to copy from the S3 bucket
aws iam create-role --role-name redshift-role \
  --assume-role-policy-document file://redshift-policy.json
Copy the “Arn” value from the output,
e.g., arn:aws:iam::123456789:role/redshift-role
aws iam put-role-policy --role-name redshift-role \
  --policy-name redshift-s3 \
  --policy-document file://redshift-s3-policy.json
Data Analysis with Amazon Redshift
Create a single-node Amazon Redshift data warehouse:
aws redshift create-cluster \
  --cluster-identifier demo \
  --db-name demo \
  --node-type dc1.large \
  --cluster-type single-node \
  --iam-roles "arn:aws:iam::YOUR-AWS-ACCOUNT:role/redshift-role" \
  --master-username master \
  --master-user-password YOUR-REDSHIFT-PASSWORD \
  --publicly-accessible \
  --port 8192
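Cluster creation takes a few minutes, and the endpoint needed for psql later is only published once the cluster is available. A hedged boto3 sketch to wait for it and print the endpoint (the 'cluster_available' waiter and describe_clusters are standard Redshift client calls):

import boto3

redshift = boto3.client('redshift')
# Blocks until the cluster status is "available"
redshift.get_waiter('cluster_available').wait(ClusterIdentifier='demo')
endpoint = redshift.describe_clusters(ClusterIdentifier='demo')['Clusters'][0]['Endpoint']
print(endpoint['Address'], endpoint['Port'])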
Collect
Weblogs – Common Log Format (CLF)
75.35.230.210 - - [20/Jul/2009:22:22:42 -0700]
"GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236
"http://www.swivel.com/graphs/show/1163466"
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11)
Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"
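The Spark code later in this deck reassembles these space-split fields by hand; as a reference for the record layout, here is a minimal Python sketch that parses the same combined log format with a regular expression (the variable names are illustrative, not part of the workshop steps):

import re

# IP, identd, user, [timestamp], "request", status, size, "referrer", "user agent"
CLF = re.compile(r'(\S+) (\S+) (\S+) \[(.*?)\] "(.*?)" (\S+) (\S+) "(.*?)" "(.*?)"')

line = '75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] "GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 "http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (Windows; U; ...)"'
ip, ident, user, ts, request, status, size, referrer, agent = CLF.match(line).groups()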
Writing into Amazon Kinesis Firehose
Download the demo weblog: http://bit.ly/aws-big-data
Open Python and run the following code to import the log into the stream:
import boto3

firehose = boto3.client('firehose')

with open('weblog', 'r') as f:
    for line in f:
        firehose.put_record(
            DeliveryStreamName='demo-firehose-stream',
            Record={'Data': line}
        )
        print('Record added')
Process
Apache Spark
• Fast, general purpose engine
for large-scale data processing
• Write applications quickly in
Java, Scala, or Python
• Combine SQL, streaming, and
complex analytics
Spark SQL
Spark's module for working with structured data using SQL
Run unmodified Hive queries on existing data
Apache Zeppelin
• Web-based notebook for interactive analytics
• Multiple language backend
• Apache Spark integration
• Data visualization
• Collaboration
https://zeppelin.incubator.apache.org/
View the Output Files in Amazon S3
After about 1 minute, you should see files in your S3
bucket:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/firehose/ --recursive
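The same check from Python, if you would rather script it (a sketch using the standard boto3 S3 client; replace the bucket placeholder):

import boto3

s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='YOUR-S3-BUCKET-NAME', Prefix='firehose/')
for obj in resp.get('Contents', []):
    print(obj['Key'], obj['Size'])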
Connect to Your EMR Cluster and Zeppelin
aws emr describe-cluster --cluster-id YOUR-EMR-CLUSTER-ID
Copy the MasterPublicDnsName. Use port forwarding so you can
access Zeppelin at http://localhost:18890 on your local machine.
ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 \
  hadoop@YOUR-EMR-DNS-NAME
Open Zeppelin with your local web browser and create a new “Note”:
http://localhost:18890
Exploring the Data in Amazon S3 using Spark
Download the Zeppelin notebook: http://bit.ly/aws-big-data-zeppelin
// Load all the files from S3 into an RDD
val accessLogLines = sc.textFile("s3://YOUR-S3-BUCKET-NAME/firehose/*/*/*/*")
// Count the lines
accessLogLines.count
// Print one line as a string
accessLogLines.first
// Lines are delimited by spaces, so split each line into fields
var accessLogFields = accessLogLines.map(_.split(" ").map(_.trim))
// Print the fields of a line
accessLogFields.first
Combine Fields: “A, B, C” → “A B C”
var accessLogColumns = accessLogFields
  .map( arrayOfFields => { var temp1 = ""; for (field <- arrayOfFields) yield {
    var temp2 = ""
    if (temp1.replaceAll("\\[", "\"").startsWith("\"") && !temp1.endsWith("\""))
      temp1 = temp1 + " " + field.replaceAll("[\\[\\]]", "\"")
    else temp1 = field.replaceAll("[\\[\\]]", "\"")
    temp2 = temp1
    if (temp1.endsWith("\"")) temp1 = ""
    temp2
  }})
  .map( fields => fields.filter(field => (field.startsWith("\"") &&
    field.endsWith("\"")) || !field.startsWith("\"") ))
  .map(fields => fields.map(_.replaceAll("\"", "")))
Create a Data Frame and Transform the Data
import java.sql.Timestamp
import java.net.URL
case class accessLogs(
ipAddress: String,
requestTime: Timestamp,
requestMethod: String,
requestPath: String,
requestProtocol: String,
responseCode: String,
responseSize: String,
referrerHost: String,
userAgent: String
)
Create a Data Frame and Transform the Data
val accessLogsDF = accessLogColumns.map(line => {
var ipAddress = line(0)
var requestTime = new Timestamp(new
java.text.SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z").parse(line(3)).getTime())
var requestString = line(4).split(" ").map(_.trim())
var requestMethod = if (line(4).toString() != "-") requestString(0) else ""
var requestPath = if (line(4).toString() != "-") requestString(1) else ""
var requestProtocol = if (line(4).toString() != "-") requestString(2) else ""
var responseCode = line(5).replaceAll("-","")
var responseSize = line(6).replaceAll("-","")
var referrerHost = line(7)
var userAgent = line(8)
accessLogs(ipAddress, requestTime, requestMethod, requestPath,
requestProtocol,responseCode, responseSize, referrerHost, userAgent)
}).toDF()
Create an External Table Backed by Amazon S3
%sql
CREATE EXTERNAL TABLE access_logs
(
ip_address String,
request_time Timestamp,
request_method String,
request_path String,
request_protocol String,
response_code String,
response_size String,
referrer_host String,
user_agent String
)
PARTITIONED BY (year STRING,month STRING, day STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
Configure Hive Partitioning and Compression
// set up Hive's "dynamic partitioning"
%sql
SET hive.exec.dynamic.partition=true
// compress output files on Amazon S3 using Gzip
%sql
SET hive.exec.compress.output=true
%sql
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
%sql
SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
%sql
SET hive.exec.dynamic.partition.mode=nonstrict;
Write Output to Amazon S3
import org.apache.spark.sql.SaveMode
accessLogsDF
.withColumn("year", year(accessLogsDF("requestTime")))
.withColumn("month", month(accessLogsDF("requestTime")))
.withColumn("day", dayofmonth(accessLogsDF("requestTime")))
.write
.partitionBy("year","month","day")
.mode(SaveMode.Overwrite)
.insertInto("access_logs")
Query the Data Using Spark SQL
// Check the count of records
%sql
select count(*) from access_logs
// Fetch the first 10 records
%sql
select * from access_logs limit 10
View the Output Files in Amazon S3
Leave Zeppelin and go back to the console…
List the partition prefixes and output files:
aws s3 ls s3://YOUR-S3-BUCKET-NAME/access-log-processed/ \
  --recursive
Analyze
Connect to Amazon Redshift
Using the PostgreSQL CLI
psql -h YOUR-REDSHIFT-ENDPOINT \
  -p 8192 -U master demo
Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers
or native Amazon Redshift support
• Aginity Workbench for Amazon Redshift
• SQL Workbench/J
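As an alternative to the GUI clients above, any PostgreSQL-compatible driver works. A minimal connectivity check with Python's psycopg2 (psycopg2 is not part of the workshop steps and is shown only as an assumption; reuse the endpoint and password from earlier):

import psycopg2

conn = psycopg2.connect(host='YOUR-REDSHIFT-ENDPOINT', port=8192,
                        dbname='demo', user='master',
                        password='YOUR-REDSHIFT-PASSWORD')
cur = conn.cursor()
cur.execute('SELECT 1;')  # simple sanity check that the connection works
print(cur.fetchone()[0])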
Create an Amazon Redshift Table to Hold Your Data
CREATE TABLE accesslogs
(
host_address varchar(512),
request_time timestamp,
request_method varchar(5),
request_path varchar(1024),
request_protocol varchar(10),
response_code Int,
response_size Int,
referrer_host varchar(1024),
user_agent varchar(512)
)
DISTKEY(host_address)
SORTKEY(request_time);
Loading Data into Amazon Redshift
The “COPY” command loads files in parallel from Amazon S3:
COPY accesslogs
FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
CREDENTIALS
'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME'
DELIMITER '\t'
MAXERROR 0
GZIP;
Amazon Redshift Test Queries
-- find distribution of response codes over days
SELECT TRUNC(request_time), response_code, COUNT(1) FROM
accesslogs GROUP BY 1,2 ORDER BY 1,3 DESC;
-- find the 404 status codes
SELECT COUNT(1) FROM accessLogs WHERE response_code = 404;
-- show the most-requested path returning 404 (PAGE NOT FOUND)
SELECT TOP 1 request_path,COUNT(1) FROM accesslogs WHERE
response_code = 404 GROUP BY 1 ORDER BY 2 DESC;
Visualize the Results
DEMO
Amazon QuickSight
Learn from big data experts
blogs.aws.amazon.com/bigdata
Thank you!

Más contenido relacionado

La actualidad más candente

AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)
AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)
AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)Amazon Web Services
 
Tackle Your Dark Data Challenge with AWS Glue - AWS Online Tech Talks
Tackle Your Dark Data  Challenge with AWS Glue - AWS Online Tech TalksTackle Your Dark Data  Challenge with AWS Glue - AWS Online Tech Talks
Tackle Your Dark Data Challenge with AWS Glue - AWS Online Tech TalksAmazon Web Services
 
Deep Dive and Best Practices for Real Time Streaming Applications
Deep Dive and Best Practices for Real Time Streaming ApplicationsDeep Dive and Best Practices for Real Time Streaming Applications
Deep Dive and Best Practices for Real Time Streaming ApplicationsAmazon Web Services
 
Deploying your Data Warehouse on AWS
Deploying your Data Warehouse on AWSDeploying your Data Warehouse on AWS
Deploying your Data Warehouse on AWSAmazon Web Services
 
Modern Data Architectures for Business Insights at Scale
Modern Data Architectures for Business Insights at ScaleModern Data Architectures for Business Insights at Scale
Modern Data Architectures for Business Insights at ScaleAmazon Web Services
 
Big Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWSBig Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWSAmazon Web Services
 
Big Data Architectural Patterns and Best Practices
Big Data Architectural Patterns and Best PracticesBig Data Architectural Patterns and Best Practices
Big Data Architectural Patterns and Best PracticesAmazon Web Services
 
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAmazon Web Services
 
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...Amazon Web Services
 
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv Amazon Web Services
 
Best Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSBest Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSAmazon Web Services
 
Supercharging the Value of Your Data with Amazon S3
Supercharging the Value of Your Data with Amazon S3Supercharging the Value of Your Data with Amazon S3
Supercharging the Value of Your Data with Amazon S3Amazon Web Services
 
Building A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSBuilding A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSAmazon Web Services
 
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...Amazon Web Services
 
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of Things
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of ThingsDay 4 - Big Data on AWS - RedShift, EMR & the Internet of Things
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of ThingsAmazon Web Services
 
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big DataAmazon Web Services
 
The Power of Big Data - AWS Summit Bahrain 2017
The Power of Big Data - AWS Summit Bahrain 2017The Power of Big Data - AWS Summit Bahrain 2017
The Power of Big Data - AWS Summit Bahrain 2017Amazon Web Services
 
Data Warehousing in the Era of Big Data: Intro to Amazon Redshift
Data Warehousing in the Era of Big Data: Intro to Amazon RedshiftData Warehousing in the Era of Big Data: Intro to Amazon Redshift
Data Warehousing in the Era of Big Data: Intro to Amazon RedshiftAmazon Web Services
 

La actualidad más candente (20)

AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)
AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)
AWS re:Invent 2016: Big Data Mini Con State of the Union (BDM205)
 
AWS Big Data Solution Days
AWS Big Data Solution DaysAWS Big Data Solution Days
AWS Big Data Solution Days
 
Tackle Your Dark Data Challenge with AWS Glue - AWS Online Tech Talks
Tackle Your Dark Data  Challenge with AWS Glue - AWS Online Tech TalksTackle Your Dark Data  Challenge with AWS Glue - AWS Online Tech Talks
Tackle Your Dark Data Challenge with AWS Glue - AWS Online Tech Talks
 
Deep Dive and Best Practices for Real Time Streaming Applications
Deep Dive and Best Practices for Real Time Streaming ApplicationsDeep Dive and Best Practices for Real Time Streaming Applications
Deep Dive and Best Practices for Real Time Streaming Applications
 
AWS Analytics
AWS AnalyticsAWS Analytics
AWS Analytics
 
Deploying your Data Warehouse on AWS
Deploying your Data Warehouse on AWSDeploying your Data Warehouse on AWS
Deploying your Data Warehouse on AWS
 
Modern Data Architectures for Business Insights at Scale
Modern Data Architectures for Business Insights at ScaleModern Data Architectures for Business Insights at Scale
Modern Data Architectures for Business Insights at Scale
 
Big Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWSBig Data Architectural Patterns and Best Practices on AWS
Big Data Architectural Patterns and Best Practices on AWS
 
Big Data Architectural Patterns and Best Practices
Big Data Architectural Patterns and Best PracticesBig Data Architectural Patterns and Best Practices
Big Data Architectural Patterns and Best Practices
 
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOTAWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
AWS APAC Webinar Week - Big Data on AWS. RedShift, EMR, & IOT
 
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
AWS re:Invent 2016: Big Data Architectural Patterns and Best Practices on AWS...
 
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv
Streaming data analytics (Kinesis, EMR/Spark) - Pop-up Loft Tel Aviv
 
Best Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWSBest Practices for Building a Data Lake on AWS
Best Practices for Building a Data Lake on AWS
 
Supercharging the Value of Your Data with Amazon S3
Supercharging the Value of Your Data with Amazon S3Supercharging the Value of Your Data with Amazon S3
Supercharging the Value of Your Data with Amazon S3
 
Building A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWSBuilding A Modern Data Analytics Architecture on AWS
Building A Modern Data Analytics Architecture on AWS
 
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...
(BDT310) Big Data Architectural Patterns and Best Practices on AWS | AWS re:I...
 
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of Things
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of ThingsDay 4 - Big Data on AWS - RedShift, EMR & the Internet of Things
Day 4 - Big Data on AWS - RedShift, EMR & the Internet of Things
 
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data
(BDT322) How Redfin & Twitter Leverage Amazon S3 For Big Data
 
The Power of Big Data - AWS Summit Bahrain 2017
The Power of Big Data - AWS Summit Bahrain 2017The Power of Big Data - AWS Summit Bahrain 2017
The Power of Big Data - AWS Summit Bahrain 2017
 
Data Warehousing in the Era of Big Data: Intro to Amazon Redshift
Data Warehousing in the Era of Big Data: Intro to Amazon RedshiftData Warehousing in the Era of Big Data: Intro to Amazon Redshift
Data Warehousing in the Era of Big Data: Intro to Amazon Redshift
 

Destacado

Customer Sharing: Trend Micro - Trend Micro's DevOps Practices
Customer Sharing: Trend Micro - Trend Micro's DevOps Practices Customer Sharing: Trend Micro - Trend Micro's DevOps Practices
Customer Sharing: Trend Micro - Trend Micro's DevOps Practices Amazon Web Services
 
AWS Partner ConneXions Taiwan - Q3 2016 Technology Update
AWS Partner ConneXions Taiwan - Q3 2016 Technology UpdateAWS Partner ConneXions Taiwan - Q3 2016 Technology Update
AWS Partner ConneXions Taiwan - Q3 2016 Technology UpdateAmazon Web Services
 
Digital Workloads on Amazon Web Services
Digital Workloads on Amazon Web ServicesDigital Workloads on Amazon Web Services
Digital Workloads on Amazon Web ServicesAmazon Web Services
 
Hackproof Your Cloud: Responding to 2016 Threats
Hackproof Your Cloud: Responding to 2016 ThreatsHackproof Your Cloud: Responding to 2016 Threats
Hackproof Your Cloud: Responding to 2016 ThreatsAmazon Web Services
 
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsDevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsAmazon Web Services
 
Big Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryBig Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryAmazon Web Services
 
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...Amazon Web Services
 
This One Weird API Request Will Save You Thousands
This One Weird API Request Will Save You ThousandsThis One Weird API Request Will Save You Thousands
This One Weird API Request Will Save You ThousandsAmazon Web Services
 
Maximizing Business Value as You Migrate to AWS
Maximizing Business Value as You Migrate to AWSMaximizing Business Value as You Migrate to AWS
Maximizing Business Value as You Migrate to AWSAmazon Web Services
 
another day, another billion packets
another day, another billion packetsanother day, another billion packets
another day, another billion packetsAmazon Web Services
 
Barracuda WAF: Scalable Security for Applications on AWS
Barracuda WAF: Scalable Security for Applications on AWSBarracuda WAF: Scalable Security for Applications on AWS
Barracuda WAF: Scalable Security for Applications on AWSAmazon Web Services
 
Introduction to IAM + Best Practices
Introduction to IAM + Best PracticesIntroduction to IAM + Best Practices
Introduction to IAM + Best PracticesAmazon Web Services
 
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...Amazon Web Services
 
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...Amazon Web Services
 
AWS Mobile Hub - Building Mobile Apps with AWS
AWS Mobile Hub - Building Mobile Apps with AWSAWS Mobile Hub - Building Mobile Apps with AWS
AWS Mobile Hub - Building Mobile Apps with AWSAmazon Web Services
 
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar Series
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar SeriesAmazon Aurora for the Enterprise - August 2016 Monthly Webinar Series
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar SeriesAmazon Web Services
 
Customer Case Study: Achieving PCI Compliance in AWS
Customer Case Study: Achieving PCI Compliance in AWSCustomer Case Study: Achieving PCI Compliance in AWS
Customer Case Study: Achieving PCI Compliance in AWSAmazon Web Services
 
Secure Content Delivery Using Amazon CloudFront and AWS WAF
Secure Content Delivery Using Amazon CloudFront and AWS WAFSecure Content Delivery Using Amazon CloudFront and AWS WAF
Secure Content Delivery Using Amazon CloudFront and AWS WAFAmazon Web Services
 

Destacado (20)

Hadoop in action
Hadoop in actionHadoop in action
Hadoop in action
 
Customer Sharing: Trend Micro - Trend Micro's DevOps Practices
Customer Sharing: Trend Micro - Trend Micro's DevOps Practices Customer Sharing: Trend Micro - Trend Micro's DevOps Practices
Customer Sharing: Trend Micro - Trend Micro's DevOps Practices
 
AWS Partner ConneXions Taiwan - Q3 2016 Technology Update
AWS Partner ConneXions Taiwan - Q3 2016 Technology UpdateAWS Partner ConneXions Taiwan - Q3 2016 Technology Update
AWS Partner ConneXions Taiwan - Q3 2016 Technology Update
 
Digital Workloads on Amazon Web Services
Digital Workloads on Amazon Web ServicesDigital Workloads on Amazon Web Services
Digital Workloads on Amazon Web Services
 
Hackproof Your Cloud: Responding to 2016 Threats
Hackproof Your Cloud: Responding to 2016 ThreatsHackproof Your Cloud: Responding to 2016 Threats
Hackproof Your Cloud: Responding to 2016 Threats
 
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer ToolsDevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools
DevOps on AWS: Deep Dive on Continuous Delivery and the AWS Developer Tools
 
Big Data Solutions Day - Calgary
Big Data Solutions Day - CalgaryBig Data Solutions Day - Calgary
Big Data Solutions Day - Calgary
 
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...
Database Migration: Simple, Cross-Engine and Cross-Platform Migrations with M...
 
This One Weird API Request Will Save You Thousands
This One Weird API Request Will Save You ThousandsThis One Weird API Request Will Save You Thousands
This One Weird API Request Will Save You Thousands
 
Maximizing Business Value as You Migrate to AWS
Maximizing Business Value as You Migrate to AWSMaximizing Business Value as You Migrate to AWS
Maximizing Business Value as You Migrate to AWS
 
another day, another billion packets
another day, another billion packetsanother day, another billion packets
another day, another billion packets
 
AWSome Day Leeds
AWSome Day Leeds AWSome Day Leeds
AWSome Day Leeds
 
Barracuda WAF: Scalable Security for Applications on AWS
Barracuda WAF: Scalable Security for Applications on AWSBarracuda WAF: Scalable Security for Applications on AWS
Barracuda WAF: Scalable Security for Applications on AWS
 
Introduction to IAM + Best Practices
Introduction to IAM + Best PracticesIntroduction to IAM + Best Practices
Introduction to IAM + Best Practices
 
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...
AWS re:Invent 2016: AWS Customers Saving Lives with Mobile and IoT Technology...
 
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...
Building HPC Clusters as Code in the (Almost) Infinite Cloud | AWS Public Sec...
 
AWS Mobile Hub - Building Mobile Apps with AWS
AWS Mobile Hub - Building Mobile Apps with AWSAWS Mobile Hub - Building Mobile Apps with AWS
AWS Mobile Hub - Building Mobile Apps with AWS
 
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar Series
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar SeriesAmazon Aurora for the Enterprise - August 2016 Monthly Webinar Series
Amazon Aurora for the Enterprise - August 2016 Monthly Webinar Series
 
Customer Case Study: Achieving PCI Compliance in AWS
Customer Case Study: Achieving PCI Compliance in AWSCustomer Case Study: Achieving PCI Compliance in AWS
Customer Case Study: Achieving PCI Compliance in AWS
 
Secure Content Delivery Using Amazon CloudFront and AWS WAF
Secure Content Delivery Using Amazon CloudFront and AWS WAFSecure Content Delivery Using Amazon CloudFront and AWS WAF
Secure Content Delivery Using Amazon CloudFront and AWS WAF
 

Similar a Workshop: Building Your First Big Data Application on AWS

Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWSAmazon Web Services
 
Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWSAmazon Web Services
 
Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWSAmazon Web Services
 
(BDT205) Your First Big Data Application On AWS
(BDT205) Your First Big Data Application On AWS(BDT205) Your First Big Data Application On AWS
(BDT205) Your First Big Data Application On AWSAmazon Web Services
 
AWS APAC Webinar Week - Launching Your First Big Data Project on AWS
AWS APAC Webinar Week - Launching Your First Big Data Project on AWSAWS APAC Webinar Week - Launching Your First Big Data Project on AWS
AWS APAC Webinar Week - Launching Your First Big Data Project on AWSAmazon Web Services
 
AWS September Webinar Series - Building Your First Big Data Application on AWS
AWS September Webinar Series - Building Your First Big Data Application on AWS AWS September Webinar Series - Building Your First Big Data Application on AWS
AWS September Webinar Series - Building Your First Big Data Application on AWS Amazon Web Services
 
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...DevOps_Fest
 
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...Ortus Solutions, Corp
 
Terraform, Ansible, or pure CloudFormation?
Terraform, Ansible, or pure CloudFormation?Terraform, Ansible, or pure CloudFormation?
Terraform, Ansible, or pure CloudFormation?geekQ
 
Building a Serverless Pipeline
Building a Serverless PipelineBuilding a Serverless Pipeline
Building a Serverless PipelineJulien SIMON
 
Serverless architecture with AWS Lambda (June 2016)
Serverless architecture with AWS Lambda (June 2016)Serverless architecture with AWS Lambda (June 2016)
Serverless architecture with AWS Lambda (June 2016)Julien SIMON
 
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014Amazon Web Services
 
AWS for Startups, London - Programming AWS
AWS for Startups, London - Programming AWSAWS for Startups, London - Programming AWS
AWS for Startups, London - Programming AWSAmazon Web Services
 
Amazon EMR Masterclass
Amazon EMR MasterclassAmazon EMR Masterclass
Amazon EMR MasterclassIan Massingham
 
Deployment and Management on AWS:
 A Deep Dive on Options and Tools
Deployment and Management on AWS:
 A Deep Dive on Options and ToolsDeployment and Management on AWS:
 A Deep Dive on Options and Tools
Deployment and Management on AWS:
 A Deep Dive on Options and ToolsDanilo Poccia
 

Similar a Workshop: Building Your First Big Data Application on AWS (20)

Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWS
 
Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWS
 
Building Your First Big Data Application on AWS
Building Your First Big Data Application on AWSBuilding Your First Big Data Application on AWS
Building Your First Big Data Application on AWS
 
My First Big Data Application
My First Big Data ApplicationMy First Big Data Application
My First Big Data Application
 
(BDT205) Your First Big Data Application On AWS
(BDT205) Your First Big Data Application On AWS(BDT205) Your First Big Data Application On AWS
(BDT205) Your First Big Data Application On AWS
 
AWS APAC Webinar Week - Launching Your First Big Data Project on AWS
AWS APAC Webinar Week - Launching Your First Big Data Project on AWSAWS APAC Webinar Week - Launching Your First Big Data Project on AWS
AWS APAC Webinar Week - Launching Your First Big Data Project on AWS
 
AWS September Webinar Series - Building Your First Big Data Application on AWS
AWS September Webinar Series - Building Your First Big Data Application on AWS AWS September Webinar Series - Building Your First Big Data Application on AWS
AWS September Webinar Series - Building Your First Big Data Application on AWS
 
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...
DevOps Fest 2019. Alex Casalboni. Configuration management and service discov...
 
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...
ITB2019 Build Fine-Grained Control of Amazon Web Services in Your CFML App - ...
 
Terraform, Ansible, or pure CloudFormation?
Terraform, Ansible, or pure CloudFormation?Terraform, Ansible, or pure CloudFormation?
Terraform, Ansible, or pure CloudFormation?
 
Building a Serverless Pipeline
Building a Serverless PipelineBuilding a Serverless Pipeline
Building a Serverless Pipeline
 
TIAD 2016 : Building a Serverless Pipeline
TIAD 2016 : Building a Serverless PipelineTIAD 2016 : Building a Serverless Pipeline
TIAD 2016 : Building a Serverless Pipeline
 
Serverless architecture with AWS Lambda (June 2016)
Serverless architecture with AWS Lambda (June 2016)Serverless architecture with AWS Lambda (June 2016)
Serverless architecture with AWS Lambda (June 2016)
 
AWS Serverless Workshop
AWS Serverless WorkshopAWS Serverless Workshop
AWS Serverless Workshop
 
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014
(BDT205) Your First Big Data Application on AWS | AWS re:Invent 2014
 
AWS for Startups, London - Programming AWS
AWS for Startups, London - Programming AWSAWS for Startups, London - Programming AWS
AWS for Startups, London - Programming AWS
 
Amazed by AWS Series #4
Amazed by AWS Series #4Amazed by AWS Series #4
Amazed by AWS Series #4
 
Amazon EMR Masterclass
Amazon EMR MasterclassAmazon EMR Masterclass
Amazon EMR Masterclass
 
Amazon EMR Masterclass
Amazon EMR MasterclassAmazon EMR Masterclass
Amazon EMR Masterclass
 
Deployment and Management on AWS:
 A Deep Dive on Options and Tools
Deployment and Management on AWS:
 A Deep Dive on Options and ToolsDeployment and Management on AWS:
 A Deep Dive on Options and Tools
Deployment and Management on AWS:
 A Deep Dive on Options and Tools
 

Más de Amazon Web Services

Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Amazon Web Services
 
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Amazon Web Services
 
Esegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS FargateEsegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS FargateAmazon Web Services
 
Costruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWSCostruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWSAmazon Web Services
 
Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot Amazon Web Services
 
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Amazon Web Services
 
OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...Amazon Web Services
 
Microsoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsMicrosoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsAmazon Web Services
 
Database Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatareDatabase Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatareAmazon Web Services
 
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSCrea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSAmazon Web Services
 
API moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAPI moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAmazon Web Services
 
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareDatabase Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareAmazon Web Services
 
Tools for building your MVP on AWS
Tools for building your MVP on AWSTools for building your MVP on AWS
Tools for building your MVP on AWSAmazon Web Services
 
How to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckHow to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckAmazon Web Services
 
Building a web application without servers
Building a web application without serversBuilding a web application without servers
Building a web application without serversAmazon Web Services
 
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...Amazon Web Services
 
Introduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container ServiceIntroduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container ServiceAmazon Web Services
 

Más de Amazon Web Services (20)

Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn...
 
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
Big Data per le Startup: come creare applicazioni Big Data in modalità Server...
 
Esegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS FargateEsegui pod serverless con Amazon EKS e AWS Fargate
Esegui pod serverless con Amazon EKS e AWS Fargate
 
Costruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWSCostruire Applicazioni Moderne con AWS
Costruire Applicazioni Moderne con AWS
 
Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot Come spendere fino al 90% in meno con i container e le istanze spot
Come spendere fino al 90% in meno con i container e le istanze spot
 
Open banking as a service
Open banking as a serviceOpen banking as a service
Open banking as a service
 
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
Rendi unica l’offerta della tua startup sul mercato con i servizi Machine Lea...
 
OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...OpsWorks Configuration Management: automatizza la gestione e i deployment del...
OpsWorks Configuration Management: automatizza la gestione e i deployment del...
 
Microsoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows WorkloadsMicrosoft Active Directory su AWS per supportare i tuoi Windows Workloads
Microsoft Active Directory su AWS per supportare i tuoi Windows Workloads
 
Computer Vision con AWS
Computer Vision con AWSComputer Vision con AWS
Computer Vision con AWS
 
Database Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatareDatabase Oracle e VMware Cloud on AWS i miti da sfatare
Database Oracle e VMware Cloud on AWS i miti da sfatare
 
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJSCrea la tua prima serverless ledger-based app con QLDB e NodeJS
Crea la tua prima serverless ledger-based app con QLDB e NodeJS
 
API moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e webAPI moderne real-time per applicazioni mobili e web
API moderne real-time per applicazioni mobili e web
 
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatareDatabase Oracle e VMware Cloud™ on AWS: i miti da sfatare
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare
 
Tools for building your MVP on AWS
Tools for building your MVP on AWSTools for building your MVP on AWS
Tools for building your MVP on AWS
 
How to Build a Winning Pitch Deck
How to Build a Winning Pitch DeckHow to Build a Winning Pitch Deck
How to Build a Winning Pitch Deck
 
Building a web application without servers
Building a web application without serversBuilding a web application without servers
Building a web application without servers
 
Fundraising Essentials
Fundraising EssentialsFundraising Essentials
Fundraising Essentials
 
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
AWS_HK_StartupDay_Building Interactive websites while automating for efficien...
 
Introduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container ServiceIntroduzione a Amazon Elastic Container Service
Introduzione a Amazon Elastic Container Service
 

Último

EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWERMadyBayot
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Orbitshub
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Bhuvaneswari Subramani
 
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelDeepika Singh
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Jeffrey Haguewood
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Angeliki Cooney
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesrafiqahmad00786416
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Zilliz
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Victor Rentea
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdfSandro Moreira
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsNanddeep Nachan
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MIND CTI
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfOrbitshub
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century educationjfdjdjcjdnsjd
 

Último (20)

Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..Understanding the FAA Part 107 License ..
Understanding the FAA Part 107 License ..
 
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWEREMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
EMPOWERMENT TECHNOLOGY GRADE 11 QUARTER 2 REVIEWER
 
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
Navigating the Deluge_ Dubai Floods and the Resilience of Dubai International...
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​Elevate Developer Efficiency & build GenAI Application with Amazon Q​
Elevate Developer Efficiency & build GenAI Application with Amazon Q​
 
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot ModelMcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
Mcleodganj Call Girls 🥰 8617370543 Service Offer VIP Hot Model
 
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
Web Form Automation for Bonterra Impact Management (fka Social Solutions Apri...
 
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
Biography Of Angeliki Cooney | Senior Vice President Life Sciences | Albany, ...
 
ICT role in 21st century education and its challenges
ICT role in 21st century education and its challengesICT role in 21st century education and its challenges
ICT role in 21st century education and its challenges
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)Introduction to Multilingual Retrieval Augmented Generation (RAG)
Introduction to Multilingual Retrieval Augmented Generation (RAG)
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024Finding Java's Hidden Performance Traps @ DevoxxUK 2024
Finding Java's Hidden Performance Traps @ DevoxxUK 2024
 
[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf[BuildWithAI] Introduction to Gemini.pdf
[BuildWithAI] Introduction to Gemini.pdf
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : Uncertainty
 
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdfRising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
Rising Above_ Dubai Floods and the Fortitude of Dubai International Airport.pdf
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 

Workshop: Building Your First Big Data Application on AWS

  • 1. © 2016, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Shree Kenghe and Dario Rivera Solution Architect, AWS Building Your First Big Data Application on AWS
  • 2. Your First Big Data Application on AWS PROCESS STORE ANALYZE & VISUALIZE COLLECT
  • 3. Your First Big Data Application on AWS PROCESS: Amazon EMR with Spark & Hive STORE ANALYZE & VISUALIZE: Amazon Redshift and Amazon QuickSight COLLECT: Amazon Kinesis Firehose
  • 5. Setting up the environment
  • 6. Data Storage with Amazon S3 Download all the CLI steps: http://bit.ly/aws-big-data-steps Create an Amazon S3 bucket to store the data collected with Amazon Kinesis Firehose aws s3 mb s3://YOUR-S3-BUCKET-NAME
  • 7. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket firehose-policy.json: { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Principal": {"Service": "firehose.amazonaws.com"}, "Action": "sts:AssumeRole" } }
  • 8. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket s3-rw-policy.json: { "Version": "2012-10-17", "Statement": { "Effect": "Allow", "Action": "s3:*", "Resource": [ "arn:aws:s3:::YOUR-S3-BUCKET-NAME", "arn:aws:s3:::YOUR-S3-BUCKET-NAME/*" ] } }
  • 9. Access Control with IAM Create an IAM role to allow Firehose to write to the S3 bucket aws iam create-role --role-name firehose-demo --assume-role-policy-document file://firehose-policy.json Copy the value in “Arn” in the output, e.g., arn:aws:iam::123456789:role/firehose-demo aws iam put-role-policy --role-name firehose-demo --policy-name firehose-s3-rw --policy-document file://s3-rw-policy.json
  • 10. Data Collection with Amazon Kinesis Firehose Create a Firehose stream for incoming log data aws firehose create-delivery-stream --delivery-stream-name demo-firehose-stream --s3-destination-configuration RoleARN=YOUR-FIREHOSE-ARN, BucketARN="arn:aws:s3:::YOUR-S3-BUCKET-NAME", Prefix=firehose/, BufferingHints={IntervalInSeconds=60}, CompressionFormat=GZIP
  • 11. Data Processing with Amazon EMR Launch an Amazon EMR cluster with Spark and Hive aws emr create-cluster --name "demo" --release-label emr-4.5.0 --instance-type m3.xlarge --instance-count 2 --ec2-attributes KeyName=YOUR-AWS-SSH-KEY --use-default-roles --applications Name=Hive Name=Spark Name=Zeppelin-Sandbox Record your ClusterId from the output.
  • 12. Access Control with IAM Create an IAM role to allow Redshift to copy from S3 bucket aws iam create-role --role-name redshift-role --assume-role-policy-document file://redshift-policy.json Copy the value in “arn” in the output, e.g., arn:aws:iam::123456789:role/redshift-role aws iam put-role-policy --role-name redshift-role --policy-name redshift-s3 --policy-document file://redshift-s3-policy.json
  • 13. Data Analysis with Amazon Redshift Create a single-node Amazon Redshift data warehouse: aws redshift create-cluster --cluster-identifier demo --db-name demo --node-type dc1.large --cluster-type single-node --iam-roles "arn:aws:iam::YOUR-AWS-ACCOUNT:role/redshift- copy-role" --master-username master --master-user-password YOUR-REDSHIFT-PASSWORD --publicly-accessible --port 8192
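    Cluster creation also takes a few minutes. A minimal boto3 sketch that waits for availability and prints the endpoint you will need later for psql (assuming the cluster identifier demo used above):
    import boto3

    redshift = boto3.client('redshift')

    # Block until the cluster is available, then print host and port for the psql step later
    redshift.get_waiter('cluster_available').wait(ClusterIdentifier='demo')
    endpoint = redshift.describe_clusters(ClusterIdentifier='demo')['Clusters'][0]['Endpoint']
    print(endpoint['Address'], endpoint['Port'])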
  • 15. Weblogs – Common Log Format (CLF) 75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] "GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 "http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"
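    To make the field boundaries concrete before processing the logs at scale, here is an illustrative Python regular expression that splits one CLF line into its parts; it is a sketch only, not part of the workshop, which does this parsing in Spark later:
    import re

    # ip, identity, user, [timestamp], "request", status, size, "referrer", "user agent"
    CLF_PATTERN = re.compile(
        r'(\S+) (\S+) (\S+) \[(.*?)\] "(.*?)" (\S+) (\S+) "(.*?)" "(.*?)"'
    )

    line = ('75.35.230.210 - - [20/Jul/2009:22:22:42 -0700] '
            '"GET /images/pigtrihawk.jpg HTTP/1.1" 200 29236 '
            '"http://www.swivel.com/graphs/show/1163466" "Mozilla/5.0 (...)"')
    print(CLF_PATTERN.match(line).groups())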
  • 16. Writing into Amazon Kinesis Firehose
    Download the demo weblog: http://bit.ly/aws-big-data
    Open Python and run the following code to import the log into the stream:
    import boto3

    firehose = boto3.client('firehose')

    with open('weblog', 'r') as f:
        for line in f:
            firehose.put_record(
                DeliveryStreamName='demo-firehose-stream',
                Record={'Data': line.encode('utf-8')}  # Firehose expects bytes
            )
            print('Record added')
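    For a large log file, record-at-a-time calls are slow. Firehose also accepts batches of up to 500 records per request; a sketch of the same load using put_record_batch, with the same stream name and file as above (a production loader would also check FailedPutCount in the response):
    import boto3

    firehose = boto3.client('firehose')

    batch = []
    with open('weblog', 'r') as f:
        for line in f:
            batch.append({'Data': line.encode('utf-8')})
            if len(batch) == 500:  # Firehose allows at most 500 records per batch call
                firehose.put_record_batch(DeliveryStreamName='demo-firehose-stream', Records=batch)
                batch = []
    if batch:  # flush the remaining records
        firehose.put_record_batch(DeliveryStreamName='demo-firehose-stream', Records=batch)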
  • 18. Apache Spark
    • Fast, general-purpose engine for large-scale data processing
    • Write applications quickly in Java, Scala, or Python
    • Combine SQL, streaming, and complex analytics
  • 19. Spark SQL
    • Spark's module for working with structured data using SQL
    • Run unmodified Hive queries on existing data
  • 20. Apache Zeppelin
    • Web-based notebook for interactive analytics
    • Multiple language backends
    • Apache Spark integration
    • Data visualization
    • Collaboration
    https://zeppelin.incubator.apache.org/
  • 21. View the Output Files in Amazon S3
    After about a minute (the 60-second Firehose buffering interval configured earlier), you should see files in your S3 bucket:
    aws s3 ls s3://YOUR-S3-BUCKET-NAME/firehose/ --recursive
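    The same check can be scripted; a small boto3 sketch that lists whatever Firehose has delivered so far, assuming the bucket name and firehose/ prefix used above:
    import boto3

    s3 = boto3.client('s3')

    # List the objects Firehose has delivered under the firehose/ prefix
    resp = s3.list_objects_v2(Bucket='YOUR-S3-BUCKET-NAME', Prefix='firehose/')
    for obj in resp.get('Contents', []):
        print(obj['Key'], obj['Size'])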
  • 22. Connect to Your EMR Cluster and Zeppelin
    aws emr describe-cluster --cluster-id YOUR-EMR-CLUSTER-ID
    Copy the MasterPublicDnsName. Use SSH port forwarding so you can reach Zeppelin (which listens on port 8890 on the master node) at http://localhost:18890 on your local machine:
    ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 hadoop@YOUR-EMR-DNS-NAME
    Open Zeppelin in your local web browser and create a new "Note": http://localhost:18890
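    If you prefer not to copy the DNS name by hand, a boto3 sketch that looks it up and prints a ready-to-paste ssh command (the key path stays a placeholder for you to substitute):
    import boto3

    emr = boto3.client('emr')

    dns = emr.describe_cluster(ClusterId='YOUR-EMR-CLUSTER-ID')['Cluster']['MasterPublicDnsName']
    # Print the port-forwarding command from the slide with the DNS name filled in
    print('ssh -i PATH-TO-YOUR-SSH-KEY -L 18890:localhost:8890 hadoop@' + dns)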
  • 23. Exploring the Data in Amazon S3 Using Spark
    Download the Zeppelin notebook: http://bit.ly/aws-big-data-zeppelin
    // Load all the files from S3 into an RDD
    val accessLogLines = sc.textFile("s3://YOUR-S3-BUCKET-NAME/firehose/*/*/*/*")
    // Count the lines
    accessLogLines.count
    // Print one line as a string
    accessLogLines.first
    // The fields are delimited by spaces, so split each line into fields
    var accessLogFields = accessLogLines.map(_.split(" ").map(_.trim))
    // Print the fields of one line
    accessLogFields.first
  • 24. Combine Fields: “A, B, C”  “A B C”
    var accessLogColumns = accessLogFields
      .map( arrayOfFields => { var temp1 = ""; for (field <- arrayOfFields) yield {
        var temp2 = ""
        // While inside a quoted (or bracketed) field that has not closed yet,
        // keep appending tokens; brackets are normalized to quotes
        if (temp1.replaceAll("\\[", "\"").startsWith("\"") && !temp1.endsWith("\""))
          temp1 = temp1 + " " + field.replaceAll("\\[|\\]", "\"")
        else
          temp1 = field.replaceAll("\\[|\\]", "\"")
        temp2 = temp1
        if (temp1.endsWith("\"")) temp1 = ""
        temp2
      }})
      // Keep only completed fields: fully quoted, or never quoted at all
      .map( fields => fields.filter(field =>
        (field.startsWith("\"") && field.endsWith("\"")) || !field.startsWith("\"") ))
      // Strip the surrounding quotes
      .map(fields => fields.map(_.replaceAll("\"", "")))
  • 25. Create a Data Frame and Transform the Data
    import java.sql.Timestamp
    import java.net.URL

    case class accessLogs(
      ipAddress: String,
      requestTime: Timestamp,
      requestMethod: String,
      requestPath: String,
      requestProtocol: String,
      responseCode: String,
      responseSize: String,
      referrerHost: String,
      userAgent: String
    )
  • 26. Create a Data Frame and Transform the Data
    val accessLogsDF = accessLogColumns.map(line => {
      var ipAddress = line(0)
      var requestTime = new Timestamp(new java.text.SimpleDateFormat("dd/MMM/yyyy:HH:mm:ss Z").parse(line(3)).getTime())
      var requestString = line(4).split(" ").map(_.trim())
      var requestMethod = if (line(4).toString() != "-") requestString(0) else ""
      var requestPath = if (line(4).toString() != "-") requestString(1) else ""
      var requestProtocol = if (line(4).toString() != "-") requestString(2) else ""
      var responseCode = line(5).replaceAll("-","")
      var responseSize = line(6).replaceAll("-","")
      var referrerHost = line(7)
      var userAgent = line(8)
      accessLogs(ipAddress, requestTime, requestMethod, requestPath, requestProtocol, responseCode, responseSize, referrerHost, userAgent)
    }).toDF()
  • 27. Create an External Table Backed by Amazon S3
    %sql
    CREATE EXTERNAL TABLE access_logs (
      ip_address String,
      request_time Timestamp,
      request_method String,
      request_path String,
      request_protocol String,
      response_code String,
      response_size String,
      referrer_host String,
      user_agent String
    )
    PARTITIONED BY (year STRING, month STRING, day STRING)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
  • 28. Configure Hive Partitioning and Compression
    // set up Hive's "dynamic partitioning"
    %sql SET hive.exec.dynamic.partition=true
    // compress output files on Amazon S3 using Gzip
    %sql SET hive.exec.compress.output=true
    %sql SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec
    %sql SET io.compression.codecs=org.apache.hadoop.io.compress.GzipCodec
    %sql SET hive.exec.dynamic.partition.mode=nonstrict
  • 29. Write Output to Amazon S3
    import org.apache.spark.sql.SaveMode
    import org.apache.spark.sql.functions.{year, month, dayofmonth}

    accessLogsDF
      .withColumn("year", year(accessLogsDF("requestTime")))
      .withColumn("month", month(accessLogsDF("requestTime")))
      .withColumn("day", dayofmonth(accessLogsDF("requestTime")))
      .write
      .partitionBy("year","month","day")
      .mode(SaveMode.Overwrite)
      .insertInto("access_logs")
  • 30. Query the Data Using Spark SQL
    // Check the count of records
    %sql select count(*) from access_logs
    // Fetch the first 10 records
    %sql select * from access_logs limit 10
  • 31. View the Output Files in Amazon S3
    Leave Zeppelin and go back to the console…
    List the partition prefixes and output files:
    aws s3 ls s3://YOUR-S3-BUCKET-NAME/access-log-processed/ --recursive
  • 33. Connect to Amazon Redshift
    Using the PostgreSQL CLI:
    psql -h YOUR-REDSHIFT-ENDPOINT -p 8192 -U master demo
    Or use any JDBC or ODBC SQL client with the PostgreSQL 8.x drivers or native Amazon Redshift support:
    • Aginity Workbench for Amazon Redshift
    • SQL Workbench/J
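    A Python option, if you would rather query from a script than from psql, is a PostgreSQL driver such as psycopg2; this is not part of the workshop and is sketched here assuming psycopg2 is installed separately:
    import psycopg2  # assumed extra dependency: pip install psycopg2-binary

    conn = psycopg2.connect(
        host='YOUR-REDSHIFT-ENDPOINT', port=8192,
        dbname='demo', user='master', password='YOUR-REDSHIFT-PASSWORD')
    cur = conn.cursor()
    cur.execute('SELECT current_date;')  # simple connectivity check
    print(cur.fetchone())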
  • 34. Create an Amazon Redshift Table to Hold Your Data
    CREATE TABLE accesslogs (
      host_address varchar(512),
      request_time timestamp,
      request_method varchar(5),
      request_path varchar(1024),
      request_protocol varchar(10),
      response_code int,
      response_size int,
      referrer_host varchar(1024),
      user_agent varchar(512)
    )
    DISTKEY(host_address)
    SORTKEY(request_time);
  • 35. Loading Data into Amazon Redshift
    The COPY command loads files in parallel from Amazon S3:
    COPY accesslogs
    FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
    CREDENTIALS 'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME'
    DELIMITER '\t' MAXERROR 0 GZIP;
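    The same COPY can be run from Python over the psycopg2 connection sketched earlier; if the load fails, Redshift's STL_LOAD_ERRORS system table shows which rows were rejected and why:
    # Reusing the psycopg2 connection (conn) from the earlier sketch
    copy_sql = r"""
    COPY accesslogs
    FROM 's3://YOUR-S3-BUCKET-NAME/access-log-processed'
    CREDENTIALS 'aws_iam_role=arn:aws:iam::YOUR-AWS-ACCOUNT-ID:role/ROLE-NAME'
    DELIMITER '\t' MAXERROR 0 GZIP;
    """
    cur = conn.cursor()
    try:
        cur.execute(copy_sql)
        conn.commit()
    except Exception:
        conn.rollback()
        # Inspect the most recent load errors to see which files and rows failed
        cur.execute("SELECT starttime, filename, err_reason FROM stl_load_errors ORDER BY starttime DESC LIMIT 5;")
        for row in cur.fetchall():
            print(row)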
  • 36. Amazon Redshift Test Queries
    -- find the distribution of response codes over days
    SELECT TRUNC(request_time), response_code, COUNT(1)
    FROM accesslogs
    GROUP BY 1,2
    ORDER BY 1,3 DESC;

    -- count the 404 (page not found) responses
    SELECT COUNT(1)
    FROM accesslogs
    WHERE response_code = 404;

    -- show the most requested path among the 404 responses
    SELECT TOP 1 request_path, COUNT(1)
    FROM accesslogs
    WHERE response_code = 404
    GROUP BY 1
    ORDER BY 2 DESC;
  • 38. Learn from big data experts: blogs.aws.amazon.com/bigdata