4. Web 2.0 Era Topic Map
[Topic map diagram: Web 2.0 era themes – Produce/Process, Inexpensive Storage, Data Explosion, LAMP, Social Platforms, Publishing Platforms, Situational Applications, Mashups, Enterprise SOA]
7. The data just keeps growing…
1024 GIGABYTES = 1 TERABYTE
1024 TERABYTES = 1 PETABYTE
1024 PETABYTES = 1 EXABYTE
1 PETABYTE ≈ 13.3 years of HD video
20 PETABYTES ≈ amount of data processed by Google daily (a quick sanity check of that rate follows below)
5 EXABYTES ≈ all words ever spoken by humanity
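As a quick back-of-the-envelope check (my arithmetic, not from the original deck, assuming binary prefixes), 20 petabytes per day works out to roughly:

\[ 20\ \text{PB/day} = 20 \times 1024^{5}\ \text{bytes} \approx 2.25 \times 10^{16}\ \text{bytes} \]
\[ \frac{2.25 \times 10^{16}\ \text{bytes}}{86\,400\ \text{s}} \approx 2.6 \times 10^{11}\ \text{bytes/s} \approx 240\ \text{GiB/s sustained} \]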
8. [Diagram: key trends of today's web]
• Mobile App Economy for devices: an app for this, an app for that; set-top boxes, tablets, etc.
• Sensor Web: an instrumented and monitored world; multiple sensors in your pocket; real-time data
• The Fractured Web: Facebook, Twitter, LinkedIn
• Service Economy: a service for this, a service for that (Google, Netflix, New York Times, eBay, Pandora, PayPal)
• Web 2.0 data exhaust of historical and real-time data
• Opportunity: the Web as a Platform
• Web 2.0 – Connecting People (API foundation); Web 1.0 – Connecting Machines (infrastructure)
22. Storing, Reading and Processing - Apache Hadoop
Cluster technology with a single master that scales out across multiple slaves
It consists of two runtimes:
The Hadoop Distributed File System (HDFS)
Map/Reduce
As data is copied onto HDFS, it is split into blocks and replicated to other machines to provide redundancy
A self-contained job (workload) is written in Map/Reduce and submitted to the Hadoop Master, which in turn distributes the job to each slave in the cluster (a minimal sketch follows below)
Jobs run on data that is on the local disks of the machines they are sent to, ensuring data locality
Node (slave) failures are handled automatically by Hadoop; Hadoop may execute or re-execute a job on any node in the cluster
Want to know more?
“Hadoop – The Definitive Guide (2nd Edition)”
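To make the Map/Reduce model concrete, here is a minimal word-count job written against the standard Hadoop MapReduce Java API. This is a sketch, not part of the original deck: the class name and the input/output paths (passed on the command line) are illustrative.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map: emit (word, 1) for every token in the input split stored on this node
  public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    protected void map(LongWritable key, Text value, Context ctx) throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) { word.set(token); ctx.write(word, ONE); }
      }
    }
  }

  // Reduce: sum the counts for each word
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    protected void reduce(Text key, Iterable<IntWritable> values, Context ctx) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);   // local pre-aggregation before the shuffle
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // input already sitting on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // results written back to HDFS
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The job would typically be packaged as a jar and submitted with the hadoop jar command; the master then ships tasks to the slaves that hold the input blocks, which is the data locality point above.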
23. Delivering Data @ Scale
• Structured Data
• Low Latency & Random Access
• Column Stores (Apache HBase or Apache Cassandra)
• faster seeks
• better compression
• simpler scale out
• De-normalized – data is written as it is intended to be queried (see the sketch below)
Want to know more?
“HBase – The Definitive Guide” & “Cassandra High Performance”
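As a sketch of the de-normalized, query-shaped write pattern, the snippet below writes a pre-aggregated row into HBase with the standard HBase client API. The table name, column family, and row-key layout (zip code, year and company concatenated so a single seek answers the query) are my illustrative assumptions, not something from the original slides.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class InvestmentWriter {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("investments_by_zip"))) {  // hypothetical table

      // Row key is shaped for the query: zip code + year + company, so one seek answers
      // "investments in 94107 in 2011" without any join at read time.
      byte[] rowKey = Bytes.toBytes("94107#2011#ExampleCo");
      Put put = new Put(rowKey);
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("amount"), Bytes.toBytes(12000000L));
      put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("sector"), Bytes.toBytes("Consumer Web"));
      table.put(put);

      // Low-latency random read back by the same key
      Result r = table.get(new Get(rowKey));
      System.out.println(Bytes.toLong(r.getValue(Bytes.toBytes("d"), Bytes.toBytes("amount"))));
    }
  }
}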
24. Storing, Processing & Delivering: Hadoop + NoSQL
[Pipeline diagram: Gather → Read/Transform → Low-latency Application]
• Gather: web data crawled with Nutch; log files collected via the Flume connector; relational data (e.g. MySQL) imported over JDBC via the Sqoop connector (see the sketch below)
• Copy: the raw data lands on HDFS inside Apache Hadoop
• Read/Transform: clean and filter data, transform and enrich data – often multiple Hadoop jobs
• Deliver: results flow through a NoSQL connector/API into a NoSQL repository, which serves queries for the low-latency application
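A minimal sketch of the "relational data in" step, assuming Sqoop 1.x, where the command-line import tool can be driven from Java via Sqoop.runTool. The JDBC URL, credentials, table name and HDFS target directory are all placeholders.

import org.apache.sqoop.Sqoop;

public class ImportCrunchBase {
  public static void main(String[] args) {
    // Equivalent to running `sqoop import ...` from the shell.
    String[] sqoopArgs = {
        "import",
        "--connect", "jdbc:mysql://dbhost/crunchbase",   // placeholder connection string
        "--username", "etl_user",
        "--password", "secret",
        "--table", "investments",                        // source table to pull
        "--target-dir", "/data/raw/investments",         // HDFS directory to land the rows in
        "--num-mappers", "4"                             // parallel map tasks slicing the table
    };
    int exitCode = Sqoop.runTool(sqoopArgs);
    System.exit(exitCode);
  }
}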
25. Some things to keep in mind…
[Photo slide – image credit: Kanaka Menehune (Flickr)]
26. Some things to keep in mind…
• Processing arbitrary types of data (unstructured, semi-structured, structured) requires normalizing data with many different kinds of readers. Hadoop is really great at this!
• However, readers won't really help you process truly unstructured data such as prose. For that you're going to have to get handy with Natural Language Processing, and that is really hard. Consider using parsing services & APIs like Open Calais (a rough sketch of the idea follows below).
Want to know more?
“Programming Pig” (O’REILLY)
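The sketch below only illustrates the shape of the "use a parsing service" approach: post a chunk of prose to an entity-extraction service over HTTP and read back structured entities. The endpoint, header names and response handling are placeholders of my own, not the real Open Calais API; check the service's documentation for the actual contract.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class EntityExtractionSketch {
  public static void main(String[] args) throws Exception {
    String prose = "Acme Corp raised $12M in Series B funding in San Francisco.";

    // Placeholder endpoint and auth header - NOT the real Open Calais contract.
    URL url = new URL("https://extraction.example.com/v1/entities");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "text/plain; charset=utf-8");
    conn.setRequestProperty("X-Api-Key", "YOUR_API_KEY");   // placeholder credential
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(prose.getBytes(StandardCharsets.UTF_8));
    }

    // The service would return structured entities (companies, places, amounts),
    // which downstream Hadoop jobs can then treat as semi-structured input.
    try (Scanner sc = new Scanner(conn.getInputStream(), "UTF-8")) {
      while (sc.hasNextLine()) System.out.println(sc.nextLine());
    }
  }
}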
28. Statistical real-time decision making
Capture historical information
Use machine learning to build decision-making models (such as classification, clustering & recommendation) – see the sketch below
Mesh real-time events (such as sensor data) against the models to make automated decisions
Want to know more?
“Mahout in Action”
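For the recommendation case, a minimal sketch using Apache Mahout's Taste API (the library covered in "Mahout in Action") might look like the following. The data file name, user id and neighbourhood size are illustrative, and the historical data is assumed to be userID,itemID,preference CSV rows.

import java.io.File;
import java.util.List;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderSketch {
  public static void main(String[] args) throws Exception {
    // Historical information captured as userID,itemID,preference rows
    DataModel model = new FileDataModel(new File("ratings.csv"));

    // Build a user-based collaborative-filtering model from the history
    UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
    UserNeighborhood neighborhood = new NearestNUserNeighborhood(10, similarity, model);
    Recommender recommender = new GenericUserBasedRecommender(model, neighborhood, similarity);

    // As new events arrive for a user, ask the model for automated suggestions
    List<RecommendedItem> items = recommender.recommend(42L, 3);
    for (RecommendedItem item : items) {
      System.out.println(item.getItemID() + " scored " + item.getValue());
    }
  }
}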
36. Apache Pig Script to Join on City to get Zip Code and Write the results to Vertica

-- Load the zip code lookup table (tab-delimited)
ZipCodes = LOAD 'demo/zipcodes.txt' USING PigStorage('\t')
    AS (State:chararray, City:chararray, ZipCode:int);

-- Load the CrunchBase investment data (tab-delimited)
CrunchBase = LOAD 'demo/crunchbase.txt' USING PigStorage('\t')
    AS (Company:chararray, City:chararray, State:chararray, Sector:chararray, Round:chararray,
        Month:int, Year:int, Investor:chararray, Amount:int);

-- Join the two relations on (City, State) to attach a zip code to each investment
CrunchBaseZip = JOIN CrunchBase BY (City, State), ZipCodes BY (City, State);

-- Write the joined result into Vertica via the VerticaStorer connector
STORE CrunchBaseZip INTO
    '{CrunchBaseZip(Company varchar(40), City varchar(40), State varchar(40), Sector varchar(40), Round varchar(40), Month int, Year int, Investor varchar(40), Amount int)}'
    USING com.vertica.pig.VerticaStorer('VerticaServer','OSCON','5433','dbadmin','');
39. Total Investments By Zip Code for all Sectors
$1.2 Billion in Boston
$7.3 Billion in San Francisco
$2.9 Billion in Mountain View
$1.7 Billion in Austin
40. Total Investments By Zip Code for Consumer Web
$600 Million in Seattle
$1.2 Billion in Chicago
$1.7 Billion in San Francisco
41. Total Investments By Zip Code for BioTech
$1.3 Billion in Cambridge
$528 Million in Dallas
$1.1 Billion in San Diego
What is Big Data? -- "The challenges, solutions and opportunities around the storage, processing and delivery of data at scale." Tag cloud created from a week of Tech4Africa tweets – an example of trend analysis, a popular Big Data analytics pattern. Goals are to explain the importance and the opportunity, and to tell you how to do it. A Hadoop/NoSQL deep dive is not covered.
As hardware becomes increasingly commoditized, the margin & differentiation moved to software; as software becomes increasingly commoditized, the margin & differentiation is moving to data.
2000 – Cloud is an IT sourcing alternative (virtualization extends into the cloud). Explosion of unstructured data. Mobile.
"Let's create a context in which to think…" Focused on 3 major tipping points in the evolution of the technology. Mention that this is a very web-centric view, contrasted with Barry Devlin's Enterprise view. Assumes networking falls under hardware, and cloud sits at the intersection of software and data.
Why should you care? Tipping Point 1: Situational Applications. Tipping Point 2: Big Data. Tipping Point 3: Reasoning.
Web 2.0 (information explosion; now many channels, turning consumers into producers (Shirky); tipping point: web standards allow rapid application development; advent of situational applications; folksonomies; social).
SOA (functionality exposed through open interfaces and open standards; great strides in modularity and re-use while reducing complexities around system integration; you still need to be a developer to create applications using these service interfaces (WSDL, SOAP, way too complex!). Enter mashups…)
Mashups (place a façade on the service and you have the final step in the evolution of services and service-based applications; now anyone can build applications, i.e. non-programmers. We've taken the entire SOA library and exposed it to non-programmers. What do I mean? Check out this YouTunes app…)
The first example where we saw arbitrary data/content re-purposed in ways the original authors never intended – e.g. Craigslist/Gumtree homes for sale scraped and placed on a Google map, mashed up with crime statistics. The whole is greater than the sum of its parts – new kinds of information!
BUT there are limitations on how much arbitrary data can be scraped and turned into information: usually no pre-processing and just what can be rendered on a single page.
Demo
http://www.housingmaps.com/
"Every 2 days we create as much data as we did from the dawn of humanity until 2003" – we've hit the petabyte & exabyte age. What does that mean? Let's look (next slide).
Mention enterprise growth over time, mobile/sensor data, Web 2.0 data exhaust, social networks. Advances in analytics – keep your data around for deeper business insights and to avoid enterprise amnesia.
How about we summarize a few of the key trends in the Web as we know it today? This diagram shows some of the main trends of what Web 3.0 is about. Netflix accounts for 29.7% of US traffic. Mention the Web 2.0 Summit "Points of Control". Having more data leads to better context, which leads to deeper understanding/insight or new discoveries. Refer to Reid Hoffman's views on what Web 3.0 is.
Pre-processed, though, and not flexible: you can't ask specific questions that have not been pre-processed.
Mention folksonomies in Web 2.0 with searching Delicious bookmarks. Mention the Chilean earthquake crisis video, using Twitter to do crisis mapping.
Talk about Visualizations and InfoGraphics – manual and a lot of work
They are only part of the solution & don’t allow you to ask your own questions
This is the real promise of Big Data
These are not all the problems around Big Data. These are the bigger problems around deriving new information out of web data. There are other issues as well, like inconsistency, skew, etc.
Give a Nutch example
Specifically call out the color-coding rationale for Map/Reduce and HDFS as a single distributed service.
Give examples of how one might use Open Calais or entity-extraction libraries.