Using a Hadoop Data Pipeline to Build a Graph of Users and Content
Hadoop Summit - June 29, 2011
Bill Graham, bill.graham@cbs.com
About me
- Principal Software Engineer, Technology, Business & News BU (TBN)
- TBN Platform Infrastructure Team
- Background in SW systems engineering and integration architecture
- Contributor: Pig, Hive, HBase; Committer: Chukwa
About CBSi - who are we?
Brands across: ENTERTAINMENT; GAMES & MOVIES; SPORTS; TECH, BIZ & NEWS; MUSIC
About CBSi - scale
- Top 10 global web property; 235M worldwide monthly uniques [1]
- Hadoop ecosystem: CDH3, Pig, Hive, HBase, Chukwa, Oozie, Sqoop, Cascading
- Cluster size: currently 35 DW + 6 TBN worker nodes (150TB); next quarter 100 nodes (500TB)
- DW peak processing: 400M events/day globally
1 - Source: comScore, March 2011
Abstract
At CBSi we're developing a scalable, flexible platform to aggregate large volumes of data, mine it for meaningful relationships, and produce a graph of connected users and content. This will enable us to better understand the connections between our users, our assets, and our authors.
The Problem
- Users are always voting on what they find interesting: got-it, want-it, like, share, follow, comment, rate, review, helpful vote, etc.
- Users have multiple identities: anonymous, registered (logged in), social, multiple devices
- Connections between entities live in siloed sub-graphs
- A wealth of valuable user connectedness goes unrealized
The Goal
Create a back-end platform that enables us to assemble a holistic graph of our users and their connections to:
- Content
- Authors
- Each other
- Themselves (their other identities)
Better understand how our users connect to our content:
- Improved content recommendations
- Improved user segmentation and content/ad targeting
Requirements
- Integrate with existing DW/BI Hadoop infrastructure
- Aggregate data from across CBSi and beyond
- Connect disjointed user identities
- Flexible data model
- Assemble a graph of relationships
- Enable rapid experimentation, data mining, and hypothesis testing
- Power new site features and advertising optimizations
The Approach
- Mirror data into HBase
- Use MapReduce to process data
- Export RDF data into a triple store
Data Flow
[Flattened architecture diagram; recoverable labels only.] Sources: Site Activity Stream a.k.a. Firehose (JMS), CMS Publishing, Social/UGC systems, DW systems, CMS systems, and Content Tagging systems. These feed HBase via three paths: atomic writes (from the JMS firehose), transform & load, and HDFS bulk load with ImportTsv. MapReduce jobs over HBase produce RDF, which is loaded into the Triple Store; the Site queries it via SPARQL.
NOSQL Data Models
Key-value stores, ColumnFamily, document databases, and graph databases, plotted along axes of data size vs. data complexity. (Credit: Emil Eifrem, Neotechnology)
Conceptual Graph
[Flattened graph diagram; recoverable labels only.] PageEvents contain a Brand and an Asset; a SessionId "had session" of PageEvents; anonId, regId, and SessionId are linked by "is also" edges; users "like" and "follow" Assets and Authors; Assets (Products, Stories) are "authored by" Authors and "tagged with" tags. Source legend: Activity firehose (real-time), CMS (batch + incr.), Tags (batch), DW (daily). A few of these edges are spelled out as concrete triples below.
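To make the edge types concrete, here is a small hypothetical set of triples in N-Triples form. The event, tag, and asset predicates are taken from the SPARQL example later in the deck; the "is also" predicate and all of the ids are illustrative assumptions.

  # anonymous id resolved to a registered id ("is also"; predicate name assumed)
  <urn:com.cbs.dwh:ANON-abc123> <urn:com.cbs.trident:isAlso> <urn:com.cbs.dwh:REG-42> .
  # the user Like'd a content asset on Facebook (predicates as in the SPARQL example)
  <urn:com.cbs.dwh:ANON-abc123> <urn:com.cbs.trident:event:LIKE> <urn:com.cbs.trident:evt-1> .
  <urn:com.cbs.trident:evt-1> <urn:com.cbs.trident:eventt> "SOCIAL_SITE" .
  <urn:com.cbs.trident:evt-1> <urn:com.cbs.trident:ssite> "www.facebook.com" .
  <urn:com.cbs.trident:evt-1> <urn:com.cbs.trident:tasset> <urn:com.cbs.rb.contentdb:asset-99> .
  # the asset is tagged (predicates as in the SPARQL example)
  <urn:com.cbs.rb.contentdb:asset-99> <urn:com.cbs.cnb.bttrax:tag> <urn:com.cbs.cnb.bttrax:tag-7> .
  <urn:com.cbs.cnb.bttrax:tag-7> <urn:com.cbs.cnb.bttrax:tagname> "hadoop" .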
HBase Schema
user_info table (schema image not reproduced)
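A plausible sketch of the user_info layout, inferred from the editor's note ("simple schema, 1..* for both aliases and events") and from the event:* map the Pig script below reads; the alias column family and the qualifier names are assumptions.

  table: user_info
    row key           canonical user id
    alias:<alias_id>  linked identities (anonId, regId, social ids), 1..*
    event:<event_id>  JSON event payload from the activity firehose, 1..*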
HBase Loading
- Incremental: consuming from a JMS queue == real-time
- Batch: Pig's HBaseStorage == quick to develop & iterate; HBase's ImportTsv == more efficient
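A minimal sketch of the Pig batch path (the input path, field names, and target columns are illustrative assumptions, not the production script):

  -- load a TSV of (user id, event id, JSON event) produced upstream
  events = LOAD '/data/firehose/2011-06-28' USING PigStorage('\t')
      AS (id:chararray, event_id:chararray, event_json:chararray);
  -- HBaseStorage uses the first field as the row key and maps the
  -- remaining fields, in order, onto the listed columns of user_info
  STORE events INTO 'hbase://user_info'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('event:id event:json');

The same file could be loaded more efficiently with the ImportTsv MapReduce job (hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,event:id,event:json user_info <hdfs-dir>), trading Pig's flexibility for throughput.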
Generating RDF with Pig
- RDF [1] is a W3C standard for representing subject-predicate-object relationships (commonly serialized as XML)
- Philosophy: store large amounts of data in Hadoop; be selective about what goes into the triple store. For example: "first class" graph citizens we plan to query on, and implicit-to-explicit (i.e., derived) connections: content recommendations, user segments, related users, content tags
- Easily join data to create new triples with Pig
- Run SPARQL [2] queries, examine, refine, reload
1 - http://www.w3.org/RDF, 2 - http://www.w3.org/TR/rdf-sparql-query
Example Pig RDF Script
Create RDF triples of users to social events:

  RAW = LOAD 'hbase://user_info'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('event:*', '-loadKey true')
      AS (id:bytearray, event_map:map[]);

  -- Convert our maps to bags so we can flatten them out
  A = FOREACH RAW GENERATE id, FLATTEN(mapToBag(event_map)) AS (social_k, social_v);

  -- Convert the JSON events into maps
  B = FOREACH A GENERATE id, social_k, jsonToMap(social_v) AS social_map:map[];

  -- Pull values from map
  C = FOREACH B GENERATE id,
      social_map#'levt.asid' AS asid,
      social_map#'levt.xastid' AS astid,
      social_map#'levt.event' AS event,
      social_map#'levt.eventt' AS eventt,
      social_map#'levt.ssite' AS ssite,
      social_map#'levt.ts' AS eventtimestamp;

  EVENT_TRIPLE = FOREACH C GENERATE GenerateRDFTriple(
      'USER-EVENT', id, astid, asid, event, eventt, ssite, eventtimestamp);

  STORE EVENT_TRIPLE INTO 'trident/rdf/out/user_event' USING PigStorage();
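Note that mapToBag, jsonToMap, and GenerateRDFTriple are custom UDFs; the editor's notes below mention a UDF that abstracts out the RDF string construction.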
Example SPARQL query
Recommend content based on Facebook "liked" items:

  SELECT ?asset1 ?tagname ?asset2 ?title2 ?pubdt2 WHERE {
    # anon-user who Like'd a content asset (news item, blog post) on Facebook
    <urn:com.cbs.dwh:ANON-Cg8JIU14kobSAAAAWyQ> <urn:com.cbs.trident:event:LIKE> ?x .
    ?x <urn:com.cbs.trident:eventt> "SOCIAL_SITE" .
    ?x <urn:com.cbs.trident:ssite> "www.facebook.com" .
    ?x <urn:com.cbs.trident:tasset> ?asset1 .
    ?asset1 a <urn:com.cbs.rb.contentdb:content_asset> .
    # a tag associated with the content asset
    ?asset1 <urn:com.cbs.cnb.bttrax:tag> ?tag1 .
    ?tag1 <urn:com.cbs.cnb.bttrax:tagname> ?tagname .
    # other content assets with the same tag and their title
    ?asset2 <urn:com.cbs.cnb.bttrax:tag> ?tag2 . FILTER (?asset2 != ?asset1)
    ?tag2 <urn:com.cbs.cnb.bttrax:tagname> ?tagname .
    ?asset2 <http://www.w3.org/2005/Atom#title> ?title2 .
    ?asset2 <http://www.w3.org/2005/Atom#published> ?pubdt2 .
    FILTER (?pubdt2 >= "2011-01-01T00:00:00"^^<http://www.w3.org/2001/XMLSchema#dateTime>)
  } ORDER BY DESC(?pubdt2) LIMIT 10
Conclusions I - Power and Flexibility
- Architecture is flexible with respect to: data modeling, integration patterns, and data processing/querying techniques
- Multiple approaches for graph traversal: SPARQL, traversing HBase, MapReduce
Conclusions II - Match Tool with the Job
- Hadoop: scale and computing horsepower
- HBase: atomic r/w access, speed, flexibility
- RDF triple store: complex graph querying
- Pig: rapid MR prototyping and ad-hoc analysis
Future:
- HCatalog: schema & table management
- Oozie or Azkaban: workflow engine
- Mahout: machine learning
- Hama: graph processing
Conclusions III - OSS, woot!
If it doesn't do what you want, submit a patch.

Editor's notes

  1. CBSi has a number of brands; this slide shows the biggest ones. I'm in the TBN group, and the work I'll present is being done for CNET, with the intent that it be extended horizontally.
  2. We have a lot of traffic and data. We’ve been using Hadoop quite extensively for a few years now. 135/150TB currently, soon to be 500TB.
  3. Summarize what I’ll discuss
  4. We do a number of these items already, but in disparate systems.
  5. Simplified overview of the approach. Details to be discussed on the next data flow slide.
  6. Multiple data load options: bulk, real-time, incremental update. MapReduce to examine data. Export data to RDF in the triple store. Analysts and engineers can access HBase or MR to explore data. For now we're using various triple stores for experimentation; we haven't done a full evaluation yet. Technology for the triple store or graph store is still TBD.
  7. The slope of this plot is subjective, but conceptually this is the case. HBase would be in the upper left quadrant and a graph store would be in the lower right. Our solution leverages the strength of each and we use MR to go from one to the other.
  8. Just an example of a graph we can build. The graph can be adapted to meet use cases. An anonymous user has relationships to other identities, as well as to assets that he/she interacts with. The graph is built from items from different data sources: blue=firehose, orange=CMS, green=tagging systems, red=DW
  9. Simple schema. 1..* for both aliases and events.
  10. The next few slides will walk through some specifics of the data flow. How do we get data into HBase? One of the nice things about HBase is that it supports a number of techniques for loading data.
  11. Once data is in HBase, we selectively build RDF relationships to store in the triple store. Pig allows for easy iteration.
  12. One of our simpler scripts. It's 6 Pig statements to generate this set of RDF. We have a UDF to abstract out the RDF string construction.
  13. Recommend the most recent blog content that is tagged with the same tags as the user's FB like.
  14. We’re going to need to support a number of use cases and integration patterns. This approach allows us to have multiple options on the table for each.
  15. We want to be able to create a graph and effectively query it, but we also want to be able to do ad-hoc analytics and experimentation over the entire corpus of entities.