Cloud Elephants and Witches: A Big Data Tale from Mendeley

Kris Jack, PhD
Data Mining Team Lead
Overview

➔ What's Mendeley?
➔ The curse that comes with success
➔ A framework for scaling up (Hadoop + MapReduce)
➔ Moving to the cloud (AWS)
➔ Conclusions
What's Mendeley?

What is Mendeley?

...a large data technology startup company

...and it's on a mission to change the way that research is done!
Mendeley / Last.fm

Last.fm works like this:

1) Install "Audioscrobbler"
2) Listen to music
3) Last.fm builds your music profile and recommends you music you could also like... and it's the world's biggest open music database
Mendeley             Last.fm

research libraries   music libraries
researchers          artists
papers               songs
disciplines          genres
Mendeley provides tools to help users...

...organise their research

...collaborate with one another

...discover new research
The curse that comes with success

In the beginning, there was...

➔ MySQL:
  ➔ Normalised tables for storing and serving:
    ➔ User data
    ➔ Article data
  ➔ The system was happy

➔ With this, we launched the article catalogue
  ➔ Lots of number crunching
  ➔ Many joins for basic stats (see the sketch below)
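
To make "many joins for basic stats" concrete, here is a minimal sketch of the kind of readership count we mean. It uses Python's built-in sqlite3 standing in for MySQL, and the table and column names are invented for illustration, not Mendeley's schema. The point is that a fresh count touches every row of the library table, which is cheap here and painful at millions of users and articles.

    # Hypothetical sketch: per-article readership counts over normalised tables.
    # sqlite3 stands in for MySQL; table and column names are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE articles (article_id INTEGER PRIMARY KEY, title TEXT);
        CREATE TABLE library_entries (user_id INTEGER, article_id INTEGER);
    """)
    conn.executemany("INSERT INTO articles VALUES (?, ?)",
                     [(1, "Paper A"), (2, "Paper B")])
    conn.executemany("INSERT INTO library_entries VALUES (?, ?)",
                     [(10, 1), (11, 1), (12, 2)])

    # A global stat like "readers per article" means joining and scanning the
    # whole library table; keeping it fresh gets slow as the tables grow.
    for article_id, title, readers in conn.execute("""
        SELECT a.article_id, a.title, COUNT(le.user_id) AS readers
        FROM articles a
        LEFT JOIN library_entries le ON le.article_id = a.article_id
        GROUP BY a.article_id, a.title
    """):
        print(article_id, title, readers)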
Here's where the curse of success comes

➔ More articles came
➔ More users came

➔ The system became unhappy

➔ Keeping data fresh was a burden
  ➔ Algorithms relied on global counts
  ➔ Iterating over tables was slow
  ➔ Needed to shard tables to grow catalogue

➔ In short, our system didn't scale
1.6 million+ users; the 20 largest userbases:

University of Cambridge
Stanford University
MIT
University of Michigan
Harvard University
University of Oxford
Sao Paulo University
Imperial College London
University of Edinburgh
Cornell University
University of California at Berkeley
RWTH Aachen
Columbia University
Georgia Tech
University of Wisconsin
UC San Diego
University of California at LA
University of Florida
University of North Carolina
Real-time data on 28m unique papers:

Thomson Reuters' Web of Knowledge (dating from 1934): 50m

Mendeley after 16 months: >150 million individual articles (>25TB)
We had serious needs

➔ Scale up to the millions (billions for some items)
➔ Keep data fresh
➔ Support newly planned services
  ➔ Search
  ➔ Recommendations
➔ Business context
  ➔ Agile development (rapid prototyping)
  ➔ Cost effective
  ➔ Going viral
A framework for scaling up (Hadoop and MapReduce)
What is Hadoop?

The Apache Hadoop project develops open-source
software for reliable, scalable, distributed
computing
                            www.hadoop.apache.org
Hadoop

➔ Designed to operate on a cluster of computers
  ➔ 1...thousands
  ➔ Commodity hardware (low cost units)
➔ Each node offers local computation and storage
➔ Provides framework for working with petabytes of data

➔ When learning about Hadoop, you need to learn about:
  ➔ HDFS
  ➔ MapReduce
HDFS

➔ Hadoop Distributed File System
➔ Based on the Google File System
➔ Replicates data storage (reliability: x3, across racks)
➔ Designed to handle very large files (split into blocks, e.g. 64MB)
➔ Provides high throughput
➔ File access through Java and Thrift APIs, the command line and a webapp (a command-line sketch follows below)

➔ The name node is a single point of failure (availability issue)
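
Of those access routes, the command line is the quickest way to get a feel for HDFS. A minimal sketch that drives the standard hadoop fs commands from Python; it assumes a configured Hadoop client on the PATH, and the paths and file names are illustrative, not Mendeley's:

    # Minimal sketch: command-line HDFS access driven from Python.
    # Assumes a configured Hadoop client on the PATH; paths are illustrative.
    import subprocess

    def hdfs(*args):
        """Run a 'hadoop fs' subcommand and return its stdout."""
        result = subprocess.run(["hadoop", "fs", *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

    hdfs("-mkdir", "-p", "/data/readership")              # create a directory
    hdfs("-put", "readership.csv", "/data/readership/")   # upload a local file
    print(hdfs("-ls", "/data/readership"))                # list it back
    print(hdfs("-cat", "/data/readership/readership.csv")[:200])  # peek at it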
MapReduce

➔ MapReduce is a programming model
➔ Allows distributed processing of large data sets
➔ Based on Google's MapReduce
➔ Inspired by functional programming
➔ Take the program to the data, not the data to the program
MapReduce Example: Article Readers by Country

Input (HDFS): one large flattened file (150M entries), stored across nodes
    doc_id1, reader_id1, usa, 2010, ...
    doc_id2, reader_id2, austria, 2012, ...
    doc_id1, reader_id3, china, 2010, ...
    ...

Map (pivot countries by doc id):
    doc_id1, {usa, china, usa, uk, china, china, ...}
    doc_id2, {austria, austria, china, china, uk, ...}
    ...

Reduce (calculate document stats):
    doc_id1, usa, 0.27
    doc_id1, china, 0.09
    doc_id1, uk, 0.09
    doc_id2, austria, 0.99
    ...
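
As a concrete sketch of that job, here is how the map and reduce steps might look as Hadoop Streaming scripts in Python. This is our own illustration rather than Mendeley's actual code; the input format follows the slide, and the reducer relies on Hadoop sorting map output by doc_id before it runs.

    # Sketch of "article readers by country" as a Hadoop Streaming job.
    # The same file is run as the mapper ("map") and the reducer ("reduce").
    import sys
    from itertools import groupby

    def mapper():
        # Pivot each reader record to (doc_id, country).
        for line in sys.stdin:
            doc_id, _reader_id, country = [f.strip() for f in line.split(",")[:3]]
            print(f"{doc_id}\t{country}")

    def reducer():
        # Hadoop sorts map output by key, so each doc_id's countries arrive
        # together; emit each country's share of that document's readers.
        keyed = (line.rstrip("\n").split("\t") for line in sys.stdin)
        for doc_id, group in groupby(keyed, key=lambda kv: kv[0]):
            countries = [country for _, country in group]
            total = len(countries)
            for country in sorted(set(countries)):
                print(f"{doc_id}\t{country}\t{countries.count(country) / total:.2f}")

    if __name__ == "__main__":
        {"map": mapper, "reduce": reducer}[sys.argv[1]]()

Submitted with the streaming jar that ships with Hadoop, roughly: hadoop jar hadoop-streaming.jar -input /data/readership -output /stats/readers_by_country -mapper "python job.py map" -reducer "python job.py reduce" -file job.py (the exact jar path varies by installation).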
Hadoop

➔ HDFS for storing data
➔ MapReduce for processing data

➔ Together, bring the program to the data
Hadoop's Users
We make a lot of use of HDFS and MapReduce

➔ Catalogue Stats
➔ Recommendations (Mahout)
➔ Log Analysis (business analytics)
➔ Top Articles
➔ ... and more

➔ Quick, reliable and scalable
Beware that these benefits have costs

➔ Migrating to a new system (data consistency)
➔ Setup costs
  ➔ Learn black magic to configure it
  ➔ Hardware for the cluster
➔ Administrative costs
  ➔ High learning curve to administer Hadoop
  ➔ Still an immature technology
  ➔ You may need to debug the source code
➔ Tips
  ➔ Get involved in the community (e.g. meetups, forums)
  ➔ Use good commodity hardware
  ➔ Consider moving to the cloud...
Moving to the cloud (AWS)
What is AWS?

Amazon Web Services (AWS) delivers a set of
services that together form a reliable, scalable,
and inexpensive computing platform “in the
cloud”
                             www.aws.amazon.com
Why move to AWS?

➔ The cost of running your own cluster can be high
  ➔ Monetary (e.g. hardware)
  ➔ Time (e.g. training, setup, administration)
➔ AWS takes on these problems, renting their services to you based on your usage
Article Recommendations

➔ Aim: help researchers to find interesting articles
  ➔ Combat information deluge
  ➔ Keep up to date with recent movements
➔ 1.6M users
➔ 50M articles
➔ Batch process for generating regular recommendations (using Mahout)
Article Recommendations in EMR

➔ Use Amazon's Elastic MapReduce (EMR)
➔ Upload input data (user libraries)
➔ Upload the Mahout jar
➔ Spin up a cluster
➔ Run the job
  ➔ You decide the number of nodes (cost vs time)
  ➔ You decide the spec of the nodes (cost vs quality)
➔ Retrieve the output
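
A sketch of what such a run can look like with boto3, the current AWS SDK for Python (the APIs were older at the time of this talk). The bucket names, instance settings, release label and Mahout arguments below are placeholders to check against your own setup, not Mendeley's configuration.

    # Hypothetical sketch: submit a Mahout recommendation job to Elastic MapReduce.
    # Bucket names, instance settings and Mahout arguments are placeholders.
    import boto3

    emr = boto3.client("emr", region_name="eu-west-1")

    response = emr.run_job_flow(
        Name="article-recommendations",
        ReleaseLabel="emr-5.36.0",
        Instances={
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "InstanceCount": 10,                    # more nodes: faster, costlier
            "KeepJobFlowAliveWhenNoSteps": False,   # tear the cluster down after
        },
        Steps=[{
            "Name": "mahout-item-recommendations",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "s3://my-bucket/jars/mahout-mr-job.jar",
                "MainClass": "org.apache.mahout.cf.taste.hadoop.item.RecommenderJob",
                "Args": [
                    "--input", "s3://my-bucket/input/user_libraries/",
                    "--output", "s3://my-bucket/output/recommendations/",
                    "--similarityClassname", "SIMILARITY_LOGLIKELIHOOD",
                ],
            },
        }],
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print("Cluster started:", response["JobFlowId"])

The step runs Mahout's item-based RecommenderJob over the uploaded user libraries and writes recommendations back to S3; once the step finishes the cluster terminates, so you only pay for the hours used.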
Catalogue Search

➔ 50 million articles
➔ 50GB index in Solr
➔ Variable load (over 24 hours)
  ➔ 1AM is quieter (100 q/s), 1PM is busier (150 q/s)
At 1AM, 100 queries/second; at 1PM, 150 queries/second

[Diagram: incoming queries (100/s to 150/s) arrive at an AWS Elastic Load Balancer, which spreads them across several AWS instances serving the index]
Catalogue Search in Context of Variable Load

➔ Amazon's Elastic Load Balancer
➔ Only pay for nodes when you need them
  ➔ Spin up when load is high
  ➔ Tear down when load is low
➔ Cost effective and scalable
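
From the application's point of view nothing changes as instances come and go: it only ever talks to the load balancer's address. A minimal sketch of a catalogue query through it; the hostname, core name and fields are invented for illustration:

    # Hypothetical sketch: query the Solr catalogue through the load balancer.
    # The ELB hostname, core name and fields are placeholders.
    import requests

    SOLR_SELECT = "http://catalogue-search-lb.example.com/solr/catalogue/select"

    def search_catalogue(query, rows=10):
        """Send a query to whichever Solr instance the ELB routes us to."""
        params = {"q": query, "rows": rows, "wt": "json"}
        response = requests.get(SOLR_SELECT, params=params, timeout=5)
        response.raise_for_status()
        return response.json()["response"]["docs"]

    for doc in search_catalogue("title:mapreduce"):
        print(doc.get("title"), doc.get("year"))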
Problems we've faced

➔ Lack of control can be an issue
  ➔ Trade-off between administration and control
➔ Orchestration issues
  ➔ We have many services to coordinate
  ➔ CloudFormation & Elastic Beanstalk
➔ Migrating live services is hard work
Conclusions

➔ Mendeley has created the world's largest scientific database
➔ Storing and processing this data is a large scale challenge
➔ Hadoop, through HDFS and MapReduce, provides a framework for large scale data processing
➔ Be aware of administration costs when doing this in house
Conclusions

➔ AWS can make scaling up efficient and cost effective
➔ Tap into the rich big data community out there
➔ We plan to make no more substantial hardware purchases, and instead use AWS
➔ Scaling up isn't a trivial problem; to save pain, plan for it from the outset
Conclusions

➔ Magic elephants that live in clouds can lift the curses of evil witches
www.mendeley.com
