
Big Data for CIOs 2015

Big Data overview lecture I gave at John Bryce Education to a CIO class



  1. 1. Zohar Elkayam CTO, Brillix Big Data For CIOs
  2. 2. Who am I? • Zohar Elkayam, CTO at Brillix • DBA, team leader, and a senior consultant for over 17 years • Oracle ACE Associate • Involved with Big Data projects since 2011 • Blogger –
  3. 3. About Brillix • Brillix is a leading company specializing in Data Management • We provide professional services and consulting for Databases, Security, and Big Data solutions 3
  4. 4. Agenda: Big Data • Big Data • Why • What • Where • Who and How • A Big Data Solution: Hadoop • NoSQL vs. RDBMS 4
  5. 5. What is Big Data?
  6. 6. "Big Data"?? Different definitions: "Big data exceeds the reach of commonly used hardware environments and software tools to capture, manage, and process it within a tolerable elapsed time for its user population." - Teradata Magazine article, 2011 "Big data refers to datasets whose size is beyond the ability of typical database software tools to capture, store, manage and analyze." - The McKinsey Global Institute, 2012 "Big data is a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools." - Wikipedia, 2014
  7. 7.
  8. 8. Success Stories
  9. 9. More success stories
  10. 10. MORE stories.. • Crime Prevention in Los Angeles • Diagnosis and treatment of genetic diseases • Investments in the financial sector • Generation of personalized advertising • Astronomical discoveries
  11. 11. Examples of Big Data Use Cases Today MEDIA/ ENTERTAINMENT Viewers / advertising effectiveness COMMUNICATIONS Location-based advertising EDUCATION & RESEARCH Experiment sensor analysis CONSUMER PACKAGED GOODS Sentiment analysis of what’s hot, problems HEALTH CARE Patient sensors, monitoring, EHRs Quality of care LIFE SCIENCES Clinical trials Genomics HIGH TECHNOLOGY / INDUSTRIAL MFG. Mfg quality Warranty analysis OIL & GAS Drilling exploration sensor analysis FINANCIAL SERVICES Risk & portfolio analysis New products AUTOMOTIVE Auto sensors reporting location, problems RETAIL Consumer sentiment Optimized marketing LAW ENFORCEMENT & DEFENSE Threat analysis - social media monitoring, photo analysis TRAVEL & TRANSPORTATION Sensor analysis for optimal traffic flows Customer sentiment UTILITIES Smart Meter analysis for network capacity, ON-LINE SERVICES / SOCIAL MEDIA People & career matching Web-site optimization
  12. 12. Most Requested Uses of Big Data • Log Analytics & Storage • Smart Grid / Smarter Utilities • RFID Tracking & Analytics • Fraud / Risk Management & Modeling • 360° View of the Customer • Warehouse Extension • Email / Call Center Transcript Analysis • Call Detail Record Analysis 12
  13. 13. The Challenge
  14. 14. The Big Data Challenge
  15. 15. Volume • Big data comes in one size: big. • Size is measured in Terabytes (10^12), Petabytes (10^15), Exabytes (10^18), Zettabytes (10^21) • Storing and handling the data becomes an issue • Producing value out of the data in a reasonable time is an issue 15
  16. 16. Some numbers • How much data is there in the world? • 800 Terabytes, 2000 • 160 Exabytes, 2006 (1 EB = 10^18 B) • 4.5 Zettabytes, 2012 (1 ZB = 10^21 B) • 44 Zettabytes by 2020 • How much is a zettabyte? • 1,000,000,000,000,000,000,000 bytes • A stack of 1TB hard disks that is 25,400 km high
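The hard-disk-stack figure above is easy to sanity-check. A minimal sketch, assuming each 1 TB drive is about 25.4 mm (one inch) thick — a round number chosen for illustration, not a spec from the slides:

```python
# Sanity-check: how tall is a stack of 1 TB drives holding one zettabyte?
ZETTABYTE = 10**21          # bytes (10^21)
DRIVE_CAPACITY = 10**12     # 1 TB in bytes
DRIVE_HEIGHT_MM = 25.4      # assumed thickness of one drive (hypothetical)

drives = ZETTABYTE // DRIVE_CAPACITY          # number of drives needed
stack_km = drives * DRIVE_HEIGHT_MM / 1e6     # mm -> km

print(f"{drives:,} drives, stack {stack_km:,.0f} km high")
# -> 1,000,000,000 drives, stack 25,400 km high
```

One billion drives at one inch each indeed comes out to the 25,400 km quoted on the slide.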
  17. 17. Growth Rate • How much data is generated in a day? • 7 TB, Twitter • 10 TB, Facebook
  18. 18. Data grows fast!
  19. 19. Variety • Big Data extends beyond structured data to include semi-structured and unstructured information: logs, text, audio, and video. • A wide variety of rapidly evolving data types requires highly flexible storage and handling. 19
  20. 20. Structured & Un-Structured (un-structured vs. structured) • Objects vs. Tables • Flexible vs. Columns and Rows • Structure unknown vs. Predefined structure • Textual and binary vs. Mostly textual
  21. 21. Big Data is ANY data • Some has fixed structure • Some is “bring your own structure” • We want to find value in all of it: unstructured, semi-structured, and structured
  22. 22. Data Types by Industry
  23. 23. Velocity • The speed at which the data is being generated and collected • Streaming data and large-volume data movement • High velocity of data capture requires rapid ingestion • Might cause a backlog problem 23
  24. 24. Global Internet Device Forecast
  25. 25. Internet of Things
  26. 26. Veracity • Quality of the data can vary greatly • Data sources might be messy or corrupted
  27. 27. So, What Defines Big Data? • When we think that we can produce value from that data and want to handle it • When the data is too big or moves too fast to handle in a sensible amount of time • When the data doesn’t fit conventional database structure • When the solution becomes part of the problem 27
  28. 28.
  29. 29. Why Big Data Now? • Because we have data: • Data is born already in digital form • 40% data growth per year • Because we can: • $500 for a drive that can store all the music in the world • 40 years of Moore's Law = large computational resources • 64% of organizations invested in big data in 2013 • $34 billion invested in big data in 2013 • “Because we reached a dead end with logic”
  30. 30. How to do Big Data
  31. 31. 31
  32. 32. Big Data in Practice • Big data is big: technological infrastructure solutions are needed • Big data is messy: data sources must be cleaned before use • Big data is complicated: developers and system admins are needed to manage the intake of data
  33. 33. Big Data in Practice (cont.) • Data must be broken out of silos in order to be mined, analyzed and transformed into value • The organization must learn how to communicate and interpret the results of analysis
  34. 34. Infrastructure Challenges • Infrastructure that is built for: • Large-scale • Distributed • Data-intensive jobs that spread the problem across clusters of server nodes 34
  35. 35. Infrastructure Challenges (cont.) • Storage: • Efficient and cost-effective enough to capture and store terabytes, if not petabytes, of data • With intelligent capabilities to reduce your data footprint such as: • Data compression • Automatic data tiering • Data deduplication 35
  36. 36. Infrastructure Challenges (cont.) • Network infrastructure that can quickly import large data sets and then replicate it to various nodes for processing • Security capabilities that protect highly-distributed infrastructure and data 36
  37. 37. Goals of Analytics
  38. 38. Positions in Big Data management • DevOps handle the infrastructure – sys admins and cluster managers • Data scientists are in charge of producing value from the data
  39. 39. Data Scientist
  40. 40. Hadoop
  41. 41. Apache Hadoop • Open source project run by Apache (2006) • Hadoop brings the ability to cheaply process large amounts of data, regardless of its structure • It has been the driving force behind the growth of the big data industry • Get the public release from: • 41
  42. 42. Hadoop Creation History
  43. 43. Key points • An open-source framework that uses a simple programming model to enable distributed processing of large data sets on clusters of computers. • The complete technology stack includes: • common utilities • a distributed file system • analytics and data storage platforms • an application layer that manages distributed processing, parallel computation, workflow, and configuration management • More cost-effective for handling large unstructured data sets than conventional approaches, and it offers massive scalability and speed 43
  44. 44. Why use Hadoop? • Cost: leverages commodity HW & open source SW • Scalability: near-linear performance up to 1000s of nodes • Flexibility & versatility: with data, analytics & operations
  45. 45. What Hadoop Is Not • Hadoop does not replace DWs or relational databases • Hadoop is not for OLTP or real-time systems • Very good for large amounts of data, not so much for smaller sets • Designed for clusters – there is no Hadoop “monster server” (single server)
  46. 46. Hadoop Cluster in Yahoo 46 Cluster of machine running Hadoop at Yahoo! (credit: Yahoo!)
  47. 47. Hadoop under the Hood
  48. 48. Hadoop Main Components • HDFS: Hadoop Distributed File System – a distributed file system that runs in a clustered environment. • MapReduce – a programming paradigm for running processes over clustered environments. 48
  49. 49. HDFS is... • A distributed file system • Redundant storage • Designed to reliably store data using commodity hardware • Designed to expect hardware failures • Intended for large files • Designed for batch inserts • The Hadoop Distributed File System 49
  50. 50. MapReduce is... • A programming model for expressing distributed computations at a massive scale • An execution framework for organizing and performing such computations • An open-source implementation called Hadoop 50
  51. 51. MapReduce is good for... • Embarrassingly parallel algorithms • Summing, grouping, filtering, joining • Off-line batch jobs on massive data sets • Analyzing an entire large dataset 51
  52. 52. MapReduce is OK for... • Iterative jobs (i.e., graph algorithms) • Each iteration must read/write data to disk • IO and latency cost of an iteration is high 52
  53. 53. MapReduce is NOT good for... • Jobs that need shared state/coordination • Tasks are shared-nothing • Shared-state requires scalable state store • Low-latency jobs • Jobs on small datasets • Finding individual records 53
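The map/shuffle/reduce flow behind the slides above can be sketched in a few lines of plain Python. This is an illustration of the programming model only, not the Hadoop API: map emits (key, value) pairs, the framework shuffles pairs by key, and reduce aggregates each key's values — the classic word-count example:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all values by key (done by the framework in Hadoop)
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's list of values
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["big data is big", "data moves fast"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'is': 1, 'moves': 1, 'fast': 1}
```

Because the map calls are independent and the reduce calls only see one key's values, both phases parallelize across a cluster with no shared state — which is exactly why the shared-state jobs on the slide above are a poor fit.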
  54. 54. Spark • Fast and general MapReduce-like engine for large-scale data processing • Fast • In-memory data storage for very fast interactive queries; up to 100 times faster than Hadoop • General • Unified platform that can combine: SQL, Machine Learning, Streaming, Graph & Complex analytics • Ease of use • Can be developed in Java, Scala, or Python • Integrated with Hadoop • Can read from HDFS, HBase, Cassandra, and any Hadoop data source. 54
  55. 55. Key Concepts 55 Resilient Distributed Datasets • Collections of objects spread across a cluster, stored in RAM or on Disk • Built through parallel transformations • Automatically rebuilt on failure Operations • Transformations (e.g. map, filter, groupBy) • Actions (e.g. count, collect, save) Write programs in terms of transformations on distributed datasets
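The transformation/action split described above can be sketched in plain Python (this mimics the shape of Spark's RDD API for illustration; it is not PySpark, and the `MiniRDD` class is invented here). Transformations build a lazy pipeline over the data; nothing is computed until an action forces evaluation:

```python
class MiniRDD:
    """Toy stand-in for a resilient distributed dataset (single-machine)."""

    def __init__(self, data):
        self._data = data  # held lazily; may be a generator

    # --- Transformations: return a new MiniRDD, compute nothing yet ---
    def map(self, f):
        return MiniRDD(f(x) for x in self._data)

    def filter(self, pred):
        return MiniRDD(x for x in self._data if pred(x))

    # --- Actions: force evaluation and return a result ---
    def count(self):
        return sum(1 for _ in self._data)

    def collect(self):
        return list(self._data)

rdd = MiniRDD(range(10))
evens_squared = rdd.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
print(evens_squared.collect())  # [0, 4, 16, 36, 64]
```

Real RDDs add the parts a toy cannot: partitioning across a cluster, in-memory caching, and rebuilding lost partitions from the recorded chain of transformations (lineage) on failure.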
  56. 56. Unified Platform • Continued innovation bringing new functionality, e.g.: • Java 8 (Closures, Lambda Expressions) • Spark SQL (SQL on Spark, not just Hive) • BlinkDB (Approximate Queries) • SparkR (R wrapper for Spark) 56
  57. 57. Big Data and NoSQL
  58. 58. The Challenge • We want scalable, durable, high-volume, high-velocity, distributed data storage that can handle non-structured data and that will fit our specific needs • RDBMS is too generic and doesn't cut it any more – it can do the job, but it is not cost-effective for our usages 58
  59. 59. The Solution: NoSQL • Let's take some parts of the standard RDBMS out and design the solution for our specific uses • NoSQL databases have been around for ages under different names/solutions 59
  60. 60. Example Comparison: RDBMS vs. Hadoop • Data Size: Gigabytes (typical traditional RDBMS) vs. Petabytes (Hadoop) • Access: Interactive and batch vs. Batch – NOT interactive • Updates: Read/write many times vs. Write once, read many times • Structure: Static schema vs. Dynamic schema • Scaling: Nonlinear vs. Linear • Query Response Time: Can be near-immediate vs. Has latency (due to batch processing) 60
  61. 61. Hadoop – best used for: • Structured or not (flexibility) • Scalability of storage/compute • Complex data processing • Cheaper compared to RDBMS. Relational database – best used for: • Interactive OLAP analytics (<1 sec) • Multistep transactions • 100% SQL compliance. Hadoop and relational database: best when used together 61
  62. 62. The NOSQL Movement • NOSQL is not a technology – it's a concept • We need high performance, scale-out ability, or an agile structure • We are willing to sacrifice our sacred database cows: consistency, transactions, durability • Over 150 different brands and solutions 62
  63. 63. Is NoSQL an RDBMS Replacement? No 63 Well... sometimes it is…
  64. 64. NoSQL Taxonomy – main types: • Key-Value Store • Document Store • Column Store • Graph Store
  65. 65. Key Value Store • Distributed hash tables • Very fast to get a single value • Examples: • Amazon DynamoDB • Berkeley DB • Redis • Riak • Cassandra 65
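At its core, the key-value model above is just a distributed hash table, so a plain Python dict captures the access pattern (real stores such as Redis or DynamoDB add persistence, replication, and expiry; the key names below are made up for illustration):

```python
# Minimal sketch of the key-value access pattern: opaque keys, O(1) lookup.
store = {}

# Writes ("SET" in Redis terms) - the store does not interpret the value
store["user:1001:name"] = "Zohar"
store["user:1001:visits"] = 17

# Reads ("GET") - the only query you get is "by exact key"
print(store.get("user:1001:name"))   # -> Zohar
print(store.get("user:9999:name"))   # missing key -> None
```

The trade-off is visible even in the toy: single-key reads and writes are extremely fast, but there is no way to query by value or across keys without scanning everything.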
  66. 66. Document Store • Similar to Key/Value, but value is a document • JSON or something similar, flexible schema • Agile technology • Examples: • MongoDB • CouchDB • CouchBase 66
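The "value is a document" idea above can be sketched with a list of dicts standing in for a collection of JSON documents. This is an illustration of the data model, not the MongoDB API, and the `find` helper and field names are invented here:

```python
# Each document is self-describing; two documents in the same collection
# need not share fields - this is the "flexible schema" on the slide.
collection = [
    {"_id": 1, "name": "Alice", "tags": ["dba", "cloud"]},
    {"_id": 2, "name": "Bob", "city": "Tel Aviv"},  # different fields: fine
]

def find(coll, **criteria):
    """Hypothetical query helper: match documents on field equality."""
    return [doc for doc in coll
            if all(doc.get(k) == v for k, v in criteria.items())]

print(find(collection, name="Bob"))
# -> [{'_id': 2, 'name': 'Bob', 'city': 'Tel Aviv'}]
```

Unlike the pure key-value model, the store can see inside the document, which is what lets real document stores index and query on arbitrary fields.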
  67. 67. What is a Column Store Database? • Column store databases are management systems that keep data in a columnar structure for better analysis of single-column data (e.g. aggregation). Data is saved and handled as columns instead of rows. • Examples: • HP Vertica • Pivotal (EMC) Greenplum • Hadoop HBase • Amazon's SimpleDB • Cassandra
  68. 68. Query Data • When we query data, records are read in the order they are organized in the physical structure • Even when we query a single column (Select Col2 From MyTable), we still need to read the entire table and extract the column – the same work as Select * From MyTable
  69. 69. How Do Column Stores Keep Data? • Organization in a row store vs. organization in a column store, illustrated with Select Col2 From MyTable
  70. 70. Row Format vs. Column Format
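The row-vs-column layouts on the two slides above can be shown side by side in a toy example (column names and values are invented for illustration). For the single-column query on the slide, the row store must touch every field of every row, while the column store scans one contiguous array:

```python
# Row store: one record per entry - fields of a row sit together.
rows = [
    {"col1": 1, "col2": 10, "col3": "a"},
    {"col1": 2, "col2": 20, "col3": "b"},
    {"col1": 3, "col2": 30, "col3": "c"},
]

# Column store: one array per column - values of a column sit together.
columns = {
    "col1": [1, 2, 3],
    "col2": [10, 20, 30],
    "col3": ["a", "b", "c"],
}

# SELECT SUM(col2): the row store walks whole records to reach col2...
row_sum = sum(r["col2"] for r in rows)
# ...the column store reads just the one contiguous list.
col_sum = sum(columns["col2"])
print(row_sum, col_sum)  # 60 60
```

Same answer, very different I/O: this locality is why columnar engines excel at the aggregation workloads mentioned above, and why row stores remain better when you need the whole record at once.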
  71. 71. Graph Store • Inspired by graph theory • Data model: nodes, relationships, and properties on both • Relational databases have a hard time representing a graph • Examples: • Neo4j • InfiniteGraph • RDF
  72. 72. Graph Example
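The nodes/relationships/properties model above can be sketched with an adjacency list (this is a plain-Python illustration in the spirit of stores like Neo4j, not their API; the nodes and relationship names are invented from the talk's own context):

```python
# Nodes carry properties; edges are typed relationships between nodes.
nodes = {
    "zohar":   {"role": "CTO"},
    "brillix": {"type": "company"},
    "hadoop":  {"type": "technology"},
}
edges = [  # (from_node, relationship, to_node)
    ("zohar",   "WORKS_AT",    "brillix"),
    ("zohar",   "USES",        "hadoop"),
    ("brillix", "CONSULTS_ON", "hadoop"),
]

def neighbors(node, rel=None):
    """Follow outgoing edges from a node, optionally by relationship type."""
    return [dst for src, r, dst in edges
            if src == node and (rel is None or r == rel)]

print(neighbors("zohar"))              # -> ['brillix', 'hadoop']
print(neighbors("zohar", "WORKS_AT"))  # -> ['brillix']
```

Traversal like `neighbors` is the operation relational schemas struggle with: each hop in SQL is another self-join, while a graph store follows edges directly.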
  73. 73. Conclusion • We do Big Data to gain value. Without value, there is no Big Data • Handling Big Data is a challenge – we talked about who uses it, when, and where • Hadoop is a solution for Big Data usages, but it's not a magical solution • NoSQL, NewSQL, and RDBMS are all solutions we can integrate for different usages • New organizational positions: cluster DevOps and data scientists
  74. 74. Q&A
  75. 75. Thank You Zohar Elkayam twitter: @realmgic