
Data Lakehouse, Data Mesh, and Data Fabric (r2)





So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean, and how do they compare to a modern data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I’ll also include use cases so you can see what approach will work best for your big data needs. And I'll discuss Microsoft's version of the data mesh.



  1. Data Lakehouse, Data Mesh, and Data Fabric (the alphabet soup of data architectures). James Serra, Data & AI Solution Architect, Microsoft. Email: jamesserra@microsoft.com. Blog: JamesSerra.com
  2. About Me
      Microsoft, Data & AI Solution Architect in Microsoft Consulting Services (MCS), now called Industry Solutions Delivery (ISD)
      At Microsoft for most of the last eight years, with a brief stop at EY
      Was previously a Data & AI Architect at Microsoft for seven years
      In IT for 35 years, worked on many BI and DW projects
      Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer
      Been perm employee, contractor, consultant, business owner
      Presenter at PASS Summit, SQLBits, Enterprise Data World conference, Big Data Conference Europe, SQL Saturdays
      Blog at JamesSerra.com
      Former SQL Server MVP
      Author of book “Reporting with Microsoft SQL Server 2012”
  3. Agenda:  Data Warehouse  Data Lake  Modern Data Warehouse  Data Fabric  Data Lakehouse  Data Mesh
  4. I tried to figure out all these data platform buzzwords on my own… and ended up passed-out drunk in a Denny’s parking lot. Let’s prevent that from happening…
  5. Why You Need a Data Warehouse. What is a data warehouse and why use one? A data warehouse is where you store data from multiple data sources to be used for historical and trend analysis reporting. It acts as a central repository for many subject areas and contains the "single version of truth". It is NOT to be used for OLTP applications. Reasons for a data warehouse:
      Reduce stress on the production system
      Optimized for read access, sequential disk scans
      Integrate many sources of data
      Keep historical records (no need to save hardcopy reports)
      Restructure/rename tables and fields, model data
      Protect against source system upgrades
      Use Master Data Management, including hierarchies
      No IT involvement needed for users to create reports
      Improve data quality and plug holes in source systems
      One version of the truth
      Easy to create BI solutions on top of it (i.e. Azure Analysis Services cubes)
      No need to provide security access for many users to the production systems
      Make better business decisions by getting greater insights into your company
  6. Two approaches to getting value out of data: Top-Down + Bottoms-Up. Top-Down (Theory → Hypothesis → Observation → Confirmation) covers descriptive analytics (what happened?) and diagnostic analytics (why did it happen?). Bottoms-Up (Observation → Pattern → Theory → Hypothesis) covers predictive analytics (what will happen?) and prescriptive analytics (how can we make it happen?).
  7. Data Warehousing Uses A Top-Down Approach: Understand Corporate Strategy → Gather Requirements (business and technical) → Setup Infrastructure → Dimension Modelling, ETL Design, Reporting & Analytics Design → Physical Design, ETL Development, Reporting & Analytics Development, Install and Tune → Implement Data Warehouse (from the data sources).
  8. The “data lake” Uses A Bottoms-Up Approach: ingest all data (from devices and other sources) regardless of requirements, store all data in native format without schema definition, then do analysis using analytic engines like Hadoop (interactive queries, batch queries, machine learning, real-time analytics), feeding the data warehouse.
  9. Data Lake + Data Warehouse: Better Together. Data sources feed both, covering descriptive analytics (what happened?), diagnostic analytics (why did it happen?), predictive analytics (what will happen?), and prescriptive analytics (how can we make it happen?).
  10. What is a data lake and why use one? A schema-on-read storage repository that holds a vast amount of raw data in its native format until it is needed. Reasons for a data lake:
      • Inexpensively store unlimited data
      • Centralized place for multiple subjects (single version of the truth)
      • Collect all data “just in case” (data hoarding). The data lake is a good place for data that you “might” use down the road
      • Easy integration of differently-structured data
      • Store data with no modeling: “schema on read”
      • Complements the enterprise data warehouse (EDW)
      • Frees up expensive EDW resources for queries instead of using EDW resources for transformations (avoiding user contention)
      • Use technologies/tools (i.e. Databricks) to refine/filter data that do the refinement quicker/better than your EDW
      • Quick user access to data for power users/data scientists (allowing for faster ROI)
      • Data exploration to see if data is valuable before writing ETL and schema for a relational database, or for a one-time report
      • Allows use of Hadoop tools such as ETL and extreme analytics
      • Place to land IoT streaming data
      • On-line archive or backup for data warehouse data (i.e. keep three years of data in the DW and older data in the data lake with an external table pointing to it)
      • With Hadoop/ADLS, high availability and disaster recovery are built in
      • Can ingest large files quickly and provide data redundancy
      • ELT jobs on the EDW take too long because of increasing data volumes and an increasing rate of ingest (velocity), so offload some of them to the Hadoop data lake
      • Keep a backup of the raw data in case you need to load it again due to an ETL error (and not have to go back to the source); you can keep a long history of raw data
      • Allows data to be used many times for different analytic needs and use cases
      • Cost savings and faster transformations: storage tiers with lifecycle management; separation of storage and compute resources, allowing multiple instances of different sizes to work with the same data simultaneously vs scaling the data warehouse; low-cost storage for raw data, saving space on the EDW
      • Extreme performance for transformations by having multiple compute options, each accessing different folders containing data
      • The ability for an end user or product to easily access the data from any location
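The “schema on read” idea is easy to sketch. Below is a minimal, hypothetical Python illustration (the event payloads and field names are invented, and no real lake engine is involved): raw records are stored as-is, and each consumer applies its own schema at query time.

```python
import json

# Raw events land in the lake exactly as produced; no schema is enforced on write.
raw_events = [
    '{"device": "thermostat-1", "temp_c": 21.5, "ts": "2023-01-01T00:00:00"}',
    '{"device": "thermostat-2", "temp_c": 19.0, "ts": "2023-01-01T00:05:00", "battery": 0.87}',
]

def read_with_schema(lines, fields):
    """Schema-on-read: the consumer supplies the schema at query time.

    Fields absent from a record come back as None instead of failing the load.
    """
    for line in lines:
        record = json.loads(line)
        yield {f: record.get(f) for f in fields}

# One consumer projects only the fields it cares about.
rows = list(read_with_schema(raw_events, ["device", "temp_c"]))
print(rows)
```

The same raw events could later be read with a richer schema (e.g. including battery) without rewriting anything, which is the point of deferring the schema until read time.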
  11. Data Warehouse Serving, Security & Compliance: • Business people • Low latency • Complex joins • Interactive ad-hoc query • High number of users • Additional security • Large support for tools • Dashboards • Easily create reports (self-service BI) • Known questions
  12. Enterprise Data Maturity Stages (from rear-view mirror to “any data, any source, anywhere at scale”):
      STAGE 1: Reactive. Structured data is transacted and locally managed. Data used reactively.
      STAGE 2: Informative. Structured data is managed and analyzed centrally and informs the business.
      STAGE 3: Predictive. Data capture is comprehensive and scalable and leads business decisions based on advanced analytics.
      STAGE 4: Transformative. Data transforms business to drive desired outcomes. Real-time intelligence.
  13. Modern Data Warehouse
  14. Data Fabric defined. Data Fabric adds to a modern data warehouse:
      • Data access
      • Data policies
      • Metadata catalog/Lineage
      • Master Data Management (MDM)
      • Data virtualization
      • Real-time processing
      • Data scientist tools
      • APIs
      • Building blocks/Services
      • Products
      Bottom line: additional technology to source more data, secure it, and make it available.
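Of the Data Fabric components above, data virtualization is the one most often misunderstood, so here is a toy Python sketch (both "sources" and all names are invented for illustration): a virtual view joins two independent systems at query time, without copying data into a central store.

```python
# Two independent source systems (stand-ins for, say, a CRM database and an order system).
crm = {"c1": {"name": "Contoso"}, "c2": {"name": "Fabrikam"}}
orders = [
    {"customer_id": "c1", "amount": 100},
    {"customer_id": "c1", "amount": 50},
    {"customer_id": "c2", "amount": 75},
]

def revenue_by_customer():
    """A 'virtual' view: the join happens at query time; nothing is copied or persisted."""
    totals = {}
    for order in orders:
        totals[order["customer_id"]] = totals.get(order["customer_id"], 0) + order["amount"]
    # Resolve customer names from the other source only when the view is queried.
    return {crm[cid]["name"]: total for cid, total in totals.items()}

print(revenue_by_customer())
```

Contrast with a data warehouse approach, where both sources would first be copied and transformed into one store; virtualization trades that copy away at the cost of query-time dependence on the sources.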
  15. Data Lakehouse
  16. Databricks Delta Lake. Top features:
      • ACID transactions
      • Time travel (data versioning enables rollbacks, audit trail)
      • Streaming and batch unification
      • Schema enforcement
      • Upserts and deletes (MERGE)
      • Performance improvements
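To make features like time travel and MERGE-style upserts concrete, here is a toy in-memory sketch in Python. This is not the Delta Lake API, just an illustration of the underlying idea: every commit produces a new immutable version, so old versions remain queryable and an upsert is a single atomic operation.

```python
class VersionedTable:
    """Toy versioned table: each commit snapshots the data, like a transaction log."""

    def __init__(self):
        self._versions = [{}]  # version 0 is the empty table

    def merge(self, rows, key):
        """Upsert: update rows whose key matches, insert the rest, as one commit."""
        snapshot = dict(self._versions[-1])  # copy-on-write: old versions untouched
        for row in rows:
            snapshot[row[key]] = row
        self._versions.append(snapshot)

    def read(self, version=None):
        """Read the latest version, or 'time travel' to an older one."""
        v = len(self._versions) - 1 if version is None else version
        return sorted(self._versions[v].values(), key=lambda r: r["id"])

t = VersionedTable()
t.merge([{"id": 1, "qty": 10}, {"id": 2, "qty": 5}], key="id")
t.merge([{"id": 2, "qty": 7}, {"id": 3, "qty": 1}], key="id")  # upsert
print(t.read())           # latest version: id 2 updated, id 3 inserted
print(t.read(version=1))  # time travel: the table as of the first commit
```

In the real engine the versions are Parquet files plus a transaction log rather than in-memory dicts, but the rollback/audit-trail behavior follows the same shape.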
  17. Use cases for the Data Lakehouse (from “Lakehouse: A New Generation of Open Platforms that Unify Data Warehousing and Advanced Analytics”). Today’s data architectures commonly suffer from four problems:
      • Reliability: keeping the data lake and warehouse consistent
      • Data staleness: data in the warehouse is older
      • Limited support for advanced analytics: top ML systems don’t work well on warehouses
      • Total cost of ownership: extra cost for data copied to the warehouse
  18. Data Lakehouse & Synapse. Concerns with skipping the relational database:
      • Speed: relational databases are faster, especially MPP
      • Security: no RLS, column-level security, or dynamic data masking
      • Complexity: metadata is separate from the data in a file-based world
      • Missing features: referential integrity, TDE, workload management; other features require being locked into Spark
      • People are used to using a relational database
      Azure Synapse: starting to see data lake only solutions because you can use T-SQL and Power BI (speed, RLS).
  19. Data Mesh
  20. Data Mesh (credit to Zhamak Dehghani). It’s a mindset shift where you go from:
      • Centralized ownership to decentralized ownership
      • Pipelines as first-class concern to domain data as first-class concern
      • Data as a by-product to data as a product
      • A siloed data engineering team to cross-functional domain-data teams
      • A centralized data lake/warehouse to an ecosystem of data products
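The “data as a product” shift can be sketched as a contract that each domain publishes alongside its data. Below is a hypothetical minimal version in Python; the field names, the freshness SLO, and the `orders` example are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DataProduct:
    """A domain-owned dataset published with an explicit contract consumers can rely on."""
    name: str
    owner_domain: str         # decentralized ownership: a domain team, not a central IT team
    schema: dict              # column name -> type: the published contract
    freshness_slo_hours: int  # quality promise the owning domain commits to

    def conforms(self, row):
        """Check a row against the published contract."""
        return set(row) == set(self.schema) and all(
            isinstance(row[col], self.schema[col]) for col in self.schema
        )

orders_product = DataProduct(
    name="orders",
    owner_domain="sales",
    schema={"order_id": str, "amount": float},
    freshness_slo_hours=24,
)
print(orders_product.conforms({"order_id": "o1", "amount": 9.99}))  # conforming row
print(orders_product.conforms({"order_id": "o1"}))                  # missing a field
```

The point is the inversion of responsibility: quality checks live with the owning domain, not with a central pipeline team that does not know the data.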
  21. Use cases for Data Mesh. Data mesh tries to solve four challenges with a centralized data lake/warehouse:
      • Lack of ownership: who owns the data, the data source team or the infrastructure team?
      • Lack of quality: the infrastructure team is responsible for quality but does not know the data well
      • Organizational scaling: the central team becomes the bottleneck, such as with an enterprise data lake/warehouse
      • Technical scaling: current big data solutions can’t keep up with additional data requirements
  22. Concerns with Data Mesh (centralized vs decentralized data architecture; centralized vs decentralized ownership):
      • No standard definition of a data mesh
      • Huge investment in organizational change and technical implementation
      • Performance of combining data from multiple domains
      • Duplication of data for performance reasons
      • Getting quality engineering people for each domain
      • Inconsistent technical implementations across the domains
      • Domains don’t want to wait for a data mesh
      • Need incentives for each domain to counter the extra work
      • Self-serve approach to data requests could be challenging
      • Duplication of data and ingestion platform
      • Creation of data silos for domains not able to join the data mesh
      • Not seeing the big picture when combining data
  23. Key for a successful Data Mesh:
      • Have current pain points
      • A company culture open to change
      • Experienced people
      • Be aware of Data Mesh concerns
      • Don’t just jump on the latest buzzword
      • Don’t listen to vendors
      • Don’t go strictly “by the data mesh book”
      • Have a very long runway
  24. Real Data Mesh implementations: • Large banks • JPMorgan Chase • Saxo Bank • Intuit • Adevinta • HelloFresh • DPG Media • Max Schultze • CMC Markets • Kolibri Games • Data Mesh Content
  25. Data Fabric vs Data Mesh. If Data Fabric uses data virtualization, how is it different from Data Mesh?
      • Usually only some of the data is virtualized, so still mostly centralized
      • Not making data a product (no contract with domains)
      • Still have a siloed data engineering team
  26. Comparisons of Data Fabric and Data Mesh:
      Framework: Data Mesh focuses on data architecture; Data Fabric focuses on data architecture and semantic consumption, through the wide use of ontologies.
      Governance: Data Mesh has multiple governance layers; Data Fabric has a unified governance layer.
      Security: in Data Mesh, data products own the domain data and apply the security and governance applicable to the domain; Data Fabric focuses on a comprehensive unified security model across the entire data ecosystem.
      Consistency: Data Mesh needs complex mechanics to ensure consistency of data; Data Fabric is focused on enabling and ensuring trust by applying automatic consistency.
      Implementation: Data Mesh is complex, even to start a small implementation, due to the need of understanding and segregating domain data; Data Fabric is by far simpler, due to the inherent use of data virtualization, metadata and knowledge graphs.
  27. Data Mesh on Azure
  28. Enterprise Scale Analytics and AI (ESA). What is Enterprise Scale Analytics and AI? Enterprise-scale is an architecture approach and reference implementation that enables effective construction and operationalization of landing zones on Azure, at scale and aligned with the Azure Roadmap and Cloud Adoption Framework. It is a scalable analytics framework designed to enable customers building a data platform:
      • Supports multiple topologies ranging across Data Centric, Lakehouse, Data Fabric and Data Mesh
      • Based on inputs from the PG and a diverse international group of specialists working with a range of customers
      • Separate guidance tailored to small-medium and large enterprises
      • ~80% prescribed viewpoint with 20% client customization
      Enterprise Scale Landing Zones is a prerequisite for Enterprise Scale Analytics since it is built on the core foundation of Enterprise Scale Landing Zones. Consisting of:
      • Prescriptive architecture
      • Designed by subject matter experts
      • Documented end-to-end technical solution
      • Deployment templates
      • Operational usage model
  29. Data Mesh on Azure Resources:
      • Piethein Strengholt: Blog - Implementing Data Mesh on Azure, Blog - Data Mesh topologies, Book - Data Management at Scale: Best Practices for Enterprise Architecture
      • Cloud Adoption Framework: Azure data management and analytics scenario
      • Data Management & Analytics Scenario - Data Management Zone: GitHub
      • Data Management & Analytics Scenario - Data Landing Zone: GitHub
      • Enterprise-Scale - Reference Implementation: GitHub
      • Microsoft doc: A financial institution scenario for data mesh
  30. Q & A? James Serra, Microsoft, Data & AI Solution Architect. Email me at: jamesserra3@gmail.com. Follow me at: @JamesSerra. Link to me at: www.linkedin.com/in/JamesSerra. Visit my blog at: JamesSerra.com

Notas del editor

  • So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric.  What do all these terms mean and how do they compare to a data warehouse?  In this session I’ll cover all of them in detail and compare the pros and cons of each.  I’ll include use cases so you can see what approach will work best for your big data needs.
  • Fluff, but point is I bring real work experience to the session
  • http://www.ispot.tv/ad/7f64/directv-hang-gliding
  • One version of truth story: different departments using different financial formulas to compute bonuses

    This leads to reasons to use BI. This is used to convince your boss of the need for a DW.

    Note that you still want to do some reporting off of the source system (i.e. current inventory counts).

    It’s important to know upfront if data warehouse needs to be updated in real-time or very frequently as that is a major architectural decision

    JD Edwards has tables names like T117
  • Top down starts with descriptive analytics and progresses to prescriptive analytics. Know the questions to ask. Lots of upfront work to get data to where you can use it.
    Bottoms up starts with predictive analytics. Don’t know the questions to ask. Little work needs to be done to start using data.


    There are two approaches to doing information management for analytics:
    Top-down (deductive approach). This is where analytics is done starting with a clear understanding of corporate strategy where theories and hypothesis are made up front. The right data model is then designed and implemented prior to any data collection. Oftentimes, the top-down approach is good for descriptive and diagnostic analytics. What happened in the past and why did it happen?
    Bottom-up (inductive approach). This is the approach where data is collected up front before any theories and hypothesis are made. All data is kept so that patterns and conclusions can be derived from the data itself. This type of analysis allows for more advanced analytics such as doing predictive or prescriptive analytics: what will happen and/or how can we make it happen?

    In Gartner’s 2013 study, “Big Data Business Benefits Are Hampered by ‘Culture Clash’”, they make the argument that both approaches are needed for innovation to be successful. Oftentimes what happens in the bottom-up approach becomes part of the top-down approach.
  • https://www.jamesserra.com/archive/2017/06/data-lake-details/

    https://blog.pythian.com/reduce-costs-by-adding-a-data-lake-to-your-cloud-data-warehouse/

    Also called bit bucket, staging area, landing zone or enterprise data hub (Cloudera)

    http://www.jamesserra.com/archive/2014/05/hadoop-and-data-warehouses/

    http://www.jamesserra.com/archive/2014/12/the-modern-data-warehouse/

    http://adtmag.com/articles/2014/07/28/gartner-warns-on-data-lakes.aspx

    http://intellyx.com/2015/01/30/make-sure-your-data-lake-is-both-just-in-case-and-just-in-time/

    http://www.blue-granite.com/blog/bid/402596/Top-Five-Differences-between-Data-Lakes-and-Data-Warehouses

    http://www.martinsights.com/?p=1088

    http://data-informed.com/hadoop-vs-data-warehouse-comparing-apples-oranges/

    http://www.martinsights.com/?p=1082

    http://www.martinsights.com/?p=1094

    http://www.martinsights.com/?p=1102
  • Any data, no matter the size, speed, or type

    Adam: 2 min/11 total
    Let’s expand on this concept of leaders versus laggards just a bit. There are different stages of enterprise data maturity as we see on this slide. Organizations go through several stages in this process, from being reactive or informative with data to being predictive and transformative with data. And with every step that an organization takes along these stages, their ability to be successful in digital transformation accelerates. The reason for this acceleration is simple: the secret is found in the seven most important words on this slide, the seven words that define the transformative end of the spectrum: “any data, any source, anywhere at scale”.

    This is an essential and an ambitious goal for any organization. What about third-party governmental data about demographics and income? Yes, any data. How about data formats that you have not seen before, coming from systems brought in by a recent acquisition? Yes, any source. What about data generated by devices that are only intermittently connected to the internet? Yes, anywhere. How about data that comes in 100 times as fast as it ever came in before because a movie star mentioned your product or service? Yes, at scale.

    The more data that customers bring to the cloud and make available for AI, the more successful they can become. As customers increasingly realize this, they start to leverage AI more and more, creating a demand pipeline for additional data to go to the cloud. Let’s drill down on that next.
  • Data Fabric adds: data access, data policies, data catalog, MDM, data virtualization, data scientist tools, APIs, building blocks, products
  •  Delta Lake, Apache Hudi or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi)
  • Reliability. Keeping the data lake and warehouse consistent is difficult and costly. Continuous engineering is required to ETL data between the two systems and make it available to high-performance decision support and BI. Each ETL step also risks incurring failures or introducing bugs that reduce data quality, e.g., due to subtle differences between the data lake and warehouse engines.

    Data staleness. The data in the warehouse is stale compared to that of the data lake, with new data frequently taking days to load. This is a step back compared to the first generation of analytics systems, where new operational data was immediately available for queries. According to a survey by Dimensional Research and Fivetran, 86% of analysts use out-of-date data and 62% report waiting on engineering resources numerous times per month [47].

    Limited support for advanced analytics. Businesses want to ask predictive questions using their warehousing data, e.g., “which customers should I offer discounts to?” Despite much research on the confluence of ML and data management, none of the leading machine learning systems, such as TensorFlow, PyTorch and XGBoost, work well on top of warehouses. Unlike BI queries, which extract a small amount of data, these systems need to process large datasets using complex non-SQL code. Reading this data via ODBC/JDBC is inefficient, and there is no way to directly access the internal
    warehouse proprietary formats. For these use cases, warehouse vendors recommend exporting data to files, which further increases complexity and staleness (adding a third ETL step!). Alternatively, users can run these systems against data lake data in open formats. However, they then lose rich management features from data warehouses, such as ACID transactions, data versioning and indexing.

    Total cost of ownership. Apart from paying for continuous ETL, users pay double the storage cost for data copied to a warehouse, and commercial warehouses lock data into proprietary formats that increase the cost of migrating data or workloads to other systems
  • Speed: Queries against relational storage will always be faster than against a data lake (roughly 5X) because of missing features in the data lake such as the lack of statistics, query plans, result-set caching, materialized views, in-memory caching, SSD-based caches, indexes, and the ability to design and align data and tables. Counter: DirectParquet, CSV 2.0, query acceleration, predicate pushdown, and SQL on-demand auto-scaling are some of the features that can make queries against ADLS nearly as fast as a relational database. Then there are features like Delta Lake and the ability to use statistics for external tables that can add even more performance. Plus you can also import the data into Power BI, use Power BI aggregation tables, or import the data into Azure Analysis Services to get even faster performance. Another thing to keep in mind affecting query performance is that Synapse is a massively parallel processing (MPP) technology with features such as replicated tables for smaller tables (i.e. dimension tables) and distributed tables for large tables (i.e. fact tables), with the ability to control how they are distributed across storage (hash, round-robin). This can provide much greater performance compared to a data lake that uses HDFS, where large files are chunked across storage.
    Security: Row-level security (RLS), column-level security, dynamic data masking, and data discovery & classification are security-related features that are not available in a data lake. Counter: Use RLS in Power BI or RLS on external tables instead of RLS on a database table, which then allows you to use result-set caching in Synapse
    Complexity: Schema-on-read (ADLS) is more complex to query than schema-on-write (relational database). Schema-on-read means the end user must define the metadata, whereas with schema-on-write the metadata is stored along with the data. Then there is the difficulty of querying in a file-based world compared to a relational database world. Counter: Create a SQL relational view on top of the files in the data lake so the end user does not have to create the metadata, which will make it appear to the end user that the data is in a relational database. Or you could import the data from the data lake into Power BI, creating a star schema model in a Power BI dataset. But I still see it being very difficult to manage a solution with just a data lake when you have data from many sources. Having the metadata along with the data in a relational database allows everyone to be on the same page as to what the data actually means, versus more of a wild west with a data lake
    Missing features: Auditing, referential integrity, ACID compliance, updating/deleting rows of data, data caching, Transparent Data Encryption (TDE), workload management, full support of T-SQL – all are not available in a data lake. Counter: some of these features can be accomplished when using Delta Lake, Apache Hudi or Apache Iceberg (see A Thorough Comparison of Delta Lake, Iceberg and Hudi), but will not be as easy to implement as a relational database and you will be locked into using Spark. Also, features being added to Blob Storage (see More Azure Blob Storage enhancements) can be used instead of resorting to Delta Lake, such as blob versioning as a replacement for time travel in Delta Lake
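The hash vs round-robin distribution mentioned in the Speed note above can be sketched as follows (a toy illustration with made-up rows, not Synapse internals): hash distribution keeps all rows with the same key on the same node, which is what lets joins and aggregations on that key avoid data movement, while round-robin spreads rows evenly but forces a shuffle for any keyed operation.

```python
def distribute_hash(rows, key, nodes):
    """Hash distribution: the same key value always lands on the same node."""
    buckets = [[] for _ in range(nodes)]
    for row in rows:
        buckets[hash(row[key]) % nodes].append(row)
    return buckets

def distribute_round_robin(rows, nodes):
    """Round-robin: even spread, but keyed operations must shuffle data between nodes."""
    buckets = [[] for _ in range(nodes)]
    for i, row in enumerate(rows):
        buckets[i % nodes].append(row)
    return buckets

rows = [{"store": 1, "amt": 10}, {"store": 2, "amt": 5}, {"store": 1, "amt": 7}]
hashed = distribute_hash(rows, "store", 4)
# All store-1 rows share one node, so a GROUP BY store needs no cross-node movement.
print(hashed)
print(distribute_round_robin(rows, 2))
```

In a real MPP engine the choice is per-table (hash-distribute large fact tables on the join key, replicate small dimension tables), which is the tuning lever the note above refers to.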
  • https://datameshlearning.substack.com/p/favorites
  • https://datameshlearning.substack.com/p/favorites
  • I'd say that data mesh can be implemented using the Data Management and Analytics scenario - it contains a lot of synergies with mesh. For SQLBits, please push them to an external online event we are aiming to host at the end of March, where we will go deeper into mesh.
