DATA WAREHOUSE
BEST PRACTICES
Dr. Eduardo Castro, MSc
ecastro@simsasys.com

http://ecastrom.blogspot.com
http://comunidadwindows.org
http://tiny.cc/comwindows
Facebook: ecastrom
Twitter: edocastro
SOURCES



This presentation is based on the following sources:

• Datawarehouse, by Ravi RanJan
• Top 10 Best Practices for Building a Large Scale Relational Data Warehouse, by SQL CAT
Complexities of Creating a Data Warehouse

                 • Incomplete errors
                          • Missing Fields
                          • Records or Fields That, by Design, are not
                                Being Recorded

                 • Incorrect errors
                          • Wrong Calculations, Aggregations
                          • Duplicate Records
                          • Wrong Information Entered into Source System



Source. Datawarehouse. Ravi RanJan
Data Warehouse Pitfalls
     • You are going to spend much time extracting, cleaning,
        and loading data
     • You are going to find problems with systems feeding the
        data warehouse
     • You will find the need to store/validate data not being
        captured/validated by any existing system
     • Large scale data warehousing can become an exercise
        in data homogenizing



Source. Datawarehouse. Ravi RanJan
Data Warehouse Pitfalls…

          • The time it takes to load the warehouse will expand to fill
            the available load window... and then some
          • You are building a HIGH maintenance system
          • You will fail if you concentrate on resource
            optimization to the neglect of project, data, and
            customer management issues and an understanding
            of what adds value to the customer




Source. Datawarehouse. Ravi RanJan
Best Practices
  • Complete requirements and design

  • Prototyping is key to business understanding

  • Utilizing proper aggregations and detailed data

  • Training is an on-going process

  • Build data integrity checks into your system.




Source. Datawarehouse. Ravi RanJan
Top 10 Best Practices for Building a Large
            Scale Relational Data Warehouse
            • Building a large scale relational data warehouse is a
              complex task.
            • This section describes some design techniques that can
              help in architecting an efficient large scale relational data
              warehouse with SQL Server.
            • Most large scale data warehouses use table and index
              partitioning, and therefore, many of the recommendations
              here involve partitioning.
            • Most of these tips are based on experiences building
              large data warehouses on SQL Server


Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Consider partitioning large fact tables
            • Consider partitioning fact tables that are 50 to 100GB or
              larger.
            • Partitioning can provide manageability and often
              performance benefits.
                    • Faster, more granular index maintenance.
                    • More flexible backup / restore options.
                     • Faster data loading and deleting.
                     • Faster queries when restricted to a single partition.
            • Typically partition the fact table on the date key.
                     • Enables sliding window.
                     • Enables partition elimination.


Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
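
As a rough illustration of this recommendation, the following T-SQL sketches a monthly partition function and scheme on an integer date key, plus a fact table created on them. The table, column, filegroup, and boundary values (FactSales, DateKey, 2006 months) are hypothetical examples, not part of the original deck.

    -- Monthly boundaries; RANGE RIGHT means each boundary value starts a new partition.
    CREATE PARTITION FUNCTION pfSalesDate (int)
    AS RANGE RIGHT FOR VALUES (20060101, 20060201, 20060301, 20060401);

    -- Map every partition to a filegroup. ALL TO [PRIMARY] keeps the sketch simple;
    -- real designs often spread partitions across several filegroups.
    CREATE PARTITION SCHEME psSalesDate
    AS PARTITION pfSalesDate ALL TO ([PRIMARY]);

    -- Fact table created directly on the partition scheme, partitioned by the date key.
    CREATE TABLE dbo.FactSales
    (
        DateKey     int   NOT NULL,   -- yyyymmdd integer date key
        ProductKey  int   NOT NULL,
        CustomerKey int   NOT NULL,
        SalesAmount money NOT NULL
    ) ON psSalesDate (DateKey);
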
Build clustered index on the date key of
            the fact table
            • This supports efficient queries to populate cubes or
                 retrieve a historical data slice.

            • If you load data in a batch window, use the options
                 ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF
                 for the clustered index on the fact table.

            • This helps speed up table scan operations during query
                 time and helps avoid excessive locking activity during
                 large updates.


Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
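
A minimal sketch of this setting, continuing the hypothetical FactSales and psSalesDate objects from the previous sketch; the index name is illustrative.

    -- Clustered index on the date key. Disabling row and page locks is only
    -- appropriate when data is loaded in a controlled batch window, as noted above.
    CREATE CLUSTERED INDEX CIX_FactSales_DateKey
    ON dbo.FactSales (DateKey)
    WITH (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF)
    ON psSalesDate (DateKey);
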
Build clustered index on the date key of
            the fact table
            • Build nonclustered indexes for each foreign key.
              • This helps 'pinpoint' queries that extract rows based on a selective
                dimension predicate.


            • Use filegroups for administration requirements such as
                 backup / restore, partial database availability, etc.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Choose partition grain carefully
            • Most customers use month, quarter, or year.
            • For efficient deletes, you must delete one full partition at a
              time.
            • It is faster to load a complete partition at a time.
                    • Daily partitions for daily loads may be an attractive option.
                    • However, keep in mind that a table can have a maximum of 1000
                         partitions.
            • Partition grain affects query parallelism.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Choose partition grain carefully
            • For SQL Server 2005:


                    • Queries touching a single partition can parallelize up to MAXDOP
                         (maximum degree of parallelism).

                    • Queries touching multiple partitions use one thread per partition up
                         to MAXDOP.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Choose partition grain carefully

            • For SQL Server 2008:

                    • Parallel threads up to MAXDOP are distributed proportionally to
                         scan partitions, and multiple threads per partition may be used
                         even when several partitions must be scanned.

                     • Avoid a partition design where only 2 or 3 partitions are touched by
                          frequent queries if you need MAXDOP parallelism (assuming
                          MAXDOP = 4 or larger).




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Design dimension tables appropriately
            • Use integer surrogate keys for all dimensions, other than
                 the Date dimension.

            • Use the smallest possible integer type for the dimension
                 surrogate keys. This helps keep the fact table narrow.

            • Use a meaningful date key of integer type derivable from
                 the DATETIME data type (for example: 20060215).

            • Don't use a surrogate key for the Date dimension.



Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Design dimension tables appropriately
            • Build a clustered index on the surrogate key for each
                 dimension table

            • Build a non-clustered index on the Business Key
                 (potentially combined with a row-effective-date) to support
                 surrogate key lookups during loads.

            • Build nonclustered indexes on other frequently searched
                 dimension columns.

            • Avoid partitioning dimension tables.

Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
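
The two dimension-design slides above can be combined into a sketch like the following; the DimCustomer and DimDate tables, column names, and data types are assumptions chosen only to illustrate the guidance.

    -- Hypothetical customer dimension: smallest integer surrogate key that fits,
    -- clustered on the surrogate key, plus a nonclustered index on the business key
    -- and row-effective-date to support surrogate key lookups during loads.
    CREATE TABLE dbo.DimCustomer
    (
        CustomerKey   smallint      NOT NULL,  -- surrogate key, kept as small as possible
        CustomerBK    varchar(20)   NOT NULL,  -- business (natural) key from the source system
        EffectiveDate datetime      NOT NULL,
        CustomerName  nvarchar(100) NOT NULL,
        CONSTRAINT PK_DimCustomer PRIMARY KEY CLUSTERED (CustomerKey)
    );

    CREATE NONCLUSTERED INDEX IX_DimCustomer_BK
    ON dbo.DimCustomer (CustomerBK, EffectiveDate);

    -- The Date dimension uses the meaningful integer date itself (e.g. 20060215)
    -- as its key rather than a separate surrogate key.
    CREATE TABLE dbo.DimDate
    (
        DateKey  int      NOT NULL,  -- yyyymmdd derived from the DATETIME value
        FullDate datetime NOT NULL,
        CONSTRAINT PK_DimDate PRIMARY KEY CLUSTERED (DateKey)
    );
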
Design dimension tables appropriately
            • Avoid enforcing foreign key relationships between the fact
                 and the dimension tables, to allow faster data loads.

            • You can create foreign key constraints with NOCHECK to
                 document the relationships; but don’t enforce them.

            • Ensure data integrity through Transform Lookups, or
                 perform the data integrity checks at the source of the
                 data.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
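
A small sketch of a documented-but-unenforced relationship, using the hypothetical FactSales and DimCustomer tables from the earlier sketches:

    -- WITH NOCHECK skips validating existing rows when the constraint is created,
    -- and NOCHECK CONSTRAINT leaves it disabled so loads are not slowed down;
    -- the relationship is documented in metadata but not enforced.
    ALTER TABLE dbo.FactSales WITH NOCHECK
    ADD CONSTRAINT FK_FactSales_DimCustomer
        FOREIGN KEY (CustomerKey) REFERENCES dbo.DimCustomer (CustomerKey);

    ALTER TABLE dbo.FactSales NOCHECK CONSTRAINT FK_FactSales_DimCustomer;
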
Write effective queries for partition
            elimination
            • Whenever possible, place a query predicate (WHERE
                 condition) directly on the partitioning key (Date dimension
                 key) of the fact table.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
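
For example, a query shaped like the following (against the hypothetical FactSales table) lets the optimizer eliminate every partition outside January 2006, because the predicate sits directly on the partitioning key:

    SELECT f.DateKey, SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactSales AS f
    WHERE f.DateKey >= 20060101
      AND f.DateKey <  20060201   -- predicate directly on the partitioning key
    GROUP BY f.DateKey;
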
Use Sliding Window technique to maintain
            data
            • Maintain a rolling time window for online access to the
                 fact tables. Load newest data, unload oldest data.
            • Always keep empty partitions at both ends of the partition
                 range to guarantee that the partition split (before loading
                 new data) and partition merge (after unloading old data)
                 do not incur any data movement.

            • Avoid split or merge of populated partitions. Splitting or
                 merging populated partitions can be extremely inefficient,
                 as this may cause as much as 4 times more log
                 generation, and also cause severe locking.
Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
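
A sketch of the split/merge part of the sliding window, continuing the hypothetical pfSalesDate and psSalesDate objects; boundary values are illustrative. Because the leading partition is kept empty and the trailing partition has already been emptied, neither operation moves data.

    -- Tell the scheme which filegroup the next partition created by SPLIT will use.
    ALTER PARTITION SCHEME psSalesDate NEXT USED [PRIMARY];

    -- SPLIT the empty leading partition to create the slot for the incoming month.
    ALTER PARTITION FUNCTION pfSalesDate()
    SPLIT RANGE (20060501);

    -- MERGE removes the boundary left behind after the oldest partition was emptied.
    ALTER PARTITION FUNCTION pfSalesDate()
    MERGE RANGE (20060101);
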
Use Sliding Window technique to maintain
            data
            • Create the load staging table in the same filegroup as the
                 partition you are loading.

            • Create the unload staging table in the same filegroup as
                 the partition you are deleting.

            • It is fastest to load the newest partition in full, all at once, but this
                 is only possible when the partition size equals the data load
                 frequency (for example, you have one partition per day,
                 and you load data once per day).



Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Use Sliding Window technique to maintain
            data
            • If the partition size doesn't match the data load frequency,
                 incrementally load the latest partition.

            • Various options for loading bulk data into a partitioned
              table are discussed in the whitepaper:
              http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/loading_bulk_data_partitioned_table.mspx

            • Always unload one partition at a time.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
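
Unloading one partition at a time typically looks like the following sketch: a staging table with the same structure (and clustered index) as the fact table, created on the same filegroup as the partition being removed, then a metadata-only SWITCH. All object names and the partition number are hypothetical.

    -- Staging table that mirrors the fact table, on the partition's filegroup.
    CREATE TABLE dbo.FactSales_Unload
    (
        DateKey     int   NOT NULL,
        ProductKey  int   NOT NULL,
        CustomerKey int   NOT NULL,
        SalesAmount money NOT NULL
    ) ON [PRIMARY];

    -- The indexes must match the source table for the switch to be allowed.
    CREATE CLUSTERED INDEX CIX_FactSales_Unload_DateKey
    ON dbo.FactSales_Unload (DateKey);

    -- Metadata-only operation: the oldest partition's rows leave the fact table instantly.
    ALTER TABLE dbo.FactSales
    SWITCH PARTITION 1 TO dbo.FactSales_Unload;

    -- Archive or drop the unloaded data without touching the fact table.
    DROP TABLE dbo.FactSales_Unload;
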
Efficiently load the initial data
            • Use the SIMPLE or BULK_LOGGED recovery model during
                 the initial data load.

            • Create the partitioned fact table with the Clustered index.


            • Create non-indexed staging tables for each partition, and
                 separate source data files for populating each partition.

            • Populate the staging tables in parallel.


            • Use multiple BULK INSERT, BCP or SSIS tasks.

Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Efficiently load the initial data
            • Create as many load scripts to run in parallel as there are
                 CPUs, if there is no IO bottleneck. If IO bandwidth is
                 limited, use fewer scripts in parallel.

            • Use 0 batch size in the load. Use 0 commit size in the
                 load.

            • Use TABLOCK.


            • Use BULK INSERT if the sources are flat files on the
                 same server. Use BCP or SSIS if data is being pushed
                 from remote machines.

Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
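
One of the parallel load tasks might look like the sketch below. The staging table, file path, and delimiters are assumptions. TABLOCK takes a bulk-update lock on the whole staging table, and omitting BATCHSIZE loads the entire file as a single batch, which corresponds to the "0 batch size / 0 commit size" the slide refers to.

    -- Hypothetical staging table for one partition's worth of data.
    CREATE TABLE dbo.Stage_FactSales_200601
    (
        DateKey     int   NOT NULL,
        ProductKey  int   NOT NULL,
        CustomerKey int   NOT NULL,
        SalesAmount money NOT NULL
    ) ON [PRIMARY];

    -- Flat file on the same server, loaded with a table-level bulk-update lock.
    BULK INSERT dbo.Stage_FactSales_200601
    FROM 'D:\load\FactSales_200601.dat'
    WITH
    (
        FIELDTERMINATOR = '|',
        ROWTERMINATOR   = '\n',
        TABLOCK
    );
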
Efficiently load the initial data
            • Build a clustered index on each staging table, then create
                 appropriate CHECK constraints.

            • SWITCH all partitions into the partitioned table.


            • Build nonclustered indexes on the partitioned table.


            • It is possible to load 1 TB in under an hour on a 64-CPU
                 server with a SAN capable of 14 GB/sec throughput (into a
                 non-indexed table).


Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
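
The final switch-in step can be sketched as follows, continuing the hypothetical staging table and partition layout from the earlier sketches (January 2006 is partition 2 under the boundaries used there):

    -- Index the staging table to match the partitioned fact table.
    CREATE CLUSTERED INDEX CIX_Stage_FactSales_200601
    ON dbo.Stage_FactSales_200601 (DateKey);

    -- The CHECK constraint proves every row falls inside the target partition's range.
    ALTER TABLE dbo.Stage_FactSales_200601 WITH CHECK
    ADD CONSTRAINT CK_Stage_200601_DateKey
        CHECK (DateKey >= 20060101 AND DateKey < 20060201);

    -- Metadata-only switch of the fully loaded staging table into the fact table.
    ALTER TABLE dbo.Stage_FactSales_200601
    SWITCH TO dbo.FactSales PARTITION 2;
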
Efficiently delete old data
            • Use partition switching whenever possible.


            • To delete millions of rows from nonpartitioned, indexed
                 tables
                    • Avoid DELETE FROM ...WHERE ...
                           • Huge locking and logging issues
                           • Long rollback if the delete is canceled


                    • Usually faster to
                           • INSERT the records to keep into a non-indexed table
                           • Create index(es) on the table
                           • Rename the new table to replace the original



Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
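
A sketch of the copy-and-rename pattern described above, against a hypothetical nonpartitioned archive table; all names and the cutoff value are illustrative:

    -- Copy only the rows to keep into a new, non-indexed table (a heap).
    SELECT *
    INTO dbo.FactSalesArchive_New
    FROM dbo.FactSalesArchive
    WHERE DateKey >= 20050101;          -- rows to keep

    -- Build the index(es) on the new table.
    CREATE CLUSTERED INDEX CIX_FactSalesArchive_New_DateKey
    ON dbo.FactSalesArchive_New (DateKey);

    -- Swap the names so the new table replaces the original.
    EXEC sp_rename 'dbo.FactSalesArchive', 'FactSalesArchive_Old';
    EXEC sp_rename 'dbo.FactSalesArchive_New', 'FactSalesArchive';
    -- DROP TABLE dbo.FactSalesArchive_Old;   -- once the swap has been verified
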
Efficiently delete old data
            • As an alternative, perform 'trickle' deletes by running the following
                 repeatedly in a loop:


                                     DELETE TOP (1000) ... ;
                                     COMMIT

            • Another alternative is to update the rows to mark them as
                 deleted, then physically delete them later during a non-critical time.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
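
The trickle-delete loop might be written like this sketch (the table name and cutoff are hypothetical); under the default autocommit setting each DELETE batch commits on its own, which is the COMMIT the slide shows.

    WHILE 1 = 1
    BEGIN
        -- Delete a small batch so locks and log growth stay modest.
        DELETE TOP (1000)
        FROM dbo.FactSalesArchive
        WHERE DateKey < 20050101;

        IF @@ROWCOUNT = 0
            BREAK;   -- nothing left to delete
    END;
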
Manage statistics manually
            • Statistics on partitioned tables are maintained for the table
                 as a whole.

            • Manually update statistics on large fact tables after
                 loading new data.

            • Manually update statistics after rebuilding index on a
                 partition.

            • If you regularly update statistics after periodic loads, you
                 may turn off autostats on that table.

Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Manage statistics manually
            • This is important for optimizing queries that may need to
                 read only the newest data.

            • Updating statistics on small dimension tables after
                 incremental loads may also help performance.

            • Use the FULLSCAN option when updating statistics on dimension
                 tables for more accurate query plans.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
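
A sketch of the manual statistics routine after a periodic load, reusing the hypothetical tables from earlier; sp_autostats disables automatic statistics updates for the named table.

    -- Large fact table: refresh statistics after the load (sampled is usually sufficient).
    UPDATE STATISTICS dbo.FactSales;

    -- Small dimension tables: FULLSCAN for more accurate plans.
    UPDATE STATISTICS dbo.DimCustomer WITH FULLSCAN;

    -- Optionally turn off autostats on the fact table, since it is now
    -- refreshed explicitly after every load.
    EXEC sys.sp_autostats 'dbo.FactSales', 'OFF';
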
Consider efficient backup strategies
            • Backing up the entire database may take a significant
                 amount of time for a very large database.

                    • For example, backing up a 2 TB database to a 10-spindle RAID-5
                         disk on a SAN may take 2 hours (at a rate of 275 MB/sec).


            • Snapshot backup using SAN technology is a very good
                 option.

            • Reduce the volume of data that you back up regularly.




Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
Consider efficient backup strategies
            • The filegroups for the historical partitions can be marked
                 as READ ONLY.

            • Perform a filegroup backup once when a filegroup
                 becomes read-only.

            • Perform regular backups only on the read / write
                 filegroups.

            • Note that RESTOREs of the read-only filegroups cannot
                 be performed in parallel.

Source. Top 10 Best Practices for Building Large Scale Relational Data Warehouse SQL CAT
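
A sketch of the read-only filegroup strategy for a hypothetical SalesDW database with a historical filegroup FG2005; the database name, filegroup, and backup paths are illustrative.

    -- Mark the historical filegroup read-only and back it up once.
    ALTER DATABASE SalesDW MODIFY FILEGROUP FG2005 READ_ONLY;

    BACKUP DATABASE SalesDW
        FILEGROUP = 'FG2005'
        TO DISK = 'E:\backup\SalesDW_FG2005.bak';

    -- Regular backups then cover only the filegroups that still change.
    BACKUP DATABASE SalesDW
        READ_WRITE_FILEGROUPS
        TO DISK = 'E:\backup\SalesDW_ReadWrite.bak';
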
