
Should I move my database to the cloud?


So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you, as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I've also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability, and ending the days of upgrading hardware, this is the session for you!


Should I move my database to the cloud?

  1. 1. Should I move my database to the cloud? James Serra Big Data Evangelist Microsoft JamesSerra3@gmail.com (On-prem vs IaaS VM vs SQL DB/DW)
  2. 2. About Me  Microsoft, Big Data Evangelist  In IT for 30 years, worked on many BI and DW projects  Worked as desktop/web/database developer, DBA, BI and DW architect and developer, MDM architect, PDW/APS developer  Been perm employee, contractor, consultant, business owner  Presenter at PASS Business Analytics Conference, PASS Summit, Enterprise Data World conference  Certifications: MCSE: Data Platform, Business Intelligence; MS: Architecting Microsoft Azure Solutions, Design and Implement Big Data Analytics Solutions, Design and Implement Cloud Data Platform Solutions  Blog at JamesSerra.com  Former SQL Server MVP  Author of book “Reporting with Microsoft SQL Server 2012”
  3. 3. Should I move my database to the cloud? Thank you for attending, please fill out the evaluation cards…
  4. 4. Agenda  SQL Server on-prem  SQL Server continuum  SQL Server in an Azure VM (IaaS)  Azure SQL Database (PaaS/DBaaS)  Azure SQL Data Warehouse (PaaS/DBaaS)  Summary
  5. 5. Benefits of the cloud Agility • Grow hardware as demand is needed (unlimited elastic scale). Change hardware instantly • Reduce hardware as demand lessens, or turn it off if not used (pay for what you need) Innovation • Fire up a server quickly (abbreviated infrastructure implementation build-out times). Low barrier of entry and quicker “time to market” • Make it easy to experiment, fail fast Risk • Availability - High availability and disaster recovery built-in or easy to implement • Reliability - Four-nines SLA, storage durability, network redundancy, automatic geographic redundancy • Security - The cloud datacenters have the ultimate in security Other • Cost savings: facility (co-location space, power, cooling, lights), hardware, software licenses, implementation, etc. • No need to manage the hardware infrastructure, reallocating staff • No commitment or long-term vendor lock-in • Allows companies to benefit from changes in the technology impacting the latest storage solutions • More frequent updates to the OS, SQL Server, etc., done for you • Really helpful for proof-of-concept (POC) or development projects with a known lifespan
  6. 6. Constraints of on-premises data • Scale constrained to on-premises procurement • CapEx up-front costs; most companies instead prefer a yearly operating expense (OpEx) • A staff of employees or consultants must be retained to administer and support the hardware and software in place • Expertise needed for tuning and deployment • Lack of room in the datacenter
  7. 7. Reasons not to move a database to the cloud • No internet connection (deep mine) or slow internet connection (offshore oil rig) • Millisecond performance required (servers in a high-volume packaging plant) • Applications will stay on-prem • Locked-in lease of a datacenter with new equipment • Large amount of on-prem born data • Huge migration effort for a short-lifespan database • Extremely sensitive data This just means some databases should not be moved, but many others can!
  8. 8. SQL Server 2005 – performance & productivity; SQL Server 2008 – mission critical; SQL Server 2008 R2 – self-service BI; SQL Server 2012 – cloud-ready; SQL Server 2014 – mission critical & cloud performance; SQL Server 2016 – advanced analytics & rich visualizations
  9. 9. It can handle up to 384 cores and 24TB of memory! It uses the HPE 3PAR StoreServ 8450 storage array, which consists of 192 SSD drives (480GB/drive) for a total of 92TB of disk space.
  10. 10. Options for data warehouse solutions – balancing flexibility and choice. Time to solution runs from high (build it yourself) to low (buy an appliance), while price runs the other way. By yourself: you own tuning and optimization, installation, and configuration, on existing or procured hardware and support (optional, if you have hardware already) plus procured software and support. Offerings: • SQL Server 2014/2016 • Windows Server 2012 R2/2016 • System Center 2012 R2/2016. With a reference architecture: you own tuning and optimization, installation, and configuration, on existing or procured hardware and support plus procured software and support. Offerings: • Private Cloud Fast Track • Data Warehouse Fast Track • Build or purchase. With an appliance: you own installation plus tuning and optimization, on a procured appliance with support. Offerings: • Analytics Platform System
  11. 11. Data Warehouse Fast Track: a workload-specific database system design and validation program for Microsoft partners and customers. Hardware system design • Tight specifications for servers, storage, and networking • Resource balanced and validated • Latest-generation servers and storage, including solid-state disks (SSDs). Database configuration • Workload-specific • Database architecture • SQL Server settings • Windows Server settings • Performance guidance. Software • SQL Server 2016 Enterprise • Windows Server 2012 R2. https://www.microsoft.com/en-us/cloud-platform/data-warehouse-fast-track
  12. 12. Parallelism. MPP - Massively Parallel Processing: • Uses many separate CPUs running in parallel to execute a single program • Shared nothing: each CPU has its own memory and disk (scale-out) • Segments communicate using a high-speed network between nodes. SMP - Symmetric Multiprocessing: • Multiple CPUs used to complete individual processes simultaneously • All CPUs share the same memory, disks, and network controllers (scale-up) • All SQL Server implementations up until now have been SMP • Mostly, the solution is housed on a shared SAN
  13. 13. Microsoft Analytics Platform System
  14. 14. (Diagram: the Microsoft data platform across on-premises and cloud. Relational: SQL Server, Fast Track for SQL Server, SQL Server 2016 + Superdome X, and Analytics Platform System on-premises; Azure VM, Azure SQL DB, and Azure SQL DW in the cloud. Non-relational: Hadoop and Analytics Platform System on-premises; Azure Data Lake Analytics and Azure Data Lake Store in the cloud. Tied together by federated query, Power BI, Azure Machine Learning, and Azure Data Factory.)
  15. 15. Who manages what? On-premises (physical/virtual): you scale, make resilient, and manage everything – applications, data, runtime, middleware, O/S, virtualization, servers, storage, networking. Infrastructure as a Service (Azure Virtual Machines): Microsoft manages virtualization, servers, storage, and networking; you scale, make resilient, and manage the O/S, middleware, runtime, data, and applications. Platform as a Service (Azure Cloud Services): Microsoft handles scale, resilience, and management of everything except your applications and data. Software as a Service: Microsoft manages the entire stack, including the data.
  16. 16. The data platform continuum: one consistent platform with common tools, from on-premises to cloud.
  17. 17.  VM hosted on Microsoft Azure infrastructure (“IaaS”) • From Microsoft images (gallery) or your own images (custom): SQL 2008 R2 / 2012 / 2014 / 2016, Web / Standard / Enterprise; images refreshed with the latest version, SP, CU • Fast provisioning (~10 minutes) • Accessible via RDP and PowerShell • Full compatibility with SQL Server “box” software  Pay per use • Per minute (only when running) • Cost depends on size and licensing • EA customers can use existing SQL licenses (BYOL) • Network: only outgoing (not incoming) • Storage: only used (not allocated)  Elasticity • From 1 core / 2 GB mem / 1 TB up to 32 cores / 448 GB mem / 64 TB
  18. 18. DS-Series: same CPU and memory as D-Series; supports Premium Storage (good for data, log, and TempDB!!). GS-Series: fastest CPU, most memory; supports Premium Storage. Azure calculator: https://azure.microsoft.com/en-us/pricing/calculator/
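For reference, the AzureRM PowerShell module of this era can list the VM sizes available in a region, with cores, memory, and maximum data disks; a minimal sketch (the region name is just an example):

    # List VM sizes available in a region, smallest first
    Get-AzureRmVMSize -Location "East US" |
        Sort-Object NumberOfCores |
        Format-Table Name, NumberOfCores, MemoryInMB, MaxDataDiskCount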
  19. 19. VM Gallery Images via Azure Marketplace Certified pre-configured software images (1250 on 2/23/2017) https://azure.microsoft.com/en-us/marketplace/virtual-machines/
  20. 20. Azure Quickstart Templates Free community contributed templates (467 on 2/23/17) https://azure.microsoft.com/en-us/documentation/templates/
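A quickstart template can be deployed straight from its GitHub raw URI; a minimal sketch assuming the AzureRM module (the resource group name, location, and template URI are placeholders – PowerShell will prompt for any parameters the template requires):

    # Sign in, create a resource group, and deploy a community quickstart template
    Login-AzureRmAccount
    New-AzureRmResourceGroup -Name "QuickstartRG" -Location "East US"
    New-AzureRmResourceGroupDeployment -ResourceGroupName "QuickstartRG" `
        -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/101-vm-simple-windows/azuredeploy.json"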
  21. 21. Virtual machine storage architecture: C: OS disk (127 GB, usually 115 GB free). E:, F:, etc.: data disks – attach SSD/HDD up to 1TB each; these are .vhd files, and a host disk cache fronts them. D: temporary disk (contents can be lost) – SSD/HDD, with size depending on the VM chosen.
  22. 22. Azure default blob storage  Azure Storage page blobs, 3 copies  Storage high durability built-in (like having RAID)  VHD disks, up to 1 TB per disk (64 TB total)
  23. 23. Storage configuration Automatically creates one Windows storage space (virtual drive) across all disks. Up to 64 1TB disks for 64TB of drive space.
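The same striping can be done by hand inside the VM with the Windows storage cmdlets; a sketch assuming the empty data disks are already attached (the pool and volume names, drive letter, and 64 KB allocation unit – a common SQL Server choice – are illustrative):

    # Pool all attachable data disks into one storage space and format it
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SqlDataPool" `
        -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "SqlDataPool" -FriendlyName "SqlData" `
        -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns $disks.Count
    Get-VirtualDisk -FriendlyName "SqlData" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter F -UseMaximumSize |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -Confirm:$false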
  24. 24. Azure regions: 40 regions worldwide, 34 generally available  100+ datacenters  Top 3 networks in the world  2.5x AWS and 7x Google DC regions  G-Series – largest VM in the world: 32 cores, 448 GB RAM, SSD…
  25. 25. Migrating data Migrate from on-prem SQL Server to an Azure VM (IaaS): • Use the Deploy a SQL Server Database to a Microsoft Azure VM wizard. Recommended method for migrating an on-premises user database when the compressed database backup file is less than 1 TB. Use on SQL Server 2005 or greater to SQL Server 2014 or greater • Perform an on-premises backup using compression, manually copy the backup file into the Azure virtual machine, and then do a restore (only if you cannot use the above wizard or the database backup size is larger than 1 TB). Use on SQL Server 2005 or greater to SQL Server 2005 or greater • Perform a backup to URL and restore into the Azure virtual machine from the URL. Use on SQL Server 2012 SP1 CU2 or greater to SQL Server 2012 SP1 CU2 or greater • Detach and then copy the data and log files to Azure blob storage and then attach to SQL Server in the Azure VM from URL. Use on SQL Server 2005 or greater to SQL Server 2014 or greater • Convert the on-premises physical machine to a Hyper-V VHD, upload to Azure Blob storage, and then deploy as a new VM using the uploaded VHD. Use when bringing your own SQL Server license, when migrating a database that you will run on an older version of SQL Server, or when migrating system and user databases together as part of the migration of a database dependent on other user databases and/or system databases. Use on SQL Server 2005 or greater to SQL Server 2005 or greater • Ship a hard drive using the Windows Import/Export Service. Use when the manual copy method is too slow, such as with very large databases. Use on SQL Server 2005 or greater to SQL Server 2005 or greater • If you have an AlwaysOn deployment on-premises and want to minimize downtime, use the Add Azure Replica Wizard to create a replica in Azure and then fail over, pointing users to the Azure database instance. Use on SQL Server 2012 or greater to SQL Server 2012 or greater • If you do not have an AlwaysOn deployment on-premises and want to minimize downtime, use SQL Server transactional replication to configure the Azure SQL Server instance as a subscriber and then disable replication, pointing users to the Azure database instance. Use on SQL Server 2005 or greater to SQL Server 2005 or greater • Others: data-tier application, Transact-SQL scripts, SQL Server Import and Export Wizard, SSIS, Copy Database Wizard
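Of the options above, backup to URL scripts cleanly end to end; a hedged sketch driving the T-SQL through Invoke-Sqlcmd from the SQL Server PowerShell tools (database, storage account, container, and credential names are all placeholders):

    # Back up an on-prem database straight to Azure blob storage (SQL 2012 SP1 CU2+)
    $sql = "
    CREATE CREDENTIAL AzureBackupCred
        WITH IDENTITY = 'mystorageaccount',   -- storage account name
        SECRET = '<storage-account-key>';

    BACKUP DATABASE AdventureWorks
        TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks.bak'
        WITH CREDENTIAL = 'AzureBackupCred', COMPRESSION, STATS = 5;

    -- On the Azure VM, restore with: RESTORE DATABASE ... FROM URL = '...' WITH CREDENTIAL = '...'
    "
    Invoke-Sqlcmd -ServerInstance "localhost" -Query $sql -QueryTimeout 65535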
  26. 26. Scale VMs
  27. 27. Scale VMs PowerShell script
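A script like the one on the slide boils down to fetching the VM object, changing its size, and pushing the update; a minimal AzureRM sketch (resource group, VM name, and target size are placeholders):

    # Resize a VM; a size not offered on the VM's current hardware cluster
    # requires deallocating the VM first (Stop-AzureRmVM)
    $vm = Get-AzureRmVM -ResourceGroupName "MyRG" -Name "sqlvm01"
    $vm.HardwareProfile.VmSize = "Standard_DS13"
    Update-AzureRmVM -VM $vm -ResourceGroupName "MyRG"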
  28. 28. HA/DR deployment architectures Azure only: Availability replicas running across multiple datacenters in Azure VMs for disaster recovery; a cross-region solution protects against complete site outage. Hybrid: Some availability replicas running in Azure VMs and other replicas running on-premises for cross-site disaster recovery. HA only, not DR: FCI on a two-node WSFC running in Azure VMs with storage supported by a third-party clustering solution; or FCI on a two-node WSFC running in Azure VMs with remote iSCSI Target shared block storage via ExpressRoute. Azure only: Principal and mirror servers running in different datacenters for disaster recovery; or principal, mirror, and witness run within the same Azure datacenter, deployed using a DC or server certificates, for HA. Hybrid: One partner running in an Azure VM and the other running on-premises for cross-site disaster recovery using server certificates. For DR only / hybrid only: One server running in an Azure VM and the other running on-premises for cross-site disaster recovery. Log shipping depends on Windows file sharing, so a VPN connection between the Azure virtual network and the on-premises network is required; requires AD deployment on the DR site. On-prem or Azure production databases backed up directly to Azure blob storage for disaster recovery; SQL 2016: backup to Azure with file snapshots. Simpler BCDR story: Site Recovery makes it easy to handle replication, failover, and recovery for your on-premises workloads and applications (not data!). Flexible replication: you can replicate on-premises servers, Hyper-V virtual machines, and VMware virtual machines, eliminating the need for a secondary datacenter. Native support for SQL Server data files stored as Azure blobs.
  29. 29. SQL Server in Azure VM Best Practices https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-sql-server-performance-best-practices/
  30. 30. SQL Database Service A relational database-as-a-service, fully managed by Microsoft. For cloud-designed apps when near-zero administration and enterprise-grade capabilities are key. Perfect for organizations looking to dramatically increase the DB:IT ratio.
  31. 31. Azure SQL Database benefits *Data source & customer quotes: The Business Value of Microsoft Azure SQL Database Services, IDC, March 2015. “Now, those people can do development and create more revenue opportunities for us.” Increased productivity: 47% of staff hours reclaimed for other tasks. “We can get things out faster with Azure SQL Database.” Faster time to market: 75% faster app deployment cycles. “To be able to do what we’re doing in Azure, we’d need an investment of millions.” Lower TCO: 53% less expensive than on-prem/hosted. “The last time we had downtime, a half a day probably lost us $100k.” Reduced risks: 71% fewer cases of unplanned downtime.
  32. 32. Designed for predictable performance. Across Basic, Standard, and Premium, each performance level is assigned a defined level of throughput. Introducing the Database Transaction Unit (DTU), which represents database power and replaces hardware specs – a redefined measure of power spanning % CPU, % read, % write, and % memory. DTU is defined by the bounding box for the resources required by a database workload and measures power across the performance levels: Basic — 5 DTU; S0 — 10 DTU; S1 — 20 DTU; S2 — 50 DTU; S3 — 100 DTU; P1 — 125 DTU; P2 — 250 DTU; P4 — 500 DTU; P6 — 1,000 DTU; P11 — 1,750 DTU; P15 — 4,000 DTU.
  33. 33. Scale DTUs
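Scaling DTUs amounts to changing the database's service objective; a minimal AzureRM sketch (resource group, server, and database names are placeholders):

    # Move a database to Standard S3 (100 DTUs)
    Set-AzureRmSqlDatabase -ResourceGroupName "MyRG" -ServerName "myserver" `
        -DatabaseName "mydb" -Edition "Standard" -RequestedServiceObjectiveName "S3"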
  34. 34. Set Up Disaster Recovery
  35. 35. High-availability platform: a single logical database where reads are completed at the primary and writes are replicated to secondaries before being acknowledged. Critical capabilities:  Create new replica  Synchronize data  Stay consistent  Detect failures  Failover  99.99% availability
  36. 36. Protect from data loss or corruption: automatic backups and self-service restore. Tiered retention policy: 7 days (Basic); 35 days (Standard & Premium); weekly backups kept up to 10 years (public preview). Restore from backup to a point in time or to the point of deletion.
  37. 37. Geo-restore protects from disaster: restore from geo-redundant backups maintained in Azure Storage to any Azure region. A built-in disaster recovery capability available for every database.
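A geo-restore sketch with the AzureRM cmdlets (resource groups, server names, and database names are placeholders):

    # Restore the most recent geo-redundant backup into a new database,
    # optionally on a server in another region
    $geoBackup = Get-AzureRmSqlDatabaseGeoBackup -ResourceGroupName "MyRG" `
        -ServerName "myserver" -DatabaseName "mydb"
    Restore-AzureRmSqlDatabase -FromGeoBackup -ResourceId $geoBackup.ResourceID `
        -ResourceGroupName "DrRG" -ServerName "drserver" `
        -TargetDatabaseName "mydb-recovered" -Edition "Standard" -ServiceObjectiveName "S2"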
  38. 38. Active geo-replication – mission-critical business continuity. Service levels: Basic, Standard, and Premium (self-service). Readable secondaries: up to 4. Regions available: any Azure region. Replication: automatic, asynchronous. Manageability tools: REST API, PowerShell, or Azure Portal. Recovery Time Objective (RTO): <1 hour. Recovery Point Objective (RPO): <5 mins. Failover: on demand.
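Creating a readable secondary and failing over on demand looks roughly like this with the AzureRM cmdlets (all server, database, and resource group names are placeholders):

    # Create a readable geo-secondary on a server in another region
    Get-AzureRmSqlDatabase -ResourceGroupName "MyRG" -ServerName "primarysrv" -DatabaseName "mydb" |
        New-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "DrRG" `
            -PartnerServerName "secondarysrv" -AllowConnections "All"

    # To fail over, run against the secondary database
    Get-AzureRmSqlDatabase -ResourceGroupName "DrRG" -ServerName "secondarysrv" -DatabaseName "mydb" |
        Set-AzureRmSqlDatabaseSecondary -PartnerResourceGroupName "MyRG" -Failover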
  39. 39. Azure SQL Database service tiers
  40. 40. Migration tools: SQL Server Management Studio (SSMS), SQL Azure Migration Wizard (SAMW), SQL Server Data Tools in Visual Studio, Microsoft Data Migration Assistant (DMA). My blog: Migrate from on-prem SQL Server to Azure SQL Database
  41. 41. Which one to use? SQL Server in Azure VM Need a specific version of SQL Server or Windows Need instance-level SQL features (e.g. Agent Job, Linked Servers, DTC) Ok configuring/managing SQL Server and Windows (patching, high availability, backups) Great for migrating existing apps Azure SQL Database Don’t need a specific version of SQL Server or Windows Don’t need instance-level SQL features Don’t want to configure and manage SQL Server or Windows (high availability built-in, auto backups) Great for new apps Many customers use both
  42. 42. SQL Server in Azure VM You access a VM with SQL Server installed You manage SQL Server and Windows (patching, high availability, backups) You select the SQL Server and Windows version and edition Different VM sizes: A0 (1 core, 1GB mem, 20GB) to GS5 (32 cores, 448GB mem, 64TB) VM availability SLA: 99.95% (No SQL SLA) Azure SQL Database You access a database Database is fully managed Runs latest SQL Server version with Enterprise edition Different DB sizes: Basic (2GB, 5tps) to Premium (1TB, 4000tps) DB availability SLA: 99.99% Details
  43. 43. When to use IaaS vs PaaS See https://docs.microsoft.com/en-us/azure/sql-database/sql-database-paas-vs-sql-server-iaas
  44. 44. Limitations and Enhancements Limitations: • Database Size • VNET • Cross database joins • Resource Governor • SQL Agent • SSIS • CLR • Limited scaling options Enhancements • Database Advisor (recommendations: index tuning, parameterized queries, schema issues) • Query performance insight • Query store • Auditing and threat detection See https://docs.microsoft.com/en-us/azure/sql-database/sql-database-features, https://docs.microsoft.com/en-us/azure/sql-database/sql-database-transact-sql-information
  45. 45. SQL DW: building on the SQL DB foundation. Elastic, petabyte-scale, DW-optimized. 99.99% uptime SLA, geo-restore. Azure compliance (ISO, HIPAA, EU, etc.). A true SQL Server experience; existing tools just work.
  46. 46. Elastic scale & performance: real-time elasticity – resize in <1 minute; on-demand compute – expand or reduce as needed.
  47. 47. Scale DWUs
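Scaling DWUs uses the same cmdlet as SQL Database, just with a DW service objective; a minimal sketch (names are placeholders):

    # Scale the warehouse to 400 DWUs; per the previous slide, resizes finish in <1 minute
    Set-AzureRmSqlDatabase -ResourceGroupName "MyRG" -ServerName "dwserver" `
        -DatabaseName "mydw" -RequestedServiceObjectiveName "DW400"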
  48. 48. Market-leading price/performance. Query unstructured data via PolyBase/T-SQL: PolyBase scales out compute from the SQL DW instance across Hadoop VMs / Azure Storage. Any data, any size, anywhere.
  49. 49. Save costs with dynamic pause and resume: when paused, pay only for storage. Use it only when you need it – no reloading / restoring of data. • When paused, cloud-scale storage is minimal cost • Policy-based (e.g., nights/weekends) • Automate via PowerShell/REST API • Data remains in place
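The pause/resume automation mentioned above is two cmdlets (names are placeholders; these apply only to SQL Data Warehouse databases):

    # Pause compute when idle (pay only for storage), resume when needed
    Suspend-AzureRmSqlDatabase -ResourceGroupName "MyRG" -ServerName "dwserver" -DatabaseName "mydw"
    Resume-AzureRmSqlDatabase -ResourceGroupName "MyRG" -ServerName "dwserver" -DatabaseName "mydw"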
  50. 50. Automatic backup and geo-restore: recover from data deletion, alteration, or disaster. • Auto backups every 4 hours, geo-replicated • On-demand backups in Azure Storage • REST API, PowerShell, or Azure Portal • Scheduled exports • Near-online backup/restore • Backup retention policy: auto backups up to 35 days; on-demand backups retained indefinitely
  51. 51. Summary: Azure SQL DW service. A relational data warehouse-as-a-service, fully managed by Microsoft. The industry's first elastic cloud data warehouse with enterprise-grade capabilities. Supports your smallest to your largest data storage needs while handling queries up to 100x faster.
  52. 52. Limitations and Enhancements Limitations: • ANSI joins on updates • ANSI joins on deletes • merge statement • cross-database joins • cursors • INSERT..EXEC • output clause • inline user-defined functions • multi-statement functions • common table expressions • recursive common table expressions (CTE) • CLR functions and procedures • $partition function • table variables • table value parameters • distributed transactions • commit / rollback work • save transaction • execution contexts (EXECUTE AS) • group by clause with rollup / cube / grouping sets options • nesting levels beyond 8 • updating through views • use of select for variable assignment • no MAX data type for dynamic SQL strings Enhancements: • TBD See https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-migrate-code Best practices: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-best-practices Load data: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-overview-load
  53. 53. Why Azure SQL Database? With SQL Server in a VM you get maximum control over everything, but you own app optimization, scaling, high availability, disaster recovery, backup, database patches, and OS patches. With Azure SQL Database, Microsoft fully manages all of that, and you focus on your app (app optimization only). Gain 406% ROI.
  54. 54. SQL 2016/APS/SQL DW at a glance. SQL Server 2016 – Workload: OLTP or mixed (OLTP-DW-BI); good for a high volume of simple queries and limited complex queries; data volume: up to 100TB (DW Fast Track tested); deployment: DIY/reference architecture; architecture: SMP/scale-up/hybrid; competitors: Oracle Exadata, IBM DB2; key features: In-Database Analytics, PolyBase, CCI, Geospatial, Always Encrypted, In-memory, Stretch DB. Analytics Platform System – Workload: data warehouse; good for mid-high complex queries and a low volume of transactional queries; data volume: 6 petabytes; deployment: integrated system; architecture: MPP/scale-out/on-premises; competitors: Teradata, IBM PureSystems; key features: In-Database Analytics*, PolyBase, CCI, Geospatial*, Always Encrypted*. Azure SQL Data Warehouse – Workload: data warehouse; good for mid-high complex queries and a low volume of transactional queries; data volume: 1 petabyte; deployment: PaaS; architecture: MPP/scale-out/cloud; competitors: AWS, various start-ups; key features: In-Database Analytics, PolyBase, CCI, Geospatial*, Always Encrypted*. *This feature is on the product roadmap for APS and SQL DW
  55. 55. In closing… Moving to the cloud is a “no brainer”, it’s just a question of when!
  56. 56. Resources  Should you move your data to the cloud? http://bit.ly/1xuXbKU  Migrating SQL Server Database to Azure eBook: http://bit.ly/27s6slX
  57. 57. Other related presentations  Benefits of the Azure cloud  Should I move my database to the cloud?  Implement SQL Server on an Azure VM  Relational databases vs non-relational databases  Introducing Azure SQL Database  Introducing Azure SQL Data Warehouse Visit my blog at: JamesSerra.com (where these slide decks are posted under the “Presentations” tab)
  58. 58. Azure getting started • Free Azure account, $200 in credit, https://azure.microsoft.com/en-us/free/ • Startups: BizSpark, $750/month free Azure; BizSpark Plus, $120k/year free Azure, https://www.microsoft.com/bizspark/ • MSDN subscription, $150/month free Azure, https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/ • Microsoft Educator Grant Program, faculty - $250/month free Azure for a year, students - $100/month free Azure for 6 months, https://azure.microsoft.com/en-us/pricing/member-offers/msdn-benefits/ • Microsoft Azure for Research Grant, http://research.microsoft.com/en-us/projects/azure/default.aspx • DreamSpark for students, https://www.dreamspark.com/Student/Default.aspx • DreamSpark for academic institutions: https://www.dreamspark.com/Institution/Subscription.aspx • Various Microsoft funds
  59. 59. Q & A ? James Serra, Big Data Evangelist Email me at: JamesSerra3@gmail.com Follow me at: @JamesSerra Link to me at: www.linkedin.com/in/JamesSerra Visit my blog at: JamesSerra.com (where this slide deck is posted under the “Presentations” tab)

Editor's notes

  • So you have been running on-prem SQL Server for a while now. Maybe you have taken the step to move it from bare metal to a VM, and have seen some nice benefits. Ready to see a TON more benefits? If you said “YES!”, then this is the session for you, as I will go over the many benefits gained by moving your on-prem SQL Server to an Azure VM (IaaS). Then I will really blow your mind by showing you even more benefits by moving to Azure SQL Database (PaaS/DBaaS). And for those of you with a large data warehouse, I've also got you covered with Azure SQL Data Warehouse. Along the way I will talk about the many hybrid approaches so you can take a gradual approach to moving to the cloud. If you are interested in cost savings, additional features, ease of use, quick scaling, improved reliability and ending the days of upgrading hardware, this is the session for you!
  • Fluff, but point is I bring real work experience to the session
  • Four Reasons to Migrate Your SQL Server Databases to the Cloud: Security, Agility, Availability, and Reliability

    Reasons not to move to the cloud:
    Security concerns (potential for compromised information, issues of privacy when data is stored on a public facility, might be more prone to outside security threats because its high-profile, some providers might not implement the same layers of protection you can achieve in-house)
    Lack of operational control: Lack of access to servers (i.e. say you are hacked and want to get to security and system log files; if something goes wrong you have no way of controlling how and when a response is carried out; the provider can update software, change configuration settings, and allocate resources without your input or your blessing; you must conform to the environment and standards implemented by the provider)
    Lack of ownership (an outside agency can get to data easier in the cloud data center that you don’t own vs getting to data in your onsite location that you own.  Or a concern that you share a cloud data center with other companies and someone from another company can be onsite near your servers)
    Compliance restrictions
    Regulations (health, financial)
    Legal restrictions (i.e. data can’t leave your country)
    Company policies
    You may be sharing resources on your server, as well as competing for system and network resources
    Data getting stolen in-flight (i.e. from the cloud data center to the on-prem user)
  • As you can see, SQL Server is not just a database. We have been adding capabilities across these three phases of the data lifecycle for years. Our engineering team continually aims to build new functionalities into the platform so customers don’t have to acquire and stitch solutions together.
    Let’s take in-memory technology as an example: We first introduced in-memory technology back in 2008 R2 and started improving analytics by building in-memory into PowerPivot to analyze millions of rows of data in Excel. Then in SQL Server 2012, we expanded our in-memory footprint by adding in-memory to Analysis Services so IT could build data models much faster, and introduced an in-memory column store that could improve query speeds. With SQL Server 2014, we introduced an in-memory OLTP solution to significantly speed transactional performance.
    If you’re running SQL Server 2005, you should start planning your upgrade before end of support hits next April. If you have an EA with Software Assurance, licensing costs for upgrade are included. After support ends:
    You will no longer receive security updates
    Maintenance costs will increase
    You may encounter compliance concerns
  • http://www.jamesserra.com/archive/2016/02/hp-superdome-x-for-high-end-oltpdw/

  • RAs
    Some assembly required OR
    No assembly required (through Partners)
    FT Training
    Understand File System layouts
    How you physically implement DW has to be like the guidelines

    Appliance
    MPP Training
    Queries are different
    Modeling is different
    Data structures are different
    Partitioning is different

    Key decision points:
    Data Volumes: If you get above 95 Terabytes
    Procurement and Operation Logistics: What is your HW SKU process? Can you put a brand new HW system onto the data center?
    Workload Characteristics: Highly concurrent data?
    DW Organizational Maturity: Do you already know MPP?
  • SMP is one server where each CPU in the server shares the same memory, disk, and network controllers (scale-up). MPP means data is distributed among many independent servers running in parallel and is a shared-nothing architecture, where each server operates self-sufficiently and controls its own memory and disk (scale-out).

  • In this session we will take a closer look at the third pillar, or key investment area, for SQL Server 2014, which is to build a data platform for the hybrid cloud. One of the key design points we have taken in approaching cloud computing is to drive towards a consistent data platform with common tools from on-premises to cloud. When it comes to Microsoft’s cloud, we have two offerings that can be used to run relational databases in the cloud depending on your application needs. Let's take a closer look at both of these Microsoft Azure options.
  • https://docs.microsoft.com/en-us/azure/virtual-machines/virtual-machines-windows-sizes

    Note the IOPS for standard storage is the maximum rather than an expected number. For premium, IOPS are not just maximum but expected levels of performance.

    Standard Disk Storage (HDD)
    1GB-1023GB, 500 IOPs, 60 MB/s throughput, $2/month per 100GB (locally redundant), $5/month per 100GB (geo-redundant), $6/month per 100GB (read-access geo-redundant)
     
    Premium Disk Storage (SSD)
    P10, 128GB, 500 IOPs, 100 MB/s throughput, $20/month
    P20, 512GB, 2300 IOPs, 150 MB/s throughput, $73/month
    P30, 1024GB, 5000 IOPs, 200 MB/s throughput, $135/month
  • Azure web portal -> New -> Marketplace (see all)
  • https://azure.microsoft.com/en-us/documentation/templates/
  • https://blogs.msdn.microsoft.com/mast/2013/12/06/understanding-the-temporary-drive-on-windows-azure-virtual-machines/

    SSD/HDD storage included in A-series, D-series, and Dv2-series VMs is local temporary storage.

    DS-series, G-series, GS-series SSD's have less local temporary storage due to storage used for caching purposes to ensure predictable levels of performance associated with premium storage.
    DS-series and GS-series support premium storage disks, which means you can attach SSD's to the VM (the other series support only standard storage disks).
    The pricing and billing meters for the DS sizes are the same as D-series and the GS sizes are the same as G-series.

    When you create an Azure virtual machine, it has a disk for the operating system mapped to drive C (size is 127GB) that is on Blob storage and a local temporary disk mapped to drive D. You can choose standard disk type or premium (if DS-series or GS-series) for your local temporary disk - the size of which is based on the series you choose (i.e. A0 is 20GB). You can also attach new disks - specify standard or premium, for standard: specify size (1GB-1023GB), for premium: specify P10, P20, or P30. the disks are .vhd files that reside in an Azure storage account
  • http://www.jamesserra.com/archive/2015/11/redundancy-options-in-azure-blob-storage/

    Geo-replication in Azure disks does not support the data file and log file of the same database to be stored on separate disks. GRS replicates changes on each disk independently and asynchronously. This mechanism guarantees the write order within a single disk on the geo-replicated copy, but not across geo-replicated copies of multiple disks. If you configure a database to store its data file and its log file on separate disks, the recovered disks after a disaster may contain a more up-to-date copy of the data file than the log file, which breaks the write-ahead log in SQL Server and the ACID properties of transactions. If you do not have the option to disable geo-replication on the storage account, you should keep all data and log files for a given database on the same disk. If you must use more than one disk due to the size of the database, you need to deploy one of the disaster recovery solutions listed above to ensure data redundancy. https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-sql-high-availability-dr/
  • In the past, after provisioning a SQL Server VM, you had to manually attach and configure the right number of data disks to provide the desired number of IOPs or throughput (MB/s). Then you need to stripe your SQL files across the disks or create a Storage Pool to divide the IOPs or throughput across them. Finally, you’d have to configure SQL Server according to the performance best practices for Azure VM.
    We’ve now made this part of the provisioning experience. You can easily configure the desired IOPs, throughput, and storage size within the limits of the selected VM size, as well as the target workload to optimize for (online transaction processing or data warehousing). As you change the IOPs, throughput, and storage size, we’ll automatically select the right number of disks to attach to the VM. During the VM provisioning, if more than one disk is required for your specified settings, we’ll automatically create one Windows storage space (virtual drive) across all disks.
  • https://azure.microsoft.com/en-us/regions/
  • https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-migrate-onpremises-database/

    http://itproguru.com/expert/2015/03/how-to-move-or-migrate-sql-server-workload-to-azure-sql-database-cloud-services-or-azure-vm-all-version-of-sql-server-step-by-step/
  • http://www.sqlservercentral.com/blogs/sqlsailorcom/2015/09/24/azure-virtual-machine-blog-series-changing-the-size-of-a-vm/
  • https://azure.microsoft.com/en-us/blog/resize-virtual-machines

    https://buildwindows.wordpress.com/2015/10/11/azure-virtual-machine-resizing-consideration/

    When a VM is running it is deployed to a physical server. The physical servers in Azure regions are grouped together in clusters of common physical hardware. A running VM can easily be resized to any VM size supported by the current cluster of hardware supporting the VM.
  • https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-sql-dr/

    It is up to you to ensure that your database system possesses the HADR capabilities that the service-level agreement (SLA) requires. The fact that Azure provides high availability mechanisms, such as service healing for cloud services and failure recovery detection for the Virtual Machines (https://azure.microsoft.com/en-us/blog/service-healing-auto-recovery-of-virtual-machines), does not itself guarantee you can meet the desired SLA. These mechanisms protect the high availability of the VMs but not the high availability of SQL Server running inside the VMs. It is possible for the SQL Server instance to fail while the VM is online and healthy. Moreover, even the high availability mechanisms provided by Azure allow for downtime of the VMs due to events such as recovery from software or hardware failures and operating system upgrades.
  • https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers
  • Changing the service tier and/or performance level of a database creates a replica of the original database at the new performance level, and then switches connections over to the replica. No data is lost during this process but during the brief moment when we switch over to the replica, connections to the database are disabled, so some transactions in flight may be rolled back. This window varies, but is on average under 4 seconds, and in more than 99% of cases is less than 30 seconds. If there are large numbers of transactions in flight at the moment connections are disabled, this window may be longer.

    The duration of the entire scale-up process depends on both the size and service tier of the database before and after the change. For example, a 250 GB database that is changing to, from, or within a Standard service tier, should complete within 6 hours. For a database of the same size that is changing performance levels within the Premium service tier, it should complete within 3 hours.
  • By storing your data in Azure SQL Database, you take advantage of many fault tolerance and secure infrastructure capabilities that you would otherwise have to design, acquire, implement, and manage. Azure SQL Database has a built-in high availability subsystem that protects your database from failures of individual servers and devices in a datacenter. Azure SQL Database maintains multiple copies of all data in different physical nodes located across fully independent physical sub-systems to mitigate outages due to failures of individual server components, such as hard drives, network interface adapters, or even entire servers. At any one time, three database replicas are running—one primary and two or more replicas. Data is written to the primary and one secondary replica using a quorum based commit scheme before the transaction is considered committed. If the hardware fails on the primary replica, Azure SQL Database detects the failure and fails over to the secondary replica. In case of a physical loss of a replica, a new replica is automatically created. So there are always at minimum two physical, transactionally consistent copies of your data in the datacenter.
  • A self-service feature available for Basic, Standard, and Premium databases. Supports business continuity by recovering a database from a recent backup after accidental data corruption or deletion.

    Automatic backups
    Full backups weekly
    Differential backup daily
    Log backups every 5 minutes
    Daily and weekly backups automatically uploaded to geo-redundant Azure Storage

    Self-service restore
    Point-in-time up to a second granularity
    REST API, Windows PowerShell, or Portal
    Creates a new database in the same logical server

    Tiered retention policy
    Basic - 7 days
    Standard - 14 days
    Premium - 35 days
    No additional cost to retain backups

  • With all tiers of SQL Database now supporting active geo-replication, do you still talk to customers about geo-restore?  Or do you tell them to just use active geo-replication?

    Bill Gibson: Both are valid.  Geo replication still doubles the cost and for many customers is more than they want.

    A little more on this.  MYOB, our largest external user, relies on geo-restore exclusively at this point, as an example.
     
    Regarding doubling the cost, we still have a now-deprecated option, Standard Geo-Replication, intended for DR only.  This option produces a single non-accessible secondary, which is only accessible after failover.  A Standard  geo-replication secondary is charged at 75% of the full database price.  This option is being discontinued and being replaced by active geo-replication, which as you observe is now available on all editions, which, because it results in a readable secondary, is charged at 100%.  
  • Enjoy business continuity and focus on building apps instead of keeping things running.

    Active geo-replication—available for Premium databases today—gives you the richest business continuity solution with the least risk of data loss and the most rapid recovery time. 
    Extends standard geo-replication with up to four geo-replicated secondaries online and readable at all times, and that can also be used for load balancing or to provide low-latency access to replicated data anywhere in the world. 
    Coming in 2016 for Basic and Standard databases.
    Coming in 2016 manual and automatic failover options
  • Based on customer feedback, Azure SQL Database is introducing new service tiers to help customers more easily innovate with cloud-designed database workloads. At the heart of this change, the new tiers deliver predictable performance across a spectrum of six performance levels for light- to heavy-weight transactional application demands. Additionally, the new tiers offer a spectrum of business-continuity features, a stronger uptime SLA, larger database sizes for less money, and an improved billing experience.
  • Migration tools
    Tools used include SQL Server Management Studio (SSMS), the SQL Server tooling in Visual Studio, and the SQL Azure Migration Wizard (SAMW), as well as the preview of the new Azure management portal. Be sure to install the latest versions of the client tools, as earlier versions are not compatible with the preview of the latest SQL Database Update. The role of each tool is summarized below together with a link for installing/accessing the latest version.
    SQL Server Management Studio (SSMS)
    SSMS can be used to deploy a compatible database directly to Azure SQL Database or to export a logical backup of the database as a BACPAC, which can then be imported, still using SSMS, to create a new Azure SQL Database. You cannot use the preview portal to import a BACPAC yet.
    You must use the latest version of SSMS to work with the preview of Azure SQL Database Latest Update which is available in CU5 of SQL Server 2014 or by downloading the latest version of the tools from http://msdn.microsoft.com/en-us/evalcenter/dn434042.aspx.
    SQL Azure Migration Wizard (SAMW)
    SAMW can be used to analyze the schema of an existing database for compatibility with Azure SQL Database, and in many cases can be used to generate and then deploy a T-SQL script containing schema and data. The wizard will report errors during the transformation if it encounters schema content that it cannot transform. If this occurs, the generated script will require further editing before it can be deployed successfully. SAMW will process the body of functions or stored procedures which is normally excluded from validation performed by the SQL Server tooling in Visual Studio (see below) so may find issues that might not otherwise be reported by validation in Visual Studio alone. Combining use of SAMW with the SQL Server tooling in Visual Studio can substantially reduce the amount of work required to migrate a complex schema.
    Be sure to use the latest version of the SQL Azure Migration Wizard from CodePlex at http://sqlazuremw.codeplex.com/.
    SQL Server tooling in Visual Studio
    The SQL Server tooling in Visual Studio can be used to create and manage a database project comprising a set of T-SQL files for each object in the schema. The project can be imported from a database or from a script file. Once created, the project can be targeted at the latest preview of Azure SQL Database; building the project then validates schema compatibility. Clicking on an error opens the corresponding T-SQL file allowing it to be edited and the error corrected. Once all the errors are fixed the project can be published, either directly to SQL Database to create an empty database or back to (a copy of) the original SQL Server database to update its schema, which allows the database to be deployed with its data using SSMS as above.
    You must install and use the preview of the SQL Server database tooling for Visual Studio with support for the preview of Azure SQL Database Latest Update V12. Make sure you have Visual Studio 2013 with Update 4 installed first. See this blog post for more details of this preview release and how to install it: http://blogs.msdn.com/b/ssdt/archive/2014/12/18/sql-server-database-tooling-preview-release-for-the-latest-azure-sql-database-update-v12-preview.aspx.
    You can keep track of updates to this software on the team blog at http://blogs.msdn.com/b/ssdt/.
  • Enables query capabilities across common Hadoop distributions (HDP & Cloudera) and Hadoop file formats in Azure storage.
    Polybase for querying & managing non-relational Hadoop and relational data

    Allows leveraging existing SQL skills and BI tools
    Supports multiple non-relational file formats
    Improved time-to-insights & simplified ETL
  • When it comes to key BI investments we are making it much easier to manage relational and non-relational data with Polybase technology that allows you to query Hadoop data and SQL Server relational data through single T-SQL query. One of the challenges we see with Hadoop is there are not enough people out there with Hadoop and Map Reduce skillset and this technology simplifies the skillset needed to manage Hadoop data. This can also work across your on-premises environment or SQL Server running in Azure.
  • Auto backups, every 4 hours, in Azure Storage and geo-replicated
    On-demand backups in Azure Storage, user can enable geo-replication
    REST API, PowerShell or Azure Portal
    Scheduled exports for long-term retention
    Near-Online backup/restore based on storage snapshots
    Backups retention policy:
    Auto backups, up to 35 days
    On-demand backups retained indefinitely

    Does SQL DW support data redundancy? 
    Yes.  As SQL Data Warehouse separates compute and storage, all your data is directly written to geo-redundant Azure Storage (RA-GRS).  Geo-redundant storage replicates your data to a secondary region that is hundreds of miles away from the primary region.  In both primary and secondary regions, your data is replicated three times each, across separate fault domains and upgrade domains.  This ensures that your data is durable even in the case of a complete regional outage or disaster that renders one of the regions unavailable.  To learn more about Read-Access Geo-Redundant Storage, read Azure Storage Redundancy Options.
    Is geo-restore used for disaster recovery?
    Geo-Restore is designed to recover your database in case it becomes unavailable due to a disruptive event.  You can restore a database from a geo-redundant backup to create a new database in any Azure region.  Because the backup is geo-redundant it can be used to recover a database even if the database is inaccessible due to an outage.  Geo-Restore feature comes with no additional charges.  To learn more about Geo-Restore, refer to Recover an Azure SQL Database from an outage.
    Does SQL DW support a point-in-time restore?  Are the databases automatically backed up?
    Yes to both.  A point-in-Time restore is designed to restore your database to an earlier point in time.  Azure SQL Data Warehouse service protects all databases with automatic backups every 4 hours and retains them for 35 days to provide you with a discrete set of restore points.  These backups are stored on RA-GRS Azure Storage and are therefore geo-redundant by default.  The automatic backup and point-in-time restore features come with no additional charges and provide a zero-cost and zero-admin way to protect databases from accidental corruption or deletion.  To learn more about Point In Time Restore, refer to Azure SQL Database Point in Time Restore.


    [4/15/2016 5:38 PM]
    Hey Matt, since SQL DW data is on RA-GRS, can I fire up another SQL DW to use that read-only copy?
    [4/15/2016 5:38 PM] Matt Usher:
    No
    [4/15/2016 5:39 PM]
    k...if the primary region goes out, how do I use that read-only copy?
    [4/15/2016 5:39 PM] Matt Usher:
    The data is for redundancy that can be recovered by engineering for brand protection.
    We snapshot every 4/8 hours and you can create a geo-restore from that to any other DC.

  • One of the key differentiators of Azure is the breadth of managed data services that you have at your disposal. You can run anything you want in a VM, but you’re responsible for a lot of the maintenance and management. You do have ultimate control over everything, but do you really need to control everything, or are you OK giving up some of this sense of control for reduced cost, improved functionality, easier scaling, and reduced human overhead needs?
  • On-prem heavy
  • 1) Copy source data into the Azure Data Lake Store (twitter data example) 2) Massage/filter the data using Hadoop (or skip using Hadoop and use stored procedures in SQL DW/DB to massage data after step #5) 3) Pass data into Azure ML to build models using Hive query (or pass in directly from Azure Data Lake Store) 4) Azure ML feeds prediction results into the data warehouse 5) Non-relational data in Azure Data Lake Store copied to data warehouse in relational format (optionally use PolyBase with external tables to avoid copying data) 6) Power BI pulls data from data warehouse to build dashboards and reports 7) Azure Data Catalog captures metadata from Azure Data Lake Store and SQL DW/DB 8) Power BI and Excel can pull data from the Azure Data Lake Store via HDInsight 9) To support high concurrency if using SQL DW, or for easier end-user data layer, create an SSAS cube
  • Offer structures: A la carte, Data Intensive, Analytics Intensive, Stream Intensive, All-inclusive
