This presentation shows the results of a proof of concept (POC) run on SQL Server 2014 Enterprise at a company in the banking industry.
In summary, this document shows performance gains of up to 6x in SQL data processing and queries, and 30% better data compression ratios.
I presented this deck at an event organized by Microsoft in June 2014, in the context of the new tools Microsoft has developed to meet business intelligence (BI) needs.
1. POC SQL Server 2014
Risk Management Division,
Banking Industry.
Sebastián Rodríguez Robotham
June 11, 2014.
2. POC Results and Conclusions.
Risk Division Context and Current DW Architecture.
POC – Objectives and Work Plan
3. Context.
• Credit risk management: the potential loss a bank assumes when a customer defaults on contracted obligations.
• Whoever collects first collects best.
• Good customers DO NOT WAIT.
4. Areas and Processes Involved
Departments
- Retail Risk.
- Corporate Risk.
- SME Risk.
- Models & Tools.
- Commercial Banking.
Customer Life Cycle (Credit Risk View)
User Types
Business Users
Power Users
Administrators
5. Architecture
Data Sources
- Excel
- TXT
- CSV
- SQL Server 2000 - 2005
- Oracle
- Access
Some Additional Facts…
- About 55 daily load processes.
- Processing time: 2 ½ hours.
- "NO OPERATOR" APPROACH.
- Everything is done with T-SQL (a sketch follows at the end of this slide):
  - Connecting to FTP servers and downloading files.
  - Loading automated data sources.
  - Automatic daily processing.
  - Sending emails and notifications.
- 1.6 billion rows.
- 10 cubes.
- 27 dimensions.
- 150 users.
- 1 TB of compressed data (2008 R2).
- Data and structure changes are very frequent.
- Implemented in ROLAP.
8 cores at 2.3 GHz (Intel Xeon E5-2630), 38 GB RAM, 2 TB disk, Windows Server 2008 R2 Enterprise x64, SQL Server 2008 R2 Enterprise x64.
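As an illustration of the "no operator" approach, here is a minimal sketch of the notification step, assuming Database Mail is configured; the procedure name, mail profile, and recipient are hypothetical, not the bank's actual objects:

    -- Send a completion notice at the end of the daily load (illustrative names).
    CREATE PROCEDURE dbo.usp_NotifyLoadComplete
        @ProcessName sysname
    AS
    BEGIN
        DECLARE @msg nvarchar(500) =
            CONCAT(N'Load process "', @ProcessName, N'" finished at ',
                   CONVERT(nvarchar(30), SYSDATETIME(), 120));

        EXEC msdb.dbo.sp_send_dbmail      -- requires a configured Database Mail profile
            @profile_name = N'DWNotifications',
            @recipients   = N'risk-dw-team@example.com',
            @subject      = N'Daily DW load finished',
            @body         = @msg;
    END;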
6. Architecture
Model Type      | Tables without indexes  | 3NF Tables                              | Star Schema
Goal            | Loads / basic cleansing | Integrity, traceability, business rules | Reporting, single version of the truth
Characteristics
A. Low error impact when a data source is modified.
B. High degree of data traceability: audits.
C. Business logic in dimension loads can be modified easily.
D. Easy administration of filegroups / partitions / backups.
E. Adaptability to business changes.
7. POC Results and Conclusions.
Risk Division Context and Current DW Architecture.
POC – Objectives and Work Plan
8. POC Objectives
General Objective
Test the new features of SQL Server 2014 in order to:
- Improve storage performance.
- Improve query response times.
- Improve load process times.
Administration
Analyst agility / efficiency
Timeliness of information
Specific Objectives
- Response times: reduce the heaviest queries (over 30 seconds).
- Availability: before 9:00 AM; as soon as possible when there are delays.
- Compression: increase the historical data available in the datamart.
- Administration: simplicity through the NO OPERATOR approach.
9. Architecture: Changes to the Model.
SQL Server 2008 R2:        Tables without indexes | Clustered index, FKs             | Clustered index, indexes
SQL Server 2014 In-Memory: In-Memory              | In-Memory, ColumnStore, rowstore | ColumnStore
Tests
A. At least 3 runs per test (a sketch of the two new table types follows below).
B. Comparisons are relative to row tables (RowTables).
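A minimal sketch of the two table types introduced in the 2014 model; table, column, and index names are illustrative, and the memory-optimized table assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup:

    -- Memory-optimized table (SQL Server 2014 In-Memory OLTP); illustrative names.
    CREATE TABLE dbo.DimClient_InMem
    (
        ClientId   int           NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        ClientName nvarchar(200) NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

    -- Clustered columnstore index: the whole table is stored column-wise.
    CREATE CLUSTERED COLUMNSTORE INDEX ccsi_FactDebt ON dbo.FactDebt;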
10. Test Plan
Activity Planning.
A. Create database scripts.
B. Populate with real data.
C. Run query execution tests.
D. Run tests with end users in a real environment.
Tests to Run in Each Environment
Load environment:
- Data load optimization.
ODS environment:
- Data processing optimization.
- Query optimization.
OLAP environment:
- Data processing optimization.
- Query optimization.
- Improved environment administration.
- Improved backups.
- Improved end-user accessibility.
11. POC Results and Conclusions.
Risk Division Context and Current DW Architecture.
POC – Objectives and Work Plan
12. DBCC DROPCLEANBUFFERS
Removes all clean buffers (cached data pages) from the buffer pool.
DBCC FREEPROCCACHE
Removes all execution plans from the plan cache.
DBCC FREESYSTEMCACHE('ALL')
Releases all unused cache entries from all caches.
DBCC FREESESSIONCACHE
Flushes the distributed-query connection cache used by distributed queries against an instance of Microsoft SQL Server.
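Combined, these commands give each test run a cold cache. A plausible reset script (the deck lists the commands but not the exact script; the CHECKPOINT is an assumption, added so dirty pages are flushed before the buffer pool is emptied):

    CHECKPOINT;                    -- flush dirty pages to disk first
    DBCC DROPCLEANBUFFERS;         -- cold buffer pool
    DBCC FREEPROCCACHE;            -- cold plan cache
    DBCC FREESYSTEMCACHE('ALL');   -- clear all other caches
    DBCC FREESESSIONCACHE;         -- clear distributed-query connection cache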
13. Results: Data Load.
Test info.
- Tables with 1 million rows.
- 207 columns.
- Numbers stored as text with separators; dates stored as text (a conversion sketch follows below).
# | Test | Normal, 3 runs (avg, mm:ss) | ColumnStore, 3 runs (avg) | Diff | % Improvement | In-Memory, 3 runs (avg) | Diff | % Improvement
1 | Run the stored procedure that loads the source data file | 12:49 / 12:49 / 12:49 (12:49) | 17:57 / 17:57 / 17:57 (17:57) | -05:08 | -40.05% | 10:08 / 10:08 / 10:08 (10:08) | 02:41 | 20.94%
Results.
- The daily data load is 21% faster with in-memory tables.
- The benefit directly impacts daily data availability.
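A hedged sketch of the text-to-native conversion such a load has to perform; table and column names are illustrative, and the separator convention (dot for thousands, comma for decimals, dd/mm/yyyy dates) is an assumption:

    -- Convert text-format numbers and dates while loading from a raw staging table.
    INSERT INTO dbo.FactDebt (Periodo, FechaProceso, MontoDeuda)
    SELECT CONVERT(int, src.Periodo),
           CONVERT(date, src.FechaProceso, 103),                -- style 103 = dd/mm/yyyy
           CONVERT(decimal(18,2),
                   REPLACE(REPLACE(src.MontoDeuda, '.', ''),    -- strip thousands separators
                           ',', '.'))                           -- comma decimal -> dot
    FROM   dbo.StageRaw AS src;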
14. Results: Data Processing
Test info.
- Tables with 1 million rows.
- 42 and 109 columns.
# | Test | Normal, 3 runs (avg, mm:ss) | ColumnStore, 3 runs (avg) | Diff | % Improvement | In-Memory, 3 runs (avg) | Diff | % Improvement
1 | New column in a dimension: create, populate, and alter to NOT NULL (a sketch follows below) | 17:32 / 17:32 / 17:32 (17:32) | 34:15 / 34:15 / 34:15 (34:15) | -16:43 | -95.34% | – | – | –
2 | Update one field in a dimension table | 01:05 / 01:58 / 01:11 (01:25) | 04:26 / 05:06 / 04:52 (04:48) | -03:23 | -240.16% | 00:17 / 00:23 / 00:18 (00:19) | 01:05 | 77.17%
3 | Update several fields in a dimension table | 01:34 / 01:30 / 01:24 (01:29) | 05:02 / 04:53 / 05:04 (05:00) | -03:30 | -235.45% | 00:18 / 00:16 / 00:17 (00:17) | 01:12 | 80.97%
4 | Update fields in a dimension table with a filter | 01:24 / 00:49 / 01:10 (01:08) | 03:28 / 03:25 / 03:46 (03:33) | -02:25 | -214.78% | 00:12 / 00:14 / 00:12 (00:13) | 00:55 | 81.28%
5 | Simple row deletion with a filter | 00:47 / 00:51 / 01:26 (01:01) | 00:45 / 00:14 / 00:14 (00:24) | 00:37 | 60.33% | 00:02 / 00:15 / 00:01 (00:06) | 00:55 | 90.22%
Results.
- Data processing is 6x faster with in-memory tables.
- The benefit directly impacts daily data availability and reprocessing.
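A hedged sketch of test 1 on a disk-based dimension; the table and column names are illustrative. Note that test 1 has no in-memory timings above, which is consistent with SQL Server 2014 memory-optimized tables not supporting ALTER TABLE (the table would have to be dropped and recreated):

    ALTER TABLE dbo.DimClient ADD SegmentoRiesgo varchar(20) NULL;   -- create the column
    UPDATE dbo.DimClient SET SegmentoRiesgo = 'SIN SEGMENTO';        -- populate every row
    ALTER TABLE dbo.DimClient                                        -- then enforce NOT NULL
        ALTER COLUMN SegmentoRiesgo varchar(20) NOT NULL;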
15. Results: Query Execution (Ad Hoc Queries)
Test info.
- Tables with 90 million rows.
- 159 columns.
# | Test | Normal, 3 runs (avg, mm:ss) | ColumnStore, 3 runs (avg) | Diff | % Improvement | In-Memory, 3 runs (avg) | Diff | % Improvement
1 | Basic statistics on table TabFAC_DeudaAnaMes_2 with a period filter (an example query follows below) | 01:19 / 01:05 / 01:10 (01:11) | 00:04 / 00:04 / 00:03 (00:04) | 01:08 | 94.86% | – | – | –
2 | Typical query joining the fact table with dimension tables, with filter, grouping, and ordering | 00:32 / 00:31 / 00:31 (00:31) | 00:09 / 00:09 / 00:09 (00:09) | 00:22 | 71.28% | – | – | –
3 | Join of an in-memory table with a table of a different type (normal, columnstore, and in-memory) | 03:15 / 03:02 / 03:18 (03:12) | 01:39 / 01:34 / 01:04 (01:26) | 01:46 | 55.30% | 01:19 / 01:20 / 01:20 (01:20) | 01:52 | 58.43%
Results.
- Ad hoc queries are 3x faster using columnar tables.
- Improves power users' processes.
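A hedged sketch of the kind of ad hoc query behind test 1; the table name comes from the deck, but the column names (Periodo, MontoDeuda, DiasMora) are hypothetical:

    -- Basic statistics with a period filter, where a columnstore scan shines.
    SELECT COUNT(*)        AS Filas,
           AVG(MontoDeuda) AS DeudaPromedio,   -- hypothetical column
           MAX(DiasMora)   AS MaxDiasMora      -- hypothetical column
    FROM   dbo.TabFAC_DeudaAnaMes_2
    WHERE  Periodo = 201405;                   -- hypothetical period filter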
16. Results: Query Execution (SRD Reporting System)
Test info.
- Tables with 90 million rows.
- 159 columns.
# | Test | Normal, 3 runs (avg, mm:ss) | ColumnStore, 3 runs (avg) | Diff | % Improvement
1 | SRD report: Customer Life Cycle Analysis (Judicial) | 01:35 / 01:10 / 01:16 (01:20) | 00:21 / 00:21 / 00:20 (00:21) | 01:00 | 74.27%
2 | SRD report: Risk Premiums by Product | 02:33 / 01:14 / 01:12 (01:40) | 00:41 / 00:12 / 00:11 (00:21) | 01:18 | 78.60%
3 | SRD report: CM IMC Convenios | 03:12 / 01:39 / 01:45 (02:12) | 00:53 / 00:23 / 00:23 (00:33) | 01:39 | 75.00%
Results.
- The reporting system is 5x faster using columnar tables.
- Improves the daily and monthly reporting run by business analysts.
17. Results: Storage Tests.
Table Type  | Normal Tables (MB) | Columnar Tables (MB) | Savings (MB) | Row Count
Dimensions  | 3,684              | 935                  | 2,749 (75%)  | 35,515,177
Facts       | 54,823             | 40,181               | 14,642 (27%) | 459,148,171
Total       | 58,507             | 41,116               | 17,391 (30%) | 494,663,348
Test info.
- Server with 200 GB of available space.
- Tables were partially loaded to run the tests.
- 80% of the columns in the fact tables are numeric data (debts, days past due, cash flows…).
Results.
- With this configuration, data compression improves by 30% without hurting performance.
- The ratio can improve further with Archival Compression (a sketch follows below).
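A minimal sketch of enabling Archival Compression in SQL Server 2014; the index and table names are illustrative:

    -- Rebuild the columnstore with the more aggressive archive codec.
    ALTER INDEX ccsi_FactDebt ON dbo.FactDebt
    REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);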
18. Results: Summary
Tests Performed
Load environment:
- Data load optimization: 20%.
ODS environment:
- Data processing optimization: 6x.
- Query optimization: 3x.
OLAP environment:
- Data processing optimization: 6x.
- Query optimization: 5x.
- Improved environment administration: 30% less storage.
- Improved backups: 30% less storage.
- Improved end-user accessibility.
Implementation Impacts
A. Manual Implementation of Referential Integrity
• Currently implemented with FK constraints.
• "Instead of" triggers will be implemented to avoid inconsistencies (a sketch follows below).
• Programming impact: low.
B. Cross-Database Queries with In-Memory Tables
• The LOAD and ODS environments share tables for ETL tasks and for moving data between models.
• Specialized in-memory tables will have to be created in each model.
• Programming impact: medium.
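A hedged sketch of the trigger-based integrity check the deck describes, motivated by SQL Server 2014 memory-optimized tables not supporting FOREIGN KEY constraints. All object names are hypothetical, and the trigger is assumed to live on a disk-based table (2014 does not allow triggers on memory-optimized tables):

    CREATE TRIGGER dbo.trg_FactDebt_CheckClient
    ON dbo.FactDebt                 -- hypothetical disk-based fact table
    INSTEAD OF INSERT
    AS
    BEGIN
        -- Reject the batch if any row references an unknown client.
        IF EXISTS (SELECT 1
                   FROM   inserted AS i
                   WHERE  NOT EXISTS (SELECT 1
                                      FROM   dbo.DimClient_InMem AS d
                                      WHERE  d.ClientId = i.ClientId))
            THROW 50001, 'Referential integrity violation: unknown ClientId.', 1;

        -- Otherwise perform the insert (INSTEAD OF triggers do not re-fire here).
        INSERT INTO dbo.FactDebt SELECT * FROM inserted;
    END;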
In this session we will take a closer look at the unique design points of Microsoft SQL Server’s in-memory solution and the significant impact it can have on your business.
Before we get into the details of in-memory, let's talk about why so many people in today's business world are interested in this technology. One of the key goals businesses are trying to achieve today is getting to a state, or closer to a state, where the business can be driven in real time, powered by real-time insights. The barriers here are speed and throughput. With data volumes getting larger and larger, how do you gain insights quickly across your transactional business, your historical data, and third-party data (relational or non-relational), so you can make better and faster business decisions?
This is the heart of the problem that in-memory technology helps to solve. With Microsoft SQL Server 2014 in-memory technologies built in across all database workloads, we'll show how we can drive significantly faster transactions, faster queries, and faster insights. It also increases throughput in terms of the number of users. The key thing to remember in this presentation is that all of the in-memory capabilities we will share are built into SQL Server 2014, meaning a single SKU, not options and add-ons.
Now let's take a closer look at how we can impact your business with our in-memory technology. We are the only provider to date that can speed transactions as well as queries and insights with in-memory technology optimized for each workload: OLTP, data warehousing, and analytics.
With our new in-memory OLTP engine in SQL Server 2014, we have customers that have seen up to 30x faster transaction processing. I am not talking about query speed, but actual transaction write speed, up to 30x faster. I know many of you might be thinking: well, Oracle and other database vendors are talking 100x. What they are talking about is query speed, not transactional speed. We are the only vendor that delivers an in-memory engine designed for OLTP transaction performance gains.
There's also a built-in in-memory columnstore for data warehousing workloads to speed queries. We were already benchmarking over 100x performance gains with many customers in the SQL Server 2012 release of the in-memory columnstore. With SQL Server 2014, the in-memory columnstore gets even better; we will talk about that in just a few minutes. Again, we can also increase query speed by over 100x.
Finally, we offer business users the ability to analyze data and data models much faster with built-in in-memory capabilities for Excel through PowerPivot, and for Analysis Services. The benefit is that you can analyze billions of rows of data per second in Excel, meaning a business user can analyze data of nearly any size with the tools they are most familiar with.
This is what we mean when we say “driving real-time business with real-time insights.” We can significantly speed your transaction business tied to your revenue stream. We can massively speed the process to analyze both real-time transaction data, along with historical and third party data from IT or business users. This is why we are already seeing in-memory technologies transforming the way businesses run.
Let's start off by taking a look at how in-memory technologies have evolved in SQL Server. This may be a surprise to many of you, but our in-memory journey actually started way back in SQL Server 2008 R2, when our engineering team made a key design decision to build in-memory technology into the core data platform, rather than acquire and stitch in an in-memory solution to run in parallel to the core database. Around this timeframe, both Oracle and IBM went down the acquire-and-stitch route: Oracle acquired the TimesTen in-memory database and IBM acquired Netezza. The challenge with the acquire-and-stitch solution is that in-memory impacts the overall database, which in the past was designed to run on disk. So the challenge we often hear from customers using TimesTen or Netezza is that using in-memory breaks other core DB2 and Oracle database functionality. For example, we have heard from many customers that RAC breaks when you use TimesTen. Not only is there a functional compatibility issue, the customer is also forced to learn a new set of APIs because it is a completely different database.
This is not the design approach we took. We believe our customers want to utilize the other key capabilities that SQL Server and the broader Microsoft Data Platform have to offer in conjunction with in-memory. They don't want to use a different tool set for an in-memory database than they do for a disk database; they still want to use T-SQL and SQL Server management when enabling in-memory. This is the unique design approach we took back when we first started improving analytics by building in-memory into PowerPivot for billions of rows of data analysis in Excel.
Then in SQL Server 2012, we expanded our in-memory footprint with the same built-in approach by adding in-memory to Analysis Services so IT could build data models much faster, and introduced an in-memory columnstore that could improve query speeds by 100x.
With SQL Server 2014, we are covering the final workload by introducing an in-memory OLTP solution—or in-memory rowstore—to significantly speed transactional performance. We also enhanced the in-memory columnstore with faster performance and significantly higher data compression so memory utilization can be optimized.
Now, here's one point to note if you listened to the keynote at Oracle OpenWorld. Larry announced a new in-memory columnstore that would be built into the core database. So, as you can see, Oracle is following in our footsteps and realizing the stitch strategy doesn't work when it comes to something as critical to the overall database as in-memory. He also mentioned that it will be an option; we all know what that means. And their new solution is in an early beta. SQL Server 2014 is our third release of in-memory solutions across the data platform, in a single SKU, no options or add-ons.
Before we jump into Microsoft's in-memory engineering design points, let's take a look at a couple of key trends that have impacted our design. One, of course, is the significant drop in memory pricing that makes in-memory databases feasible for customers. The second is CPU performance flattening out, meaning that just throwing more compute at a problem may not resolve performance bottlenecks. Our design approach took into account how to better utilize existing CPU capacity, as we often hear from customers that typical CPU utilization is below the 50% mark, often due to contention.
Let's take a look at the first design point our engineering team adopted back during the release of SQL Server 2008 R2, which was to build in-memory technology in and make it pervasive throughout the platform, from analytics to data warehousing to OLTP. One of the main benefits of building in-memory into the platform is that you as the customer don't have to learn new development tools or new APIs, and you can take advantage of all of the other rich features and capabilities in SQL Server along with in-memory. You can even take advantage of other data platform services on-premises or in the cloud with Microsoft Azure along with in-memory performance. This is not the case when you look at competitive technologies that have chosen to acquire and stitch together a solution, like TimesTen from Oracle or Netezza from IBM. The stitched solutions often break core database functionality for both DB2 and Oracle, as the core databases were designed to run on disk. For example, RAC does not work with TimesTen, and you have to learn a whole new set of APIs and tools to use TimesTen.
Ferranti Computer Systems, which we'll take a closer look at later in the session, designs software for utility companies. They are helping to transform the utility industry and revolutionize the way electricity is consumed and sold by improving the way utilities leverage data. They not only need the help of in-memory technology to quickly process large amounts of relational data, but they also need a solution for tackling non-relational data. Because SQL Server 2014 offered the built-in approach for in-memory, they were able to utilize in-memory OLTP as well as our Microsoft Azure HDInsight service to tackle big data. They are now able to write more than 200 million rows in 15 minutes.
The last point I will make on "built-in" is that because we have designed in-memory into the platform, it's not only pervasive throughout the platform across all workloads, it is also built into a single enterprise SKU, so you don't have to pay more or purchase additional SKUs to gain all of the in-memory capabilities.
Now let's take a closer look at our unique in-memory design points, from our engineers deciding to make in-memory pervasive by building it into the data platform to how we have made it easy to implement in-memory in your applications.
So Ferranti is just one example.
But imagine what you could do with your business if you could determine customer purchasing trends in real time, not at the end of the day, week, or month, but as they happen. And if you could tie that to social sentiment data in real time and combine it with historical data or current events or even mobile data, how could you transform your business?
Speed and throughput of data are at the heart of this puzzle, and in-memory technology will revolutionize databases, much like virtualization revolutionized physical compute.
Imagine a new level of customer personalization by acting on a world of data in real time.