This document describes the capabilities and advantages of SQL Server, including its industry-leading performance, enhanced security, artificial intelligence and machine learning capabilities, and support for a variety of workloads and scenarios in the cloud and on-premises. SQL Server offers unified access to all data, simplified administration, and tools for building intelligent applications.
Not to be confused with Oracle Database Vault (a commercial database security product), Data Vault Modeling is a specific data modeling technique for designing highly flexible, scalable, and adaptable data structures for enterprise data warehouse repositories. It is not a replacement for star schema data marts (and should not be used as such). This approach has been used in projects around the world (Europe, Australia, USA) for the last 10 years but is still not widely known or understood. The purpose of this presentation is to provide attendees with a detailed introduction to the technical components of the Data Vault Data Model, what they are for, and how to build them. The examples will give attendees the basics of how to build and design structures when using the Data Vault modeling technique. The target audience is anyone wishing to explore implementing a Data Vault style data model for an Enterprise Data Warehouse, Operational Data Warehouse, or Dynamic Data Integration Store. See more content like this by following my blog http://kentgraziano.com or follow me on Twitter @kentgraziano.
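To make those components concrete, here is a minimal sketch of the three core Data Vault structures (a hub, a link, and a satellite) as table definitions. It uses SQLite purely for portability, and every table and column name (hub_customer, customer_bk, and so on) is illustrative rather than taken from the presentation:

```python
# Minimal Data Vault sketch: hubs hold business keys, links hold
# relationships, satellites hold historized descriptive attributes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hub: one row per unique business key, plus load metadata.
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,      -- hash of the business key
    customer_bk   TEXT NOT NULL UNIQUE,  -- the business key itself
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE hub_order (
    order_hk      TEXT PRIMARY KEY,
    order_bk      TEXT NOT NULL UNIQUE,
    load_dts      TEXT NOT NULL,
    record_source TEXT NOT NULL
);
-- Link: records a relationship between hubs, and nothing else.
CREATE TABLE link_customer_order (
    customer_order_hk TEXT PRIMARY KEY,
    customer_hk       TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    order_hk          TEXT NOT NULL REFERENCES hub_order(order_hk),
    load_dts          TEXT NOT NULL,
    record_source     TEXT NOT NULL
);
-- Satellite: descriptive attributes, historized by load timestamp.
CREATE TABLE sat_customer (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer(customer_hk),
    load_dts      TEXT NOT NULL,
    name          TEXT,
    city          TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hk, load_dts)   -- new row per change, no updates
);
""")
conn.close()
```

The design choice the model enforces is visible even in this toy version: keys, relationships, and context are separated, so new sources or attributes can be added without restructuring what already exists.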
An introduction to self-service data with Dremio. Dremio reimagines analytics for modern data. Created by veterans of open source and big data technologies, Dremio is a fundamentally new approach that dramatically simplifies and accelerates time to insight. Dremio empowers business users to curate precisely the data they need, from any data source, then accelerate analytical processing for BI tools, machine learning, data science, and SQL clients. Dremio starts to deliver value in minutes, and learns from your data and queries, making your data engineers, analysts, and data scientists more productive.
Tomer Shiran is the founder and Chief Product Officer (CPO) of Dremio. Tomer was the 4th employee and VP of Product at MapR, a pioneer in Big Data analytics. He also held numerous product management and engineering positions at IBM Research and Microsoft, and founded several websites that served millions of users. He holds a Master's degree in Computer Engineering from Carnegie Mellon University and a Bachelor of Science in Computer Science from the Technion - Israel Institute of Technology.
The Modern Data Stack meetup is delighted to welcome Tomer Shiran. From Apache Drill to Apache Arrow and now Apache Iceberg, he and his teams have anchored Dremio's choices in a vision of an "open" data platform built on open source technologies. Beyond these values, which keep customers from being locked into proprietary formats, he is also mindful of the costs such platforms incur. He also knows how to deliver features that transform data management, through initiatives such as Nessie, which opens the road to Data as Code and multi-process transactions.
The Modern Data Stack Meetup gives Tomer Shiran "carte blanche" to share his experience and his vision of the Open Data Lakehouse.
How to identify the correct Master Data subject areas & tooling for your MDM... - Christopher Bradley
1. What are the different Master Data Management (MDM) architectures?
2. How can you identify the correct Master Data subject areas & tooling for your MDM initiative?
3. A reference architecture for MDM.
4. Selection criteria for MDM tooling.
chris.bradley@dmadvisors.co.uk
Delta Lake delivers reliability, security and performance to data lakes. Join this session to learn how customers have achieved 48x faster data processing, leading to 50% faster time to insight after implementing Delta Lake. You’ll also learn how Delta Lake provides the perfect foundation for a cost-effective, highly scalable lakehouse architecture.
Data Lakehouse, Data Mesh, and Data Fabric (r1) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
Power BI Desktop | Power BI Tutorial | Power BI Training | Edureka - Edureka!
This Edureka "Power BI Desktop" tutorial will help you to understand what is Power BI Desktop with examples and demo. Below are the topics covered in this tutorial:
1. Why Power BI?
2. What is Power BI?
3. Who uses Power BI?
4. Flow of Work
5. Power BI Trends
Databricks: A Tool That Empowers You To Do More With Data - Databricks
In this talk we will present how Databricks has enabled the author to achieve more with data, enabling one person to build a coherent data project with data engineering, analysis, and science components, with better collaboration, better productionization methods, larger datasets, and faster turnaround.
The talk will include a demo illustrating how the multiple functionalities of Databricks help to build a coherent data project: Databricks jobs, Delta Lake, and Auto Loader for data engineering, SQL Analytics for data analysis, Spark ML and MLflow for data science, and Projects for collaboration.
Making Data Timelier and More Reliable with Lakehouse Technology - Matei Zaharia
Enterprise data architectures usually contain many systems—data lakes, message queues, and data warehouses—that data must pass through before it can be analyzed. Each transfer step between systems adds a delay and a potential source of errors. What if we could remove all these steps? In recent years, cloud storage and new open source systems have enabled a radically new architecture: the lakehouse, an ACID transactional layer over cloud storage that can provide streaming, management features, indexing, and high-performance access similar to a data warehouse. Thousands of organizations including the largest Internet companies are now using lakehouses to replace separate data lake, warehouse and streaming systems and deliver high-quality data faster internally. I’ll discuss the key trends and recent advances in this area based on Delta Lake, the most widely used open source lakehouse platform, which was developed at Databricks.
Delta Lake OSS: Create a reliable and performant Data Lake, by Quentin Ambard - Paris Data Engineers!
Delta Lake is an open source framework that lives on top of Parquet in your data lake to provide reliability and performance. It was open-sourced by Databricks this year and is gaining traction to become the de facto data lake format.
We'll see all the good Delta Lake can do for your data: ACID transactions, DDL operations, schema enforcement, batch and stream support, and more!
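As a taste of those features, here is a minimal sketch using the open source delta-spark package; the path /tmp/events and the column name are illustrative:

```python
# Delta Lake OSS basics: ACID writes, schema enforcement, time travel.
# Assumes the `pyspark` and `delta-spark` packages are installed.
from delta import configure_spark_with_delta_pip
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# ACID write: the commit is atomic, so readers never see partial files.
spark.range(100).write.format("delta").mode("overwrite").save("/tmp/events")

# Schema enforcement: appending a frame with a different schema raises an
# AnalysisException instead of silently corrupting the table.
bad = spark.createDataFrame([("oops",)], ["not_in_schema"])
try:
    bad.write.format("delta").mode("append").save("/tmp/events")
except Exception as e:
    print("rejected by schema enforcement:", type(e).__name__)

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/events")
print(v0.count())
```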
Building a Logical Data Fabric using Data Virtualization (ASEAN) - Denodo
Watch full webinar here: https://bit.ly/3FF1ubd
In the recent Building the Unified Data Warehouse and Data Lake report by the leading industry analyst firm TDWI, 64% of organizations stated that the objective of a unified Data Warehouse and Data Lake is to get more business value, and 84% of organizations polled felt that a unified approach to Data Warehouses and Data Lakes was either extremely or moderately important.
In this session, you will learn how your organization can apply a logical data fabric, and how the associated technologies of machine learning, artificial intelligence, and data virtualization can reduce time to value, increasing the overall business value of your data assets.
KEY TAKEAWAYS:
- How a Logical Data Fabric is the right approach to assist organizations to unify their data.
- The advanced features of a Logical Data Fabric that assist with the democratization of data, providing an agile and governed approach to business analytics and data science.
- How a Logical Data Fabric with Data Virtualization enhances your legacy data integration landscape to simplify data access and encourage self-service.
The world of data architecture began with applications. Next came data warehouses. Then text was organized into the data warehouse.
Then one day the world discovered a whole new kind of data being generated by organizations: machine-generated data that could be transformed into valuable insights. This was the origin of what is today called the data lakehouse. The evolution of data architecture continues today.
Come listen to industry experts describe this transformation of ordinary data into a data architecture that is invaluable to business. Simply put, organizations that take data architecture seriously are going to be at the forefront of business tomorrow.
This is an educational event.
Several of the authors of the book Building the Data Lakehouse will be presenting at this symposium.
Data Lakehouse, Data Mesh, and Data Fabric (r2) - James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean, and how do they compare to a modern data warehouse? In this session I'll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I'll also include use cases so you can see which approach will work best for your big data needs. And I'll discuss Microsoft's version of the data mesh.
Accelerating Data Ingestion with Databricks Autoloader - Databricks
Tracking which incoming files have been processed has always required thought and design when implementing an ETL framework. The Autoloader feature of Databricks looks to simplify this, taking away the pain of file watching and queue management. However, there can also be a lot of nuance and complexity in setting up Autoloader and managing the process of ingesting data with it. After implementing an automated data loading process at a major US CPG company, Simon has some lessons to share from the experience.
This session will run through the initial setup and configuration of Autoloader in a Microsoft Azure environment, looking at the components used and what is created behind the scenes. We’ll then look at some of the limitations of the feature, before walking through the process of overcoming these limitations. We will build out a practical example that tackles evolving schemas, applying transformations to your stream, extracting telemetry from the process and finally, how to merge the incoming data into a Delta table.
After this session you will be better equipped to use Autoloader in a data ingestion platform, simplifying your production workloads and accelerating the time to realise value in your data!
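For orientation, here is a hedged sketch of the pattern the session describes, as it might appear in a Databricks notebook (where a SparkSession named spark is predefined). The storage paths, container, and column names are hypothetical:

```python
# Auto Loader sketch: incremental file discovery via `cloudFiles`,
# schema evolution, a simple transformation, and a merge into a
# Delta table through foreachBatch.
from delta.tables import DeltaTable
from pyspark.sql import functions as F

SOURCE = "abfss://landing@mystorage.dfs.core.windows.net/events/"  # hypothetical
TARGET = "/mnt/delta/events"
CHECKPOINT = "/mnt/checkpoints/events"

def upsert_batch(batch_df, batch_id):
    # Merge each micro-batch into the target Delta table on the event id.
    tgt = DeltaTable.forPath(spark, TARGET)
    (tgt.alias("t")
        .merge(batch_df.alias("s"), "t.event_id = s.event_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())

stream = (
    spark.readStream.format("cloudFiles")             # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", CHECKPOINT)  # tracks inferred schema
    .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
    .load(SOURCE)
    .withColumn("ingested_at", F.current_timestamp())  # simple transformation
)

(stream.writeStream
    .foreachBatch(upsert_batch)
    .option("checkpointLocation", CHECKPOINT)
    .trigger(availableNow=True)   # process pending files, then stop
    .start())
```

The checkpoint and schema location are what spare you the file-watching and queue management the abstract mentions: Auto Loader records which files it has already seen and how the schema has evolved, so reruns pick up only new data.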
Power BI Interview Questions and Answers | Power BI Certification | Power BI ... - Edureka!
( Power BI Training - https://www.edureka.co/power-bi-training )
This Edureka "PowerBI Interview Questions and Answers" tutorial will help you unravel concepts of Power BI and touch those topics that are very vital for succeeding in Power BI Interviews.
This video helps you to learn the following topics:
1. General Power BI Questions
2. DAX
3. Power Pivot
4. Power Query
5. Power Map
6. Additional Questions
Check out our Power BI Playlist: https://goo.gl/97sJv1
Start today on a relevant and incremental MDM journey.
A turnkey MDM solution allows you to collaborate on, maintain and provision accurate and reliable data across the enterprise; however, extended implementation times can delay time to value. Many successful MDM projects start small and grow over time. Open source provides a vehicle to start your MDM journey and deliver value - today.
This slideshow will show you:
* How an integrated solution for data integration, data quality and master data management can speed up and simplify implementation
* Why an active data model allows you to quickly reflect unique data requirements
* The importance of a dynamic MDM interface that enables immediate collaboration and stewardship
To view the entire webinar with the demonstration, click on: http://nxy.in/bhl3z
If you wish to see other webinars, click on: http://nxy.in/hkidj
For Live Webinars, click here: http://nxy.in/pjeph
Delivering Data Democratization in the Cloud with Snowflake - Kent Graziano
This is a brief introduction to Snowflake Cloud Data Platform and our revolutionary architecture. It contains a discussion of some of our unique features along with some real world metrics from our global customer base.
Agenda:
Architectural Overview
Presentation to the Client
Presentation to the Server/DB
High Availability and Disaster Recovery
Extended Architecture
Setup / Installation
Tests
Use cases
Perspective 12c
How to improve the efficiency of your databases by migrating from Oracle to a professional solution built on PostgreSQL, combining the advantages of free and commercial software.
Limitless Data Integration with Pentaho - Datalytics
Presentation of Pentaho Data Integration given at the "Las Dimensiones del BI" forum in Medellín (COL). It covered the challenges of data integration today (ever more information, more diverse sources, unstructured data, etc.), how many companies still try to solve this problem with SQL programming or similar, and how Pentaho Data Integration can not only solve this problem in a very agile way, but can also be used to start analyzing the information and to perform Data Discovery and Data Visualization tasks before generating cubes, reports, etc.
I FESTIVAL DE INFORMÁTICA EDUCATIVA 2010
A general overview of Oracle as a database engine: its functionality, integration, and advantages.
Speaker: Ing. Harold Flores
Oracle consultant for SSA Sistemas
SolidQ Business Analytics Day | A new information management platform... - SolidQ
Presentation by Eladio Rincón and Javier Torrenteras at the SolidQ Business Analytics Day on March 13, 2013 in Valencia (Alicante)
- The new wave of SQL Server
- Availability Groups
- SQL Server and the cloud
- SQL Server and managing large data volumes
www.bisql.com
Vault IT Webinar: Advanced analytics and Machine Learning with virtualization of... - Denodo
Watch full webinar here: https://bit.ly/36j4ATO
Advanced data science techniques, such as machine learning, are extremely useful tools for extracting valuable insights from data. However, they put more pressure on data scientists, who have to find the right data and clean it so that it is usable. This process ends up consuming most of their time.
In this webinar, we will explain how data virtualization helps you obtain the information you need in a more efficient and agile way. Attend to discover:
- How data virtualization accelerates data acquisition and processing
- How Denodo's data virtualization solution integrates with tools such as Spark, Python, Zeppelin, etc.
- How data virtualization enables more efficient management of large data volumes
- Two customer success stories and a predictive analytics demo
Machine Learning with Azure Managed Instance - Eduardo Castro
In this presentation we show the options for implementing Machine Learning within Azure, as well as how to configure and use Python within Azure Managed Instance.
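As a companion to that topic, here is a hedged sketch of the usual way Python runs inside SQL Server-family engines such as Azure SQL Managed Instance: Machine Learning Services' sp_execute_external_script, invoked here through pyodbc. The connection string and the dbo.Numbers table are hypothetical, and ML Services must be enabled on the instance:

```python
# Run a Python script inside the database engine via
# sp_execute_external_script, exchanging data frames with T-SQL.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)

TSQL = """
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
import pandas as pd
# InputDataSet / OutputDataSet are the default frames exchanged with T-SQL.
OutputDataSet = pd.DataFrame({"doubled": InputDataSet["n"] * 2})
',
    @input_data_1 = N'SELECT n FROM dbo.Numbers'
    WITH RESULT SETS ((doubled INT));
"""

# The Python code executes next to the data, so nothing leaves the server.
for row in conn.execute(TSQL).fetchall():
    print(row.doubled)
```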
(PROJECT) The Boundaries between Art, Media, and Computing - vazquezgarciajesusma
In this research project we delve into the fascinating world of the intersection between art and the media in the field of computing.
The rapid evolution of technology has led to an ever closer fusion of art and digital media, generating new forms of expression and communication.
Continuing with the development of our project, we use the inductive method, because we organize our research from the particular to the general. The methodological design of the work is non-experimental and cross-sectional, since there is no deliberate manipulation of the variables or of the situation; rather, the essentials are observed as they occur in their natural context and then analyzed.
The design is cross-sectional because the data are collected at a single point in time, and its purpose is to describe variables and analyze how they interrelate; we only want to know the incidence and value of one or more variables. The design is also descriptive, because relationships must be established between two or more of them.
Through a survey we gathered the information for this project, so that students become aware of the evolution of art and the media in information technology and of its importance for the institution.
Currently, owing to technological development in fields such as computing and electronics, most databases are in digital format as an electronic component; consequently, a wide range of solutions to the data storage problem has been developed and is on offer.
In this document we analyze certain concepts related to worksheets 1 and 2, and we conclude by explaining why it is important to develop our thinking skills.
Sara Sofia Bedoya Montezuma.
9-1.
Inteligencia Artificial y Ciberseguridad.pdf - Emilio Casbas
A compilation of the most interesting points from various presentations: from Alan Turing's visionary concepts, through Hans Moravec's paradox and Max Tegmark's description of the Singularity, to the groundbreaking advances of ChatGPT, and how AI is transforming digital security and protecting our lives.
Índice del libro "Big Data: Tecnologías para arquitecturas Data-Centric" de 0... - Telefónica
Table of contents of the book "Big Data: Tecnologías para arquitecturas Data-Centric" from 0xWord, written by Ibón Reinoso (https://mypublicinbox.com/IBhone) with a foreword by Chema Alonso (https://mypublicinbox.com/ChemaAlonso). You can buy it here: https://0xword.com/es/libros/233-big-data-tecnologias-para-arquitecturas-data-centric.html
Diagrama de flujo basada en la reparacion de automoviles.pdf
What's New in SQL Server 2019
1. (title slide)
2. Leaders in performance and security, with intelligence over all your data
- Insights in minutes with reports and dashboards; the best of Power BI and SQL Server Reporting Services with Power BI Report Server
- Your choice of platform and language: T-SQL, Java, C/C++, PHP, Node.js, C#/VB.NET, Python, Ruby
- Most secure over the last 8 years⁵ [chart: vulnerabilities reported, 2010-2017]
- Industry-leading performance: #1 OLTP performance¹ and #1 DW performance at 1 TB and larger scales²,³,⁴; intelligent query processing; in-memory across all workloads; 1/10th the cost of Oracle
- Most consistent data platform: private cloud and public cloud
- Intelligence over any data: AI and machine learning over all your data with the power of SQL and Apache Spark
All TPC claims as of 1/19/2018. 1 http://www.tpc.org/4081; 2 http://www.tpc.org/3331; 3 http://www.tpc.org/3326; 4 http://www.tpc.org/3321; 5 National Institute of Standards and Technology comprehensive vulnerability database
3. Simplified management and analytics through unified deployment, governance, and tooling
- Integrate all data: unified access to all your data with unmatched performance
- Manage all data: easily and securely manage data big and small
- Analyze all data: build intelligent applications and AI with all your data
5. SQL Server, Spark, and Data Lake
- Store high-volume data in a data lake and access it easily using SQL or Spark
- Management services, an administration portal, and integrated security make it easy to manage
- Data virtualization: combine data from many sources without moving or replicating it; scale out compute and caching to increase performance
- Complete AI platform: easily feed integrated data from many sources into your model training; collect and prepare data, then train, store, and operationalize your machine learning models in a single system
[Architecture diagram: analytics, applications, and T-SQL clients query SQL Server external tables backed by compute pools and data pools; Open Database Connectivity reaches NoSQL stores, relational databases, and HDFS; SQL Server ML Services, Spark, and Spark ML run over scalable, shared HDFS storage alongside external data sources; an administration portal and management services, security integrated with Active Directory, containers, and a REST API for models round out the platform]
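The data virtualization bullet above corresponds to SQL Server 2019 external tables. Here is a hedged sketch of that pattern, driven from Python with pyodbc: an external table that queries a remote Oracle table in place, with no data copied. All names (OracleSource, OracleCredential, dbo.SalesRemote, the Oracle object path) are illustrative, and the exact WITH options vary by source type:

```python
# PolyBase-style data virtualization: define a remote source, expose it
# as an external table, then query it with plain T-SQL.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=demo;Trusted_Connection=yes"
)
conn.autocommit = True

# Requires a database master key to exist before creating the credential.
conn.execute("""
CREATE DATABASE SCOPED CREDENTIAL OracleCredential
WITH IDENTITY = 'oracle_user', SECRET = 's3cret';
""")

conn.execute("""
CREATE EXTERNAL DATA SOURCE OracleSource
WITH (LOCATION = 'oracle://oraclehost:1521',
      CREDENTIAL = OracleCredential);
""")

conn.execute("""
CREATE EXTERNAL TABLE dbo.SalesRemote (
    sale_id INT,
    amount  DECIMAL(10, 2)
)
WITH (LOCATION = 'ORCL.SALES.SALES',   -- remote database.schema.table
      DATA_SOURCE = OracleSource);
""")

# The external table can now be queried (and joined with local tables)
# without moving or replicating the Oracle data.
for row in conn.execute("SELECT TOP 5 sale_id, amount FROM dbo.SalesRemote"):
    print(row.sale_id, row.amount)
```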
6. [Architecture diagram of a big data cluster: a SQL Server master instance fronts compute pools of SQL Compute Nodes, a storage pool of SQL Server + Spark + HDFS Data Nodes (IoT data read directly from HDFS, with persistent storage), and a SQL data mart of SQL Data Nodes; each component runs as a Kubernetes pod across the cluster's nodes, serving analytics, custom apps, and BI]
10. Keeping SQL Server running: Availability Groups on Kubernetes
[Diagram: an operator pod and load-balancer pods direct traffic to pods hosting the SQL Server primary and secondary replicas, each paired with an AG agent]
11. Accelerate application development and administration with new enhancements; develop on the platform of your preference
15. Azure Data Studio is an open-source, cross-platform graphical management tool and code editor
- Enables a modern DevOps experience for database developers and DBAs on their platform of choice
- Simplifies development, configuration, administration, monitoring, and troubleshooting for SQL databases on-premises and in the cloud
New: use the SQL Server Management Studio 18.0 preview to access, configure, manage, and administer all components of SQL Server