Serialization (Avro, Message Pack, Kryo)
1 like • 2,716 views
오석 한
Seminar material from 2010
Technology
1 of 13
Destacado
Serialization and performance by Sergey Morenets
Serialization and performance by Sergey Morenets
Alex Tumanoff
3 apache-avro
3 apache-avro
zafargilani
Igor Anishchenko Odessa Java TechTalks Lohika - May, 2012 Let's take a step back and compare data serialization formats, of which there are plenty. What are the key differences between Apache Thrift, Google Protocol Buffers and Apache Avro. Which is "The Best"? Truth of the matter is, they are all very good and each has its own strong points. Hence, the answer is as much of a personal choice, as well as understanding of the historical context for each, and correctly identifying your own, individual requirements.
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Igor Anishchenko
chapter 8 powerpoint
Chapter 8 big data and privacy
Chapter 8 big data and privacy
opeyemiatilola1992
おひろめ会:Javaにおけるデータシリアライズ手法
おひろめ会:Javaにおけるデータシリアライズ手法
moai kids
LivePerson moved from an ETL based data platform to a new data platform based on emerging technologies from the Open Source community: Hadoop, Kafka, Storm, Avro and more. This presentation tells the story and focuses on Kafka.
From a kafkaesque story to The Promised Land
From a kafkaesque story to The Promised Land
Ran Silberman
Recorded at SpringOne2GX 2013 in Santa Clara, CA Speaker: Adam Shook This session assumes absolutely no knowledge of Apache Hadoop and will provide a complete introduction to all the major aspects of the Hadoop ecosystem of projects and tools. If you are looking to get up to speed on Hadoop, trying to work out what all the Big Data fuss is about, or just interested in brushing up your understanding of MapReduce, then this is the session for you. We will cover all the basics with detailed discussion about HDFS, MapReduce, YARN (MRv2), and a broad overview of the Hadoop ecosystem including Hive, Pig, HBase, ZooKeeper and more. Learn More about Spring XD at: http://projects.spring.io/spring-xd Learn More about Gemfire XD at: http://www.gopivotal.com/big-data/pivotal-hd
Hadoop - Just the Basics for Big Data Rookies (SpringOne2GX 2013)
Hadoop - Just the Basics for Big Data Rookies (SpringOne2GX 2013)
VMware Tanzu
Avro Data | Washington DC HUG
Avro Data | Washington DC HUG
Cloudera, Inc.
1. Serialization overview 2. Java libraries for serialization. 3. Benchmarking
Serialization and performance in Java
Serialization and performance in Java
Strannik_2013
Avro introduction
Avro introduction
Nanda8904648951
Some initial analysis of the Hadoop Stack using vProbes
Hadoop I/O Analysis
Hadoop I/O Analysis
Richard McDougall
This slide deck is used as an introduction to the internals of Hadoop MapReduce, as part of the Distributed Systems and Cloud Computing course I hold at Eurecom. Course website: http://michiard.github.io/DISC-CLOUD-COURSE/ Sources available here: https://github.com/michiard/DISC-CLOUD-COURSE
Hadoop Internals
Hadoop Internals
Pietro Michiardi
Event Stream Processing with Kafka and Samza, presented at Iowa Code Camp Fall 2014.
Event Stream Processing with Kafka and Samza
Event Stream Processing with Kafka and Samza
Zach Cox
Type safe, versioned, and rewindable stream processing with Apache {Avro, Kafka} and Scala.
Type safe, versioned, and rewindable stream processing with Apache {Avro, K...
Type safe, versioned, and rewindable stream processing with Apache {Avro, K...
Hisham Mardam-Bey
The only way to get where we need to be in security analysis is if we use Security Intelligence. This means working harder and understanding the big picture of your data.
Big Data, Security Intelligence, (And Why I Hate This Title)
Big Data, Security Intelligence, (And Why I Hate This Title)
Coastal Pet Products, Inc.
Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We will talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS. Learning Objectives: • Learn why Spark is great for ad-hoc interactive analysis and real-time stream processing. • How to deploy and tune scalable clusters running Spark on Amazon EMR. • How to use EMR File System (EMRFS) with Spark to query data directly in Amazon S3. • Common architectures to leverage Spark with Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, and more.
Best Practices for Using Apache Spark on AWS
Best Practices for Using Apache Spark on AWS
Amazon Web Services
An introduction to Apache Avro
Avro intro
Avro intro
Randy Abernethy
Overview of Apache Avro just before 1.4 release
Apache Avro and You
Apache Avro and You
Eric Wendelin
Peter Wood started looking at Big Data as a solution for Advanced Threat Protection in 2013. This presentation examines how Big Data is being used for security in 2015, how this market is developing and how realistic vendor offerings are.
Big Data and Security - Where are we now? (2015)
Peter Wood
Create a Colder Storage Tier for Hadoop & Spark Using IBM Elastic Storage Server & HDFS Transparency
Hadoop and Spark Analytics over Better Storage
Sandeep Patil
Featured
Serialization and performance by Sergey Morenets
3 apache-avro
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Chapter 8 big data and privacy
おひろめ会：Javaにおけるデータシリアライズ手法
From a kafkaesque story to The Promised Land
Hadoop - Just the Basics for Big Data Rookies (SpringOne2GX 2013)
Avro Data | Washington DC HUG
Serialization and performance in Java
Avro introduction
Hadoop I/O Analysis
Hadoop Internals
Similar to Serialization (Avro, Message Pack, Kryo)
When building products with data, whether by handling huge volumes of data or by applying machine learning, many different ecosystems meet, and large volumes of data must be passed between them. The problem is not only moving data from systems written in Java to a machine-learning model in Python: integrating with the existing business infrastructure also means catering for legacy systems and bringing large volumes of data to users via UIs.
Berlin Buzzwords 2019 - Taming the language border in data analytics and scie...
Uwe Korn
21-Jan-2022. Friday 9:45 AM — 10 min. DataMinutes. Apache Pulsar with MQTT for Edge Computing. https://datagrillen.com/dataminutes/ Apache Pulsar with MQTT for Edge Computing Lightning - 2022 Tim Spann
Data minutes #2 Apache Pulsar with MQTT for Edge Computing Lightning - 2022
Timothy Spann
Update on Apache Arrow project and not-for-profit Ursa Labs org for 2019 https://ursalabs.org/. Active projects and development objectives
Ursa Labs and Apache Arrow in 2019
Wes McKinney
Aeolus is Comcast’s new internal Big Data system for providing access to an integrated view of a wide variety of high-quality, near-real-time and batch data. Such integration can enable data scientists to uncover otherwise hidden trends, anomalies, and powerful predictors of business successes and failures. But integrating data across silos in a large enterprise is fraught with peril. There typically are few standards on naming conventions and data representation, and spotty documentation at best. The old rule of thumb often applies: 70% of the analysts’ time goes into data wrangling, while only 30% goes toward the actual analyses and simulations. The goal of the Athene Data Governance Platform within Aeolus is to invert this ratio. This talk will explain how Comcast is using Apache Avro and Atlas for end-to-end data governance, the challenges faced, and methods used to address these challenges. Avro provides a lingua franca for data representation, data integration, and schema evolution. All data published for community consumption must have an associated avro schema in Atlas. Every step in its journey through Aeolus, in flight or at rest, is captured in Atlas. Atlas’ extensibility has allowed us to add or update various entity types (e.g., avro schemas, kafka topics, object store pseudo-directories) and lineage types (e.g., storing streaming data in object storage; embellishing and re-publishing streaming data; performing aggregations and other transformations on data at rest; and evolution of schemas with compatibility flags). Transformation services notify Atlas of lineage links via custom asynchronous kafka messaging. Atlas provides self-service data discovery and lineage browsing and querying, via full-text search, DSL query language, or gremlin graph query language. Example queries: “Where is data from kafka topic X stored?” “Display the journey of data currently stored in pseudo-directory X since it entered the Aeolus system”. 
“Show me all earlier versions of schema S, and whether they are forward/backward compatible with each other.”
End-to-end Data Governance with Apache Avro and Atlas
DataWorks Summit
ApacheCon2022_Deep Dive into Building Streaming Applications with Apache Pulsar In this session I will get you started with real-time cloud native streaming programming with Java, Golang, Python and Apache NiFi. If there’s a preferred language that the attendees pick, we will focus only on that one. I will start off with an introduction to Apache Pulsar and setting up your first easy standalone cluster in docker. We will then go into terms and architecture so you have an idea of what is going on with your events. I will then show you how to produce and consume messages to and from Pulsar topics. As well as using some of the command line and REST interfaces to monitor, manage and do CRUD on things like tenants, namespaces and topics. We will discuss Functions, Sinks, Sources, Pulsar SQL, Flink SQL and Spark SQL interfaces. We also discuss why you may want to add protocols such as MoP (MQTT), AoP (AMQP/RabbitMQ) or KoP (Kafka) to your cluster. We will also look at WebSockets as a producer and consumer. I will demonstrate a simple web page that sends and receives Pulsar messages with basic JavaScript. After this session you will be able to build simple real-time streaming and messaging applications with your chosen language or tool of your choice. apache pulsar tim spann developer advocate streamnative datainmotion.dev
ApacheCon2022_Deep Dive into Building Streaming Applications with Apache Pulsar
Timothy Spann
Short presentation on some techniques to gain performance of the NATS messaging system as it was rewritten in Go.
High Performance Systems in Go - GopherCon 2014
Derek Collison
PyData Paris 2016 about the importance and recent developments on the Python side of Apache Arrow and Apache Parquet.
How Apache Arrow and Parquet boost cross-language interoperability
Uwe Korn
Scaling with Symfony2. Yes, you can do it :)
Scaling with Symfony - PHP UK
Ricard Clau
Delivered at SciPy 2018 -- July 11, 2018
Apache Arrow: Cross-language Development Platform for In-memory Data
Wes McKinney
Devfest uk & ireland using apache nifi with apache pulsar for fast data on-ramp 2022 As the Pulsar communities grows, more and more connectors will be added. To enhance the availability of sources and sinks and to make use of the greater Apache Streaming community, joining forces between Apache NiFi and Apache Pulsar is a perfect fit. Apache NiFi also adds the benefits of ELT, ETL, data crunching, transformation, validation and batch data processing. Once data is ready to be an event, NiFi can launch it into Pulsar at light speed. I will walk through how to get started, some use cases and demos and answer questions. https://www.devfest-uki.com/schedule https://linktr.ee/tspannhw
Devfest uk & ireland using apache nifi with apache pulsar for fast data on-r...
Timothy Spann
This slide deck shows how Ceph storage performance can be improved using SSDs, and how to use Ceph storage for containers.
NAVER Ceph Storage on ssd for Container
Jangseon Ryu
Deep Dive into Building Streaming Applications with Apache Pulsar philly ete apache pulsar with java, python, flink, nifi, spark
Deep Dive into Building Streaming Applications with Apache Pulsar
Timothy Spann
Real time cloud native open source streaming of any data to apache solr Utilizing Apache Pulsar and Apache NiFi we can parse any document in real-time at scale. We receive a lot of documents via cloud storage, email, social channels and internal document stores. We want to make all the content and metadata to Apache Solr for categorization, full text search, optimization and combination with other datastores. We will not only stream documents, but all REST feeds, logs and IoT data. Once data is produced to Pulsar topics it can instantly be ingested to Solr through Pulsar Solr Sink. Utilizing a number of open source tools, we have created a real-time scalable any document parsing data flow. We use Apache Tika for Document Processing with real-time language detection, natural language processing with Apache OpenNLP, Sentiment Analysis with Stanford CoreNLP, Spacy and TextBlob. We will walk everyone through creating an open source flow of documents utilizing Apache NiFi as our integration engine. We can convert PDF, Excel and Word to HTML and/or text. We can also extract the text to apply sentiment analysis and NLP categorization to generate additional metadata about our documents. We also will extract and parse images that if they contain text we can extract with TensorFlow and Tesseract.
Real time cloud native open source streaming of any data to apache solr
Timothy Spann
OSS EU: Deep Dive into Building Streaming Applications with Apache Pulsar In this session I will get you started with real-time cloud native streaming programming with Java, Golang, Python and Apache NiFi. If there’s a preferred language that the attendees pick, we will focus only on that one. I will start off with an introduction to Apache Pulsar and setting up your first easy standalone cluster in docker. We will then go into terms and architecture so you have an idea of what is going on with your events. I will then show you how to produce and consume messages to and from Pulsar topics. As well as using some of the command line and REST interfaces to monitor, manage and do CRUD on things like tenants, namespaces and topics. We will discuss Functions, Sinks, Sources, Pulsar SQL, Flink SQL and Spark SQL interfaces. We also discuss why you may want to add protocols such as MoP (MQTT), AoP (AMQP/RabbitMQ) or KoP (Kafka) to your cluster. We will also look at WebSockets as a producer and consumer. I will demonstrate a simple web page that sends and receives Pulsar messages with basic JavaScript. After this session you will be able to build simple real-time streaming and messaging applications with your chosen language or tool of your choice. apache pulsar
OSS EU: Deep Dive into Building Streaming Applications with Apache Pulsar
Timothy Spann
@LaraConf Taiwan 2019
High Concurrency Architecture and Laravel Performance Tuning
Albert Chen
Slides on Apache Arrow development from DataEngConf Barcelona 2018
Apache Arrow at DataEngConf Barcelona 2018
Wes McKinney
Ways you can scale your software and organization with Wordnik's Swagger framework
Scaling with swagger
Tony Tam
NoSQL afternoon in Japan Kumofs & MessagePack
Sadayuki Furuhashi
NoSQL afternoon in Japan kumofs & MessagePack
Sadayuki Furuhashi
DBCC 2021 - FLiP Stack for Cloud Data Lakes With Apache Pulsar, Apache NiFi, Apache Flink. The FLiP(N) Stack for Event processing and IoT. With StreamNative Cloud. DBCC International – Friday 15.10.2021 Powered by Apache Pulsar, StreamNative provides a cloud-native, real-time messaging and streaming platform to support multi-cloud and hybrid cloud strategies.
DBCC 2021 - FLiP Stack for Cloud Data Lakes
Timothy Spann
More from 오석 한
2011 seminar material
Smart work
오석 한
2011 seminar material
RPC protocols
오석 한
2010 cafe seminar material
Cassandra
오석 한
2010 cafe seminar material
Smart Phone CPU
오석 한
2010 cafe seminar material
Functional programming with Scala
오석 한
2009 cafe seminar material
Linux tips
오석 한
2009 cafe seminar material
Apache Click
오석 한
Cafe seminar material, November 5, 2009
JAVA NIO
오석 한
Cafe seminar material, March 12, 2009
예제로 쉽게 배우는 Log4j 기초 활용법 (Log4j basics through easy examples)
오석 한
Cafe seminar material
Vi 단축키명령어 (Vi shortcut commands)
오석 한
Cafe seminar material, September 25, 2008
Perl Script Document
오석 한
Cafe seminar material, September 25, 2008
Perl Script
오석 한
Cafe seminar material, August 28, 2008
정규 표현식 기본 메타문자 요약 (Summary of basic regular-expression metacharacters)
오석 한
Cafe seminar material, August 28, 2008
정규표현식의 이해와 활용 (Understanding and using regular expressions)
오석 한
Latest
Tech Trends Report 2024 Future Today Institute
Tech Trends Report 2024 Future Today Institute.pdf
hans926745
Presentation from Melissa Klemke from her talk at Product Anonymous in April 2024
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
Product Anonymous
Abhishek Deb(1), Mr Abdul Kalam(2) M. Des (UX) , School of Design, DIT University , Dehradun. This paper explores the future potential of AI-enabled smartphone processors, aiming to investigate the advancements, capabilities, and implications of integrating artificial intelligence (AI) into smartphone technology. The research study goals consist of evaluating the development of AI in mobile phone processors, analyzing the existing state as well as abilities of AI-enabled cpus determining future patterns as well as chances together with reviewing obstacles as well as factors to consider for more growth.
Exploring the Future Potential of AI-Enabled Smartphone Processors
debabhi2
Stay safe, grab a drink and join us virtually for our upcoming "GenAI Risks & Security" Meetup to hear about how to uncover critical GenAI risks and vulnerabilities, AI security considerations in every company, and how a CISO should navigate through GenAI Risks.
GenAI Risks & Security Meetup 01052024.pdf
lior mazor
45-60 minute session deck from introducing Google Apps Script to developers, IT leadership, and other technical professionals.
Automating Google Workspace (GWS) & more with Apps Script
wesley chun
My presentation at the Lehigh Carbon Community College (LCCC) NSA GenCyber Cyber Security Day event that is intended to foster an interest in the cyber security field amongst college students.
GenCyber Cyber Security Day Presentation
Michael W. Hawkins
The role of ICT in 21st-century education, and how ICT helps in education
presentation ICT role in 21st century education
jfdjdjcjdnsjd
MySQL Webinar, presented on the 25th of April, 2024. Summary: MySQL solutions enable the deployment of diverse Database Architectures tailored to specific needs, including High Availability, Disaster Recovery, and Read Scale-Out. With MySQL Shell's AdminAPI, administrators can seamlessly set up, manage, and monitor these solutions, ensuring efficiency and ease of use in their administration. MySQL Router, on the other hand, provides transparent routing from the application traffic to the backend servers in the architectures, requiring minimal configuration. Completely built in-house and supported by Oracle, these solutions have been adopted by enterprises of all sizes for their business-critical applications. In this presentation, we'll delve into various database architecture solutions to help you choose the right one based on your business requirements. Focusing on technical details and the latest features to maximize the potential of these solutions.
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Miguel Araújo
If you are a Domino Administrator in any size company you already have a range of skills that make you an expert administrator across many platforms and technologies. In this session Gab explains how to apply those skills and that knowledge to take your career wherever you want to go.
A Domino Admins Adventures (Engage 2024)
Gabriella Davis
Cisco CCNA
CNv6 Instructor Chapter 6 Quality of Service
giselly40
Presented by Mike Hicks
How to Troubleshoot Apps for the Modern Connected Worker
ThousandEyes
How to get Oracle DBA Job as fresher.
Strategies for Landing an Oracle DBA Job as a Fresher
Remote DBA Services
Breathing New Life into MySQL Apps With Advanced Postgres Capabilities
🐬 The future of MySQL is Postgres 🐘
RTylerCroy
Presented by Sergio Licea and John Hendershot
How to Troubleshoot Apps for the Modern Connected Worker
ThousandEyes
Slides from the presentation on Machine Learning for the Arts & Humanities seminar at the University of Bologna (Digital Humanities and Digital Knowledge program)
Handwritten Text Recognition for manuscripts and early printed texts
Maria Levchenko
With more memory available, system performance of three Dell devices increased, which can translate to a better user experience Conclusion When your system has plenty of RAM to meet your needs, you can efficiently access the applications and data you need to finish projects and to-do lists without sacrificing time and focus. Our test results show that with more memory available, three Dell PCs delivered better performance and took less time to complete the Procyon Office Productivity benchmark. These advantages translate to users being able to complete workflows more quickly and multitask more easily. Whether you need the mobility of the Latitude 5440, the creative capabilities of the Precision 3470, or the high performance of the OptiPlex Tower Plus 7010, configuring your system with more RAM can help keep processes running smoothly, enabling you to do more without compromising performance.
Boost PC performance: How more available memory can improve productivity
Principled Technologies
Three things you will take away from the session: • How to run an effective tenant-to-tenant migration • Best practices for before, during, and after migration • Tips for using migration as a springboard to prepare for Copilot in Microsoft 365 Main ideas: Migration Overview: The presentation covers the current reality of cross-tenant migrations, the triggers, phases, best practices, and benefits of a successful tenant migration Considerations: When considering a migration, it is important to consider the migration scope, performance, customization, flexibility, user-friendly interface, automation, monitoring, support, training, scalability, data integrity, data security, cost, and licensing structure Next Wave: The next wave of change includes the launch of Copilot, which requires businesses to be prepared for upcoming changes related to Copilot and the cloud, and to consolidate data and tighten governance ShareGate: ShareGate can help with pre-migration analysis, configurable migration tool, and automated, end-user driven collaborative governance
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
sammart93
Enterprise Knowledge’s Urmi Majumder, Principal Data Architecture Consultant, and Fernando Aguilar Islas, Senior Data Science Consultant, presented "Driving Behavioral Change for Information Management through Data-Driven Green Strategy" on March 27, 2024 at Enterprise Data World (EDW) in Orlando, Florida. In this presentation, Urmi and Fernando discussed a case study describing how the information management division in a large supply chain organization drove user behavior change through awareness of the carbon footprint of their duplicated and near-duplicated content, identified via advanced data analytics. Check out their presentation to gain valuable perspectives on utilizing data-driven strategies to influence positive behavioral shifts and support sustainability initiatives within your organization. In this session, participants gained answers to the following questions: - What is a Green Information Management (IM) Strategy, and why should you have one? - How can Artificial Intelligence (AI) and Machine Learning (ML) support your Green IM Strategy through content deduplication? - How can an organization use insights into their data to influence employee behavior for IM? - How can you reap additional benefits from content reduction that go beyond Green IM?
Driving Behavioral Change for Information Management through Data-Driven Gree...
Enterprise Knowledge
Read about the journey the Adobe Experience Manager team has gone through in order to become and scale API-first throughout the organisation.
Scaling API-first – The story of a global engineering organization
Radu Cotescu
Serialization (Avro, Message Pack, Kryo)
1.
Serialization: Avro, Message Pack, Kryo
Han O Seok
2.
What is Serialization?
3.
What is Serialization?
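The question the deck opens with can be answered with a round trip: serialization turns an in-memory object into bytes, deserialization turns the bytes back into an equivalent object. A minimal sketch using Python's built-in pickle module (a stand-in here; the libraries compared in this deck are Java-oriented):

```python
import pickle

record = {"user": "han", "scores": [90, 85]}

data = pickle.dumps(record)    # object -> bytes (serialization)
restored = pickle.loads(data)  # bytes -> object (deserialization)

# The bytes can cross a network or be written to disk; the restored
# object is equal to the original.
print(restored == record)  # True
```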
4.
5.
Avro • Apache Foundation project • Schema defined in JSON
6.
Avro • Created by Doug Cutting, the creator of Hadoop • Data is always accompanied by its schema • Supports dynamic typing: code generation is not required
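An Avro schema is a plain JSON document. A minimal record schema might look like the following sketch (the `User` record and its fields are made up for illustration; they are not from the deck, and real Avro would read data with the Java or Python `avro` library):

```python
import json

# Hypothetical Avro record schema; field names are illustrative only.
schema = json.loads("""
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id",   "type": "long"},
    {"name": "name", "type": "string"}
  ]
}
""")

# Because this schema travels with the serialized data, a reader can
# interpret the bytes without any generated classes -- this is what the
# slide means by dynamic typing with no code generation.
field_names = [f["name"] for f in schema["fields"]]
print(field_names)  # ['id', 'name']
```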
7.
Performance of Avro • Avro is not the fastest, but it is in the top half
8.
Message Pack • Rich data structures, like JSON • Interface Definition Language (IDL), like Thrift • Schemas created from annotations • RPC with sync and async support, built on event-driven I/O
9.
Format of Message Pack
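The compactness of the MessagePack wire format can be sketched with a tiny pure-Python encoder covering only the one-byte "fix" headers of the format (positive fixint, fixstr, fixarray); this is an illustration of the encoding rules, not the real `msgpack` library, which handles many more types and sizes:

```python
import struct

def pack(obj):
    """Minimal MessagePack encoder: small ints, short strings, short lists only."""
    if isinstance(obj, int) and 0 <= obj <= 0x7F:
        # positive fixint: the value itself is the single-byte encoding
        return struct.pack("B", obj)
    if isinstance(obj, str):
        raw = obj.encode("utf-8")
        if len(raw) < 32:
            # fixstr: header byte 101XXXXX carries the length, then raw UTF-8
            return struct.pack("B", 0xA0 | len(raw)) + raw
    if isinstance(obj, list) and len(obj) < 16:
        # fixarray: header byte 1001XXXX carries the element count
        return struct.pack("B", 0x90 | len(obj)) + b"".join(pack(x) for x in obj)
    raise ValueError("out of scope for this sketch")

print(pack(5))         # b'\x05' -- one byte, vs. '5' plus framing in JSON
print(pack("ab"))      # b'\xa2ab'
print(pack([1, 2]))    # b'\x92\x01\x02'
```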
10.
Performance of Message Pack
11.
Kryo • Hosted on Google Code • Easy to write and register custom serializers on a per-class basis • Supports compression • KryoNet, a TCP & UDP client/server library built on Kryo
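Kryo's per-class serializer registration is a Java API; the pattern behind it can be sketched language-neutrally. The Python registry below is only an illustration of "one serializer per class" (the class and method names are invented for this sketch, not Kryo's actual API):

```python
class SerializerRegistry:
    """Toy per-class registry, echoing the shape of Kryo's register(Class, Serializer)."""

    def __init__(self):
        self._serializers = {}

    def register(self, cls, encode, decode):
        # One (encode, decode) pair per class, looked up by exact type.
        self._serializers[cls] = (encode, decode)

    def dumps(self, obj):
        encode, _ = self._serializers[type(obj)]
        return encode(obj)

    def loads(self, cls, data):
        _, decode = self._serializers[cls]
        return decode(data)

reg = SerializerRegistry()
reg.register(int, lambda n: str(n).encode(), lambda b: int(b))
reg.register(str, lambda s: s.encode(), lambda b: b.decode())

print(reg.loads(int, reg.dumps(42)))   # 42
print(reg.loads(str, reg.dumps("hi"))) # hi
```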
12.
BenchmarkingV2 • http://code.google.com/p/thrift-protobuf-compare/wiki/BenchmarkingV2
13.
Thanks :)