As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self-service IT requests to Change, Incident, and Problem Management. The strength of the platform lies in the workflows and processes built around the shared data model represented in the CMDB, which provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise maintains a unique, continuously updated view of the state of the systems it manages, kept accurate as part of regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
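To make the Automated Incident Registration point concrete, here is a minimal sketch (not the integration shipped by Puppet) that files an incident through ServiceNow's standard Table API when a Puppet run reports a corrective change; the instance URL, credentials, and event fields are placeholders.

```python
import requests

# Hypothetical values; replace with your instance, credentials, and event data.
SNOW_INSTANCE = "https://example.service-now.com"
SNOW_USER = "integration.user"
SNOW_PASSWORD = "secret"

def register_incident(node, resource, message):
    """Create a ServiceNow incident for a corrective change reported by Puppet."""
    payload = {
        "short_description": f"Configuration drift corrected on {node}",
        "description": f"Puppet remediated {resource}: {message}",
        "category": "software",
    }
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=(SNOW_USER, SNOW_PASSWORD),
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    # The Table API wraps the created record in a "result" object.
    return resp.json()["result"]["sys_id"]

if __name__ == "__main__":
    sys_id = register_incident("web01.example.com", "File[/etc/ssh/sshd_config]",
                               "content restored to managed state")
    print("Created incident", sys_id)
```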
Modernizing the Analytics and Data Science Lifecycle for the Scalable Enterpr... - Data Con LA
Data Con LA 2020
Description
It’s no secret that the roots of Data Science date back to the 1960s and were first mainstreamed in the 1990s with the emergence of Data Mining. This occurred when commercially affordable computers started offering the horsepower and storage necessary to perform advanced statistics to scale.
However, the words “to scale” have evolved over time. The leap to “Big Data” is only one aspect of that growth. Beyond the typical one-off studies that catalyzed the field of Data Mining, Data Science now fulfills enterprise and multi-enterprise use cases spanning much broader and deeper data sets and integrations. For example, AI and Machine Learning frameworks can interoperate with a variety of other systems to drive alerting, feedback loops, predictive frameworks, prescriptive engines, continual learning, and more. The deployment of AI/ML processes themselves often involves integration with contemporary DevOps tools.
Now segue to SEAL, the Scalable Enterprise Analytic Lifecycle. In this presentation, you’ll learn how to cover the major bases of a modern Data Science project (and Citizen Data Science as well), from conception, learning, and evaluation through integration, implementation, monitoring, and continual improvement. And as the name implies, your deployments will be performant and scale as expected in today’s environments.
Speaker
Jeff Bertman, CTO, Dfuse Technologies
Building an Authorization Solution for Microservices Using Neo4j and OPA - Neo4j
1. The document discusses building an authorization solution for microservices using Neo4j and OPA.
2. It describes modeling authorization data in a graph database for role-based access control and efficient authorization queries.
3. The proposed solution uses OPA as a centralized decision engine to evaluate authorization policies for microservices in a scalable way.
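To ground the OPA half of the design, the sketch below queries OPA's REST Data API for an allow/deny decision; the policy package path (`authz/allow`) and input shape are assumptions rather than details from the deck. In the described architecture, data derived from the Neo4j graph would inform this decision.

```python
import requests

OPA_URL = "http://localhost:8181/v1/data/authz/allow"  # package path is hypothetical

def is_allowed(user, action, resource):
    """Ask OPA to evaluate the (assumed) authz.allow rule for this request."""
    input_doc = {"input": {"user": user, "action": action, "resource": resource}}
    resp = requests.post(OPA_URL, json=input_doc, timeout=5)
    resp.raise_for_status()
    # OPA omits "result" when the rule is undefined; treat that as deny.
    return resp.json().get("result", False)

print(is_allowed("alice", "read", "orders/42"))
```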
This presentation was given at the "Vienna DevOps & Security Meetup" in 2021.
It discusses the state of monitoring, what OpenTelemetry is, and why you should care about it.
Concepts and basics are presented alongside a full example extracting traces, metrics, and logs.
Demo: https://github.com/secustor/opentelemetry-meetup
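Independent of the linked demo, here is a minimal sketch of the OpenTelemetry Python SDK configured to print finished spans to the console, which is enough to see traces, parent/child spans, and attributes in action:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire the SDK: one provider, exporting finished spans straight to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("meetup.example")

with tracer.start_as_current_span("handle-request") as span:
    span.set_attribute("http.method", "GET")
    with tracer.start_as_current_span("query-database"):
        pass  # child span; stand-in for real work
```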
This document provides an overview of Splunk, including how to install Splunk, configure licenses, perform searches, set up alerts and reports, and manage deployments. It discusses indexing data, extracting fields, tagging events, and using the web interface. The goal is to get users started with the basic functions of Splunk like searching, reporting and monitoring.
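As a taste of the search functionality the overview covers, here is a sketch that runs a search against Splunk's REST API on the management port (8089); the index, credentials, and host are placeholders:

```python
import requests

SPLUNK = "https://localhost:8089"  # management port; host and creds are placeholders

resp = requests.post(
    f"{SPLUNK}/services/search/jobs/export",
    auth=("admin", "changeme"),
    data={
        # The export endpoint expects a full SPL query starting with "search".
        "search": "search index=main error | stats count by host",
        "output_mode": "json",
        "earliest_time": "-15m",
    },
    verify=False,  # self-signed cert on a test instance
)
for line in resp.iter_lines():
    if line:
        print(line.decode())
```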
This document summarizes a presentation titled "Everyday I'm Shuffling: An alternative approach to DevOps for Microsoft Dynamics 365" by Jonas Rapp. The presentation introduces an approach called CRM Shuffle that combines solution components and configuration data into a single deployment package. It utilizes tools like SolutionPackager, PackageDeployer, and custom tasks for Visual Studio Team Services to automate builds, deployments, and release management for Dynamics 365. The approach aims to provide quality, efficiency, and reproducibility, and to remove human error from the process.
The document discusses how observability is becoming a core competency for organizations as digital environments grow more complex. It predicts that observability will be the new face of digital transformation, with a focus on digital experience. Additionally, it predicts that IT spending will need to deliver clear value in 2023's uncertain economic environment. Finally, it predicts that as observability becomes standard, automation will be the next differentiator for organizations, enabled by improvements in machine learning and the need to address talent shortages and cost pressures.
The document discusses Pure Storage and its all-flash storage solutions. It provides 10 reasons to choose Pure, including that Pure storage is built for NVMe, has a disruptively simple architecture, proven high availability of 99.9999%, AI-driven management and predictive support, application support, self-protecting storage, an open full stack, Evergreen upgrades, and industry recognition as a leader in Gartner reports. The document then discusses Pure's business model, customer experience, technology advantages, and culture that emphasize customer satisfaction.
TLC303: Walkthrough Setting up a Highly Available Communications Platform on AWS - Amazon Web Services
Come join this workshop to set up a highly available and fault-tolerant real-time communication platform on AWS. We walk you through setting up load balancing and multiple failover mechanisms for the unique requirements of real-time communication. You set up both SBC and PBX servers, use WebRTC and SIP standards, and learn how to optimize this platform on Amazon EC2. The workshop also guides you with an example. The example uses Amazon Polly for lifelike text-to-speech and integrates it with the communication platform that you build. Lastly, you learn how to test this communication platform in a distributed manner with Amazon EC2 Systems Manager.
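The Amazon Polly step of the workshop comes down to a single API call; here is a sketch with boto3, where the region, voice, and announcement text are arbitrary choices:

```python
import boto3

# Placeholder region; synthesize a short announcement for the call platform.
polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Text="All circuits are operational.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)

# The audio comes back as a streaming body; save it for playback.
with open("announcement.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```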
The document compares and contrasts Kibana and Grafana. It discusses that in Kibana, user privileges are based on base and feature privileges that determine read and write access. In Grafana, user permission is determined by their role, team access, and specific dashboard/folder permissions. Both tools allow for visualizing time-series data, but Kibana focuses on log analysis using Elasticsearch while Grafana supports multiple data sources and is designed more for metrics and monitoring. Key differences include Kibana being integrated with ELK while Grafana supports various plugins, and Grafana having built-in alerts while Kibana relies on plugins.
This document provides an overview of Apache NiFi and dataflow. It begins with an introduction to the challenges of moving data effectively within and between systems. It then discusses Apache NiFi's key features for addressing these challenges, including guaranteed delivery, data buffering, prioritized queuing, and data provenance. The document outlines NiFi's architecture and components like repositories and extension points. It also previews a live demo and invites attendees to further discuss Apache NiFi at a Birds of a Feather session.
Lightning-Fast Analytics for Workday Transactional Data with Pavel Hardak and... - Databricks
Workday Prism Analytics enables data discovery and interactive Business Intelligence analysis for Workday customers. Workday is a “pure SaaS” company, providing a suite of Financial and HCM (Human Capital Management) apps to about 2,000 companies around the world, including more than 30% of the Fortune 500 list. There are significant business and technical challenges in supporting millions of concurrent users and hundreds of millions of daily transactions. A memory-centric, graph-based architecture allowed the team to overcome most of these problems.
As Workday grew, transactions from existing and new customers generated vast amounts of valuable and highly sensitive data. The next big challenge was to provide an in-app analytics platform that worked across the multiple types of accumulated data and also allowed blending in external datasets. Workday users wanted it to be super-fast, but also intuitive and easy to use, both for financial and HR analysts and for regular, less technical users. Existing backend technologies were not a good fit, so we turned to Apache Spark.
In this presentation, we will share the lessons we learned building a highly scalable, multi-tenant analytics service for transactional data. We will start with the big picture and business requirements, then describe the architecture, with batch and interactive modules for data preparation, publishing, and the query engine, noting the relevant Spark technologies. We will then dive into the internals of Prism’s Query Engine, focusing on the Spark SQL, DataFrame, and Catalyst compiler features used. We will describe the issues we encountered while compiling and executing complex pipelines and queries, and how we use caching, sampling, and query compilation to support an interactive user experience.
Finally, we will share the future challenges for 2018 and beyond.
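The query-engine techniques mentioned (Spark SQL, DataFrames, caching, sampling) can be illustrated with a toy PySpark sketch; this is illustrative only, not Workday's actual pipeline:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("prism-sketch").getOrCreate()

# Toy stand-in for a published, prepared dataset.
txns = spark.createDataFrame(
    [("acme", "payroll", 120.0), ("acme", "travel", 80.0), ("globex", "payroll", 300.0)],
    ["tenant", "category", "amount"],
)

# Cache the hot dataset so interactive queries skip recomputation,
# and sample when an approximate preview is good enough.
txns.cache()
preview = txns.sample(fraction=0.5, seed=42)

txns.createOrReplaceTempView("transactions")
spark.sql("""
    SELECT tenant, category, SUM(amount) AS total
    FROM transactions
    GROUP BY tenant, category
""").show()
```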
More Than Monitoring: How Observability Takes You From Firefighting to Fire P... - DevOps.com
For some, observability is just a hollow rebranding of monitoring, for others it’s monitoring on steroids. But what if we told you observability is the new way to find out why—not just if—your distributed system or application isn’t working as expected? Today, we see that traditional monitoring approaches can fall short if a system or application doesn’t adequately externalize its state.
This is even more true as workloads move into the cloud and leverage ephemeral technologies such as microservices and containers. To reach observability, IT and DevOps teams need to correlate different sources: logs, metrics, traces, events, and more. This becomes even more challenging when defining the online revenue impact of a failed container; after all, this is what really matters to the business.
This webinar will cover:
The differences between observability and monitoring
Why it is a bigger challenge in a multicloud and containerized world
How observability results in less firefighting and more fire prevention
How new platforms can help gain observability (on premises and in the cloud) for containers, microservices and even SAP or mainframes
Dive into a reference architecture that demonstrates the patterns and practices for securely connecting microservices together using Apigee Edge integration for Pivotal Cloud Foundry.
We will discuss:
- basics for building cloud-native applications as microservices on Pivotal Cloud Foundry using Spring Boot and Spring Cloud Services
- patterns and practices that are enabling small autonomous microservice teams to provision backing services for their applications
- how to securely expose microservices over HTTP using Apigee Edge for PCF
Watch the webcast here: https://youtu.be/ETT6WP-3me0
DevOps has established itself as an indispensable software development methodology, and the DevOps market is expected to exceed $20 billion by 2026. The document discusses several trends expected to emerge in DevOps in 2023, including increased use of serverless computing, microservices architecture, low-code applications, infrastructure as code, DevSecOps, Kubernetes and GitOps, and integrating AI and ML into the software development lifecycle. Adopting these trends can help organizations achieve greater efficiency and cost savings, and accelerate software delivery.
A microservices architecture involves many services distributed over the network, resulting in many more ways to fail. This session covers the available tools that can help you when designing and building such a distributed system in Go.
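The session focuses on Go tooling, but the failure-handling patterns involved are language-agnostic; as a neutral illustration, here is a minimal retry-with-backoff sketch in Python:

```python
import random
import time

def call_with_retries(fn, attempts=4, base_delay=0.1):
    """Retry a flaky remote call with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff plus jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

calls = {"n": 0}

def flaky():
    # Deterministic stand-in for an unreliable service: fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("downstream service unavailable")
    return "ok"

print(call_with_retries(flaky))  # prints "ok" on the third attempt
```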
Remote Log Analytics Using DDS, ELK, and RxJS - Sumant Tambe
Autonomous Probing and Diagnostics for remote IT log data using RTI Connext Data Distribution Service (DDS), Elasticsearch-Logstash-Kibana (ELK), and Reactive Extensions for JavaScript (RxJS). Github: https://github.com/rticommunity/rticonnextdds-reactive/tree/master/javascript
Which Change Data Capture Strategy is Right for You? - Precisely
Change Data Capture (CDC) is the practice of moving the changes made in an important transactional system to other systems, so that data is kept current and consistent across the enterprise. CDC keeps reporting and analytic systems working on the latest, most accurate data.
Many different CDC strategies exist, each with advantages and disadvantages. Some put an undue burden on the source database and can cause queries or applications to become slow or even fail. Others bog down network bandwidth or introduce long delays between change and replication.
Each business process has different requirements, as well. For some business needs, a replication delay of more than a second is too long. For others, a delay of less than 24 hours is excellent.
Which CDC strategy will match your business needs? How do you choose? (For one concrete trade-off, see the sketch after the list below.)
View this webcast on-demand to learn:
• Advantages and disadvantages of different CDC methods
• The replication latency your project requires
• How to keep data current in Big Data technologies like Hadoop
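As a concrete illustration of these trade-offs, the sketch below implements query-based CDC, polling the source table on a timestamp watermark; this is the approach that can put the "undue burden" on the source database that log-based methods avoid. SQLite stands in for the transactional source:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, updated_at TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 9.99, '2024-01-01T10:00:00')")
conn.execute("INSERT INTO orders VALUES (2, 24.50, '2024-01-01T10:05:00')")

def poll_changes(conn, last_seen):
    """Query-based CDC: fetch rows modified since the last watermark.

    Every poll runs a full query against the source; that repeated load is
    exactly what log-based CDC avoids by reading the transaction log instead.
    """
    rows = conn.execute(
        "SELECT id, total, updated_at FROM orders WHERE updated_at > ? ORDER BY updated_at",
        (last_seen,),
    ).fetchall()
    new_watermark = rows[-1][2] if rows else last_seen
    return rows, new_watermark

changes, watermark = poll_changes(conn, "2024-01-01T10:01:00")
print(changes)    # [(2, 24.5, '2024-01-01T10:05:00')]
print(watermark)  # 2024-01-01T10:05:00
```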
MoP (MQTT on Pulsar) - a Powerful Tool for Apache Pulsar in IoT - Pulsar Summi... - StreamNative
MQTT (Message Queuing Telemetry Transport) is a messaging protocol based on the pub/sub model, with the advantages of a compact message structure, low resource consumption, and high efficiency, which makes it suitable for IoT applications with low bandwidth and unstable network environments.
This session will introduce MQTT on Pulsar (MoP), which allows users of the MQTT transport protocol to use Apache Pulsar. I will share the architecture, principles, and future plans of MoP to help you understand Apache Pulsar's capabilities and practices in the IoT industry.
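From a device's perspective, MoP means an ordinary MQTT client can publish straight into Pulsar. Here is a sketch using the paho-mqtt library, assuming a Pulsar broker with the MoP protocol handler enabled on the default MQTT port; the hostname and topic are invented:

```python
import paho.mqtt.publish as publish

# Assumes Pulsar with the MoP protocol handler enabled is listening on 1883.
publish.single(
    topic="factory/line1/temperature",   # MoP maps MQTT topics onto Pulsar topics
    payload="21.7",
    qos=1,
    hostname="pulsar-broker.example.com",
    port=1883,
    client_id="sensor-42",
)
```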
From on-premises monolith to cloud microservices - Albert Lombarte
Presentation from the DevOps Barcelona conference in June 2019.
A step-by-step process to migrate from a monolith to several microservices in the cloud.
See the version with transitions at https://docs.google.com/presentation/d/10PvqjwDwBv96Ga2k0ZLrfi83NrQQnGQyLPKn0XsYC40/edit#slide=id.g585eb34422_0_592
Grafana is an open source analytics and monitoring tool, here paired with InfluxDB for time-series storage. In this setup, Telegraf collects metrics like application and server performance every 10 seconds, the data is stored in InfluxDB using the line protocol format, and users build Grafana dashboards to monitor and alert on those metrics. An example scenario is using it to collect and display load time metrics from a QA whitelist VM.
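The line protocol mentioned above is plain text over HTTP; here is a sketch of a single metric write against an InfluxDB 1.x endpoint, with the database name and measurement invented for illustration:

```python
import requests

# One point in InfluxDB line protocol: measurement,tags fields [timestamp]
point = "page_load,host=qa-vm,page=login duration_ms=842"

resp = requests.post(
    "http://localhost:8086/write",
    params={"db": "telegraf"},   # database name is a placeholder
    data=point,
    timeout=5,
)
resp.raise_for_status()  # InfluxDB returns 204 No Content on success
```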
The document summarizes the ONF Transport API (TAPI) Project. TAPI aims to develop a software-centric API to facilitate SDN control of transport networks. The TAPI SDK 1.0 provides a technology-agnostic API framework with modular and extensible functional features. TAPI fits into the broader SDN architecture developed by ONF and other standards bodies. Next steps for TAPI 2.0 include expanding its capabilities in areas like node constraints, protection, and multi-technology testing.
Promgen is a Prometheus management tool that allows web-based management of server configurations and alerting rules. It addresses the need for an easier way to manage Prometheus server configurations than manually editing YAML files. Promgen stores configuration data in a MySQL database and generates YAML files from the stored configurations. It aims to provide a simple interface for configuring Prometheus exporters, ports, alerts and other settings across multiple servers and projects.
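Promgen's core move, rendering Prometheus YAML from configuration stored in a database, can be sketched in a few lines; the rows below stand in for its MySQL data and are not Promgen's actual schema:

```python
import yaml  # PyYAML

# Stand-in for rows fetched from Promgen's MySQL backend.
rows = [
    {"project": "shop", "job": "node", "targets": ["web01:9100", "web02:9100"]},
    {"project": "shop", "job": "app", "targets": ["web01:8080"]},
]

# Render each row as a standard Prometheus scrape config.
scrape_configs = [
    {
        "job_name": f"{r['project']}-{r['job']}",
        "static_configs": [{"targets": r["targets"]}],
    }
    for r in rows
]

with open("prometheus.yml", "w") as f:
    yaml.safe_dump({"scrape_configs": scrape_configs}, f, sort_keys=False)
```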
This document provides an overview and agenda for a meetup on distributed tracing using Jaeger. It begins with introducing the speaker and their background. The agenda then covers an introduction to distributed tracing, open tracing, and Jaeger. It details a hello world example, Jaeger terminology, and building a full distributed application with Jaeger. It concludes with wrapping up the demo, reviewing Jaeger architecture, and discussing open tracing's ability to propagate context across services.
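A minimal version of the hello-world portion, using the (now-archived) jaeger-client Python package with a constant sampler that reports every span to a local Jaeger agent:

```python
import time
from jaeger_client import Config

# Constant sampler: report every span to the local Jaeger agent.
config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="hello-world",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("say-hello") as span:
    span.set_tag("greeting", "hello")

time.sleep(2)   # give the background reporter time to flush spans
tracer.close()
```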
ServiceNow and Puppet - better together, Kevin Reeuwijk - Puppet
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
ServiceNow and Puppet - better together, Kevin Reeuwijk - Puppet
The document discusses four ways that Puppet and ServiceNow can integrate to improve IT operations: 1) Self-service infrastructure allows developers to configure aspects of systems through ServiceNow without compromising compliance or security; 2) Enriched change management automatically generates change requests in ServiceNow from Puppet code changes and links approvals to deployments; 3) Automated incident registration creates and closes ServiceNow tickets for configuration drift corrections by Puppet; 4) Up-to-date asset management reduces discovery impacts by periodically uploading Puppet asset details to ServiceNow.
DevOps Workflows in the Windows Ecosystem - 21 April 2020 - Puppet
This document summarizes a webinar about using Puppet to automate DevOps workflows in the Windows ecosystem. It discusses how Puppet can be used to:
1) Scale PowerShell automation through the use of Puppet Tasks and Plans.
2) Bring continuous integration/continuous delivery (CI/CD) workflows to Windows infrastructure.
3) Augment existing Windows tools like SCCM and GPO with Puppet for greater flexibility and automation.
Cloud continuous integration - A distributed approach using distinct services - André Agostinho
In cloud computing, the ability to share and deliver services, scale computing resources, and distribute data storage and files requires a deployment process aligned with agility and scalability. Continuous integration can automate the process, reducing operational effort, improving code quality, and reducing time to market. This presentation shows a proposal for distributed continuous integration that uses different cloud computing services, from planning to the execution of scenarios.
Puppet Enterprise is an automation software platform that helps companies deliver better software faster and more securely. The presentation introduces Puppet Enterprise and discusses how it can be used to automate infrastructure from devices to applications across on-premises and cloud environments using a common language. Automation best practices are also covered, such as starting with core infrastructure and working up. Suggested next steps include downloading the learning VM and Puppet Enterprise, and scheduling a technical discussion.
Introduction to Puppet Enterprise 10/03/2018 - Puppet
Register today and learn more about Puppet Enterprise
Join Puppet on Wednesday, 3 October 2018 at 9:00 a.m. PDT for our upcoming webinar, Introduction to Puppet Enterprise.
If you're new to Puppet Enterprise, this is the webinar for you. You'll learn why thousands of companies rely on Puppet to automate the delivery and operation of their software and see it in action with a live demo.
We'll cover how to use Puppet Enterprise to:
Gain situational awareness and drive change with confidence
Orchestrate changes to infrastructure and applications
Continually enforce your desired state and remediate any unexpected changes
Get real-time visibility and reporting to prove compliance
We will also explore our new products, Puppet Discovery and Puppet Pipelines, cover what’s new in 2018.1, and leave plenty of time to answer your questions.
Featured Speakers: Abir Majumdar, Sales Engineer, and Anthony Rodriguez, Sales Development.
In the digital age, engineers leverage automation tools to boost productivity, enhance efficiency, and save time. These software solutions enable real-time identification of risks and vulnerabilities, along with streamlined refactoring processes. Market research indicates that approximately 35% of companies currently utilize testing automation tools, with another 29% planning to adopt them in the future. Automation has become a prevalent topic of discussion, driven by its ability to accelerate work, increase intelligence, and improve overall productivity.
Pivotal Cloud Foundry on Google Cloud Platform - Ronak Banka
This document is a slide presentation by Ronak Banka on using Pivotal Cloud Foundry (PCF) and Google Cloud Platform (GCP) together. It discusses how PCF provides a platform for deploying applications on GCP that enables both developer and operator productivity through features like automated deployments, service integration, and operations. It also highlights benefits of using PCF on GCP like performance, scale, cost savings, and access to differentiated GCP services.
Secrets of Successful Cloud Foundry Adopters - VMware Tanzu
This document discusses secrets of successful adoptions of Cloud Foundry. It provides examples of companies that have used Cloud Foundry to improve operations, increase developer productivity, and enhance security. Specific outcomes mentioned include reducing wait times, increasing revenue, and performing updates more frequently. It also discusses metrics for measuring the success of digital transformations and emphasizes the importance of measuring the right metrics.
As we enter a new age of automation — where every company needs to be able to deliver better software, faster — our goal is to provide the tools you need to iterate faster, ship sooner and deliver more customer value.
In October, we announced brand new products, Puppet Tasks™ and Puppet Discovery™, to give you greater control and end-to-end visibility over your software delivery.
Join Eric Sorenson, Director of Product Management, on 7 December at 11:00 a.m. AEDT for an in-depth look at what’s new:
Puppet Discovery is a new offering that lets you see everything you have in real time across your on-premises, cloud and container infrastructure, and know what you need to automate next.
Puppet Tasks, a new family of offerings that encompasses both Puppet Bolt™ and Puppet Enterprise Task Management, makes it simple to automate ad hoc tasks, deploy one-off changes, and execute sequenced actions in an imperative way.
With Puppet Pipelines, we’re uniting the entire software delivery lifecycle, to bring you a platform built for the enterprise, that integrates with a wide variety of tools and helps you avoid vendor lock-in.
Introduction to Puppet Enterprise - Jan 30, 2019 - Puppet
If you're new to Puppet Enterprise, this is the webinar for you. You'll learn why thousands of companies rely on Puppet to automate the delivery and operation of their software, and see it in action with a live demo.
We'll cover how to use Puppet Enterprise to:
Discover what you have using Puppet Discovery
Orchestrate changes to infrastructure and applications
Continually enforce your desired state and remediate any unexpected changes
Get real-time visibility and reporting to prove compliance
Automatically build, test and promote Puppet code changes using Continuous Delivery for Puppet Enterprise
Data-Driven DevOps: Mining Machine Data for 'Metrics that Matter' in a DevOps... - Splunk
IT organizations are increasingly using machine data - including in DevOps practices - to get away from 'vanity metrics' and instead to generate 'metrics that matter'. These metrics provide visibility into the delivery of new application code and the business value of DevOps, to both IT and business stakeholders.
Machine data provides DevOps teams and others - including QA, secops, CxOs and LOB leaders - with meaningful and actionable metrics. This allows stakeholders to monitor, measure, and continuously improve the velocity and quality of code throughout the software lifecycle, from dev/test to customer-facing outcomes and business impact.
In this session Andi Mann, chief technology advocate at Splunk, will share core methodologies, interesting case studies, key success factors and 'gotcha' moments from real-world experience with mining machine data to produce 'metrics that matter' in a DevOps context.
The document discusses monitoring and managing infrastructure as a service (IaaS) and platform as a service (PaaS) solutions with Hyperic HQ. It notes that current data center realities often fall short of goals like resilience, efficiency, and accommodation of new technologies. Open source tools provide opportunities for innovation through virtualization, standardization, and cloud computing. Case studies show how open source technologies helped commercial and government clients reduce costs, improve flexibility and provisioning, and consolidate infrastructure.
The document provides an introduction to Puppet Enterprise, an automation platform. It discusses:
- Puppet's workflow using classic and direct modes to define configurations with code and enforce them on nodes
- Modeling server configurations with resources and defining relationships between them
- How Puppet can automate infrastructure provisioning, application deployment, and ensure security and compliance across devices
- Customer examples demonstrating how Puppet allows faster deployment and savings of over $1 million.
This document provides an introduction and overview of Puppet Enterprise. It begins with an agenda for the meeting and introductions of the speakers. It then discusses challenges organizations face with digital transformation, DevOps initiatives, and other trends. The core of Puppet Enterprise is to define configurations once and automate them endlessly across all environments. It allows organizations to know what they have, control and enforce consistency, secure and ensure compliance, and modernize infrastructure. Puppet Enterprise provides capabilities for defining and deploying policies, continuously monitoring for drift, and gaining visibility to prove compliance. It delivers significant benefits for customers, such as increased deployment speed, fewer outages, and reduced time to apply security fixes. The document concludes by discussing where to start.
This document discusses the need for continuous delivery in software development. It defines continuous delivery as making sure software can be reliably released at any time. The document outlines some key aspects of continuous delivery including automated testing, infrastructure as code, continuous integration, and blue/green deployments. It provides an example of implementing continuous delivery for a large retail company using tools like Jenkins, Puppet, Logstash and practices like infrastructure as code and automated testing.
Webinar: Deploying the Combined Virtual and Physical Infrastructure - Pepperweed Consulting
Delivering complex business services in your organization demands a rigorous approach to server deployment and management. Modern data centers often have distributed virtual and physical servers, as well as separate management teams, which makes the challenge even more difficult. Increasing headcount in your group is typically not an acceptable answer, so how do you manage the growing complexity? The answer lies in a complete physical and virtual server lifecycle management solution that automates application deployments.
In Part IV of its five-part webinar series "Managing IT Operations in a Virtualized World", Pepperweed Consulting will discuss how a combination of HP Server Automation and HP Operations Orchestration can streamline the deployment of your operating systems, software and patches for both your physical and virtual infrastructure. We will also analyze how compliance and application release management play a key role in ensuring control over your server deployments.
The Business Value of PaaS Automation - Kieron Sambrook-Smith - Presentation ... - eZ Systems
Kieron Sambrook-Smith, Chief Commercial Officer at Platform.sh spoke at eZ Conference 2017 in London about the business value of Platform as a Service (PaaS) Automation.
He covers the many advantages of using a PaaS. The business value you can expect to reap ranges from hosting cost savings, better workflow and team productivity, and new project delivery concepts to greater competitive advantage. Discover a more advanced implementation of your service offering.
Puppet Enterprise provides tools to automate infrastructure management at scale through configuration management, reporting and compliance features, full stack orchestration, and support. It offers packaging, out-of-the-box scalability, role-based access control, visualization, orchestration capabilities, supported modules and platforms, and enterprise support. Customers report being able to reduce deployment times from months to days or hours to minutes through standardized configurations and automation with Puppet Enterprise.
Similar to Automating IT management with Puppet + ServiceNow (20)
Puppet Camp 2021: Testing modules and control repo - Puppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying the Roles and Profiles method to compliance code - Puppet
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automation - Puppet
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
Simplified Patch Management with Puppet - Oct. 2020 - Puppet
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error prone patching processes with Puppet’s automated patching solution.
Join this webinar to learn how to do the following with Puppet:
Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
Gain visibility into patching status across your estate regardless of OS with new patching solution from the PE console.
Ensure your systems are compliant and patched in a healthy state.
See how Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy Software - Puppet
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick MaludyPuppet
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
The document discusses the Puppet Server Helm chart, which provides a way to deploy Puppet infrastructure on Kubernetes. It achieves high availability, horizontal scaling, easy upgrades and rollbacks through deploying Puppet Server, PuppetDB, PostgreSQL and other components as Kubernetes objects. The chart handles tasks like load balancing, storage orchestration and TLS certificate management. It allows deploying the Puppet stack on multiple nodes while maintaining shared resources and configurations. Questions are taken at the end regarding using the chart.
This document summarizes improvements made to the Bolt installer for Windows systems. It overviews making the installer faster by bundling files into a single archive, tracking file changes transactionally, and verifying files with hashes. It also discusses enhancing the user experience through new native PowerShell cmdlets that follow standard naming conventions and provide help documentation directly within PowerShell.
2. Housekeeping
● Please submit questions in the Q&A chat box. We will address as many as we have time for at the end of the webinar.
● Technical difficulties? Let us know via the Q&A chat and we can help.
● This webinar will be recorded and shared in the next few days via email.
4. The right approach for your challenges: task, model-based and event-driven automation using agentless and agent-based technologies.
[Diagram: the Puppet Enterprise Platform stack, with Intelligence, Automation (Task, Model, Event) and Orchestration layers, built on the Forge (Certified Content, Community Content), Custom Modules & Tasks, 3rd Party Custom Content, and Integrations.]
5. Seamless integration into core IT systems: extend automation across IT via APIs and/or custom user interfaces.
[Diagram: the Intelligence, Automation and Orchestration layers of the Puppet Enterprise Platform exposed through an API, connecting to Collaboration, Continuous Delivery and Release Automation, Data Aggregation & Monitoring, Security, Cloud Provisioning and Service Management tools, alongside the Forge (Certified Content, Community Content), Custom Modules & Tasks, and 3rd Party Custom Content.]
6. Puppet Enterprise Platform Teams and Uses
● Platform & Infrastructure Teams (VP I&O): Config Management, Compliance & Impact Analysis
● IT Ops, Audit & InfoSec Teams (CISO): Remediation, Patch Management & Audit
● Application Development Teams (VP of Apps): Application Provisioning & Orchestration for DevOps
● IT Ops Teams & Cloud Ops: Self-service Automation & Infrastructure Provisioning
[Diagram labels: Custom Modules & Tasks, Forge, Intelligence, Orchestration, Automation, Integrations.]
7. The Value of Puppet Enterprise Platform
Puppet makes infrastructure actionable, scalable and intelligent.
● Risk mitigation: provide standardized and consistent physical and virtual infrastructure, resulting in fewer security and compliance issues.
● Agility, innovation & productivity gains: enable faster deployment and configuration of infrastructure in response to changing stakeholder demands.
● Cost-efficiencies: drive efficient configuration management and provide a flexible framework for delivering and managing infrastructure.
9. ServiceNow is the smarter way to workflow™
Widely used by IT organizations to manage CMDB, change requests, ticketing and self-service.
10. ServiceNow provides a shared data model for ITSM
Replace silos of disconnected tools & databases with a central, integrated & connected system
11. ServiceNow ITSM consists of 5 components, matching their respective processes as described in ITIL:
● Incident Management
● Problem Management
● Change Management
● Request Management (Service Catalog)
● CMDB
12. ServiceNow and Puppet are often used in parallel
Catering to different, but adjacent, aspects of IT operations
13. Now, you can finally connect them together
Enable bi-directional data sharing between ServiceNow and Puppet Enterprise
14. Puppet integrates with ServiceNow in 4 areas
● Self-Service Infrastructure: let developers control aspects of their own systems without sacrificing compliance, security, or operational predictability.
● Enriched Change Management: reduce the risk of change by enriching change requests with impact analysis details and letting ServiceNow control approvals of Puppet changes.
● Automated Incident Registration: reduce the time and effort required to maintain an accurate drift remediation log.
● Up-to-date Asset Management: get accurate, up-to-date information about your CMDB assets in ServiceNow, without having to perform frequent discovery runs.
16. Self-Service Infrastructure (section divider repeating the four integration areas, highlighting Self-Service Infrastructure)
17. Integration: Self-Service Infrastructure
WHY
● Teaching the entire company to use Puppet for making changes is unrealistic.
● In order for everyone to easily leverage Puppet automation, a better way to interact with Puppet is needed.
18. Integration: Self-Service Infrastructure
WHAT
● Let teams control their own systems without needing any Puppet skills.
● Expose control of specific aspects of Puppet automation directly from the ServiceNow user interface.
● Leverage ServiceNow workflows to streamline common changes.
19. Integration: Self-Service Infrastructure
HOW
● Puppet reads the fields for a system from a ServiceNow table of choice, and provides the information as facts for that node.
● Use custom fields in ServiceNow to automate any use case, by writing Puppet logic that uses this data.
● Full node classification is possible as well, for even higher levels of flexibility.
Example contents of the node's trusted fact:
{
  "authenticated": "remote",
  "certname": "server1.puppet.com",
  "domain": "puppet.com",
  "extensions": {},
  "external": {
    "servicenow": {
      "category": "Hardware",
      "classification": "Production",
      "name": "server1.puppet.com",
      "os": "CentOS",
      "os_version": "7.7.1908",
      "puppet_classes": {
        "role::dbserver": {}
      },
      "puppet_environment": "production",
      "sys_class_name": "Server",
      "u_enforced_packages": "{\"openssl\": \"present\", \"redis\": \"absent\"}"
    }
  }
}
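Once those fields surface as trusted facts, plain Puppet code can act on them. Here is a minimal sketch, assuming the fact payload shown above and the puppetlabs-stdlib module (for parsejson):

  # Enforce packages requested through the custom u_enforced_packages field.
  # The field holds a JSON map of package name => ensure value, as shown above.
  $servicenow = $trusted.dig('external', 'servicenow')

  if $servicenow and $servicenow['u_enforced_packages'] {
    parsejson($servicenow['u_enforced_packages']).each |String $package, String $state| {
      package { $package:
        ensure => $state,
      }
    }
  }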
20. “This is going to make our lives so much easier” (Cloud Infrastructure Engineer at a major bank in London, UK)
• Being able to do something like adding a few extra packages through ServiceNow is a game-changer for us. It lets us expose the power and flexibility of Puppet to users who do not have Puppet expertise.
• Having the ServiceNow CMDB data directly available as facts is super useful for when we need to drive Puppet behavior based on details that are stored in ServiceNow.
21. Exposing ServiceNow CI data as Trusted Facts
Data is freshly retrieved from ServiceNow before each Puppet agent run.
[Flow: a Configuration Item, with extra fields added as needed (Name: srv1.company.com; Manufacturer: Amazon EC2; Enforced Packages: { nano: present, vim: present }) → PE retrieves state info for the node → PE enforces state on srv1.company.com with info from ServiceNow: Package[nano] => present, Package[vim] => present.]
22. Using ServiceNow as a Node Classifier for Puppet
Fully control Puppet code assignments from ServiceNow, including parameters for classes.
[Flow: a Configuration Item, with extra fields added as needed (Name: srv1.company.com; Manufacturer: Amazon EC2; Puppet Environment: production; Puppet Classes: role::dbserver{}) → PE retrieves classification info for the node → PE enforces state on srv1.company.com based on classification from ServiceNow: environment: production, classes => role::dbserver{}.]
25. Enriched Change Management (section divider repeating the four integration areas, highlighting Enriched Change Management)
26. Integration: Enriched Change Management
WHY
● When you have Puppet for change execution and ServiceNow for change workflow, connecting the two is the obvious choice to get the most out of DevOps.
● Reduce manual effort and ensure full registration of the change impact.
27. Integration: Enriched Change Management
WHAT
● Automatically generate ServiceNow change requests for proposed Puppet code changes.
● Automatically populate change requests with details from Puppet’s Impact Analysis result.
● Automatically deploy changes when the change request is approved.
28. Integration: Enriched Change Management
HOW
● Integrates CD4PE with ServiceNow.
● Interacts with the ServiceNow Change Management API to create change requests, associate affected systems and populate relevant details.
● Comes with a Business Rule for ServiceNow to orchestrate the automated deployment of approved Puppet changes.
29. “Using Impact Analysis to de-risk the Change approvals process completely changes the way we work” (Director of Cloud Architecture at a major health insurance provider in the U.S.)
• It can be challenging to know exactly what the impact of a proposed change will be to the larger environment. Will it affect multiple applications? Multiple systems?
• ServiceNow provides the workflow process around Change Management, while Puppet with CD4PE automates the implementation of the change.
30. Automated change requests from CD4PE
Delegate control to ServiceNow for approving production changes.
[Flow: an admin proposes a Puppet code change in Git, triggering CD4PE → CD4PE creates a Change Request and populates info from Impact Analysis (Name: CHG0030023; Risk and Impact: <Impact Analysis info>; Affected CIs: srv3.company.com, srv5.company.com) → the approval workflow runs in ServiceNow → upon approval, ServiceNow interacts with CD4PE to deploy the change.]
33. Automated Incident Registration (section divider repeating the four integration areas, highlighting Automated Incident Registration)
34. Integration: Automated Incident Registration
WHY
● When system configuration that drifted out of compliance is corrected, this information should be registered in ServiceNow.
● Ideally, you want custom business logic to determine when an incident should be created.
● Doing all of this manually would not be feasible at scale.
35. Integration: Automated Incident Registration
WHAT
● Automatically forward relevant details to ServiceNow when Puppet corrects a system that drifted out of compliance.
● Either create incidents directly, or publish events to ServiceNow Event Management to enable custom logic for when incidents should be created.
36. Integration: Automated Incident Registration
HOW
● Puppet agent run reports are scanned for corrective changes and failures.
● When changes or failures are detected, relevant details are forwarded to the ServiceNow API.
● This can either be done as events, enabling custom logic, or directly as regular incidents.
37. Registering Incidents from Puppet Agent runs
Automatically create & close incidents based on corrective changes made by Puppet.
[Flow: srv1.company.com submits a change report to Puppet Enterprise after each run → PE creates & closes an incident (Name: INC0010483; Configuration Item: srv1.company.com; Description: <info on config corrected by Puppet>).]
40. Up-to-date Asset Management (section divider repeating the four integration areas, highlighting Up-to-date Asset Management)
41. Integration: Up-to-date Asset Management (on the roadmap)
WHY
● Without Puppet, you need ServiceNow Discovery to keep the details of systems in the CMDB up-to-date.
● Such discovery runs are known to have an unwanted stability impact on production systems.
● It is more efficient to update the details in the CMDB from Puppet’s database directly.
42. Integration: Up-to-date Asset Management (on the roadmap)
WHAT
● Inventory data from the Puppet database is periodically gathered and uploaded to ServiceNow.
● A Puppet app for ServiceNow processes the staged data and updates the CMDB as necessary.
● Focus ServiceNow Discovery usage on detecting new/rogue systems only, while Puppet keeps information up-to-date for all known systems.
43. Integration: Up-to-date Asset Management (on the roadmap)
HOW
● Puppet will periodically upload details about the systems it knows about to a holding area in ServiceNow.
● A new Puppet app for ServiceNow will then process the uploaded information and update CI details as necessary with the latest information.
Scheduled task (illustrative):
  task: servicenow_assets::get_node_facts
  schedule: daily
  params:
  - targets: [srv1.company.com, …]
  - facts: [serialnumber, operatingsystem, …]
[Flow: JSON fact uploads land on a ServiceNow MID Server, where the Puppet CMDB Sync app (from the ServiceNow Marketplace) processes them and updates the CMDB.]
44. “ServiceNow Discovery has been the bane of my existence” (Configuration Manager at a major bank in Columbus, Ohio)
• Runs are agentless, so you have to manage lots of credentials.
• Discovery runs negatively affect the performance & stability of our production systems.
• Puppet CMDB update sync would significantly reduce the need for discovery runs just for keeping CMDB information up to date.
45. Update ServiceNow CMDB from Puppet facts
Automatically update CI records with Puppet-captured data.
[Flow: srv1.company.com submits facts during agent runs → Puppet Enterprise periodically uploads facts about known nodes (bios_vendor, serialnumber, operatingsystem, operatingsystemrelease) → ServiceNow periodically processes the received fact upload data and updates the CMDB. Resulting Configuration Item: Name: srv1.company.com; Manufacturer: Amazon EC2; Model ID: t3a.medium; Serial number: ec2c60a0-2e4b-230; Operating System: CentOS; OS Version: 7.6.1810.]
46. ServiceNow App high level architecture
[Flow, from Puppet to the CMDB:
● PE Orchestrator API: endpoint /v1/command/task; task: servicenow_tasks::get_node_facts; params: targets: [srv1.company.com, …], facts: [serialnumber, operatingsystem, …]
● Node Facts (from PuppetDB): certname: srv1.company.com; serialnumber: ec2c60a0-2e4b-230; operatingsystem: centos; operatingsystemrelease: 7.6.1810; bios_vendor: Amazon EC2
● Puppet Connector App (ETL), with mapping: certname <-> Name; bios_vendor <-> Manufacturer; serialnumber <-> Serial Number; operatingsystem <-> Operating System; operatingsystemrelease <-> OS Version
● CMDB CI: Name: srv1.company.com; Manufacturer: Amazon EC2; Model ID: t3a.medium; Serial number: ec2c60a0-2e4b-230; Operating System: CentOS; OS Version: 7.6.1810]
47. The 4 integrations of Puppet and ServiceNow
● Self-Service Infrastructure: Puppet Module, available now on the Puppet Forge (servicenow_cmdb_integration)
● Enriched Change Management: Puppet Module, available now on the Puppet Forge (servicenow_change_requests)
● Automated Incident Registration: Puppet Module, available now on the Puppet Forge (servicenow_reporting_integration)
● Up-to-date Asset Management: ServiceNow App, availability TBD, in the ServiceNow Marketplace (name TBD)
First a very brief introduction to the world of Puppet. You are attending Puppetize, so presumably you know something about it, but let’s frame it a specific way for this conversation.
The first layer of Enterprise features provides Automation content. Whether it’s task-based actions, infrastructure-as-code or event-driven automation, we provide access to a vast collection of community and certified content to accelerate your efforts to automate the systems you have today. More and more 3rd parties are providing Puppet automation content directly as well, and of course you have the ability to create your own.
The orchestration layer provides API access to all of Puppet’s features. This gives you powerful ways of extending automation to and from your other investments.
We provide a number of prebuilt integrations with systems like ServiceNow, VMware, Splunk and Tenable, to ensure you see immediate value from using these products together.
The breadth of control & insight that the Puppet Enterprise platform now provides makes it a useful resource for teams reporting to many different Lines of Business. The Platform and Infrastructure teams use Puppet for configuration management and Compliance. The IT Operations and Provisioning teams use Puppet for Self-Service automation, and the CISO uses Puppet to ensure the infrastructure remains fully patched, in line with compliance standards and clear of vulnerabilities.
As you can see, the Puppet Enterprise platform makes infrastructure actionable, scalable and intelligent.
We help you improve agility and productivity through faster deployments and better control over infrastructure configuration.
We help you boost efficiency by reducing provisioning time and providing a single framework for all automation activity.
Finally, we reduce the risk of change by standardizing systems and improving consistency, resulting in fewer operational disruptions and security issues.
Now let's do a brief introduction to the world of ServiceNow. I suspect some of you know quite a bit about it, but let's get some basic ITSM terminology and process out of the way.
You’re probably well aware of what ServiceNow is, but just in case you’re not: ServiceNow is one of the leading IT Service Management & IT Operations Management platforms on the market. It is one of the first all-cloud Platform-as-a-Service solutions in this space, with competitors being mostly on-premise or partner-hosted solutions.
Good ITSM platforms give you a shared data model to replace lots of individual tools, spreadsheets, databases, email processes, etc.
Everyone can work from a single source of truth, and every process can be governed by a single tool.
The core ITSM offering of ServiceNow is made up of these 5 components:
A CMDB to track all your IT assets
Request Management to enable self-service to users via a Service Catalog
Change Management to document & approve planned changes
Incident Management to document & track service disruptions
Problem Management to track known issues
These processes, which stem from ITIL, are closely related to activities that Puppet performs in your infrastructure.
Note that we can also send Events to ServiceNow where ultimately those may end up as Incidents.
A way to look at it is this: where Puppet takes action to execute a change, the process around it, from definition to approval, is governed by ServiceNow.
Therefore we can think of ServiceNow and Puppet as opposite sides of the same coin. The vast majority of Puppet’s customers have ServiceNow in place, working mostly in parallel. That also means that some information is duplicated across both platforms, and some level of manual effort is involved to get information from one platform into the other.
Over time, several of our customers have built their own integrations to make this more efficient, but officially supported integrations have been a long-standing request from our customers.
Well, today I’m happy to announce that the wait is over! We have been working hard to connect ServiceNow and Puppet in several different ways, making it easier to share data between both platforms. We worked with our customers to determine the specific areas where bi-directional data sharing & automation is needed, and developed integrations that cater to those needs.
We are going to look at 3 brand-new integrations that are available for you today, and we are announcing a 4th integration that we plan to deliver in the future.
Available now are:
An integration that allows you to use ServiceNow as a self-service frontend for controlling Puppet automation. This may eventually just be called the node classifier. ServiceNow publishes a note on how to do this; it's pages and pages long and quite complex. What you will see is a much simpler approach.
An integration that connects planned Puppet changes to the ITIL change management process in ServiceNow
An integration that sends change events to ServiceNow for correlation & analysis, generating incidents when needed
Planned for later is an integration to update the ServiceNow CMDB with data from Puppet; we will come back to that at the end. It looks at asset management, written as ServiceNow actions to ensure it stays supportable.
So let’s start with Self Service.
There’s a very good reason why you’d want an integration like this: not everybody in the company knows how to use Puppet to automate changes. There’s probably a fairly limited number of people in your company today who directly work with Puppet. Others would likely benefit from using Puppet as well, but expecting everyone to get properly trained to do so is unrealistic. So we need to make it far easier to leverage Puppet, which is what this integration does.
Customers would rather use the ServiceNow Integration Hub and have catalog items that they can control. These would be Spokes, some standard offerings, e.g. reboot, patch, install, etc.
In a nutshell, the integration allows you to use ServiceNow as a friendly user interface to control either all or parts of Puppet automation.
For the most user-friendly experience, create custom fields on your CMDB table for anything you’d like to give users direct control over, for example:
Additional packages that can be installed at will
OS kernel settings
Application tuning parameters
This data can then be easily acted upon in your Puppet automation.
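As a sketch of what acting on such a field can look like, assume a hypothetical custom field u_swappiness on the CI and the puppetlabs-stdlib module (for file_line); a real setup might instead use a dedicated sysctl module:

  # Apply a kernel tuning value requested via ServiceNow.
  # u_swappiness is a hypothetical custom field, shown for illustration only.
  $servicenow = $trusted.dig('external', 'servicenow')

  if $servicenow and $servicenow['u_swappiness'] {
    file_line { 'vm.swappiness':
      path  => '/etc/sysctl.conf',
      line  => "vm.swappiness = ${servicenow['u_swappiness']}",
      match => '^vm\.swappiness',
    }
  }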
You can allow direct changes to the fields provided, or make them read-only and use ServiceNow workflows to allow controlled modifications only.
The way the integration works is that it reads the fields for a system directly from ServiceNow and exposes the information as facts.
That makes the content directly usable in your Puppet code, as input for your automation. You remain in complete control over what happens.
For the utmost level of self service, you can create a Puppet Environment and a Puppet Classes field in ServiceNow, turning it into a fully fledged node classifier.
The integration provides built-in logic to parse those 2 specific fields, converting the data into proper Puppet classes and Hiera data automatically.
However, direct control for users over this option would require a bit of Puppet knowledge, so direct control might not always be the best approach there. A good alternative is to have these fields be read-only for users, and use ServiceNow workflows to facilitate controlled modifications to the classification data.
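For orientation, here is a sketch of the classification those two fields enable; the class parameter is hypothetical, added only to illustrate parameterized classes:

  # CI fields in ServiceNow (values illustrative):
  #   Puppet Environment: production
  #   Puppet Classes:     {"role::dbserver": {"max_connections": 200}}
  #
  # Hand-written node classification roughly equivalent to what the
  # integration derives from those fields:
  node 'srv1.company.com' {
    class { 'role::dbserver':
      max_connections => 200,  # hypothetical parameter, for illustration only
    }
  }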
This group presented earlier today in the Puppetize EMEA session. Their key benefit was allowing end users to add packages easily and safely.
Note: this requires PE 2019.3 or higher, since that’s where the trusted external facts capability was added.
Next, let’s take a look at Change Management.
For most of our customers, Puppet is their primary change execution platform. For many of those customers, ServiceNow is their primary change registration & approval platform. Naturally this should be one, interconnected, solution. That’s exactly what this integration provides.
This one is all about the handoffs from one platform to the other and vice versa.
When you propose a Puppet code change, this integration automatically generates a ServiceNow Change Request from the Puppet Impact Analysis report.
Once the Change Request has been approved and reaches the Implement stage, the integration lets ServiceNow orchestrate the promotion of the change into production.
This integration eliminates a number of manual steps in the change process, enabling greater efficiency, improved change documentation and faster cycle times.
This integration is built into the Continuous Delivery for Puppet Enterprise add-on, and uses its Impact Analysis feature to generate the relevant details that need to go into the ServiceNow Change Request.
The integration interacts with the ServiceNow Change Management API to create the Change Request and associate affected systems to the change.
The integration also provides a Business Rule for ServiceNow to automate the promotion of the Puppet code into production. It can even approve deployments for protected environments in CD4PE, for an additional layer of security.
Next, let’s take a look at registering incidents from change events.
While the previous integration deals with planned changes, this integration is for compliance drift corrections: changes that were made to bring a system back to the correct configuration after it drifted out of compliance.
Since there is no way to know when this happens, the Change Management process is not suited for this scenario. Instead, this should be documented as an event, and an incident should be created if the circumstances warrant it. This is definitely best left for robots to do, instead of humans.
With this integration, Puppet forwards summary data of Puppet run results to ServiceNow. This can be in the form of basic incidents (when a corrective change occurred), or, and this is better, you can send the information as events to ServiceNow Event Management. In the latter case, you can create your own Event Rules to correlate events and create consolidated incidents when certain thresholds are crossed. This way, there will be one incident for the same change happening on multiple systems.
The integration comes in the form of a Puppet report processor that automatically analyzes incoming Puppet run reports for changes and failures.
In its basic mode, it will create an incident whenever a corrective change or failure occurs on a system. In its advanced mode, it will send events to ServiceNow Event Management so that you can more finely control what happens to these events, when alerts are generated, and whether that should result in the creation of an incident.
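For orientation, a hedged configuration sketch follows. The class and parameter names follow the pattern of the servicenow_reporting_integration module named later in this deck, but treat every name here as an assumption and check the module's README for the authoritative interface:

  # Hypothetical wiring of the report processor for direct incident creation.
  class { 'servicenow_reporting_integration::incident_management':
    instance  => 'your-instance.service-now.com',  # assumption: your ServiceNow host
    user      => 'puppet_integration',             # hypothetical integration account
    password  => lookup('servicenow_password'),    # hypothetical Hiera key
    caller_id => 'puppet_integration',             # hypothetical caller for incidents
  }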
Finally, I have one more integration to announce. This integration is not yet ready, but planned for the future.
It is intended to provide a better way to keep your CMDB up-to-date, without having to resort to frequent discovery runs.
The biggest struggle of maintaining a CMDB is keeping the information up-to-date. Outdated CMDBs have been a problem for many organizations, and have given rise to discovery & inventory add-ons like ServiceNow Discovery. However, such tools often run agentlessly and are known to cause performance or stability problems on the systems they interrogate. As a result, customers don’t like running these tools frequently.
By contrast, Puppet already knows everything about the systems it manages, and stores that information in its own database.
Updating the CMDB in ServiceNow directly from Puppet’s database would be a lot more efficient than agentless discovery tools.
Puppet automatically collects information about the systems it manages, as part of its normal enforcement runs:
Software Inventory
Standard facts about the hardware and operating system
Custom facts you create yourself
The integration will allow you to select which data you’d like to send to ServiceNow for updating existing CIs.
This reduces the scope for ServiceNow Discovery to just finding new or rogue systems on the network.
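Custom facts are one way to enrich what eventually gets synced. A minimal sketch of publishing a site-specific external fact with Puppet, where the fact name and value are hypothetical:

  # Drop an external fact that Facter picks up on the next agent run; the
  # CMDB sync could then be configured to forward it to ServiceNow.
  file { '/opt/puppetlabs/facter/facts.d/u_datacenter.yaml':
    ensure  => file,
    content => "u_datacenter: ams1\n",  # hypothetical fact name and value
  }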
On the Puppet Enterprise side, a Task will periodically gather the latest information from the Puppet database, and upload it to a staging location on a ServiceNow MID server. On the ServiceNow side, a Puppet App (from the ServiceNow Marketplace) periodically processes the uploaded data and updates fields in the CMDB as necessary.
The three integrations I discussed first are available right now as modules on the Puppet Forge. They come with clear instructions on how to set up each integration.
The CMDB update integration will be delivered as an app in the ServiceNow Marketplace, probably somewhere in 2021. Stay tuned for more info on that as we move forward.