This two-hour workshop will focus on the following:
Improving operational efficiency for managing Windows infrastructure
Applying configuration baselines to Windows Server and IIS web servers
Utilizing PowerShell and Bolt to automate day-to-day management tasks
What's Bolt? Puppet Bolt is the easiest way to get started with DevOps and does not require Puppet knowledge. During this workshop you will use WinRM or SSH to communicate with your server environments.
You will leave this workshop with a working knowledge of Bolt, and your laptop equipped to start tackling automation challenges across your organization.
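The workshop's WinRM/SSH workflow can be sketched with a few Bolt invocations. This is an illustrative fragment, not workshop material: the hostnames, credentials, and `baseline.ps1` script are placeholders.

```shell
# Run an ad-hoc command over WinRM (hostname and user are placeholders)
bolt command run 'Get-Service W3SVC' --targets web01.example.com \
  --transport winrm --user Administrator --password-prompt

# Run an existing PowerShell script against several Windows targets
bolt script run ./baseline.ps1 --targets web01.example.com,web02.example.com \
  --transport winrm --user Administrator --password-prompt

# The same pattern works over SSH for Linux targets
bolt command run 'uptime' --targets lnx01.example.com --user admin
```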
Leveraging Azure DevOps across the Enterprise (Andrew Kelleher)
In this presentation we explore how teams across the enterprise can leverage Azure DevOps by diving into its different capabilities and services, specifically in the context of Azure platform teams applying agile and DevOps practices when deploying and supporting services within Azure.
From development environments to production deployments with Docker, Compose,... (Jérôme Petazzoni)
In this session, we will learn how to define and run multi-container applications with Docker Compose. Then, we will show how to deploy and scale them seamlessly to a cluster with Docker Swarm, and how Amazon EC2 Container Service (ECS) eliminates the need to install, operate, and scale your own cluster management infrastructure. We will also walk through some best practice patterns used by customers for running their microservices platforms or batch jobs. Sample code and Compose templates will be provided on GitHub afterwards.
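A multi-container application of the kind the session describes is defined in a single Compose file. The sketch below is illustrative (service names and images are assumptions, not the session's sample code):

```yaml
# docker-compose.yml -- illustrative two-service application
version: "3.8"
services:
  web:
    build: .            # build the app image from the local Dockerfile
    ports:
      - "8080:80"       # host:container port mapping
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```

With this file in place, `docker compose up -d` starts both services, and `docker compose up -d --scale web=3` runs multiple replicas of the web service.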
This presentation by Serhii Abanichev (System Architect, Consultant, GlobalLogic) was delivered at GlobalLogic Kharkiv DevOps TechTalk #1 on October 8, 2019.
This talk covered:
- Full coverage of DevOps with Azure DevOps Services:
- Create, test and deploy in any programming language, to any cloud or local environment.
- Run concurrently on Linux, macOS, and Windows, deploying containers for individual hosts or Kubernetes.
- Azure DevOps Services: a Microsoft solution that replaces dozens of tools, ensuring smooth delivery to end users.
Event materials: https://www.globallogic.com/ua/events/kharkiv-devops-techtalk-1/
This document discusses Docker Registry API V2, a new model for image distribution that addresses limitations in the previous V1 API. Key changes include making layers content-addressable using cryptographic digests for identification and verification. Images are now described by manifests containing layer digests. The registry stores content in repositories and no longer exposes internal image details. Early adoption shows V2 providing significantly better performance than V1 with 80% fewer requests and 60% less bandwidth used. Future goals include improving documentation, adding features like pull-through caching, and developing the Docker distribution components to provide a foundation for more advanced distribution models.
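The core of the V2 model is that a layer's identifier *is* a cryptographic digest of its content, so the same computation serves both addressing and verification. A minimal sketch of that idea (the blob bytes here are made up; real blobs are compressed layer tarballs):

```python
import hashlib

def layer_digest(blob: bytes) -> str:
    """Compute a Registry-V2-style content-addressable digest for a layer blob."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# A made-up layer blob; in practice this is a compressed layer tarball.
blob = b"example layer contents"
digest = layer_digest(blob)
print(digest)

# Verification on pull: recompute the digest and compare with the manifest entry.
assert layer_digest(blob) == digest          # content matches the manifest
assert layer_digest(b"tampered") != digest   # any change yields a different digest
```

Because the digest is derived purely from content, identical layers are deduplicated across images, and a manifest listing these digests fully describes an image.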
Learn all aspects of Maven step by step, enhance your skills, and launch your career. On-demand course at an affordable price, with classes on virtually every topic. Try before you buy.
Today I gave a presentation on the DevOps workflow and build pipeline. I talked about why you and your team might want to employ it, and gave a demo of how to create one using Jenkins. Here are the slides
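A Jenkins build pipeline like the one demoed is typically declared in a Jenkinsfile checked into the repository. This is a generic sketch, not the demo's actual pipeline; the Maven commands and `deploy.sh` script are assumptions:

```groovy
// Jenkinsfile -- minimal declarative pipeline sketch (stage commands are illustrative)
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('Deploy') {
            when { branch 'main' }          // deploy only from the main branch
            steps { sh './deploy.sh' }      // hypothetical deployment script
        }
    }
}
```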
Using Azure DevOps to continuously build, test, and deploy containerized appl... (Adrian Todorov)
Using Azure DevOps and containers, developers can continuously build, test, and deploy applications to Kubernetes with ease. Azure DevOps provides tools for continuous integration, release management, and monitoring that integrate well with containerized applications on Kubernetes. Developers benefit from being able to focus on writing code while operations manages the infrastructure. Azure Kubernetes Service (AKS) makes it simple to deploy and manage Kubernetes clusters in Azure without having to worry about installing or maintaining the Kubernetes master components.
Docker is an open platform for developing, shipping, and running applications. It allows separating applications from infrastructure and treating infrastructure like code. Docker provides lightweight containers that package code and dependencies together. The Docker architecture includes images that act as templates for containers, a client-server model with a daemon, and registries for storing images. Key components that enable containers are namespaces, cgroups, and capabilities. The Docker ecosystem includes services like Docker Hub, Docker Swarm for clustering, and Docker Compose for orchestration.
Microsoft recently released Azure DevOps, a set of services that help developers and IT ship software faster, and with higher quality. These services cover planning, source code, builds, deployments, and artifacts.
One of the great things about Azure DevOps is that it works with any app, on any platform, regardless of framework.
In this session, I will give you a quick overview of what Azure DevOps is and how you can quickly get started and incorporate it into your continuous integration and deployment processes.
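Getting started with Azure DevOps builds usually means committing an `azure-pipelines.yml` to the repository. The fragment below is a hedged sketch (the `build.sh` and `run-tests.sh` scripts are placeholders for your own build steps):

```yaml
# azure-pipelines.yml -- minimal CI sketch (script steps are illustrative)
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: ./build.sh       # hypothetical build script
    displayName: Build
  - script: ./run-tests.sh   # hypothetical test script
    displayName: Test
```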
What Is A Docker Container? | Docker Container Tutorial For Beginners | Docker... (Simplilearn)
This presentation on Docker containers will help you understand what Docker is, the architecture of Docker, what a Docker container is, how to create a Docker container, the benefits of Docker containers, and basic container commands; you will also see a demo of creating a Docker container. Docker is a very lightweight software container and containerization platform. Docker containers provide a way to run software in isolation. Docker is an open source platform that packages an application and its dependencies into a container for software development and deployment, and a Docker container is a portable executable package that includes applications and their dependencies. With Docker containers, applications can work efficiently across different computing environments.
The following topics are covered in this Docker container presentation:
1. What is Docker?
2. The architecture of Docker
3. What is a Docker Container?
4. How to create a Docker Container?
5. Benefits of Docker Containers
6. Basic commands of Containers
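The basic container commands from the list above follow a build-run-inspect-clean-up cycle. A hedged sketch (the image and container names are illustrative):

```shell
docker build -t myapp:1.0 .                      # build an image from a Dockerfile
docker run -d -p 8080:80 --name web myapp:1.0    # start a container from the image
docker ps                                        # list running containers
docker logs web                                  # inspect container output
docker stop web && docker rm web                 # stop and remove the container
```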
Simplilearn's DevOps Certification Training Course will prepare you for a career in DevOps, the fast-growing field that bridges the gap between software developers and operations. You’ll become an expert in the principles of continuous development and deployment, automation of configuration management, inter-team collaboration and IT service agility, using modern DevOps tools such as Git, Docker, Jenkins, Puppet and Nagios. DevOps jobs are highly paid and in great demand, so start on your path today.
Why learn DevOps?
Simplilearn’s DevOps training course is designed to help you become a DevOps practitioner and apply the latest in DevOps methodology to automate your software development lifecycle right out of the class. You will master configuration management and continuous integration, deployment, delivery and monitoring using DevOps tools such as Git, Docker, Jenkins, Puppet and Nagios in a practical, hands-on and interactive approach. The DevOps training course focuses heavily on the use of Docker containers, a technology that is revolutionizing the way apps are deployed in the cloud today and is a critical skillset to master in the cloud age.
After completing the DevOps training course you will achieve hands-on expertise in various aspects of the DevOps delivery model. The practical learning outcomes of this DevOps training course are:
An understanding of DevOps and the modern DevOps toolsets
The ability to automate all aspects of a modern code delivery and deployment pipeline using:
1. Source code management tools
2. Build tools
3. Test automation tools
4. Containerization through Docker
5. Configuration management tools
6. Monitoring tools
DevOps jobs are the third-highest tech role ranked by employer demand on Indeed.com but have the second-highest talent deficit.
Learn more at https://www.simplilearn.com/cloud-computing/devops-practitioner-certification-training
Building and Evolving a Dependency-Graph Based Microservice Architecture (La...) (confluent)
With the rising adoption of stream- and event-driven processing, microservice architectures are becoming more and more complex. One challenge that many businesses face during the initial and ongoing development of these solutions is how to properly model and maintain dependencies between microservices. One specific example, used throughout this talk, is the cleansing and enrichment of data that has been ingested into a streaming platform. For most use cases there are a lot of minor tasks that need to be performed on every piece of data before it is fully usable for processing. Some common examples are: normalize phone numbers, normalize street addresses, geocode addresses, look up customer data and enrich the record, and so on. Most of these tasks are completely independent of each other, but some have dependencies to be run before or after other tasks - geocoding, for example, should be done only after address normalization has finished. Defining and orchestrating a complex graph of these operations is no small feat. This talk will focus on outlining the requirements and challenges that need to be solved when trying to implement a flexible framework for this use case. It will then build on these requirements to present the blueprint of a generic solution and show how Kafka and Kafka Streams are a perfect fit to address and overcome most challenges. This talk, while offering some technical details, is mostly targeted at people at the architecture, rather than the code, level. Listeners will gain a thorough understanding of the challenges that stream processing brings, and will also be provided with generic patterns that can be used to solve these challenges in their specific infrastructure.
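The before/after constraints the talk describes (geocoding only after address normalization) form a directed acyclic graph, so a valid execution order can be derived by topological sorting. The sketch below illustrates the idea with the talk's task names; the implementation is my own, not the talk's framework:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Enrichment tasks mapped to their prerequisites, following the talk's example:
# geocoding depends on address normalization; most tasks are independent.
dependencies = {
    "normalize_phone": set(),
    "normalize_address": set(),
    "geocode_address": {"normalize_address"},
    "lookup_customer": set(),
    "enrich_record": {"lookup_customer"},
}

# static_order() yields the tasks in an order that respects every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
assert order.index("normalize_address") < order.index("geocode_address")
```

In a streaming setting, each independent task can become its own processing stage, with the sorted graph dictating how the stages are chained.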
Learn how Azure DevOps has empowered Horizons LIMS to streamline their collaboration and CI/CD process to accelerate their enterprise digital transformation. You will also hear about the latest Azure DevOps features and how to integrate DevOps with GitHub and Jenkins, and leverage transformation workloads like Kubernetes and Microsoft Common Data Service to deliver products and services faster.
This document summarizes CI/CD on AWS by Bhargav Amin. It introduces DevOps practices like continuous integration, continuous delivery, and continuous deployment. It explains how to design a CI/CD pipeline and create one on AWS using services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The document provides examples of integrating these services to automate building, testing, and deploying code changes. It also includes a link to a demo repository and discusses managing infrastructure with CI/CD by updating CloudFormation templates in a pipeline.
Jenkins is an open-source tool for continuous integration that was originally developed as the Hudson project. It allows developers to commit code frequently to a shared repository, where Jenkins will automatically build and test the code. Jenkins is now the leading replacement for Hudson since Oracle stopped maintaining Hudson. It helps teams catch issues early and deliver software more rapidly through continuous integration and deployment.
DevOps core principles
CI/CD basics
CI/CD with an ASP.NET Core Web API and Angular app
IaC: why and what?
Demo using Azure and Azure DevOps
Docker: why and what?
Demo using Azure and Azure DevOps
Kubernetes: why and what?
Demo using Azure and Azure DevOps
The document provides an overview of Jenkins, a popular open source continuous integration (CI) tool. It discusses what CI is, describes Jenkins' architecture and features like plugin extensibility. It also covers installing and configuring Jenkins, including managing plugins, nodes and jobs. The document demonstrates how to set up a sample job and outlines benefits like supporting Agile development through continuous integration and access to working software copies.
DevOps originated from the Toyota Production System which pioneered lean manufacturing practices like just-in-time production and continuous improvement. These concepts influenced early software development methodologies like agile, Scrum, and extreme programming. As software development aimed to deliver value faster, operations struggled to keep up, highlighting the need for closer collaboration between development and operations teams. In 2008, Patrick Debois coined the term "DevOps" to describe this integration. Since then, DevOps adoption has grown significantly, though its core goals of empowering employees, delivering value, and embracing change remain the same.
By attending this webinar, you will learn from the product developers what WSO2 Enterprise Integrator 7.1.0 is and what features it brings to enable integration with a seamless developer experience. Key features include:
- Support for both centralized ESB and microservices-based deployments
- Streaming ETL support with CDC, file scraping, flow monitoring and more
- New observability solution based on Grafana, Prometheus, Jaeger, and Loki
- A CI/CD pipeline using Docker, Jenkins, Kubernetes and more
- New connectors for CSV transformation, Azure Data Lake and more
- Improvements to WSO2 Integration Studio (Tooling) UI and connector configuration view
On-demand webinar: https://wso2.com/library/webinars/wso2-enterprise-integrator-7-1-0-release/
Docker is a tool that allows users to package applications into containers to run on Linux servers. Containers provide isolation and resource sharing benefits compared to virtual machines. Docker simplifies deployment of containers by adding images, repositories and version control. Popular components include Dockerfiles to build images, Docker Hub for sharing images, and Docker Compose for defining multi-container apps. Docker has gained widespread adoption due to reducing complexity of managing containers across development and operations teams.
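Of the components mentioned above, the Dockerfile is the starting point: it declares how an image is built, layer by layer. An illustrative sketch (the base image and application files are assumptions):

```dockerfile
# Dockerfile -- minimal image sketch (base image and app files are illustrative)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Each instruction produces an image layer, which is what makes rebuilds incremental and images shareable via Docker Hub.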
Docker 101 - High level introduction to Docker (Dr Ganesh Iyer)
This document provides an overview of Docker containers and their benefits. It begins by explaining what Docker containers are, noting that they wrap up software code and dependencies into lightweight packages that can run consistently on any hardware platform. It then discusses some key benefits of Docker containers like their portability, efficiency, and ability to eliminate compatibility issues. The document provides examples of how Docker solves problems related to managing multiple software stacks and environments. It also compares Docker containers to virtual machines. Finally, it outlines some common use cases for Docker like application development, CI/CD workflows, microservices, and hybrid cloud deployments.
DevOps is a set of practices intended to reduce the time between committing a change to a system and deploying it to production while ensuring high quality. It focuses on bridging the gap between developers and operations teams. Key DevOps principles include systems thinking, amplifying feedback loops, and a culture of experimentation. DevOps aims to achieve continuous delivery through practices like automated deployments, infrastructure as code, and deployment strategies like blue-green deployments and rolling upgrades.
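The blue-green strategy mentioned above keeps two complete environments and moves traffic between them in one atomic step, leaving the old environment available for rollback. A toy sketch of that mechanism (all names and versions are illustrative, not a real deployment tool):

```python
class BlueGreenRouter:
    """Toy model of blue-green deployment: two environments, one live pointer."""

    def __init__(self) -> None:
        self.environments = {"blue": "v1.0", "green": None}
        self.live = "blue"  # traffic currently goes to blue

    def idle(self) -> str:
        """The environment not currently receiving traffic."""
        return "green" if self.live == "blue" else "blue"

    def deploy(self, version: str) -> None:
        """Deploy the new version to the idle environment only."""
        self.environments[self.idle()] = version

    def switch(self) -> None:
        """Atomically cut traffic over; the old environment remains for rollback."""
        self.live = self.idle()

router = BlueGreenRouter()
router.deploy("v2.0")   # green now runs v2.0 while blue still serves v1.0
router.switch()         # traffic cuts over to green in a single step
print(router.live, router.environments[router.live])
```

Calling `switch()` again would roll back to blue, which still holds the previous version.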
Four Strategies to Create a DevOps Culture & System that Favors Innovation & ... (Amazon Web Services)
The document discusses strategies for creating a DevOps culture and system that fosters innovation and customer obsession. It recommends forming a cross-functional reliability team to gather user experience data, perform correlational analysis, and devise plans for building and maintaining resiliency. The document also advocates investing in continuous delivery, agile methodology, and intelligent analytics to achieve optimized software quality, velocity, and costs while driving business value.
Bolt Workshop virtual event on May 5th, 2020. The workshop will introduce Bolt, an agentless automation tool from Puppet. Attendees will learn how to use Bolt to run commands, scripts, tasks and plans across Linux and Windows nodes. The document provides an agenda for the workshop including an introduction to Bolt's capabilities and functionality. Links are also provided for workshop files and materials.
Learn how to use Bolt in an interactive workshop with hands-on labs.
Join us for an interactive, virtual Bolt workshop on 28 April 2020. You’ll learn how to install Bolt and perform common Bolt activities, and you'll leave with your laptop Puppet-ready, with Bolt + PDK + Puppet Agent + VS Code. Plus, you’ll get to speak with experts from Puppet and the community.
What's Bolt? Bolt is an open source, agentless multi-platform automation tool that reduces your time to automation and makes it easier to get started with DevOps. Bolt makes automation much more accessible without requiring any Puppet knowledge, agents, or master. It uses SSH or WinRM to communicate and execute tasks on remote systems.
Your teams can perform various tasks like starting and stopping services, rebooting remote systems, and gathering package and system facts from your workstation or laptop on any platform (Linux and Windows).
Docker is an open platform for developing, shipping, and running applications. It allows separating applications from infrastructure and treating infrastructure like code. Docker provides lightweight containers that package code and dependencies together. The Docker architecture includes images that act as templates for containers, a client-server model with a daemon, and registries for storing images. Key components that enable containers are namespaces, cgroups, and capabilities. The Docker ecosystem includes services like Docker Hub, Docker Swarm for clustering, and Docker Compose for orchestration.
Microsoft recently released Azure DevOps, a set of services that help developers and IT ship software faster, and with higher quality. These services cover planning, source code, builds, deployments, and artifacts.
One of the great things about Azure DevOps is that it works great for any app and on any platform regardless of frameworks.
In this session, I will give you a quick overview of what Azure DevOps is and how you can quickly get started and incorporate it into your continuous integration and deployment processes.
What Is A Docker Container? | Docker Container Tutorial For Beginners| Docker...Simplilearn
This presentation on Docker Container will help you understand what is Docker, the architecture of Docker, what is a Docker Container, how to create a Docker Container, benefits of Docker Container, basic commands of Containers and you will also see a demo on creating Docker Container. Docker is a very lightweight software container and containerization platform. Docker containers provide a way to run software in isolation. It is an open source platform that helps to package an application and its dependencies into a Docker container for the development and deployment of software and a Docker COntainer is a portable executable package which includes applications and their dependencies. With Docker Containers, applications can work efficiently in different computer environments.
Below DevOps tools are explained in this Docker Container presentation:
1. What is Docker?
2. The architecture of Docker?
3. What is a Docker Container?
4. How to create a Docker Container?
5. Benefits of Docker Containers
6. Basic commands of Containers
Simplilearn's DevOps Certification Training Course will prepare you for a career in DevOps, the fast-growing field that bridges the gap between software developers and operations. You’ll become an expert in the principles of continuous development and deployment, automation of configuration management, inter-team collaboration and IT service agility, using modern DevOps tools such as Git, Docker, Jenkins, Puppet and Nagios. DevOps jobs are highly paid and in great demand, so start on your path today.
Why learn DevOps?
Simplilearn’s DevOps training course is designed to help you become a DevOps practitioner and apply the latest in DevOps methodology to automate your software development lifecycle right out of the class. You will master configuration management; continuous integration deployment, delivery and monitoring using DevOps tools such as Git, Docker, Jenkins, Puppet and Nagios in a practical, hands-on and interactive approach. The DevOps training course focuses heavily on the use of Docker containers, a technology that is revolutionizing the way apps are deployed in the cloud today and is a critical skillset to master in the cloud age.
After completing the DevOps training course you will achieve hands-on expertise in various aspects of the DevOps delivery model. The practical learning outcomes of this Devops training course are:
An understanding of DevOps and the modern DevOps toolsets
The ability to automate all aspects of a modern code delivery and deployment pipeline using:
1. Source code management tools
2. Build tools
3. Test automation tools
4. Containerization through Docker
5. Configuration management tools
6. Monitoring tools
DevOps jobs are the third-highest tech role ranked by employer demand on Indeed.com but have the second-highest talent deficit.
Learn more at https://www.simplilearn.com/cloud-computing/devops-practitioner-certification-training
Building and Evolving a Dependency-Graph Based Microservice Architecture (La...confluent
With the rising adoption of stream- and event-driven processing microservice architectures are becoming more and more complex. One challenge that many businesses face during the initial and ongoign development of these solutions is how to properly model and maintain dependencies between microservices. One specific example for this that will be used throughout this talk cleansing and enrichment of data that has been ingested into a streaming platform. For most use cases there are a lot of minor tasks that need to be performed on every piece of data before it is fully usable for processing. Some common examples are: normalize phone numbers, normalize street addresses, geocode addresses, lookup customer data and enrich record, ... Most of these tasks are completely independent of each other, but some have dependencies to be run before or after other tasks - geocoding, for example, should be done only after address normalization has finished. Defining and orchestrating a complex graph of these operations is no small feat. This talk will focus on outlining the requirements and challenges that one needs to solve when trying to implement a flexible framework for solving this use case. It will then build on these requirements and present the blueprint of a generic solution and show how Kafka and Kafka Streams are a perfect fit to address and overcome most challenges. This talk, while offering some technical details is mostly targeted at people at the architecture, rather than the code, level. Listeners will gain a thorough understanding of the challenges that stream processing offers but also be provided with generic patterns that can be used to solve these challenges in their specific infrastructure.
Learn how Azure DevOps has empowered Horizons LIMS to streamline their collaboration and CI / CD process to accelerate their enterprise digital transformation. You will also hear about the latest Azure DevOps features and how to integrate DevOps with GetHub, Jenkins, and leverage transformation workloads like Kubernetes and Microsoft Common Data Service to deliver products and services faster.
This document summarizes CI/CD on AWS by Bhargav Amin. It introduces DevOps practices like continuous integration, continuous delivery, and continuous deployment. It explains how to design a CI/CD pipeline and create one on AWS using services like CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The document provides examples of integrating these services to automate building, testing, and deploying code changes. It also includes a link to a demo repository and discusses managing infrastructure with CI/CD by updating CloudFormation templates in a pipeline.
Jenkins is an open-source tool for continuous integration that was originally developed as the Hudson project. It allows developers to commit code frequently to a shared repository, where Jenkins will automatically build and test the code. Jenkins is now the leading replacement for Hudson since Oracle stopped maintaining Hudson. It helps teams catch issues early and deliver software more rapidly through continuous integration and deployment.
Devops core principles
CI/CD basics
CI/CD with asp.net core webapi and Angular app
Iac Why and What?
Demo using Azure and Azure Devops
Docker why and what ?
Demo using Azure and Azure Devops
Kubernetes why and what?
Demo using Azure and Azure Devops
The document provides an overview of Jenkins, a popular open source continuous integration (CI) tool. It discusses what CI is, describes Jenkins' architecture and features like plugin extensibility. It also covers installing and configuring Jenkins, including managing plugins, nodes and jobs. The document demonstrates how to set up a sample job and outlines benefits like supporting Agile development through continuous integration and access to working software copies.
DevOps originated from the Toyota Production System which pioneered lean manufacturing practices like just-in-time production and continuous improvement. These concepts influenced early software development methodologies like agile, Scrum, and extreme programming. As software development aimed to deliver value faster, operations struggled to keep up, highlighting the need for closer collaboration between development and operations teams. In 2008, Patrick Debois coined the term "DevOps" to describe this integration. Since then, DevOps adoption has grown significantly, though its core goals of empowering employees, delivering value, and embracing change remain the same.
By attending this webinar, you will be able to learn from the product developers on what WSO2 Enterprise Integrator 7.1.0 is, and what features it brings in to cater to integration with seamless developer experience. Key features include:
- Support for both centralized ESB and microservices-based deployments
- Streaming ETL support with CDC, file scraping, flow monitoring and more
- New observability solution based on Grafana, Prometheus, Jaeger, and Loki
- A CI/CD pipeline using Docker, Jenkins, Kubernetes and more
- New connectors for CSV transformation, Azure Data Lake and more
- Improvements to WSO2 Integration Studio (Tooling) UI and connector configuration view
On-demand webinar: https://wso2.com/library/webinars/wso2-enterprise-integrator-7-1-0-release/
Docker is a tool that allows users to package applications into containers to run on Linux servers. Containers provide isolation and resource sharing benefits compared to virtual machines. Docker simplifies deployment of containers by adding images, repositories and version control. Popular components include Dockerfiles to build images, Docker Hub for sharing images, and Docker Compose for defining multi-container apps. Docker has gained widespread adoption due to reducing complexity of managing containers across development and operations teams.
Docker 101 - High level introduction to dockerDr Ganesh Iyer
This document provides an overview of Docker containers and their benefits. It begins by explaining what Docker containers are, noting that they wrap up software code and dependencies into lightweight packages that can run consistently on any hardware platform. It then discusses some key benefits of Docker containers like their portability, efficiency, and ability to eliminate compatibility issues. The document provides examples of how Docker solves problems related to managing multiple software stacks and environments. It also compares Docker containers to virtual machines. Finally, it outlines some common use cases for Docker like application development, CI/CD workflows, microservices, and hybrid cloud deployments.
DevOps is a set of practices intended to reduce the time between committing a change to a system and deploying it to production while ensuring high quality. It focuses on bridging the gap between developers and operations teams. Key DevOps principles include systems thinking, amplifying feedback loops, and a culture of experimentation. DevOps aims to achieve continuous delivery through practices like automated deployments, infrastructure as code, and deployment strategies like blue-green deployments and rolling upgrades.
Four Strategies to Create a DevOps Culture & System that Favors Innovation & ..., Amazon Web Services
The document discusses strategies for creating a DevOps culture and system that fosters innovation and customer obsession. It recommends forming a cross-functional reliability team to gather user experience data, perform correlational analysis, and devise plans for building and maintaining resiliency. The document also advocates investing in continuous delivery, agile methodology, and intelligent analytics to achieve optimized software quality, velocity, and costs while driving business value.
Bolt Workshop virtual event on May 5th, 2020. The workshop will introduce Bolt, an agentless automation tool from Puppet. Attendees will learn how to use Bolt to run commands, scripts, tasks and plans across Linux and Windows nodes. The document provides an agenda for the workshop including an introduction to Bolt's capabilities and functionality. Links are also provided for workshop files and materials.
Learn how to use Bolt in an interactive workshop with hands-on labs.
Join us for an interactive, virtual Bolt workshop on 28 April 2020. You’ll learn how to install Bolt and perform common Bolt activities, and you'll leave with your laptop Puppet-ready: Bolt + PDK + Puppet Agent + VS Code. Plus, you’ll get to speak with experts from Puppet and the community.
What's Bolt? Bolt is an open source, agentless multi-platform automation tool that reduces your time to automation and makes it easier to get started with DevOps. Bolt makes automation much more accessible without requiring any Puppet knowledge, agents, or master. It uses SSH or WinRM to communicate and execute tasks on remote systems.
Your teams can perform tasks like starting and stopping services, rebooting remote systems, and gathering package and system facts, all from your workstation or laptop on any platform (Linux or Windows).
Bolt provides agentless automation capabilities to execute commands, scripts, tasks, and plans against remote targets. It allows authentication via SSH, WinRM, or PCP and supports running automation in any language the remote system supports. The document discusses setting up an environment for Bolt workshops, including creating a Boltdir directory and configuration files. It also covers various Bolt capabilities like commands, scripts, tasks, plans, and applying Puppet manifests, as well as cross-platform automation and connecting to Puppet Enterprise for desired state management.
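The Boltdir and configuration files mentioned above might look like the following sketch (the transport settings shown are placeholders; real credentials and host-key policy should come from your environment):

```yaml
# Boltdir/bolt.yaml -- project-level Bolt configuration
format: human
ssh:
  user: admin
  host-key-check: false     # lab setting only; keep host-key checks on in production
winrm:
  user: Administrator
  ssl: false                # lab setting only
```

With this in place, a command such as `bolt command run 'uptime' --targets web01.example.com` would run over SSH using these defaults.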
Puppet Virtual Bolt Workshop - 23 April 2020 (Singapore), Puppet
Bolt can be used to execute agentless automation against remote hosts. It allows running commands, scripts, tasks, and plans on targets via SSH, WinRM, or PCP without requiring any agents. The workshop covers using Bolt commands, scripts, tasks, and plans. It teaches converting scripts to tasks and tasks to plans. Participants learn to use bolt.yaml for configuration, inventory files for targets, and Puppetfiles to manage dependencies. Later labs cover applying Puppet manifests with Bolt and building cross-platform plans. The recap emphasizes the progression from interactive tools to reusable automation and leveraging existing modules and Puppet Enterprise.
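The inventory files referred to above group targets and assign transports; a minimal sketch (all hostnames are made up for illustration):

```yaml
# Boltdir/inventory.yaml -- groups of targets and their transports
groups:
  - name: linux
    targets:
      - web01.example.com
      - web02.example.com
    config:
      transport: ssh
  - name: windows
    targets:
      - win01.example.com
    config:
      transport: winrm
      winrm:
        user: Administrator
```

Group names then act as targets, e.g. `bolt command run 'Get-Service W3SVC' --targets windows`.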
This document provides an overview of a Bolt workshop covering the use of Bolt to execute commands, scripts, tasks, plans, and Puppet manifests across Linux and Windows systems. The workshop includes labs on running basic Bolt commands, using Bolt configuration files and inventory, converting scripts to tasks, writing Bolt tasks with metadata, creating and running a Bolt plan, applying a Puppet manifest, and developing a cross-platform Bolt plan. Attendees will learn how to progress from interactive commands to reusable automation using Bolt and leverage existing Puppet modules and desired state configuration. Connecting Bolt automation to Puppet Enterprise is discussed to allow continuous enforcement of infrastructure as code.
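The "tasks with metadata" step can be sketched as a script plus a JSON metadata file in a module's `tasks/` directory; the module, task, and parameter names below are invented for illustration. Bolt passes task parameters to shell scripts as `PT_<name>` environment variables:

```bash
#!/bin/bash
# tasks/restart_app.sh -- task body; Bolt supplies $PT_service from the parameters
systemctl restart "$PT_service"
```

```json
{
  "description": "Restart an application service",
  "parameters": {
    "service": {
      "description": "Name of the service to restart",
      "type": "String"
    }
  }
}
```

Saved as `tasks/restart_app.json` alongside the script, the task would be run with `bolt task run mymodule::restart_app service=nginx --targets linux`.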
Virtual Puppet Ecosystem Workshop - March 18, 2020, Puppet
Join us for a hands-on virtual Puppet workshop exploring our open source tools and products, including Bolt, Puppet Remediate, and Project Nebula. This event will be held on 18 March from 1:00 p.m. - 3:00 p.m. CST.
In this virtual workshop, you can expect to learn how to use Puppet tools to automate away repetitive tasks in your Windows and Linux environments. Plus, you’ll get to mingle with experts from Puppet and the community.
This document provides an overview of a Bolt workshop that will be held virtually on April 1, 2020. It introduces two presenters, Stephen P Potter and Josef Singer, and provides information about their backgrounds and areas of focus. The document also provides instructions for submitting questions during the webinar and notes that presentation materials will be shared after the event.
DevOps Automation with Puppet Bolt & Puppet Enterprise, Eficode
Learn how you can easily automate complex application deployments with Puppet Bolt and ensure continuous compliance in day-to-day operations with Puppet Enterprise. Presented at Eficode's DevOps Tooling Morning 2019.
This document provides an agenda for a Bolt workshop on April 8, 2020. It introduces the presenters - John Laffey, Jerry Mozes, Matt Stone, and Ryan Russell-Yates. It discusses using Bolt to automate tasks across operating systems like Linux and Windows. The workshop covers using Bolt commands, scripts, and configuration files to efficiently run automation. Future sessions will cover Bolt tasks, plans, and using Puppet modules with Bolt for declarative automation.
This document provides an introduction to PowerShell for database developers. It begins by stating the goals of the presentation which are to amaze with PowerShell capabilities, convince that PowerShell is needed, provide a basic understanding of PowerShell programming, and point to support resources. It then provides an overview of what PowerShell is, including its history and why Windows needed a shell. It discusses PowerShell concepts like cmdlets, variables, operators, loops, and functions. It also provides examples of PowerShell scripts and best practices. Throughout it emphasizes PowerShell's power and integration with Windows and databases.
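The concepts listed (cmdlets, variables, loops, functions) can be sketched in a few lines of PowerShell using standard cmdlets only; the function name and threshold are illustrative:

```powershell
# A function that wraps a pipeline of built-in cmdlets
function Get-TopProcesses {
    param([int]$Count = 5)                          # parameter with a default value
    Get-Process |
        Sort-Object CPU -Descending |
        Select-Object -First $Count -Property Name, CPU
}

# Variables and a loop over the results
$procs = Get-TopProcesses -Count 3
foreach ($p in $procs) {
    Write-Output ("{0} used {1:N1} CPU seconds" -f $p.Name, $p.CPU)
}
```

The pipeline passing objects (not text) between cmdlets is the key idea the talk builds on.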
Jenkins Pipeline allows automating the process of software delivery with continuous integration and deployment. It uses Jenkinsfiles to define the build pipeline through stages like build, test and deploy. Jenkinsfiles can be written declaratively using a domain-specific language or scripted using Groovy. The pipeline runs on agent nodes and is composed of stages containing steps. Maven is a build tool that manages Java projects and dependencies through a POM file. The POM defines project properties, dependencies, plugins and profiles to customize builds.
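The build/test/deploy stages described above can be sketched as a declarative Jenkinsfile; the deploy script and branch name are assumptions for illustration:

```groovy
// Jenkinsfile -- declarative pipeline with the stages named in the summary
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'mvn -B -DskipTests package' }   // Maven build driven by the POM
        }
        stage('Test') {
            steps { sh 'mvn test' }
        }
        stage('Deploy') {
            when { branch 'main' }                      // deploy only from the main branch
            steps { sh './deploy.sh' }                  // hypothetical deploy script
        }
    }
}
```

Checked into the repository root, this file is picked up automatically by a Jenkins multibranch pipeline job.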
Lean Drupal Repositories with Composer and Drush, Pantheon
Composer is the industry-standard PHP dependency manager that is now in use in Drupal 8 core. This session will show the current best practices for using Composer, drupal-composer, drupal-scaffold, Drush, Drupal Console and Drush site-local aliases to streamline your Drupal 7 and Drupal 8 site repositories for optimal use on teams.
This document provides an overview of Kubernetes 101. It begins with asking why Kubernetes is needed and provides a brief history of the project. It describes containers and container orchestration tools. It then covers the main components of Kubernetes architecture including pods, replica sets, deployments, services, and ingress. It provides examples of common Kubernetes manifest files and discusses basic Kubernetes primitives. It concludes with discussing DevOps practices after adopting Kubernetes and potential next steps to learn more advanced Kubernetes topics.
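A minimal example of the manifest files mentioned above, wiring a Deployment (3 pod replicas) to a Service; names and the image tag are illustrative:

```yaml
# deployment.yaml -- a Deployment managing 3 pod replicas, plus a Service in front
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      containers:
        - name: web
          image: nginx:1.25          # example image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector: { app: web }
  ports:
    - port: 80
```

Applied with `kubectl apply -f deployment.yaml`; the Service load-balances across whichever pods currently carry the `app: web` label.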
VMworld 2016: Getting Started with PowerShell and PowerCLI for Your VMware En..., VMworld
This document provides an overview and introduction to PowerShell and PowerCLI for managing VMware environments. It discusses what PowerShell and PowerCLI are, important terminology like modules and functions, how to set them up and configure profiles, and examples of how to start coding with PowerShell including gathering data, writing logic statements, and using cmdlets safely. The presenters are introduced and an agenda is provided covering these topics at a high level to get started with PowerShell and PowerCLI.
Continuous Integration with Open Source Tools - PHPUgFfm 2014-11-20, Michael Lihs
Presentation about open source tools to set up continuous integration and continuous deployment. Covers Git, Gitlab, Chef, Vagrant, Jenkins, Gatling, Dashing, TYPO3 Surf and some other tools. Shows some best practices for testing with Behat and Functional Testing.
This document provides an overview of build tools and focuses on Maven. It defines what build tools do, such as automating the process of compiling source code and packaging binaries. It discusses different build tools for various programming languages and frameworks. The document then describes Maven in more detail, covering its history, plugins, project object model (POM), dependencies, lifecycles, and an example command.
This document describes eBay's use of Fluo for continuous integration and deployment using OpenStack. Fluo provides a single interface for configuring, building, testing, and deploying code changes. It provisions instances on OpenStack to run tasks defined in a configuration file like running tests, building packages, and deploying code. Fluo replicates code, packages, and configuration management code across regions and datacenters. It supports common workflows from code review through integration testing, releases, and periodic jobs. Fluo aims to provide a fully automated and scalable continuous delivery system to deploy code changes to eBay's global infrastructure on OpenStack.
One commit, one release. Continuously delivering a Symfony project, Javier López
For the last few months we've been implementing a Continuous Delivery pipeline for the redesign of Time Out. In this talk I will demonstrate a real life example of what our pipeline looks like, the different tools we've used to get it done (phing, github, jenkins, ansible, AWS S3, ...), and peculiarities for PHP and Symfony2 projects. Most importantly, I'll be looking at things we've struggled with along the way and the lessons we've learnt.
Create your very own Development Environment with Vagrant and Packer, frastel
Vagrant, Packer, and Puppet can be used together to create a development environment. Packer is used to build custom base boxes that include only the operating system. Vagrant uses these base boxes to create isolated virtual machines. Puppet then provisions the virtual machines by installing additional software, configuring applications, and defining infrastructure as code. This allows for consistent, reproducible development environments that match production.
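The workflow described can be sketched in a Vagrantfile that consumes a Packer-built base box and hands provisioning to Puppet (the box name is hypothetical; the provisioner options are standard Vagrant settings):

```ruby
# Vagrantfile -- uses a Packer-built base box and provisions with Puppet
Vagrant.configure("2") do |config|
  config.vm.box = "mycompany/base-ubuntu"   # hypothetical box built by Packer
  config.vm.provision "puppet" do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file  = "default.pp"    # installs and configures the app stack
  end
end
```

`vagrant up` then produces the same environment on every developer machine, because the box is minimal and everything else is defined in the Puppet manifest.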
OpenShift Commons - Adopting Podman, Skopeo and Buildah for Building and Mana..., Mihai Criveti
KubeCon OpenShift Commons - How Podman, Skopeo and Buildah provide a drop in replacement for Docker. How Podman offers better security using a fork-exec model. Building images with buildah. Introducing podman-compose and the Red Hat Universal Base Image.
Similar to Manage your Windows Infrastructure with Puppet Bolt - August 26 - 2020 (20)
Puppet Camp 2021: testing modules and control repo, Puppet
This document discusses testing Puppet code when using modules versus a control repository. It recommends starting with simple syntax and unit tests using PDK or rspec-puppet for modules, and using OnceOver for testing control repositories, as it is specially designed for this purpose. OnceOver allows defining classes, nodes, and a test matrix to run syntax, unit, and acceptance tests across different configurations. Moving from simple to more complex testing approaches like acceptance tests is suggested. PDK and OnceOver both have limitations for testing across operating systems that may require customizing spec tests. Infrastructure for running acceptance tests in VMs or containers is also discussed.
This document appears to be for a PuppetCamp 2021 presentation by Corey Osman of NWOPS, LLC. It includes information about Corey Osman and NWOPS, as well as sections on efficient development, presentation content, demo main points, Git strategies including single branch and environment branch strategies, and workflow improvements. Contact information is provided at the bottom.
The document discusses operational verification and how Puppet is working on a new module to provide more confidence in infrastructure health. It introduces the concept of adding check resources to catalogs to validate configurations and service health directly during Puppet runs. Examples are provided of how this could detect issues earlier than current methods. Next steps outlined include integrating checks into more resource types, fixing reporting, integrating into modules, and gathering feedback. This allows testing and monitoring to converge by embedding checks within configurations.
This document provides tips and tricks for using Puppet with VS Code, including links to settings examples and recommended extensions to install like Gitlens, Remote Development Pack, Puppet Extension, Ruby, YAML Extension, and PowerShell Extension. It also mentions there will be a demo.
- The document discusses various patterns and techniques the author has found useful when working with Puppet modules over 10+ years, including some that may be considered unorthodox or anti-patterns by some.
- Key topics covered include optimization of reusable modules, custom data types, Bolt tasks and plans, external facts, Hiera classification, ensuring resources for presence/absence, application abstraction with Tiny Puppet, and class-based noop management.
- The author argues that some established patterns like roles and profiles can evolve to be more flexible, and that running production nodes in noop mode with controls may be preferable to fully enforcing on all nodes.
Applying the Roles and Profiles method to compliance code, Puppet
This document discusses adapting the roles and profiles design pattern to writing compliance code in Puppet modules. It begins by noting the challenges of writing compliance code, such as it touching many parts of nodes and leading to sprawling code. It then provides an overview of the roles and profiles pattern, which uses simple "front-end" roles/interfaces and more complex "back-end" profiles/implementations. The rest of the document discusses how to apply this pattern when authoring Puppet modules for compliance - including creating interface and implementation classes, using Hiera for configuration, and tools for reducing boilerplate code. It aims to provide a maintainable structure and simplify adapting to new compliance frameworks or requirements.
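The pattern described, a simple "front-end" role composed of "back-end" profiles that pull configuration from Hiera, can be sketched as follows (class, module, and parameter names are illustrative, not from the talk):

```puppet
# site/role/manifests/webserver.pp -- the simple "front-end" interface
class role::webserver {
  include profile::base
  include profile::nginx
}

# site/profile/manifests/nginx.pp -- the "back-end" implementation
class profile::nginx (
  # data resolved from Hiera, keeping the role free of site-specific detail
  String $server_name = lookup('profile::nginx::server_name'),
) {
  class { 'nginx':                 # wraps a Forge module (interface shown is illustrative)
    server_name => $server_name,
  }
}
```

A node is then classified with exactly one role, and compliance code slots in as another profile included by that role.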
This document discusses Kinney Group's Puppet compliance framework for automating STIG compliance and reporting. It notes that customers often implement compliance Puppet code poorly or lack appropriate Puppet knowledge. The framework aims to standardize compliance modules that are data-driven and customizable. It addresses challenges like conflicting modules and keeping compliance current after implementation. The framework generates automated STIG checklists and plans future integration with Puppet Enterprise and Splunk for continued compliance reporting. Kinney Group cites practical experience implementing the framework for various military and government customers.
Enforce compliance policy with model-driven automation, Puppet
This document discusses model-driven automation for enforcing compliance. It begins with an overview of compliance benchmarks and the CIS benchmarks. It then discusses implementing benchmarks, common challenges around configuration drift and lack of visibility, and how to define compliance policy as code. The key points are that automation is essential for compliance at scale; a model-driven approach defines how a system should be configured and uses desired-state enforcement to keep systems compliant; and defining compliance policy as code, managing it with source control, and automating it with CI/CD helps achieve continuous compliance.
This document discusses how organizations can move from a reactive approach to compliance to a proactive approach using automation. It notes that over 50% of CIOs cite security and compliance as a barrier to IT modernization. Puppet offers an end-to-end compliance solution that allows organizations to automatically eliminate configuration drift, enforce compliance at scale across operating systems and environments, and define policy as code. The solution helps organizations improve compliance from 50% to over 90% compliant. The document argues that taking a proactive automation approach to compliance can turn it into a competitive advantage by improving speed and innovation.
Automating IT management with Puppet + ServiceNow, Puppet
As the leading IT Service Management and IT Operations Management platform in the marketplace, ServiceNow is used by many organizations to address everything from self service IT requests to Change, Incident and Problem Management. The strength of the platform is in the workflows and processes that are built around the shared data model, represented in the CMDB. This provides the ‘single source of truth’ for the organization.
Puppet Enterprise is a leading automation platform focused on the IT Configuration Management and Compliance space. Puppet Enterprise has a unique perspective on the state of systems being managed, constantly being updated and kept accurate as part of the regular Puppet operation. Puppet Enterprise is the automation engine ensuring that the environment stays consistent and in compliance.
In this webinar, we will explore how to maximize the value of both solutions, with Puppet Enterprise automating the actions required to drive a change, and ServiceNow governing the process around that change, from definition to approval. We will introduce and demonstrate several published integration points between the two solutions, in the areas of Self-Service Infrastructure, Enriched Change Management and Automated Incident Registration.
This document promotes Puppet as a tool for hardening Windows environments. It states that Puppet can be used to harden Windows with one line of code, detect drift from desired configurations, report on missing or changing requirements, reverse engineer existing configurations, secure IIS, and export configurations to the cloud. Benefits of Puppet mentioned include hardening Windows environments, finding drift for investigation, easily passing audits, compliance reporting, easy exceptions, and exporting configurations. It also directs users to Puppet Forge modules for securing Windows and IIS.
Simplified Patch Management with Puppet - Oct. 2020, Puppet
Does your company struggle with patching systems? If so, you’re not alone — most organizations have attempted to solve this issue by cobbling together multiple tools, processes, and different teams, which can make an already complicated issue worse.
Puppet helps keep hosts healthy, secure and compliant by replacing time-consuming and error prone patching processes with Puppet’s automated patching solution.
Join this webinar to learn how to do the following with Puppet:
Eliminate manual patching processes with pre-built patching automation for Windows and Linux systems.
Gain visibility into patching status across your estate regardless of OS with new patching solution from the PE console.
Ensure your systems are compliant and patched in a healthy state.
Learn how Puppet Enterprise makes patch management easy across your Windows and Linux operating systems.
Presented by: Margaret Lee, Product Manager, Puppet, and Ajay Sridhar, Sr. Sales Engineer, Puppet.
The document discusses how Puppet can be used to accelerate adoption of Microsoft Azure. It describes lift and shift migration of on-premises workloads to Azure virtual machines. It also covers infrastructure as code using Puppet and Terraform for provisioning, configuration management using Puppet Bolt, and implementing immutable infrastructure patterns on Azure. Integrations with Azure services like Key Vault, Blob Storage and metadata service are presented. Patch management and inventory of Azure resources with Puppet are also summarized.
This document discusses using Puppet Catalog Diff to analyze the impact of changes between Puppet environments or catalogs. It provides the command line usage and options for Puppet Catalog Diff. It also discusses how to integrate Puppet Catalog Diff into CI/CD pipelines for automated impact analysis when merging code changes. Additional resources like GitHub projects and Dev.to posts are provided for learning more about diffing Puppet environments and catalogs.
ServiceNow and Puppet - better together, Kevin Reeuwijk, Puppet
ServiceNow and Puppet can be integrated in four key areas: 1) Self-service infrastructure allows non-Puppet experts to control infrastructure through a ServiceNow interface; 2) Enriched change management automatically generates ServiceNow change requests from Puppet changes and populates them with impact details; 3) Automated incident registration forwards details of configuration drift corrections in Puppet to ServiceNow to create incidents; and 4) Up-to-date asset management would periodically upload Puppet inventory data to ServiceNow to keep the CMDB accurate without disruptive discovery runs.
This document discusses how Puppet Relay uses Tekton pipelines to orchestrate containerized workflows. It provides an overview of how Tekton fits into the Relay architecture, with Tekton controllers managing taskrun pods to execute workflow steps defined in YAML. Triggers can initiate workflows based on events, with reusable and composable steps for tasks like provisioning infrastructure or clearing resources. Relay also includes features for parameters, secrets, outputs, and approvals to customize workflows. An ecosystem of open source integrations provides sample workflows and steps for common use cases.
100% Puppet Cloud Deployment of Legacy Software, Puppet
This document discusses deploying legacy software into the AWS cloud using Puppet. It proposes modeling AWS resources like security groups, autoscaling groups, and launch configurations as Puppet resources. This would allow Puppet to provision the underlying AWS infrastructure and configure servers launched in autoscaling groups. It acknowledges challenges around server reboots but suggests they can be addressed. In summary, it argues custom Puppet resources can easily model AWS resources and using Puppet to configure autoscaling servers is possible despite some challenges around rebooting servers during deployment.
This document discusses a partnership between Republic Polytechnic's School of Infocomm and Puppet to promote DevOps practices. It introduces several people involved with the partnership and outlines their mission to prepare more IT companies and individuals for jobs in the DevOps field through training courses. The document describes some short courses offered on DevOps topics and using the Puppet and Microsoft Azure platforms. It provides an example of how Republic Polytechnic has automated infrastructure configuration using Puppet to save time and reduce errors. There is a request at the end for readers to register their interest in DevOps by completing a survey.
This document discusses continuous compliance and DevSecOps best practices followed by financial services organizations.
Continuous compliance is defined as an ongoing process of proactive risk management that delivers predictable, transparent, and cost-effective compliance results. It involves continuously monitoring compliance controls, providing real-time alerts for failures and remediation recommendations, and maintaining up-to-date policies. Best practices for continuous compliance discussed include defining CIS controls and benchmarks, achieving transparent compliance dashboards and automated fixes for breaches.
DevSecOps is introduced as bringing security earlier in the application development lifecycle to minimize vulnerabilities. It aims to make everyone accountable for security. Challenges discussed include security teams struggling to keep up with DevOps pace and
The Dynamic Duo of Puppet and Vault tame SSL Certificates, Nick Maludy, Puppet
The document discusses using Puppet and Vault together to dynamically manage SSL certificates. Puppet can use the vault_cert resource to request signed certificates from Vault and configure services to use the certificates. On Windows, some additional logic is needed to retrieve certificates' thumbprints and bind services to certificates using those thumbprints. This approach provides automated certificate renewal and distribution across platforms.
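Based on the description above, usage of such a `vault_cert` resource might look like the following sketch. The attribute names and values here are assumptions for illustration; consult the module's actual reference before use:

```puppet
# Request a short-lived certificate from Vault and reload the service on renewal
vault_cert { 'www.example.com':
  cert_path => '/etc/ssl/certs/www.example.com.crt',   # illustrative attribute names
  key_path  => '/etc/ssl/private/www.example.com.key',
  ttl       => '720h',
  notify    => Service['nginx'],                        # pick up the renewed cert
}
```

The value of the approach is that renewal becomes a normal Puppet run rather than a manual certificate-rotation procedure.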
Building Production Ready Search Pipelines with Spark and Milvus, Zilliz
Spark is a widely used ETL tool for processing, indexing and ingesting data into the serving stack for search. Milvus is a production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to the Milvus vector database for search serving.
How to Get CNIC Information System with Paksim Ga.pptx, danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf, Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Webinar: Designing a schema for a Data Warehouse, Federico Razzoli
Are you new to data warehouses (DWH)? Do you need to check whether your data warehouse follows the best practices for a good design? In both cases, this webinar is for you.
A data warehouse is a central relational database that contains all measurements about a business or an organisation. This data comes from a variety of heterogeneous data sources, which includes databases of any type that back the applications used by the company, data files exported by some applications, or APIs provided by internal or external services.
But designing a data warehouse correctly is a hard task, which requires gathering information about the business processes that need to be analysed in the first place. These processes must be translated into so-called star schemas, which means, denormalised databases where each table represents a dimension or facts.
We will discuss these topics:
- How to gather information about a business;
- Understanding dictionaries and how to identify business entities;
- Dimensions and facts;
- Setting a table granularity;
- Types of facts;
- Types of dimensions;
- Snowflakes and how to avoid them;
- Expanding existing dimensions and facts.
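The dimension/fact structure in the topics above can be sketched as a minimal star schema; table and column names are invented for illustration:

```sql
-- Dimension: one row per product, deliberately denormalised
CREATE TABLE dim_product (
  product_key  INTEGER PRIMARY KEY,
  product_name TEXT,
  category     TEXT            -- would become its own table in a snowflake schema
);

-- Dimension: one row per calendar day
CREATE TABLE dim_date (
  date_key INTEGER PRIMARY KEY,  -- e.g. 20240615
  day      INTEGER,
  month    INTEGER,
  year     INTEGER
);

-- Fact table: granularity = one row per product per day
CREATE TABLE fact_sales (
  product_key INTEGER REFERENCES dim_product (product_key),
  date_key    INTEGER REFERENCES dim_date (date_key),
  units_sold  INTEGER,
  revenue     NUMERIC
);
```

Every measurement lands in the fact table at a single, explicitly chosen grain, while descriptive attributes live in the surrounding dimensions.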
Taking AI to the Next Level in Manufacturing.pdf, ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
Generating privacy-protected synthetic data using Secludy and Milvus, Zilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Programming Foundation Models with DSPy - Meetup Slides, Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
What do a Lego brick and the XZ backdoor have in common? (Speck&Tech)
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might seem to be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share much more than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations, and training activities. She previously worked on LibreOffice migrations and training courses for various public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when not pursuing her passion for computers and for Geeko, she cultivates her curiosity about astronomy (hence her nickname deneb_alpha).
Best 20 SEO Techniques To Improve Website Visibility In SERP, Pixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Project Management Semester Long Project - Acuity, jpupo2018
Acuity is an innovative learning app designed to transform the way you engage with knowledge. Powered by AI technology, Acuity takes complex topics and distills them into concise, interactive summaries that are easy to read & understand. Whether you're exploring the depths of quantum mechanics or seeking insight into historical events, Acuity provides the key information you need without the burden of lengthy texts.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
3. Housekeeping
● If you get stuck or are having technical issues, please submit your questions in the Q&A chat and our team can help you out.
● You can also communicate with us via the event chat.
● This workshop will be recorded, and we will share the recording afterwards via email.
4. Agenda
- What is Bolt?
- Installation and Configuration
- Bolt Basics
- Creating a Bolt inventory
- Executing commands and scripts
- Converting scripts to tasks
- Executing tasks
- Executing plans
- Review
- Q & A
5. Our target for today
• You’ve been assigned a machine that will look something like this:
boltshopwin##.classroom.puppet.com
• The highly secure credentials are Administrator / Puppetlabs!
• We will use an alias to refer to this machine as www.
• We will be using Bolt to connect to this machine over WinRM.
• We might optionally RDP to the machine towards the end of the workshop.
PUPPET OVERVIEW5
7. What is Bolt?
• On-demand execution of commands and scripts in any language, or level up to Bolt Tasks
and/or Plans.
• Can execute with or without an agent (Puppet agent, SSH or WinRM).
• Helps to define your overall automation story. Mature from commands and scripts to tasks
and plans or desired state where it makes the most sense.
• Bolt in Puppet Enterprise offers role-based access controls, a web console for centralized
operations and logging/auditing.
BOLT WORKSHOP7
8. Review - Types of Bolt Automation
• Commands
Scale a simple command to a plethora of systems.
• Scripts
Write in the language of your choice and target
remote systems.
• Tasks
Execute scripts with input validation, descriptive text
and cross-platform capabilities.
• Plans
Perform a step-based workflow consisting of
commands, scripts, tasks, plans or puppet code.
Note: Miscellaneous other types are available, like "apply" for puppet code and "file" for uploading/downloading files. We will
be focusing on the above. Try 'bolt --help' for a list of additional commands.
10. Installing Bolt
• Available as a client tool for Windows, macOS and Linux
• Available as a Docker image, puppet/puppet-bolt, on Docker Hub.
• Available inside of Azure Cloud Shell, both bash and PowerShell variants.
For more installation information, visit:
https://puppet.com/docs/bolt/latest/bolt_installing.html
11. Verifying your Bolt installation
• Open a shell.
• Type bolt --version
• This course requires a version greater than 2.23.0
For more installation information, visit:
https://puppet.com/docs/bolt/latest/bolt_installing.html
13. Organizing Bolt Content
with Puppet Modules
• Use puppet manifests, bolt tasks or bolt plans together inside of one module.
• manifests live in <module>/manifests
• plans live in <module>/plans
• tasks live in <module>/tasks
• Use this method when you want to make your Bolt content accessible to other Puppet
Enterprise users in your organization.
• Does not allow for additional puppet modules to be imported, deferring to your Puppet
control repository.
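As an illustration of that layout (module and file names are hypothetical), a module carrying all three content types side by side might look like:

```
mymod/                    # a hypothetical module
├── manifests/
│   └── init.pp           # puppet manifests (desired state)
├── plans/
│   └── deploy.yaml       # bolt plans
└── tasks/
    ├── greet.ps1         # bolt task implementation
    └── greet.json        # task metadata
```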
14. Organizing Bolt Content
with Bolt Projects
• Bolt Projects are stand-alone content, typically with all dependencies contained within the
project.
• Bolt Projects are decentralized from a traditional Puppet Enterprise infrastructure. We use
the traditional puppet module workflow for that.
• Bolt Projects can pull in any of the 6500+ puppet modules available on the Forge,
including tasks, plans and desired state code.
16. Exercise #2: Downloading the Bolt Project
https://github.com/puppetlabs-seteam/windows-boltshop
• Clone or download from the above link.
• Place into a ‘boltshop’ directory where you like.
• Open a shell and change to that directory.
• Run bolt task show to verify you have tasks that start with boltshop::.
Note: If you are using PowerShell, make sure your boltshop path is respecting case
sensitivity.
17. Exercise #2 Review
https://github.com/puppetlabs-seteam/windows-boltshop
• We just downloaded a Bolt Project
• Batteries are included. All dependencies for today’s workshop are included in the
workshop folder.
• This content should also be applicable outside of our virtual workshop environment today.
Kids, try this at home!
19. What’s in Our Bolt Project?
File or Folder Description
bolt-project.yaml Project-specific metadata (e.g. name, public/private tasks, etc.)
inventory.yaml A static or dynamic list of servers along with relevant connection settings.
Puppetfile A list of modules, versions and dependencies we are using in this project. All modules will download
to the modules folder unless specified as local.
modules/ This folder stores any puppet forge or custom modules that we would like to use with our project.
The list of modules and dependencies are specified in the Puppetfile
tasks/* The folder that contains our Bolt Tasks.
plans/* The folder that contains our Bolt Plans.
files/* Content for our webserver to serve up.
20. Review: bolt-project.yaml
• Contains the name of the project. All
tasks and plans will start with
“boltshop” per the example to the right.
• Contains project specific configuration
items, like a custom inventory or
modulepath.
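The slide's screenshot is not reproduced in this transcript; a minimal bolt-project.yaml in that shape (values illustrative, not the workshop's exact file) might look like:

```yaml
---
# Minimal bolt-project.yaml sketch; values are illustrative.
name: boltshop        # namespaces tasks/plans, e.g. boltshop::helloworld
modulepath:
  - modules           # where the project's modules are resolved from
```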
24. Exercise #3: Managing a Static Inventory File
1. Edit inventory.yaml
2. Replace the uri and alias fields with your assigned server's FQDN and 'www', respectively.
3. Credentials are Administrator/Puppetlabs!
4. Open a shell and change to the boltshop
directory.
5. From your shell, run
bolt inventory show --targets windows
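For reference, a static inventory matching this exercise might look like the sketch below. The field layout follows Bolt's inventory file format as an assumption about the workshop file, not a copy of it; keep the ## placeholder until you know your assigned machine number.

```yaml
---
groups:
  - name: windows
    targets:
      - uri: boltshopwin##.classroom.puppet.com   # replace ## with your number
        alias: www
    config:
      transport: winrm
      winrm:
        user: Administrator
        password: "Puppetlabs!"
        ssl: false
```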
25. Exercise #3 Review
1. Manage your server groups and connection info in the inventory.yaml file.
2. Inventory can be static or dynamic by adding content from the Puppet Forge for Terraform, Azure or
AWS.
3. Inventory can connect to Puppet Enterprise / PuppetDB for querying nodes already under configuration
management.
4. Dynamic inventory plugins can also be developed for other clouds / inventory systems.
27. Using Commands and Scripts
1. Commands default to PowerShell (Windows) or the default shell on Linux.
2. If your command line is getting too long or wild, move it to a PowerShell script (or language of your choice).
3. Scripts can accept arguments, but they are not validated.
28. Bolt Syntax
• Bolt command line syntax:
bolt [command|script|task|plan] run <name> --targets <targets> [options]
• To run a simple PowerShell command on a remote WinRM host:
bolt command run 'write-host Hello World!' --targets 10.0.0.1,10.0.0.2
--user Administrator --password 'Puppetlabs!' --transport winrm --no-ssl
• To run a simple Bash command on a remote SSH host:
bolt command run 'echo Hello World!' --targets 10.0.0.1,10.0.0.2
--user root --private-key /path/to/key --transport ssh --no-host-key-check
30. Exercise #4: Execute Commands and Scripts
1. Open a shell and change to the boltshop directory.
2. From your shell, run
bolt command run 'write-output "hello world!"' --targets windows
3. From your shell, run
bolt script run examples/helloworld.ps1 --targets windows
31. Exercise #4 Review
1. Let’s Review
• We ran a command and a script. Congrats, you’re an Automator now! Update that resume.
• You just connected to a server over WinRM. SSH and puppet agent are also supported, as well as
both secure and insecure options based on environment.
• If WinRM security or configuration is an issue in your environment and you have PE, using the agent
to manage access is highly recommended.
2. What are we leaving out?
• We’re still using commands and scripts. Task and Plans give us more flexibility and scale better.
3. Aren’t scripts and commands enough?
• Depends on your environment. Getting existing scripts/commands into an automation framework for
reusability can be a crucial first step in organizing your environment for standardization and
consistent work.
• If you are sharing across teams or need to perform more than one action per script, a task or
plan is more suitable.
33. Scripts into Tasks!
• Make your scripts more useful in Bolt by turning them into Puppet Tasks
• Any script file in a tasks directory of a module becomes a Task
• Parameters in Bolt pass through to the Param() block in PowerShell.
• Tasks are name spaced automatically, using familiar Puppet syntax:
site/mymod/tasks/script1.ps1 # mymod::script1
site/aws/tasks/show_vpc.sh # aws::show_vpc
site/mysql/tasks/sql.rb # mysql::sql
site/yum/tasks/init.rb # yum
34. Define “more useful” please.
• Descriptive text. Know what the task does, what the parameters do and what
type of input you need to enter for the task to be successful.
• Can be cross platform. Define scripts to execute for both Linux and Windows
servers.
• Can be imported into Puppet Enterprise and executed through the GUI.
35. What is a task?
1. A script (in the language of your choice…I know, last time.)
2. Some metadata in JSON format
1. A description for the task and each parameter.
2. Any required or optional parameters along with the type of input required.
3. Any additional implementation details, like which script to execute per OS.
3. Lives in the <project>/tasks folder.
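Tasks really can be in any language. As a hedged sketch (the file name greet.py and the name parameter are hypothetical, not part of the workshop module), here is a minimal task that reads its parameter the way Bolt passes parameters to scripts, via a PT_-prefixed environment variable, and emits a JSON result:

```python
# tasks/greet.py -- hypothetical Bolt task written in Python.
# Bolt passes task parameters as PT_-prefixed environment variables
# (and/or as a JSON object on stdin, depending on the input method).
import json
import os

def greet(name):
    """Build the structured result that Bolt will display."""
    return {"greeting": "Hello, " + name + "!"}

if __name__ == "__main__":
    name = os.environ.get("PT_name", "world")
    print(json.dumps(greet(name)))
```

Pair it with a greet.json metadata file and it becomes runnable as a namespaced task, e.g. boltshop::greet.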
37. Exercise #5: Execute a Task
• Run bolt task show to see available tasks.
• Run bolt task run boltshop::helloworld -t www
• Run bolt task run boltshop::helloworld -t www name=<yournamehere>
38. Exercise #5 Review: Execute a Task
• Tasks offer descriptions for the task itself and any parameters
• Tasks contain metadata for input validation and other runtime requirements.
• Tasks contain the mentioned metadata (JSON) file and the PowerShell script. The
metadata is what makes it a task. Otherwise, it’s just PowerShell.
• Bolt parameters map to parameters defined in the Param() block in PowerShell by default.
You can also specify STDIN in the implementation details, or environment variables on
Linux.
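For illustration, the metadata for a task like boltshop::helloworld could look roughly like this (a sketch, not the workshop's actual file):

```json
{
  "description": "Print a friendly greeting",
  "parameters": {
    "name": {
      "description": "The name to greet",
      "type": "Optional[String]"
    }
  }
}
```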
40. Review: boltshop::windowsfeature
• Look at the PowerShell script and think about how we could reuse this across teams.
• Action – install or uninstall
• Feature – the name of the feature
• If you’ve worked with declarative languages like PowerShell DSC or Puppet, this starts to
push the boundary of where it’s easier to just use those as the solution.
42. Exercise #6: Execute a Windows Feature Task
1. Open a shell and change to your boltshop directory
2. Run bolt task show boltshop::windowsfeature
3. Run the following:
bolt task run boltshop::windowsfeature --targets www action=install feature=web-webserver
4. When completed, visit http://<your_webserver>
5. Congrats, you’ve built a webserver!
43. Exercise #6 Review
1. We ran a task! Script + Metadata = Task.
2. We installed a Windows Feature. Think about all the additional parameters that go into the
Install-WindowsFeature cmdlet and what we missed.
3. That’s a good case for explicit commands or desired state.
4. We ran a single task, but building a web server typically involves more than just installing a
Windows feature. A step-based approach to automating the stand up of the webserver
will help here.
45. About Bolt Plans
1. Step-based orchestration. In short, "Do this, then that".
2. Can mix and match commands, scripts, tasks, other plans and even puppet code.
3. Can specify different targets per step.
4. Can use YAML or the puppet language. Ease vs. power.
5. We will use YAML for today's workshop.
46. Our webserver plan
1. Each step can be a command, script,
task, plan, puppet apply or file
upload/download
2. Each step can have different targets.
3. Descriptions exist for both the plan and
each step.
4. Global parameters can be used in any
step.
5. Bolt executes steps in order.
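The plan itself is not reproduced in this transcript; a sketch of a step-based YAML plan in that shape (step descriptions and file paths are hypothetical, not the workshop's exact build_webserver.yaml) might be:

```yaml
description: Stand up a simple IIS web server
parameters:
  targets:
    type: TargetSpec
steps:
  - description: Install the IIS web server role
    task: boltshop::windowsfeature
    targets: $targets
    parameters:
      action: install
      feature: web-webserver
  - description: Upload the site content
    upload: files/index.html
    destination: C:/inetpub/wwwroot/index.html
    targets: $targets
```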
48. Why use puppet code?
1. The Puppet Forge has about 6500 modules available today.
2. If you have scripts or commands saved somewhere it’s pretty simple to create tasks. If
you have nothing, you can leverage the forge instead of reinventing the wheel.
3. Idempotency.
49. Why use puppet code?
1. I want to ensure the web server is installed.
2. I want to ensure the management tools are installed with it.
3. I don’t have to apply any further conditional logic.
4. If I want to remove it, switch ensure to absent.
50. Writing Puppet Code with YAML
1. Specify with the resources key vs
script/task/command/etc…
2. Parameters go under parameters.
3. We’ll use puppet code for IIS
instead of a bunch of PowerShell.
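A resources step in that style, using the puppet/windowsfeature module from the Forge as a hedged example, might look like:

```yaml
- description: Ensure IIS and its management tools are installed
  targets: $targets
  resources:
    - windowsfeature: Web-Server       # shorthand "type: title"
      parameters:
        ensure: present                # switch to absent to remove it
        installmanagementtools: true
```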
52. Exercise #7: Build a Web Server with a Bolt Plan
1. cd to your boltshop directory
2. Run bolt plan show boltshop::build_webserver
3. Run bolt plan run boltshop::build_webserver --targets www
4. When completed, visit <your_webserver>
5. Congrats, you’ve customized your webserver.
53. Exercise #7 Review
1. We just executed a YAML plan that included commands, file uploads and puppet code.
2. We were able to mix and match and specify targets. In this case, it’s the same target, but
each step can target something different.
3. We just orchestrated several steps to create a webserver. The same model can be
applied to a multi-server IIS/SQL setup, patching and rebooting systems and more!
55. Exercise #8: Lab Steps
1. Open plans/build_webserver.yaml
2. Add a “message of the day”, aka
logon message / legal notice text.
• Under the last IIS resource, add the motd class
• Set your title and content parameters.
• Save the file
3. Run the following:
bolt plan run boltshop::build_webserver --targets www
4. You should see resources changed. Now RDP to
your server.
5. If successful, after auth you should be prompted
with the MOTD.
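The added step might look like the following sketch; the exact parameter names depend on the motd module version, so treat the title and content values here as assumptions taken from the slide rather than the module's documented interface:

```yaml
- description: Add a logon message
  targets: $targets
  resources:
    - class: motd
      parameters:
        windows_motd_title: "Notice"        # the "title" from the slide
        content: "Authorized users only."
```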
56. Exercise #8 Review
1. We can easily add puppet code to our modules by leveraging the Puppet Forge.
2. When we re-apply puppet code we see a report of resources changed.
3. Visit forge.puppet.com for more available modules
and additional Windows content.
58. Bolt, now with 100% more PowerShell cmdlets!
1. Bolt now has PowerShell cmdlets!
2. The same Bolt command in PS cmdlet is:
Invoke-BoltPlan -Name boltshop::build_webserver -Targets www
3. Run Get-Command *Bolt* for a list of cmdlets.
59. Lab #9 Review
1. Bolt now has PowerShell cmdlets!
2. Cmdlets can be used instead of the traditional Bolt commands.
3. This is an early feature, so watch this space and let us know if you plan on using it in
the follow up survey.
61. What did I learn today?
- What Bolt is.
- How to run commands and scripts through Bolt.
- How to build and execute a task for scaling scripts and commands and distributing
amongst teams with diverse skill sets.
- How to build and execute a YAML plan to build step-based orchestration to stand up a
simple IIS webserver.
- How to use puppet modules within a plan.
- How to use PowerShell cmdlets to execute a command.
62. What’s Next?
- Fill out the follow-up survey!
- Join the Puppet Community slack!
(especially the #bolt and #windows channels)
https://slack.puppet.com
- Attend our virtual Puppet Camp Central on 9/24. Includes talks about Bolt on Windows!
https://info.puppet.com/09-24-Puppet-Camp-America-Central.html
64. Get in Touch
● Matt Stone: matthew.stone@puppet.com
● John Laffey: john.laffey@puppet.com
● Dan Shauver: shauver@puppet.com
● Rajesh Radhakrishnan: rajesh.radhakrishnan@puppet.com
● Paul Reed: paul.reed@puppet.com