Software Engineering Practice - Configuration management (Radu_Negulescu)
Configuration management involves tracking changes made to source code, documentation, and other project artifacts. It aims to prevent inconsistencies by enforcing processes for concurrent editing, versioning, labeling, and life cycle management. Some key challenges are dealing with large numbers of changes, rapid project growth, and potential bureaucracy. Common tools like version control systems and makefiles automate many configuration management tasks.
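The versioning and labeling processes described above can be sketched in miniature. The `Repository` class below is a hypothetical toy model, not the API of any real version-control tool; it only illustrates the core idea of recording revisions and attaching symbolic labels to them.

```python
class Repository:
    """Toy version-control model: tracks artifact revisions and named labels."""

    def __init__(self):
        self.revisions = []   # list of (revision_number, content) tuples
        self.labels = {}      # label name -> revision number

    def commit(self, content):
        """Record a new revision of the artifact and return its number."""
        rev = len(self.revisions) + 1
        self.revisions.append((rev, content))
        return rev

    def label(self, name, rev):
        """Attach a symbolic label (e.g. 'release-1.0') to a revision."""
        self.labels[name] = rev

    def checkout(self, name):
        """Fetch the content recorded under a label."""
        rev = self.labels[name]
        return self.revisions[rev - 1][1]


repo = Repository()
r1 = repo.commit("def main(): pass")
repo.label("release-1.0", r1)
print(repo.checkout("release-1.0"))  # def main(): pass
```

Real tools add the hard parts the abstract mentions: concurrent editing, merge policies, and life-cycle state, but the revision-plus-label core is the same.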
This document provides an overview of an IBM Rational Integration Tester training course. The summary is:
The course covers modeling systems, creating test cases, and analyzing results using IBM Rational Integration Tester. It introduces key concepts like the logical and physical views of systems, defining message schemas and exchange patterns, and building test cases. The document provides guidance on setting up a test project in Rational Integration Tester, including configuring the library manager, database, and environments.
Rit 8.5.0 virtualization training slides (Darrel Rader)
The document discusses virtualization with IBM Rational Integration Tester. It introduces service virtualization and describes how virtualization can be used to isolate components for testing. It also discusses building a system model from recorded events, managing recorded events, creating and running simple stubs, and publishing and deploying stubs. The key points are that virtualization allows isolation of components for testing, events can be recorded to model complex systems, and stubs can be created, run, published and deployed to simulate system components.
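The stub concept above can be illustrated outside of Rational Integration Tester. The sketch below is a generic service stub in plain Python, not RIT functionality: it stands in for a real back-end by returning a canned response, which is how a stub isolates the component under test from its dependencies.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubHandler(BaseHTTPRequestHandler):
    """Canned response standing in for a real back-end service."""

    def do_GET(self):
        body = json.dumps({"status": "OK", "source": "stub"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass


server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The component under test would call this URL instead of the real service.
url = f"http://127.0.0.1:{server.server_port}/inventory"
reply = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
print(reply)  # {'status': 'OK', 'source': 'stub'}
```

Tools like RIT build the stub's behavior from recorded events rather than hand-written handlers, but the testing role is the same.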
This document provides an overview of applications in UrbanCode Deploy. It defines key terminology like components, environments, and processes. It describes how to create and scope applications, add components and resources to environments, and create application processes. It also covers the use of tags, approvals, notifications, properties and snapshots to control deployments.
This document discusses Dockerization as a replacement for virtual machines (VMs) to enable computational replication. It outlines some of the challenges with using VMs for computational replication, including dependency issues, software dynamicity, limited documentation, and barriers to adoption. The document then introduces Docker as a solution, describing how Docker images can help address dependency issues and how Docker simplifies updating software. Key features of Docker that enable effective computational replication are also highlighted, such as development over local environments, effective configuration, enhanced productivity, and application isolation through containers.
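The dependency-pinning idea behind Docker-based replication can be shown concretely. The snippet below generates a minimal Dockerfile from a pinned package list; the package names, versions, and `analysis.py` entry point are illustrative assumptions, not from the original document.

```python
# Hypothetical pinned environment for computational replication;
# package versions are illustrative, not prescriptive.
pinned = {"numpy": "1.24.0", "pandas": "2.0.1"}

dockerfile = "\n".join(
    ["FROM python:3.11-slim",
     "WORKDIR /app"]
    # Pinning exact versions is what makes the image reproducible:
    # every build installs the same dependencies, on any host.
    + [f"RUN pip install {pkg}=={ver}" for pkg, ver in sorted(pinned.items())]
    + ["COPY analysis.py .",
       'CMD ["python", "analysis.py"]']
)
print(dockerfile)
```

Building this image once and archiving it freezes the whole software stack, which is exactly the dependency problem the document says VMs handle poorly.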
Kovair Agile Solution is an implementation of Agile based on the SCRUM methodology using the Kovair Platform. Kovair has extended SCRUM with the various tools needed to apply Agile in a distributed development scenario.
An Approach To Software Development Life Cycle (BettyBaker)
The document describes the waterfall software development life cycle (SDLC) approach and a modified implementation of it. The waterfall approach consists of five phases: requirements, design, coding, testing, and maintenance. The modified approach combines the requirements and design phases into a single systems engineering phase. It also implements coding in mini code blocks, with testing after each block rather than once at the end. Both aim to systematically structure the development process.
The document provides release notes for updated training workshops for IBM Rational Integration Tester 8.5.0. Major changes include:
1) All modules have been updated to use Linux instead of Windows.
2) New Platform modules have been added that simplify content using the addNumbers web service.
3) Modules have been updated for new features in Rational Integration Tester 8.5.0, including use of synchronization for system modeling and new exercises on test automation, virtualization, and performance testing.
The document discusses operating system support for dynamic reconfiguration in embedded systems. It proposes a methodology using Linux to manage reconfiguration of FPGA devices. The key aspects are:
1) Defining a portable Linux-based solution to manage partial reconfiguration through a reconfiguration manager that handles requests from applications for functionalities.
2) Implementing a hardware-independent interface for applications to interact with configured hardware modules through standard Linux I/O functions.
3) Efficiently managing FPGA resources from within the operating system through a centralized reconfiguration manager that chooses configurations and caches modules.
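The caching behavior of the centralized reconfiguration manager can be sketched in a few lines. The class below is a toy software model, not the paper's Linux implementation: a "load" stands in for the expensive step of programming the FPGA, and the cache lets repeated requests for the same functionality skip it.

```python
class ReconfigurationManager:
    """Toy centralized manager: maps a requested functionality to a
    configuration, loading it on demand and caching loaded modules."""

    def __init__(self, available):
        self.available = available   # functionality -> configuration name
        self.cache = {}              # functionality -> loaded module
        self.loads = 0               # count of (expensive) real loads

    def request(self, functionality):
        if functionality in self.cache:   # cache hit: no reconfiguration
            return self.cache[functionality]
        config = self.available[functionality]
        self.loads += 1                   # stands in for programming the FPGA
        module = f"loaded:{config}"
        self.cache[functionality] = module
        return module


# Functionality and bitstream names below are illustrative.
mgr = ReconfigurationManager({"fir_filter": "fir.bit", "fft": "fft.bit"})
mgr.request("fft")
mgr.request("fft")   # second request served from the cache
print(mgr.loads)     # 1
```

In the actual methodology, applications would reach such modules through standard Linux I/O calls rather than direct method calls; the cache logic is the part this sketch captures.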
This document provides an overview of components in UrbanCode Deploy. It discusses what components represent, how to create a component, import versions, and define processes. Components group related deployable artifacts and processes. When creating a component, you specify properties, import artifacts from a source configuration, and define default settings. Version importing involves selecting an agent and importing files. Component processes automate deployment through a graphical workflow that can download and execute steps on target servers.
Presentation on component based software engineering (CBSE) (Chandan Thakur)
The document presents an overview of component based software engineering. It discusses what a component is, the fundamental principles of CBSE, the CBSE development lifecycle, and metrics used in CBSE. Benefits include reduced complexity and development time while difficulties include quality of components and satisfying requirements. CBSE uses pre-built components while traditional SE builds from scratch. Current component technologies discussed are CORBA, COM, EJB, and IDL. Applications of CBSE are in many domains.
(LinuxCon Japan 2016)
Linux has become one of the most important software platforms for running civil infrastructure systems such as power plants, water distribution, traffic control, and healthcare. However, existing software platforms are not yet industrial grade in addressing the safety, reliability, and other requirements of infrastructure. At the same time, rapid advances in machine-to-machine connectivity are driving change in industrial system architectures.
The Linux Foundation has established the "Civil Infrastructure Platform" (CIP) as a new collaborative project. CIP aims to develop a super-long-term-supported open source "base layer" of industrial grade software. This base layer enables the use of software building blocks that meet the requirements of industrial and civil infrastructure projects. In this talk, we will explain the technical details and focus areas of this project.
The Civil Infrastructure Platform (CIP) proposes establishing an open source software platform to provide industrial grade software building blocks for critical infrastructure projects. The CIP will select and maintain open source components through long term support, focusing initially on a Linux kernel and core packages. It will be hosted by the Linux Foundation and aims to drive standardization and collaboration to reduce costs and improve quality for civil infrastructure systems.
Component-based software engineering (CBSE) is a process that emphasizes designing and building systems from reusable software components. It emerged after object-oriented development failed to deliver effective reuse. CBSE follows a "buy, don't build" philosophy in which requirements are met through available components rather than custom development. The CBSE process involves identifying candidate components, qualifying them, adapting them if needed, and assembling them within an architectural design. This leverages reuse for higher quality, greater productivity, and shorter development time compared to traditional software engineering approaches.
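The identify-qualify-adapt-assemble sequence above can be reduced to a toy pipeline. Everything below is illustrative: the component names, the `provides` interface field, and the facade-style adaptation are assumptions made for the sketch, not part of any standard CBSE tooling.

```python
# Each step of the CBSE process, reduced to a toy pipeline.

catalog = [  # identification: candidate components found in a repository
    {"name": "AuthLib", "provides": "auth", "license": "MIT"},
    {"name": "PayKit", "provides": "payments", "license": "proprietary"},
]


def qualify(component, required_interface):
    """Qualification: does the component satisfy our required interface?"""
    return component["provides"] == required_interface


def adapt(component):
    """Adaptation: wrap the component behind a project-local facade."""
    return {"facade_for": component["name"], **component}


def assemble(requirements):
    """Assembly: satisfy each requirement from the catalog, adapting as needed."""
    system = []
    for req in requirements:
        for comp in catalog:
            if qualify(comp, req):
                system.append(adapt(comp))
                break
    return system


system = assemble(["auth", "payments"])
print([c["facade_for"] for c in system])  # ['AuthLib', 'PayKit']
```

The "buy, don't build" point is visible in `assemble`: no requirement is implemented from scratch; each is matched against what the catalog already offers.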
VMworld 2013: Moving Enterprise Application Dev/Test to VMware’s Internal Pri... (VMworld)
VMworld 2013
Thirumalesh Reddy, VMware
Padmaja Vrudhula, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Key elements
Dynamic Distributed MILS platform
Dynamic MILS platform with deterministic networking
Mechanisms for dynamic reconfiguration and configuration introspection
Declarative dynamic architecture modeling and verification
Language to describe reconfigurable systems architecture, component models, failure models and fault propagation
Theory and framework for dynamic reconfiguration
Theory and framework for adaptation
Language to express critical properties to be verified
Compositional verification framework
Monitoring, Adaptation, Configuration, & Certification Assurance Planes
Assurance-based security evaluation methodology and runtime mechanisms for just-in-time certification of adaptive systems
1. The document presents a case study applying an enterprise configuration management platform, ScriptRock, to a multi-agent robotic system to improve reconfiguration times and simplify troubleshooting.
2. The robotic system consists of unmanned ground vehicles and a ground control station running various software modules. ScriptRock allows validating configurations by encoding requirements as executable tests.
3. An experiment was conducted to gauge the benefits of using ScriptRock for configuration management over existing manual methods on the robotic system. Results showed improved reconfiguration times and simplified troubleshooting.
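Encoding configuration requirements as executable tests, the core idea of the experiment above, can be sketched generically. The snippet below is not ScriptRock's language; the configuration keys and expected values are illustrative placeholders for the kind of checks a ground control station might need.

```python
# Configuration requirements encoded as executable checks.
# Keys and expected values below are illustrative assumptions.

deployed = {  # state read off a ground-control-station node
    "ros_version": "indigo",
    "telemetry_port": 14550,
    "log_level": "INFO",
}

requirements = [  # each requirement is a (description, predicate) pair
    ("ROS version matches the fleet", lambda c: c["ros_version"] == "indigo"),
    ("telemetry on the expected port", lambda c: c["telemetry_port"] == 14550),
    ("logging not left in DEBUG", lambda c: c["log_level"] != "DEBUG"),
]

failures = [desc for desc, check in requirements if not check(deployed)]
print("PASS" if not failures else f"FAIL: {failures}")  # PASS
```

The payoff reported in the case study follows directly from this shape: after any reconfiguration, rerunning the checks pinpoints exactly which requirement a node violates, replacing manual inspection.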
Mannu Kumar has over 8 years of experience in software design and development. He has expertise in requirements analysis, design, development, and delivery of software products from end to end. He has worked on several projects related to storage, cloud storage, and distributed systems using technologies like C++, Python, OpenStack Swift, and more. His roles have included requirements gathering, design, development, testing, and resolving integration issues.
- Traditionally, separate teams handled software development, release, and support, which caused delays. The DevOps approach combines these roles into a single multi-skilled team.
- Three factors drove DevOps adoption: Agile reduced development time but introduced bottlenecks; Amazon improved reliability with single teams; software could be released as a service.
- DevOps benefits include faster deployment, reduced risk, and faster repair through collaboration between development and operations teams.
This document provides information on a course titled "Software Engineering" taught by Dr. P. Visu at Velammal Engineering College. The objectives of the course are outlined, including understanding software project phases, requirements engineering, object-oriented concepts, enterprise integration, and testing and project management techniques. Six course outcomes are also listed relating to comparing process models, requirements engineering, object-oriented fundamentals, software design, testing techniques, and project estimation and scheduling. The document then provides details on the 5 course units covering software process and agile development, requirements analysis, object-oriented concepts, software design, and testing and project management. Learning resources including textbooks and online links are also listed.
The vision of Autonomic Computing and Self-Adaptive Software Systems aims at realizing software that autonomously manages itself in the presence of varying environmental conditions. Feedback Control Loops (FCLs) provide generic mechanisms for self-adaptation; however, incorporating them into software systems raises many challenges.
The first part of this thesis addresses the integration challenge, i.e., forming the architecture connection between the underlying adaptable software and the adaptation engine. We propose a domain-specific modeling language, FCDL, for integrating adaptation mechanisms into software systems through external FCLs. It raises the level of abstraction, making FCLs amenable to automated analysis and implementation code synthesis. The language supports composition, distribution and reflection thereby enabling coordination and composition of multiple distributed FCLs. Its use is facilitated by a modeling environment, ACTRESS, that provides support for modeling, verification and complete code generation. The suitability of our approach is illustrated on three real-world adaptation scenarios.
The second part of this thesis focuses on model manipulation as the underlying facility for implementing ACTRESS. We propose an internal Domain-Specific Language (DSL) approach whereby Scala is used to implement a family of DSLs, SIGMA, for model consistency checking and model transformations. The DSLs have similar expressiveness and features to existing approaches, while leveraging Scala's versatility, performance, and tool support.
To conclude this thesis we discuss further work and further research directions for MDE applications to self-adaptive software systems.
The document provides an overview of an architecture example at DAFCA and discusses:
1) Key patterns used including Command, Template Method, Composite, and Layered Architecture patterns to encapsulate functionality and enforce pre/post conditions.
2) The emergence of domain concepts like Instruments, Commands, and Coordinators that mapped to user intent and hid implementation details.
3) How the architecture guided and enabled users to instrument designs while encapsulating DAFCA-specific logic.
The IBM Quick Deployer is an automated utility that uses IBM UrbanCode Deploy to install and configure IBM's Collaborative Lifecycle Management (CLM) and IoT Continuous Engineering (CE) solutions on Red Hat Linux virtual machines. It supports standard CLM deployment topologies, installs all required middleware like WebSphere Application Server and DB2, and configures applications. The Quick Deployer is used internally by IBM and available publicly to download, though support is not provided. It automates what was previously a manual process.
ACTRESS: Domain-Specific Modeling of Self-Adaptive Software Architectures (Filip Krikava)
Presentation given at 29th Symposium On Applied Computing (SAC'14) - Dependable and Adaptive Distributed Systems track.
It is mainly based on the work done during my Ph.D.
The document discusses different software development process models including waterfall, evolutionary development, incremental development, and spiral models. The waterfall model involves sequential phases of requirements, design, implementation, testing and maintenance. However, it does not handle changes well. Evolutionary and incremental models incorporate feedback loops and iterative development. The spiral model is risk-driven and guides teams to adopt elements of other models based on a project's risk assessment.
[2015/2016] Collaborative software development with Git (Ivano Malavolta)
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
This document proposes an approach to verifying the maintainability of software systems using aspect-oriented programming and architecture description languages. It describes using an ADL called MexADL to define maintainability characteristics and sub-characteristics. Aspects are used to associate quality metrics with implementation code and verify maintainability. The approach is demonstrated on an SQL query optimizer case study, showing how aspects capture valid interactions and associate metrics to verify the maintainability defined in the ADL. The approach helps detect architectural violations and prevent non-compliance between design and implementation artifacts.
This document provides an overview of the Topic-Chat project, which aims to develop a chat application for students to discuss different topics and subjects. It includes sections on system analysis, software requirements, selected technologies, system design, and outputs. The key technologies used are Google Cloud Messaging for push notifications, PHP for the server, MySQL for the database, and Android for the client. Diagrams are provided showing the entity relationship, use cases, and system architecture. The outputs demonstrated include admin and student interfaces for registration, login, viewing topics and messages.
The document provides an overview of a presentation on cloud performance testing. The presentation agenda includes cloud 101 concepts, cloud offerings and deployment models, challenges of cloud computing, and tools for cloud performance testing. It also summarizes a proof of concept that was conducted to compare the performance and costs of using a commercial tool versus an open source tool for load testing on cloud infrastructure. The results showed comparable response times between the tools and significantly lower costs when using the cloud versus maintaining physical infrastructure.
This document contains a resume and summary of qualifications for Prathesh B V. He has over 50 months of experience in information technology, including experience with software development, continuous integration tools, UNIX/Linux systems, and banking application implementations and automation. His experience includes roles at Cognizant Technology Solutions and Polaris Financial Technology. He holds a Bachelor's degree in Computer Science and Engineering.
Docker allows for easy deployment and management of applications by wrapping them in containers. It provides benefits like running multiple isolated environments on a single server, easily moving applications between environments, and ensuring consistency across environments. The document discusses using Docker for development, production, and monitoring containers, and outlines specific benefits like reducing deployment time from days to minutes, optimizing hardware usage, reducing transfer sizes, and enhancing productivity. Future plans mentioned include using Kubernetes for container orchestration.
This presentation by Andrew Aslinger discusses best practices and pitfalls of integrating Docker into Continuous Delivery Pipelines. Learn how Andrew and his team used Docker to replace Chef to simplify their development and migration processes.
This document provides an overview and summary of OpenShift v3 and containers. It discusses how OpenShift v3 uses Docker containers and Kubernetes for orchestration instead of the previous "Gears" system. It also summarizes the key architectural changes in OpenShift v3, including using immutable Docker images, separating development and operations, and abstracting operational complexity.
Docker allows creating isolated environments called containers from images. Containers provide a standard way to develop, ship, and run applications. The document discusses how Docker can be used for scientific computing including running different versions of software, automating computations, sharing research environments and results, and providing isolated development environments for users through Docker IaaS tools. K-scope is a code analysis tool that previously required complex installation of its Omni XMP dependency, but could now be run as a containerized application to simplify deployment.
Basic Idea
Develop a build system that leverages Docker to implement a continuous integration/deployment (CI/CD) pipeline. A git commit kicks off the build of a Docker image, which is then run and provisioned in a virtual machine.
After every commit, a series of test cases is run to ensure the correctness of the code. Once all test cases pass, the image is pushed to the Docker Hub registry and a VM is provisioned, which can then run the software directly after pulling the image from Docker Hub.
This process ensures that the most recent version of the code is always available to the person using the software, and speeds up the overall process by a factor of at least 2-3.
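The commit-triggered pipeline above can be sketched as an ordered list of shell commands. This is a minimal illustration, not the project's actual build system: the image name, tag, and test command are placeholders, and in a real setup each command would be executed with subprocess.run(cmd, check=True) so that a failing step aborts the pipeline.

```python
def ci_pipeline_steps(repo_dir: str, image: str, tag: str):
    """Commands a CI server would run, in order, after a git commit.
    Tagging the image with the commit hash keeps every build traceable."""
    return [
        ["python", "-m", "pytest", repo_dir],                   # run the test suite
        ["docker", "build", "-t", f"{image}:{tag}", repo_dir],  # package the image
        ["docker", "push", f"{image}:{tag}"],                   # publish to the registry
    ]

# Hypothetical repo, image name, and commit hash:
for cmd in ci_pipeline_steps(".", "myorg/myapp", "abc1234"):
    print(" ".join(cmd))
```

Provisioning the VM would then pull and run the pushed image (docker pull followed by docker run) as the final step.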
Develop and deploy Kubernetes applications with Docker - IBM Index 2018 - Patrick Chanezon
Docker Desktop and Enterprise Edition now both include Kubernetes as an optional orchestration component. This talk will explain how to use Docker Desktop (Mac or Windows) to develop and debug a cloud native application, then how Docker Enterprise Edition helps you deploy it to Kubernetes in production.
This document provides an overview of microservices architecture, including concepts, characteristics, infrastructure patterns, and software design patterns relevant to microservices. It discusses when microservices should be used versus monolithic architectures, considerations for sizing microservices, and examples of pioneers in microservices implementation like Netflix and Spotify. The document also covers domain-driven design concepts like bounded context that are useful for decomposing monolithic applications into microservices.
Using Containers to More Effectively Manage DevOps Continuous Integration - Cognizant
IT organizations can enhance efficiency and cut costs by deploying containers to manage DevOps continuous integration (CI) infrastructure that is self-contained and autonomous.
Usage guide of Unicorn platform
UNICORN: A Novel Framework for Multi-cloud Services Development, Orchestration, Deployment and Continuous Management Fostering Cloud Technologies Uptake from Digital SMEs and Startups
VMworld 2013: Best Practices for Application Lifecycle Management with vCloud... - VMworld
VMworld 2013
Amjad Afanah, VMware
Rajesh Khazanchi, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
IBM Bluemix OpenWhisk: Serverless Conference 2016, London, UK: The Future of ... - OpenWhisk
Learn more about the IBM Bluemix OpenWhisk, a serverless event-driven compute platform, which quickly executes application logic in response to events or direct invocations from web/mobile apps or other endpoints.
N. Sathish Kumar has over 10 years of experience in the IT industry. He has expertise in Java, Spring, Hibernate, Oracle, SQL Server, and legacy modernization tools like BluAge. Some of his projects include modernizing banking applications, developing web applications for failure analysis tracking and supply chain management, and migrating mainframe screens to new interfaces. He is skilled at all phases of the software development life cycle from analysis to deployment.
The evolving technology of the modern age has made it necessary to control existing technologies efficiently and comfortably. Consumers expect products that are easy to use and efficient, and that can be bought at the lowest possible cost. The daily difficulties with lighting automation faced by people ranging from industry professionals to modern-day housewives inspired this project.
The project aims at controlling lighting appliances, from industries to suburban homes, using a web-based application at the front end, complemented by an end-user application developed for the target location using a ZigBee-based network. Other networking technologies like Bluetooth and WiFi consume much more energy than ZigBee and are costlier too. The project facilitates controlling lighting appliances in groups as well as individually. The color and density of the lights can also be changed. Quick access is provided by predefined, end-user-definable presets. Other features include scheduling options, live feedback, notifications and maintenance pop-ups, bill estimation, power consumption monitoring, etc.
Linux-Based Data Acquisition and Processing On Palmtop Computer - IOSR Journals
This document describes a Linux-based data acquisition and processing system implemented on a palmtop computer. The system uses a PCMCIA data acquisition card and free Linux drivers and libraries to acquire signals from sensors. As a demonstration, a phonometer application was created that can sample 1024 signals at 100 ksamples/s and compute the fast Fourier transform of the signal up to 6 times per second. The document outlines the hardware and software design of the system, including using a custom Linux kernel, COMEDI libraries for device control, and TCL/Tk for the user interface. Experimental results showed the system could successfully implement the phonometer application for acoustic signal analysis on the palmtop computer.
Linux-Based Data Acquisition and Processing On Palmtop Computer - IOSR Journals
This document describes the development of a data acquisition and processing system using a palmtop computer running Linux. The system uses a PCMCIA data acquisition card and free Linux drivers and libraries. A demo application was created that can sample 1024 signals from a microphone at 100 ksamples/s and compute the fast Fourier transform of the signal up to 6 times per second. The document outlines the hardware and software implementation including developing the C code on a desktop, cross compiling it for the palmtop, and downloading and testing the executable on the palmtop computer. It provides details on using COMEDI libraries for data acquisition and TCL/Tk for the graphical user interface.
The mainstreaming of containerization and microservices is raising a critical question by both developers and operators: how do we debug all this?
Debugging microservices applications is a difficult task. The state of the application is spread across multiple microservices, and it is hard to get a holistic view of the state of the application. Currently, debugging of microservices is assisted by OpenTracing, which helps trace a transaction or workflow for post-mortem analysis, and by Linkerd and Istio, which monitor the network to identify latency problems. These tools, however, do not allow monitoring and interacting with the application at run time.
In this talk, we will describe and demonstrate common debugging techniques and we will introduce Squash, a new tool and methodology.
Martin Koons is a senior .NET developer and software architect with over 20 years of experience developing applications using technologies like C#, ASP.NET, SQL Server, and Entity Framework. He has extensive experience designing and developing n-tier architectures, distributed systems, and mobile applications. Currently, he works as a senior .NET developer at UPS where he maintains package delivery systems and develops utility programs using C# and databases.
Introduction of Cybersecurity with OSS at Code Europe 2024 - Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way that breaks data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is repaid by taking even bigger "loans", resulting in ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
Main news related to the CCS TSI 2023 (2023/1695) - Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Taking AI to the Next Level in Manufacturing.pdf - ssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Digital Marketing Trends in 2024 | Guide for Staying Ahead - Wask
https://www.wask.co/ebooks/digital-marketing-trends-in-2024
Feeling lost in the digital marketing whirlwind of 2024? Technology is changing, consumer habits are evolving, and staying ahead of the curve feels like a never-ending pursuit. This e-book is your compass. Dive into actionable insights to handle the complexities of modern marketing. From hyper-personalization to the power of user-generated content, learn how to build long-term relationships with your audience and unlock the secrets to success in the ever-shifting digital landscape.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application... - Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
GraphRAG for Life Science to increase LLM accuracy - Tomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
FREE A4 Cyber Security Awareness Posters-Social Engineering part 3 - Data Hops
Free A4 downloadable and printable Cyber Security, Social Engineering Safety and security Training Posters . Promote security awareness in the home or workplace. Lock them Out From training providers datahops.com
The Microsoft 365 Migration Tutorial For Beginner.pptx - operationspcvita
This presentation will help you understand the power of Microsoft 365. We cover every productivity app included in Office 365, and we outline Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Application cloudification with liberty and urban code deploy - UCD
Slide 1
Application Cloudification with Liberty and UrbanCode Deploy (UCD)
Migration of 3 real applications from WAS to Liberty at a bank, adopting UrbanCode Deploy as the release automation tool
January 2020
Davide Veronese
davide.veronese@it.ibm.com
https://davideveronese.wordpress.com/
https://www.linkedin.com/in/davide-veronese-b8b08b28/
Slide 2
Agenda
Activity overview and scope
Objectives
Completion status
Infrastructure env
Teams set-up
Timeline
Workshops outcomes
References for questions and discussed points
Slide 3
Cloud evolution – Status update
[Diagram: Bank apps on Liberty, evolving through steps 1 - Libertize, 2 - Automation with LMI, 3 - Containerize, 4 - Operate]

Phase 1 (object of this presentation):
1. Libertize – Deploy the application code on Java Liberty
2. Automation – (Build and) deploy the applications on Liberty with an LMI tool

Phase 2:
3. Containerize – Move the Bank apps with Liberty onto containers with the LMI tool
4. Operate – Run the containers on CaaS (OCP) or another platform

Value Proposition:
• IT topology simplification with the adoption of a lightweight, open-source application server
• Optimized migration impact: high portability from WAS to Liberty
• 100% compatibility with Kubernetes for container cluster implementation
• Enabler for an application architecture simplification
• Accelerated DevOps adoption
• Strong rollback and recovery capabilities
Slide 4
Bank app evolution PoC - objectives
Topics included in the PoC:
• Install selected Bank apps on the WAS Liberty environment
• Test and verify:
  • Conflicts between Bank apps and Liberty libraries
  • Resource access
  • Security and SSO
• Configuration management, based on a master XML configuration file and an included XML config; test and verify the Liberty product as a future CTS candidate
• Support the Customer in identifying gaps and processes to be implemented for the new platform
• Analysis and implementation of the deployment process covering single and multiple J2EE applications on Liberty
• Implementation of the use case for installation rollback in the following two scenarios:
  • Rollback caused by a fault/error in one of the installation steps
  • Rollback driven by the user after a successful installation
• UCD native API exposure
Slide 5
PoC activities – Completion status
Activities and completion status:

• Install selected Bank apps on the WAS Liberty environment
  Status: VMs available, Liberty and UCD installed and running. XFrame2 apps installed correctly.
• Test and verify (1) conflicts between XFrame2 and Liberty libraries, (2) resource access, (3) security and SSO
  Status: (1) intercommunication between the different deployed applications works correctly; (2) backend access OK, reading property files from an application still to be evaluated; (3) Bank Security involved to identify potential solutions, no issue identified.
• Configuration management based on a master XML configuration file and an included XML config; test and verify the Liberty product as a future CTS candidate
  Status: Config management addressed during the workshops, to be finalized after PoC completion by involving other Bank organizations (Security). No open items. Integration with CyberArk analysed; integration with Jenkins demoed.
• Support the Customer in identifying gaps and processes to be implemented for the new platform
  Status: Process and organization topics jointly discussed, no criticality identified. Finalizing the impacts and gaps depends on the timeline of the target Bank solution. Integration with MLX and other Bank tools discussed, to be further analysed.
• Analysis and implementation of the deployment process covering single and multiple J2EE applications on Liberty*
  Status: Implemented 3 deployment processes using a common template. Bank hands-on session on UCD held to review the work of the previous workshops.
• Implementation of the use case for installation rollback in two scenarios*: rollback caused by a fault/error in one of the installation steps; rollback driven by the user after a successful installation
  Status: Rollback included in the deployment processes for the 3 Bank apps. Deeper understanding of the rollback mechanism completed.
• UCD native API exposure*
  Status: APIs exposed by UCD and documented; not tested in this phase. Further analysis (API overview, classification, and how the APIs can be invoked) scheduled for the Jan 27th workshop.
Slide 6
Bank apps evolution PoC – Infrastructure env
The environment provided in this proposal is composed of:
• 3x2 instances of the Liberty product, one for each application in scope
• 1 instance of the IHS product
• 1 instance of UCD
• 1 Oracle schema for UCD and 10 GB of SAN storage
The following schema provides an architectural view of the environment already provisioned within a Private Cloud.
[Architecture diagram: Liberty instances hosting App1, App2, and App3]
Slide 7
Workshop activities
Activities Done:
PoC env set-up according to the defined topology (slide 4) + Oracle DB
(in progress) Firewall requests for UCD integration with Nexus and CyberArk)
Bank applications manually installed on Liberty servers
Configuration management for Liberty, and related role mapping in UCD
• Analysis of requirements from Bank and IT supplier
• Definition of a potential target solution in the PoC
• (in progress) Definition of a final solution to be further analysed after the PoC
Bank apps Deploy processes implemented with UCD
• App1: completed first version with rollback
• App2: completed first version with rollback
• App3: completed first version with rollback
userID creation for Bank access to UCD console
Slide 8
Config Management, high level solution (1/2)
High level view of the solution:
• Environment property management: use environment properties to customize the configuration, adopting tokens in the config files that can be easily replaced. The IT supplier proposes a token structure for variables similar to WebSphere ( ${property} ).
• Easy management of mainframe resources: the Bank asked to identify a solution to reduce the effort for application teams to connect to mainframe environments. For DB2, MQ, CTG and IMS, the «ambito» (scope) usually identifies the hostname and port to be used for the connection factory in server.xml. 3 options have been discussed ("C" is currently implemented in UCD).
Slide 9
Config Management, high level solution (2/2)
High level view solution:
Config ownership and responsibility: the Liberty conf model provides high level freedom to the
application teams. This means also more responsibility and impact on SLAs when configuring shared
resources (e.g. DB2). 3 options have been evaluated to reduce the responsibility for application teams
(“A” is currently partially implemented in UCD):
• A) Configuration Override at application server level
• B) XML schema validator
• C) manual approval
• Config management at application level: the team discussed the option of having one single config component (managed by the application team) for each UCD application. Given that the deployment environment is shared by multiple EARs, and that there are multiple subsystems for the same AAM code, this solution would prevent duplication and overwriting. The option is still under discussion. The alternative, having more than one config component per application, can be applied, but a review of potential conflicts is required.
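Option B above (an XML schema validator) could be approached, in a minimal form, as a structural check of server.xml before deployment. A full XSD validation would need the Liberty schema, so this sketch only checks the top-level elements against a hypothetical whitelist of what application teams are allowed to declare; both the whitelist and the sample file are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical policy: top-level elements the application teams may declare
ALLOWED_TOP_LEVEL = {"featureManager", "dataSource", "jmsQueueConnectionFactory"}

def check_shared_resources(server_xml: str) -> list:
    """Return the top-level element tags not in the allowed set.

    An empty list means the configuration passes this (minimal) gate.
    """
    root = ET.fromstring(server_xml)
    return [child.tag for child in root if child.tag not in ALLOWED_TOP_LEVEL]

# Sample server.xml fragment: connectionManager is outside the whitelist
config = """<server>
  <featureManager><feature>jdbc-4.2</feature></featureManager>
  <dataSource jndiName="jdbc/bankDb"/>
  <connectionManager maxPoolSize="500"/>
</server>"""

print(check_shared_resources(config))  # ['connectionManager']
```

A check like this could run as a scripted step in the UCD component process, failing the deployment before the configuration reaches a shared resource.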
Main outcomes
Value-added capabilities introduced with UCD (Workshop 1):
• Process templates for EAR and WLP config, providing reuse and standardization
• Rollout deployment to primary and secondary resources, providing business continuity
• Installation rollback feature provided by the product
• An "operational process" to start and stop servers, usable by developers in test environments
• Environment properties used to replace tokens, so that configuration files are customized per environment
• Calls to an external API during the deployment process to check application liveness
• Possibility to define human tasks with human interactions
• Possibility to define an approval process for a specific environment before a deployment starts
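The liveness check listed above (calling an external API during the deployment process) can be sketched as a simple HTTP poll with Python's standard library. The health endpoint URL, timeout, and retry policy here are illustrative assumptions, not the Bank's actual check.

```python
import time
import urllib.request
import urllib.error

def wait_until_live(url: str, attempts: int = 10, delay_s: float = 3.0) -> bool:
    """Poll a health endpoint until it answers HTTP 200 or attempts run out."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not reachable yet; retry after a short delay
        time.sleep(delay_s)
    return False

# Hypothetical Liberty health endpoint checked after deployment:
# if not wait_until_live("http://app1.test.bank.local:9080/health"):
#     raise SystemExit("deployment verification failed: application not live")
```

In a UCD process this would typically be the last step of the deployment, so a failed check marks the process as failed and can trigger the rollback path.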
Main outcomes
Value-added capabilities introduced with UCD (Workshop 2):
• UCD APIs overview:
https://www.ibm.com/support/knowledgecenter/it/SS4GSP_7.0.4/com.ibm.udeploy.reference.doc/topics/rest_api_ref_overview.html
• Integration with the Change Management system to push standard Bank roles into UCD for each application, and to align UCD components with Change Management subsystems
• Quality gate: implementation of a basic quality gate based on component version status (e.g. Validated or Deprecated)
• New application onboarding process on UCD, as follows:
• The IT supplier defines Resources, Resource groups, and the Resource tree in UrbanCode (this may be done manually; in the Cloud case, an image with the UCD agent could already be available)
• The application owner creates the Components, then the Application, and maps the components to the resource tree
• Using the application template concept, all the components and the related resource tree are created
• Agent prototyping can also be used to create a full-fledged environment (the IT supplier still needs to provision actual servers and map them to agent prototypes)
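The basic quality gate mentioned above can be sketched as a check on the statuses carried by a component version. The status names (Validated, Deprecated) come from the slide; the gate function itself, and the idea of reading the statuses from UCD via the REST API or CLI before deploying, are illustrative assumptions.

```python
def passes_quality_gate(statuses) -> bool:
    """Gate logic: a component version must carry the 'Validated' status
    and must not be 'Deprecated' to be eligible for deployment."""
    return "Validated" in statuses and "Deprecated" not in statuses

# In a real process the statuses would be fetched from the UCD server;
# here they are sample data to show the gate behaviour.
print(passes_quality_gate(["Validated"]))                # True
print(passes_quality_gate(["Validated", "Deprecated"]))  # False
print(passes_quality_gate([]))                           # False
```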
Workshop 3 outcomes (1/2)
1. Bank hands-on session on UCD to review what was done in previous workshops:
• Creation of 2 new UCD components using the template, for both the EAR file and the config file
• Clarification of Bank/IT supplier responsibilities
• Possibility to create components from a Shared Library
• UCD Environment deep dive: an environment is a set of target resources with related configurations, used as the destination of an application deployment
• Creation of an application from the template
• Review of the Primary and Secondary server categories for environment set-up
• Review of the deployment processes already implemented
• Overview of how UCD can be triggered by the availability of a new application baseline released into the source code versioning system
2. API overview, classification, and how the APIs can be invoked:
• Overview of the API documentation
(https://www.ibm.com/support/knowledgecenter/it/SS4GSP_7.0.4/com.ibm.udeploy.reference.doc/topics/rest_api_ref_overview.html)
• Focus on the UCD "teamsecurity" API group
• Demo of UCD API adoption in a Groovy script to integrate Jenkins with UCD
• Adoption of UCD APIs as CLI utility commands for authentication
Workshop 3 outcomes (2/2)
3. Overview of the gaps between the current deployment process on WAS and the deployment processes implemented with UCD in the PoC for Liberty:
• Open discussion and review of the processes already implemented. No specific issues identified
4. Deeper understanding of the rollback mechanism:
• Review of the rollback capabilities implemented in the UCD processes for the Bank apps
• Discussion about how this solution can be enriched to address further fault scenarios
5. Other topics discussed:
• UCD and Jenkins integration: demo of the "Jenkins Publisher" UCD plug-in to integrate a Jenkins pipeline with UCD, and creation of a new UCD component driven by Jenkins
• UCD and CyberArk integration: overview of the available plug-in and discussion about its potential usage with Liberty apps. The Bank Security team has to be involved for further evaluations
• SSO options for Liberty (Bank Security involved), with discussion of a potential solution based on JWT. The final closure of this topic is outside the UCD/Liberty perimeter and will be finalised with the Bank Security team
UrbanCode Deploy objects mapping
This meta-model represents all UrbanCode Deploy object types and their relationships (object meta-model diagram).
UrbanCode Deploy: Extending product functions
The functions and integration capabilities of UrbanCode Deploy can be extended using:
• Plug-ins
UrbanCode Deploy plug-ins provide tools for creating component processes and integrations. UrbanCode Deploy provides plug-ins for several common deployment processes, and others are available to integrate with a wide variety of tools, such as middleware tools, databases, and servers. Ref.:
https://www.ibm.com/support/knowledgecenter/SS4GSP_7.0.4/com.ibm.udeploy.reference.doc/topics/plugin_ch.html
• REST API
The UrbanCode Deploy server has a REST interface that can be used to automate tasks on the server. Ref.:
https://www.ibm.com/support/knowledgecenter/SS4GSP_7.0.4/com.ibm.udeploy.reference.doc/topics/rest_api_ref_overview.html
• Command-line interface
The CLI is a command-line interface that provides access to the UrbanCode Deploy server. It can be used to find or set properties and to run numerous functions. Ref.:
https://www.ibm.com/support/knowledgecenter/SS4GSP_7.0.4/com.ibm.udeploy.reference.doc/topics/cli_ch.html
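As a sketch of how the REST interface can be driven from a script, the helper below builds an authenticated GET request with Python's standard library. The endpoint path (/cli/application, to list applications) and the use of HTTP basic authentication with a user/token pair are assumptions to verify against the reference linked above; the host name and credentials are hypothetical.

```python
import base64
import urllib.request

def build_ucd_request(base_url: str, path: str, user: str, token: str) -> urllib.request.Request:
    """Build a GET request for a UCD REST endpoint with HTTP basic auth."""
    req = urllib.request.Request(base_url.rstrip("/") + path)
    cred = base64.b64encode(f"{user}:{token}".encode()).decode()
    req.add_header("Authorization", "Basic " + cred)
    req.add_header("Accept", "application/json")
    return req

# Hypothetical call; send it with urllib.request.urlopen(req) against a real server
req = build_ucd_request("https://ucd.bank.local:8443", "/cli/application", "admin", "secret-token")
print(req.full_url)
```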
UrbanCode Deploy: Managing security
UrbanCode Deploy uses a flexible team-based and role-based security model that maps to your organizational structure.
At a high level, the security system for the server consists of an authentication realm, an authorization realm, roles, and teams. The authentication realm verifies the identity of the user or system that is trying to log on to the UrbanCode Deploy server. The authorization realm manages user groups.
The available authentication and authorization realms are as follows:
• Internal Storage
Uses internal role management. The default authorization realm (Internal Security) is of this type.
• LDAP or Active Directory
Uses external LDAP role management.
• SSO
Provides single sign-on authorization.
Further information on UrbanCode Deploy security concepts can be found at:
https://www.ibm.com/support/knowledgecenter/SS4GSP_7.0.4/com.ibm.udeploy.admin.doc/topics/security_ch.html
The next slide shows the security model implemented in UrbanCode Deploy.
UrbanCode Deploy: Integrating security with external systems
UrbanCode Deploy provides both CLI commands and REST APIs to manage teams, user/group assignments, and resource/team associations.
The integration implementation depends on the security model implemented in the external system.
A general suggestion is to define a group for each team/role pair, and to add that group, with the corresponding role, to the team:
• Team T1
• Role R1 → group T1R1
• Role R2 → group T1R2
• Role Rn → group T1Rn
With this configuration, synchronization with an external system can be done just by synchronizing group members, using the REST API or the CLI commands addUserToGroup / removeUserFromGroup.
If the authorization realm is LDAP/AD or SSO, the sync is not necessary.
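The group-member synchronization described above amounts to diffing the desired membership (from the external system) against the current membership in UCD, then applying the difference. The diff logic is shown below; the actual addUserToGroup / removeUserFromGroup invocations (CLI or REST) are left as placeholders, since their exact flags should be taken from the CLI reference rather than from this sketch.

```python
def membership_diff(desired, current):
    """Return (to_add, to_remove) so that 'current' becomes 'desired'."""
    desired, current = set(desired), set(current)
    return sorted(desired - current), sorted(current - desired)

def sync_group(group, desired, current):
    """Print the sync actions for one group.

    In a real integration each action would run the UCD CLI (e.g. the
    addUserToGroup / removeUserFromGroup commands; flags to be checked
    against the CLI reference) or call the equivalent REST endpoint.
    """
    to_add, to_remove = membership_diff(desired, current)
    for user in to_add:
        print(f"addUserToGroup user={user} group={group}")
    for user in to_remove:
        print(f"removeUserFromGroup user={user} group={group}")

# Hypothetical sync for group T1R1: alice must be added, carol removed
sync_group("T1R1", desired=["alice", "bob"], current=["bob", "carol"])
```

Because the diff is computed first, the sync is idempotent: running it again when the memberships already match produces no actions.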
UrbanCode Deploy: Jenkins integration
UrbanCode Deploy provides the "Jenkins Pipeline" plug-in: https://developer.ibm.com/urbancode/plugin/jenkins-2-0/
This plug-in is installed on the Jenkins server and includes functions to interact with IBM UrbanCode Deploy components and deployments. With this plug-in, you can complete the following tasks:
• Create components
• Publish artifacts to a version
• Start component version imports
• Deploy snapshots or component versions
• Run operational processes
• Run the step multiple times within a single job
• Accomplish all of the above with pipeline script syntax
Useful links:
• Jenkins Pipeline Plug-in Tutorial: Component Version Import and Snapshot Deployment:
https://developer.ibm.com/urbancode/2017/07/11/jenkins-pipeline-tutorial/
• https://github.com/UrbanCode/jenkins-pipeline-ucd-plugin
UrbanCode Deploy: CyberArk integration
CyberArk has implemented and released a plug-in for UrbanCode Deploy: https://developer.ibm.com/urbancode/plugin/cyberark/
This plug-in allows UrbanCode Deploy to get credentials from the Enterprise Password Vault (EPV) via the Application Identity Manager (AIM), and to get secrets from Conjur for setting up a CI/CD workflow.
The CyberArk plug-in provides the following process steps:
The CyberArk plug-in provides the process steps:
• Authenticate Conjur
• Get Password from CCP (Web Service)
• Get Password from CP (CLI Utility)
• Get Variable from Conjur
The CyberArk plug-in's password retrieval steps generate secure process request properties, accessible only by the currently running process. In subsequent steps you can access these properties using the syntax ${p:CyberArk/password}, ${p:CyberArk/username}, and ${p:CyberArk/address}.
Useful links:
• https://github.com/cyberark/urbancode-conjur-aim