RLMCA202
Continuous Delivery
Module4
Dr. Sudheer Sankara Marar MCA, MBA, MTech, MA.JMC, PhD
HOD-MCA,
Nehru College of Engineering and Research Centre
Automated Acceptance Testing
• Let’s explore automated acceptance testing,
and its place in the deployment pipeline
• Acceptance tests take delivery teams beyond
basic continuous integration, by testing the
business acceptance criteria of your
application & validating that it provides users
with valuable functionality.
Automated Acceptance Testing:
Significance
• It’s costly to build and maintain. But the cost of an
automated acceptance test suite is much lower
than that of frequent manual acceptance and
regression testing, or the risk of releasing
poor-quality software.
• No other kind of test proves that the application, as
deployed to production, delivers real business value
as users expect.
• It protects the application as the team makes
large-scale changes to it.
Creating Maintainable Acceptance Test
Suites: the layered design
• Acceptance tests are derived
from acceptance criteria
• Acceptance criteria must follow
the INVEST principles (ie, they
must be Independent,
Negotiable, Valuable, Estimable,
Small, And Testable)
• Acceptance criteria should
then be automated
• Automated acceptance tests
should be layered
Testing against the GUI
• Acceptance tests are intended to simulate
user interactions with the system
• If acceptance tests are coupled to your UI,
– small changes to the UI can easily break your
acceptance test suites; this is a high risk.
• If the GUI layer is a clearly defined collection of
display-only code that contains no business
logic of its own,
– the risk associated with bypassing it in your
acceptance tests is small.
Creating Acceptance Tests
how to create automated acceptance tests
• The Role of Analysts and Testers
– have a business analyst working as part of each team, representing the
customers and users of the system to identify and prioritize requirements.
– Have testers to ensure that the quality and readiness of the software is understood
by everybody
• Analysis on Iterative Projects
– Short kick-off meetings are vital to ensure that every party has a good
understanding of each requirement and of their role in the delivery process.
– This prevents analysts from creating “ivory tower” requirements that are
expensive to implement or test
• Acceptance Criteria as Executable Specifications
– Acceptance tests are executable specifications of the behavior of the software
being developed.
– Behavior-driven development has your acceptance criteria written in
the form of the customer’s expectations of the application’s behavior
– Tools like Cucumber, JBehave, Concordion, Twist, and FitNesse allow you to
write acceptance criteria like these as plain text and keep them synchronized
with the actual application.
• For example, in Cucumber, you
would save the acceptance
criterion as a plain-text feature
file (a sketch follows)
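The example from the original slide image is not reproduced here. As an illustrative sketch only, the criterion below is a hypothetical "paying a bill" scenario. Cucumber itself targets Ruby and the JVM; the automation is shown with behave, a Python tool that reads the same plain-text Gherkin format. The feature text, step names, and the context.driver application driver are assumptions, not part of the original deck.

```python
# features/payment.feature (plain text, kept under version control):
#   Scenario: Paying a bill from a current account
#     Given I have a current account with a balance of 100
#     When I pay a bill of 40
#     Then the account balance should be 60
#
# features/steps/payment_steps.py -- the automation behind the words:
from behave import given, when, then

@given('I have a current account with a balance of {balance:d}')
def create_account(context, balance):
    # context.driver is a hypothetical application driver (see later slides)
    context.account = context.driver.create_current_account(balance=balance)

@when('I pay a bill of {amount:d}')
def pay_bill(context, amount):
    context.driver.pay_bill(context.account, amount)

@then('the account balance should be {expected:d}')
def check_balance(context, expected):
    assert context.driver.get_balance(context.account) == expected
```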
The Application Driver Layer
• The application driver layer understands how to talk to your
application—the system under test.
• The API for the application driver layer is expressed in a
domain-specific language (DSL)
• It is possible to dispense with the acceptance criteria layer
and express the acceptance criteria directly in the
implementation of the test.
– it makes acceptance tests completely independent of each
other.
– it allows you to create test data with a few simple high-level
commands
• If a test doesn’t care about details, the DSL will supply
defaults that work (as sketched below).
• The application driver layer, along with the DSL represented
by its API, tends to become quite extensive.
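A minimal sketch of what such an application driver might look like in Python; the class, endpoints, and the api_client transport are hypothetical, not from the original material. The point is that a test issues a few high-level domain commands, and any detail it does not specify is filled in with a sensible default.

```python
import uuid

class ApplicationDriver:
    def __init__(self, api_client):
        self.api = api_client  # some transport into the system under test (assumed)

    def create_user(self, name=None, country="US", premium=False):
        # Defaults let a test simply call driver.create_user() when the
        # scenario does not care who the user is.
        name = name or f"test-user-{uuid.uuid4().hex[:8]}"
        return self.api.post("/users", {"name": name,
                                        "country": country,
                                        "premium": premium})

    def place_order(self, user, items=None):
        items = items or [{"sku": "DEFAULT-SKU", "qty": 1}]
        return self.api.post(f"/users/{user['id']}/orders", {"items": items})
```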
The Application Driver Layer
• How to Express Your Acceptance Criteria
– The advantage of the external DSL approach is that you can
round-trip your acceptance criteria (it requires fewer technical skills)
– The internal DSL approach requires less complex tooling, and you
can use your editor’s autocomplete functionality (it requires more
technical skills)
• The Window Driver Pattern
– The application driver layer is the only layer which understands
how to interact with the application
– provides abstraction to reduce the coupling between the
acceptance tests and the GUI of the system
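A sketch of the Window Driver pattern using the Selenium WebDriver Python bindings; the element IDs and page structure are assumptions. Only this class knows how the login screen is built, so a GUI change means updating the driver, not every acceptance test.

```python
from selenium.webdriver.common.by import By

class LoginWindowDriver:
    def __init__(self, webdriver):
        self.driver = webdriver

    def login(self, username, password):
        # All knowledge of the GUI (element IDs) is confined to this class.
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

    def error_message(self):
        return self.driver.find_element(By.ID, "login-error").text

# An acceptance test then reads at the level of intent:
#   LoginWindowDriver(browser).login("alice", "wrong-password")
#   assert "Invalid credentials" in LoginWindowDriver(browser).error_message()
```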
Implementing Acceptance Tests
• State in Acceptance Tests
– Acceptance tests must simulate user interactions with the
system in a manner that will meet its business requirements.
– to test some behavior of the application, it must be in a specific
starting state
• Process Boundaries, Encapsulation, and Testing
– Don’t break encapsulation to make it testable
– Never add test-only interfaces to remote system components
• Managing Asynchrony and Timeouts
– Isolate the asynchrony behind synchronous calls.
– use test doubles to simulate the linking components
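One common way to isolate asynchrony behind a synchronous call is a poll-with-timeout helper, sketched below; the driver.message_received query in the usage comment is hypothetical.

```python
import time

def wait_until(condition, timeout=30.0, poll_interval=0.5):
    """Poll `condition` until it returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Usage in a test (message_received is a hypothetical query on the driver):
#   driver.send_order(order)
#   wait_until(lambda: driver.message_received(order.id), timeout=60)
```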
The Acceptance Test Stage
• The acceptance test suite should be run
against every build that passes the commit
tests
• In the deployment pipeline, only release
candidates that have passed this stage are
available for subsequent stages.
The Acceptance Test Stage
• Keeping Acceptance Tests Green
– Yes, it may be time consuming, but never dilute it.
– Developers must sit and wait for the tests to pass
– get your acceptance tests green to feel confident about the
quality of your software.
– Use gimmicks, such as “lava lamps” or “bells and whistles,” to
keep your tests in good shape.
• Deployment Tests
– The best acceptance tests are atomic; they create their own
start conditions and tidy up at their conclusion (a sketch of
such a test follows this slide).
– Design the test environment to be as close as possible to the
expected production environment.
– Deployment tests are intended to show that the deployment has been
successful and to establish a known-good starting point for the
execution of acceptance tests.
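A minimal sketch of an atomic acceptance test written as a pytest fixture, assuming a hypothetical driver fixture that exposes the application driver from earlier slides; the test creates its own start conditions and tidies up at its conclusion even if the assertion fails.

```python
import pytest

@pytest.fixture
def fresh_account(driver):
    # `driver` is assumed to be an application-driver fixture defined
    # elsewhere (e.g. in conftest.py); the account methods are hypothetical.
    account = driver.create_current_account(balance=100)
    yield account
    driver.delete_account(account)   # tidy up after the test

def test_paying_a_bill_reduces_the_balance(driver, fresh_account):
    driver.pay_bill(fresh_account, amount=40)
    assert driver.get_balance(fresh_account) == 60
```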
Acceptance Test Performance
• Automated acceptance tests exist to assert that
our system delivers the expected value to our
users.
• Acceptance test suites can take several hours to
complete
• There are various techniques to improve the
overall efficiency of the team by reducing the
time it takes to get a result from the acceptance
test stage.
Acceptance Test Performance
• Refactor Common Tasks
– look for quick wins by keeping a list of the slowest tests; spend a
little time to find ways to make them more efficient.
• Share Expensive Resources
– Create a standard blank instance of the application at the start of
the test and discard it at the end.
– Decide which resources we will share between tests and which we
will manage within the context of a single test.
• Parallel Testing
– Divide your tests so that there is no risk of interaction between
them, then run them in parallel against a single instance of the
application (a sketch follows this slide)
• Using Compute Grids
– For tests that are expensive in their own right, or
– for tests where it is important to simulate many concurrent users,
go for compute grids
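A sketch of the coarse-grained parallel running mentioned above: split the acceptance suite into batches that cannot interact, then run each batch as a separate process. The batch paths and the use of pytest are assumptions about the project layout.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Independent acceptance-test batches (paths are hypothetical).
BATCHES = [
    "tests/acceptance/orders",
    "tests/acceptance/payments",
    "tests/acceptance/reporting",
]

def run_batch(path):
    # Each batch runs in its own pytest process.
    return subprocess.run(["pytest", path]).returncode

with ThreadPoolExecutor(max_workers=len(BATCHES)) as pool:
    exit_codes = list(pool.map(run_batch, BATCHES))

print("acceptance stage", "PASSED" if all(c == 0 for c in exit_codes) else "FAILED")
```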
Case Study: Using Cloud Computing for
Acceptance Tests
• To increase the sophistication of
the acceptance test environment:
• Optimization began by identifying and
refactoring common patterns in the
acceptance tests.
• Separated out API tests and ran them
first, ahead of the UI-based tests.
• Did some coarse-grained parallel
running of tests.
• Divided the parallel runs into a couple
of batches.
• Switched to the Amazon EC2 compute
cloud for ease of access and wider
scalability.
Testing for Nonfunctional requirements
• Nonfunctional requirements focus on testing capacity,
throughput, performance, etc.
– Performance is a measure of the time taken to process a
single transaction
– Throughput is the number of transactions a system can
process in a given timespan.
– Capacity is the maximum throughput a system can sustain,
for a given workload, while maintaining an acceptable
response time for each individual request.
• NFRs such as availability, capacity, security, and
maintainability are as important and valuable as
functional ones
Managing Nonfunctional
Requirements
• NFRs
– have real business value.
– they are different and tend to cross the boundaries of other
requirements.
– They’re hard to handle in terms of analysis and
implementation.
• Everybody involved in delivery—developers, operations
personnel, testers, and the customer—needs to think
through the application’s NFRs and their impact on the
system model
• Analyzing Nonfunctional Requirements
– define expectations as stories with quantitative specifications
– supply a reasonable level of detail when analyzing NFRs
Programming for Capacity
• Poorly analyzed NFRs tend to constrain thinking
and lead to overdesign and inappropriate
optimization.
– Focusing too early and too heavily on optimizing the
capacity of the application is inefficient and expensive.
• Avoid two extremes:
– the assumption that you will be able to fix all capacity
issues later;
– writing overcomplex code in fear of future capacity
problems.
Programming for Capacity
Strategy list to address capacity problems
1. Decide upon an architecture for your application.
2. Understand and use patterns and avoid antipatterns that affect
the stability and capacity of your system.
3. Keep the team working within the boundaries of the chosen
architecture
4. Pay attention to the data structures and algorithms chosen,
making sure that their properties are suitable for your application.
5. Be extremely careful about threading.
6. Establish automated tests that assert the desired level of capacity (a
sketch follows this list).
7. Use profiling tools as a focused attempt to fix problems
8. Use real-world capacity measures.
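A sketch of point 6 above: an automated capacity test that drives a fixed number of concurrent requests and asserts both a response-time and a throughput threshold. The URL, the thresholds, and the use of the requests library are assumptions chosen only for illustration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://capacity-test-env.example.com/api/orders"   # hypothetical
CONCURRENT_USERS = 25
TOTAL_REQUESTS = 500
MAX_MEAN_RESPONSE = 0.5   # seconds
MIN_THROUGHPUT = 40.0     # requests per second

def timed_get(_):
    started = time.monotonic()
    requests.get(URL, timeout=5)
    return time.monotonic() - started

start = time.monotonic()
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(timed_get, range(TOTAL_REQUESTS)))
elapsed = time.monotonic() - start

mean_response = sum(timings) / len(timings)
throughput = TOTAL_REQUESTS / elapsed
assert mean_response <= MAX_MEAN_RESPONSE, f"mean response {mean_response:.2f}s too slow"
assert throughput >= MIN_THROUGHPUT, f"throughput {throughput:.1f} req/s below threshold"
```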
Measuring Capacity
This involves a broad spectrum of characteristics, including:
• Scalability testing. How do the response time of an
individual request and the number of possible
simultaneous users change as we add more servers,
services, or threads
• Longevity testing. Run the system for a long time to see if
the performance changes over a protracted period of
operation.
• Throughput testing. How many transactions, or messages,
or page hits per second can the system handle
• Load testing. What happens to capacity when the load on
the application increases to production-like proportions
Measuring Capacity
Defining Success and Failure for Capacity Tests:
• Success or failure is often determined by a human analysis
of the collected measurements
• create graphs as part of our capacity testing that are easily
accessible from our deployment pipeline dashboard.
• Aim for stable, reproducible results.
– isolate capacity test environments from other influences and
dedicate them to the task of measuring capacity.
• Raise the pass threshold once the test passes at the minimum
acceptable level.
– This provides protection from the false-positive scenario.
The Capacity-Testing Environment
• Capacity measurements of a system should be carried
out in an environment that replicates the production
environment in which the system will ultimately run.
• Make the investment and create a clone of your
production environment for the core parts of the system.
• Use the same hardware and software specifications.
• Use the same configuration for each environment,
including networking, middleware, and OS
The Capacity-Testing Environment
• A strategy to limit the
test environment costs:
• if the application is to be
deployed into
production on a farm
of servers, as shown in
Figure 1,
• replicate one slice of
the servers, as in Fig 2,
not the whole farm.
Automating Capacity Testing
• The idea is to add capacity testing as a
stage in the deployment pipeline.
• Capacity tests should
– Test specific real-world scenarios,
– Have a predefined threshold for success
– Be of short duration
– Be robust in the face of change
– Be composable into larger-scale complexities
– Be repeatable and capable of running in parallel
Automating Capacity Testing
• At which point in the application should recording and
playback take place?
– Our goal is to simulate realistic use of the system as closely
as we can.
• Depending on the system’s architecture and behavior,
we may use one of three injection points (a sketch of the
second follows this list):
1. Through the user interface.
2. Through a service or public API—for example, making HTTP
requests directly into a web server.
3. Through a lower-level API—for example, making direct calls
to a service layer or perhaps the database.
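A sketch of the second injection point: capture real interactions at the public HTTP API and play them back against the capacity-test environment. The file format, field names, and base URL are assumptions.

```python
import json

import requests

def record(interactions, path="recorded_requests.json"):
    # `interactions` is a list of dicts like:
    #   {"method": "POST", "path": "/api/orders", "body": {...}}
    with open(path, "w") as f:
        json.dump(interactions, f)

def playback(path="recorded_requests.json",
             base_url="http://capacity-test-env.example.com"):
    # Replay each recorded request directly against the web server.
    with open(path) as f:
        for entry in json.load(f):
            requests.request(entry["method"],
                             base_url + entry["path"],
                             json=entry.get("body"),
                             timeout=5)
```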
Adding Capacity Tests to the
Deployment Pipeline
• It’s not recommended to add capacity tests into the
acceptance test stage of the deployment pipeline because:
– Capacity tests need to be run in their own special
environment.
– Some types of capacity test can take a very long time to
run
– Many activities from acceptance testing can be done in
parallel with capacity testing
– Capacity tests aren't run as frequently as acceptance tests.
• But for some projects it makes sense.
– Here, treat it in a way similar to the acceptance test
stage—as a fully automated deployment gate.
Additional Benefits of a Capacity Test System
• Reproducing complex production defects
• Detecting and debugging memory leaks
• Longevity testing
• Evaluating the impact of garbage collection
• Tuning garbage collection
• Tuning application configuration parameters
• Tuning third-party application configuration
• Evaluating different solutions to complex problems
• Simulating integration failure
• Measuring the scalability over runs with different hardware configurations
• Load-testing communications with external systems
• Rehearsing rollback from complex deployments.
• Selectively failing parts to evaluate graceful degradation of service
• Performing real-world capacity benchmarks in temporarily available
hardware
Deploying and Releasing Applications
• how to create and follow a strategy for releasing
software, including deployments to testing
environments.
• All the processes—deploying to testing and production
environments and rolling back—need to form part of
your deployment pipeline implementation.
• It should be possible to see a list of builds available for
deployment into each of these environments and run
the automated deployment process by pressing a
button or clicking a mouse
Creating a Release Strategy
• Stakeholders should meet during the project
planning process, and their discussions should
work out a common understanding of the release
throughout the lifecycle.
• This shared understanding is then captured as
the release strategy
Creating a Release Strategy
consider the following:
• Parties in charge of deployments
• Asset and configuration management strategy.
• Description of the technology used for deployment.
• Plan for implementing the deployment pipeline.
• Enumeration of the environments
• Description of the processes
• Requirements for monitoring the application
• Discussion of the method
• Description of the integration with any external systems.
• Logging details, to determine the application’s state
• Disaster recovery plan
• so that the application’s state can be recovered following a disaster.
• Service-level agreements
• Production sizing and capacity planning:
• Archiving strategy
The Release Plan
• The first release carries the highest risk; it needs careful
planning.
• it should include
– Steps required to deploy the application for the first time
– Steps required to back out the deployment should it go wrong
– Steps required to back up and restore the application’s state
– Steps required to upgrade the application
– Steps to restart or redeploy the application if it fails
– Location of the logs
– Methods of monitoring the application
– Steps to perform any data migrations
Releasing Products
• An Additional list of deliverables should be considered
if the output of your project is a software product
– Pricing model
– Licensing strategy
– Copyright issues around third-party technologies
– Packaging
– Marketing materials—print, web-based, podcasts, blogs,
press releases, conferences, etc.
– Product documentation
– Installers
– Preparing sales and support teams
Deploying and Promoting Your Application
• Use the same process to deploy to every
environment, including production.
• Automating should start with the very first
deployment to a testing environment.
• Don’t manually pull software pieces into place;
instead, write a simple script to do the job.
The First Deployment
• The first deployment should happen in the
first iteration when you showcase your first
stories to the customer.
• Choose stories that are of high priority but
very simple to deliver in your first iteration
• Get the early stages of the deployment pipeline in
place so you are able to demonstrate something.
Modeling Your Release Process and Promoting Builds
• As application grows, so will your deployment
pipeline implementation.
• During promoting builds, it should capture
– What stages a build has to go through
– What are the required gates or approvals
– Who has the authority to approve a build passing
through that gate
Promoting Configuration
• It is not just the binaries that need to be promoted;
• the configuration of the environment and the
application also needs to be promoted
– Make your smoke tests verify that you are
pointing at the right things (a sketch follows
this slide).
– Write infrastructure tests that check any key
settings and report them to your monitoring
software.
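A sketch of such a smoke test, assuming a hypothetical /health endpoint that reports the running configuration; the hostnames, field names, and expected version label are illustrative only.

```python
import requests

EXPECTED_VERSION = "1.4.2"   # the build label being promoted (hypothetical)

def test_staging_points_at_the_right_things():
    # Verify the promoted build is wired to the staging configuration.
    info = requests.get("http://staging.example.com/health", timeout=5).json()
    assert info["environment"] == "staging"
    assert "staging-db" in info["database_host"]
    assert info["version"] == EXPECTED_VERSION
```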
Orchestration
• Environments are shared between several
applications.
• Take extra care when preparing the
environment for a new deployment, so as to
not disturb the operation of any other
applications in this environment.
• Use systems integration testing (SIT) for the
applications that share the environment and
depend on each other.
Deployments to Staging Environments
• Perform final tests in a staging environment
that is very similar to production.
– Employ the capacity testing environment for both
capacity testing and staging.
– If the application includes any integration with
external systems, staging is the point to get a final
confirmation that all aspects work between each
system.
Rolling Back Deployments and
Zero-Downtime Releases
• Be able to roll back a deployment in case it
goes wrong.
• Debugging problems in production results in
– late-night hours and pressure,
– mistakes with unfortunate consequences, and
– angry users.
• Have a way to restore service to your users
when things go wrong, and debug the failure
in the comfort of normal working hours.
• Rolling Back by Redeploying the Previous Good Version
– the simplest way to roll back
– to get back to a good state, redeploy the previous good
version from scratch
– this re-creates the environment from scratch
• Zero-Downtime Releases
– also known as hot deployment
– the process of switching users from one release to another
happens instantaneously.
– also possible to back users out to the previous version, if
something goes wrong.
• Blue-Green Deployments
– one of the most powerful techniques for managing
releases.
– The basic idea is to have two identical versions of your
production environment, called blue and green.
• users of the system are routed to the green environment, currently in production.
• to release a new version of the application, deploy it to the blue environment. This does not
affect the operation of the green environment.
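A sketch of the switch step in a blue-green release; the router object and its set_live method stand in for whatever actually redirects traffic (a load balancer, DNS record, or virtual IP) and are assumptions, as are the URLs.

```python
import requests

ENVIRONMENTS = {
    "blue": "http://blue.internal:8080",    # hypothetical internal URLs
    "green": "http://green.internal:8080",
}

def switch_live(router, new_color):
    idle_url = ENVIRONMENTS[new_color]
    # Only switch if the newly deployed (idle) environment is healthy.
    assert requests.get(idle_url + "/health", timeout=5).ok
    router.set_live(new_color)   # hypothetical router API
    # Rolling back is the same call with the previous color.
```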
• Canary Releasing
• So far we have assumed that you only have
one version of your software in production
at a time.
• That makes it much easier to manage bugfixes
• Canary releasing involves rolling out a
new version of an application to a
subset of the production servers to get
fast feedback.
– uncovers problems with the new version
without impacting the majority of users.
– Reduces risk of releasing a new version.
• Facebook chooses to use a strategy with multiple canaries,
the first one being visible only to their internal employees and
having all the Feature Toggles turned on, so they can detect
problems with new features early.
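A minimal sketch of one way to pick the canary subset: hash a stable user ID so each user consistently sees the same version; the 5% figure is an arbitrary assumption, not a recommendation.

```python
import hashlib

CANARY_PERCENT = 5

def use_canary(user_id: str) -> bool:
    # The same user always lands in the same bucket, so their experience is stable.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < CANARY_PERCENT

# The load balancer (or application router) then sends use_canary(...) users to
# the canary servers and everyone else to the current production version.
```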
Emergency Fixes
• At times, a critical defect is discovered and has to
be fixed as soon as possible.
– Do not disrupt your process. Emergency fixes have to
go through the same build, deploy, test, and release
process as any other change.
– If change is not tested properly, it can lead to
regressions that may even worsen the problem.
– If the change is not recorded, the environment may
end up in an unknown state
• Run every emergency fix through your standard
deployment pipeline
Continuous Deployment
• A motto of Extreme Programming: if it hurts, do it more often.
• Deploy every change that passes your automated
tests to production.
• This technique is known as Continuous
Deployment (by Timothy Fitz)
• Continuous deployment can be combined with
canary releasing using automated processes
• Continuous deployment reduces the risk of any
particular release.
Continuously Releasing User-Installed Software
• Releasing a new version of software installed
by users on their own machines (client-
installed software) raises several issues to
consider:
– Managing the upgrade experience
– Migrating binaries, data, and configuration
– Testing the upgrade process
– Getting crash reports from users
Tips and Tricks
• The People Who Do the Deployment Should Be Involved in
Creating the Deployment Process
– Developers should seek out the operations people informally
and involve them in the development process.
• Log Deployment Activities
– Keep a manifest of every piece of hardware in your
environments, which bits you touched during deployment, and
the logs of actual deployments.
• Don’t Delete the Old Files, Move Them
– In the UNIX world, deploy each version of the application into a
new directory and keep a symbolic link that points to the current
version.
– Deploying and rolling back versions is then simply a matter of
changing which version the symbolic link points to (a sketch
follows this slide).
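A sketch of the symlink convention described above; the directory layout is an assumption. Each release lives in its own directory, and the current symlink is switched atomically, so rollback is just re-pointing it at the previous release.

```python
import os

RELEASES_DIR = "/opt/myapp/releases"   # hypothetical layout
CURRENT_LINK = "/opt/myapp/current"

def activate(version):
    target = os.path.join(RELEASES_DIR, version)
    tmp_link = CURRENT_LINK + ".tmp"
    if os.path.lexists(tmp_link):
        os.remove(tmp_link)
    os.symlink(target, tmp_link)
    os.replace(tmp_link, CURRENT_LINK)   # atomic switch on POSIX systems

# activate("1.4.2") deploys the new version;
# activate("1.4.1") rolls back to the previous one.
```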
Tips and Tricks
• Deployment Is the Whole Team’s Responsibility
– Every member of the team should know how to deploy, and
every member of the team should know how to maintain the
deployment scripts.
• Have a Warm-Up Period for a New Deployment
– Don’t switch your application on cold at the prearranged hour. By
the time it is officially “live,” the servers and databases should
already have filled their caches, made their connections, and
warmed up.
• Fail Fast
– Deployment scripts should incorporate tests to ensure that the
deployment was successful.
– the system should perform these checks as it initializes, and if it
encounters an error, it should fail to start
Thank You
• End of M#4
© dr. sudheer s marar
DEPARTMENT OF MCA
NEHRU COLLEGE OF ENGINEERING AND RESEARCH CENTRE
