Application Performance, Test and Monitoring

  1. 1. Application Performance, Test & Monitoring Prepared by A Dony Riyanto Telegram @donyriyanto 2018
  2. 2. Performance & Load Testing Basics Part 1
  3. 3. Performance & Load Testing Basics ü Introduction to Performance Testing ü Difference between Performance, Load and Stress Testing ü Why Performance Testing? ü When is it required? ü What should be tested? ü Performance Testing Process ü Load Test configuration for a web system ü Practice Questions
4. 4. Introduction to Performance Testing Ø Performance testing is the process of determining the speed or effectiveness of a computer, network, software program or device. Ø Before going into the details, we should understand the factors that govern performance testing: ü Throughput ü Response Time ü Tuning ü Benchmarking
5. 5. Throughput Ø Capability of a product to handle multiple transactions in a given period. Ø Throughput represents the number of requests/business transactions processed by the product in a specified time duration. Ø As the number of concurrent users increases, the throughput increases almost linearly with the number of requests, because there is very little congestion within the Application Server system queues.
  6. 6. Throughput Ø In the heavy load zone or Section B, as the concurrent client load increases, throughput remains relatively constant. Ø In Section C (the buckle zone) one or more of the system components have become exhausted and throughput starts to degrade. For example, the system might enter the buckle zone when the network connections at the Web server exhaust the limits of the network adapter or if the requests exceed operating system limits for file handles.
  7. 7. Response Time Ø It is equally important to find out how much time each of the transactions took to complete. Ø Response time is defined as the delay between the point of request and the first response from the product. Ø The response time increases proportionally to the user load.
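To make these two metrics concrete, here is a minimal sketch using only Python's standard library. The `fake_request` function is a stand-in for a real HTTP call, and all names are illustrative, not part of any load testing tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    # Stand-in for a real HTTP call; sleeps 10 ms to simulate server work.
    time.sleep(0.01)
    return 200

def measure(num_requests, concurrency):
    """Run num_requests fake transactions at the given concurrency and
    return (throughput in req/s, average response time in seconds)."""
    def timed(_):
        t0 = time.perf_counter()
        fake_request()
        return time.perf_counter() - t0

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed, range(num_requests)))
    elapsed = time.perf_counter() - start

    throughput = num_requests / elapsed              # requests per second
    avg_response = sum(latencies) / len(latencies)   # mean response time, seconds
    return throughput, avg_response

tps, avg = measure(num_requests=100, concurrency=10)
print(f"throughput={tps:.1f} req/s, avg response={avg * 1000:.1f} ms")
```

Raising `concurrency` while watching `throughput` flatten out is exactly the transition from the linear zone into the heavy-load zone described above.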
  8. 8. Tuning Ø Tuning is the procedure by which product performance is enhanced by setting different values to the parameters of the product, operating system and other components. Ø Tuning improves the product performance without having to touch the source code of the product.
9. 9. Benchmarking Ø A very well-improved performance of a product makes no business sense if that performance does not match up to competitive products. Ø A careful analysis is needed to chalk out the list of transactions to be compared across products so that an apples-to-apples comparison becomes possible.
10. 10. Performance Testing - Definition Ø The testing to evaluate the response time (speed), throughput and utilization of a system while executing its required functions, in comparison with different versions of the same product or a different competitive product, is called Performance Testing. Ø Performance testing is done to derive benchmark numbers for the system. Ø Heavy load is not applied to the system. Ø Tuning is performed until the system under test achieves the expected levels of performance.
11. 11. Difference between Performance, Load and Stress Testing Load Testing Ø Process of exercising the system under test by feeding it the largest tasks it can operate with. Ø Constantly increasing the load on the system via automated tools to simulate a real-world scenario with virtual users. Examples: Ø Testing a word processor by editing a very large document. Ø For a Web application, load is defined in terms of concurrent users or HTTP connections.
12. 12. Difference between Performance, Load and Stress Testing Stress Testing Ø Trying to break the system under test by overwhelming its resources or by taking resources away from it. Ø The purpose is to make sure that the system fails and recovers gracefully. Examples: Ø Double the baseline number of concurrent users/HTTP connections. Ø Randomly shut down and restart ports on the network switches/routers that connect the servers.
  13. 13. Why Performance Testing Ø Identifies problems early on before they become costly to resolve. Ø Reduces development cycles. Ø Produces better quality, more scalable code. Ø Prevents revenue and credibility loss due to poor Web site performance. Ø Enables intelligent planning for future expansion. Ø To ensure that the system meets performance expectations such as response time, throughput etc. under given levels of load. Ø Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
14. 14. When is it required? Design Phase: Check pages containing lots of images and multimedia for reasonable wait times. Heavy loads are less important than knowing which types of content cause slowdowns. Development Phase: To check the results of individual pages and processes, looking for breaking points, unnecessary code and bottlenecks. Deployment Phase: To identify the minimum hardware and software requirements for the application.
  15. 15. What should be tested? Ø High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient. Ø Mission Critical transactions: The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact. Ø Read Transactions: At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions. Ø Update Transactions: At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.
  16. 16. Performance Testing Process
17. 17. 1. Planning Ø Determine the performance testing objectives Ø Describe the application to test using an application model Ø Describe the hardware environment Ø Create a benchmark (agenda) to be recorded in Phase 2: A. Define what tasks each user will perform B. Define (or estimate) the percentage of users per task
18. 18. 2. Record Ø Record the defined testing activities that will be used as a foundation for your load test scripts. Ø One activity per task, or multiple activities, depending on the user task definition 3. Modify Ø Modify the load test scripts produced by the recorder to reflect more realistic load test simulations: Ø Define the project and users Ø Randomize parameters (data, times, environment) Ø Randomize the user activities that occur during the load test
19. 19. 4. Execute Ø Virtual Users (VUs): Start: 5, Incremented by: 5, Maximum: 200 Ø Think Time: 5 sec Ø Test Goals: Max Response Time <= 20 sec Ø Test Script: One typical user from login through completion.
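The ramp-up above (start at 5 VUs, add 5 per stage up to 200, pass only if the maximum response time stays within 20 seconds) can be sketched as a schedule plus a pass/fail check. The function names are illustrative, not part of any tool:

```python
def ramp_schedule(start=5, step=5, maximum=200):
    """Yield the virtual-user count for each ramp stage: 5, 10, ..., 200."""
    users = start
    while users <= maximum:
        yield users
        users += step

def passed(max_response_time_s, limit_s=20.0):
    """Test goal from the slide: max response time must not exceed 20 s."""
    return max_response_time_s <= limit_s

stages = list(ramp_schedule())
print(len(stages), "stages:", stages[:4], "...", stages[-1])
print(passed(12.5), passed(21.0))
```

In a real run, each stage would execute the recorded script at that VU count (with the 5 s think time between iterations) and the observed maximum response time would be fed into the pass/fail check.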
20. 20. 5. Monitor Ø Monitoring the scenario: we monitor scenario execution using the various online runtime monitors. 6. Analyze Ø Analysing test results: during scenario execution, the tool records the performance of the application under different loads. We use the graphs and reports to analyse the application’s performance.
  21. 21. Load Test configuration for a web system
22. 22. Questions to Review your Skills Ø What are the factors that govern Performance Testing? Ø How are throughput and response time related to user load? Ø How do we decide whether the application passed or failed the load test? Ø What do you mean by capacity, stability and scalability of an application? Ø What is the difference between Performance, Load and Stress testing? Ø What are longevity, endurance, spike and volume testing? Ø At what point in the SDLC is performance testing required? Ø How do you identify the transactions in a complete application for load testing? Ø Define the 6 steps involved in the Performance Testing Process. Ø Explain the load test configuration of a web system and the role of Load Generators in it.
  23. 23. Load Test Planning Part 2
  24. 24. Load Test Planning ü Why Planning ü Analysing the Application ü Defining Testing Objectives ü Gathering Requirements ü Load Test Checklist ü Practice Questions
25. 25. Why Planning Ø As in any type of system testing, a well-defined test plan is the first essential step to successful testing. Planning load testing helps to: Ø Build test scenarios that accurately emulate your working environment: load testing means testing the application under typical working conditions, and checking for system performance, reliability, capacity, and so forth. Ø Understand which resources are required for testing: application testing requires hardware, software, and human resources. Before beginning testing, we should know which resources are available and decide how to use them effectively. Ø Define success criteria in measurable terms: focused testing goals and test criteria ensure successful testing. For example, it’s not enough to define a vague objective like “Check server response time under heavy load.” A more focused success criterion would be “Check that 50 customers can check their account balance simultaneously and that server response time does not exceed one minute.”
26. 26. Why Planning Ø Load test planning is a three-step process: ü Analyzing the Application: analysis ensures that the testing environment we create using LoadRunner will accurately reflect the environment and configuration of the application under test. ü Defining Testing Objectives: before testing, we should define exactly what we want to accomplish. ü Gathering Requirements: all the requirements and resources should be evaluated and collected beforehand to avoid any last-minute hurdles.
27. 27. Analyzing the Application Ø Load testing does not require as much knowledge of the application as functional testing does. Ø The load tester should have some operational knowledge of the application to be tested. Ø The load tester should have an idea of how the application is actually used in production, in order to make an informed estimate. Ø The load tester must know the application architecture (client/server, local deployment, live URL), platform and database used.
28. 28. Defining Testing Objectives Ø Determining and recording performance testing objectives involves communicating with the team to establish and update these objectives as the project advances through milestones. Ø Performance, Load or Stress testing: the type and scope of testing should be clear, as each type of testing has different requirements. Ø Goal Setting: general load testing objectives should be defined.
  29. 29. Defining Testing Objectives Ø Common Objectives: ü Measuring end-user response time ü Defining optimal hardware configuration ü Checking reliability ü Assist the development team in determining the performance characteristics for various configuration options ü Ensure that the new production hardware is no slower than the previous release ü Provide input data for scalability and capacity-planning efforts ü Determine if the application is ready for deployment to production ü Detect bottlenecks to be tuned
  30. 30. Defining Testing Objectives Stating Objectives in Measurable Terms: Ø Once you decide on your general load testing objectives, you should identify more focused goals by stating your objectives in measurable terms. Ø To provide a baseline for evaluation, determine exactly what constitutes acceptable and unacceptable test results. Ø For example: ü General Objective:  Product Evaluation: choose hardware for the Web server. ü Focused Objective:  Product Evaluation: run the same group of 300 virtual users on two different servers, HP and NEC. When all 300 users simultaneously browse the pages of your Web application, determine which hardware gives a better response time.
  31. 31. Gathering Requirements Users: Identify all the types of people and processes that can put load on the application or system. ü Defining the types of primary end users of the application or system such as purchasers, claims processors, and sales reps ü Add other types of users such as system administrators, managers, and report readers who use the application or system but are not the primary users. ü Add types of non-human users such as batch processes, system backups, bulk data loads and anything else that may add load or consume system resources. Transactions: For each type of user we identified in the previous step, identify the tasks that the user performs. Production Environment: ü Performance and capacity of an application is significantly affected by the hardware and software components on which it executes.
32. 32. Gathering Requirements Production Environment: ü Speed, capacity, IP address and name, version numbers and other significant information. Test Environment: ü Should be as similar to the production environment as possible, in order to get meaningful performance results. ü It is important that the databases be set up with the same amount of data, in the same proportions, as the production environment, as that can substantially affect performance. Scenarios: ü Select the use cases to include ü Determine how many instances of each use case will run concurrently ü Determine how often the use cases will execute per hour ü Select the test environment
  33. 33. Gathering Requirements Load test Tool: ü Ability to parameterize data. ü Ability to capture dynamic data and use on subsequent requests. ü Application infrastructure monitoring. ü Support for the application's protocols Load test Lab must include the following: ü Test Servers. ü Databases. ü Network elements, operating systems and clients and server hardware.
  34. 34. Load Test Check List Planning ü Objective goals defined ü Test plan written and reviewed Staff Skills ü Experience in load testing ü Application knowledge ü Systems knowledge ü Communication and people skills Support Staff ü Key staff identified and allocated Load Test Lab ü Test servers allocated ü Databases populated ü Load test tools allocated
35. 35. Questions to Review your Skills Ø Why is planning required before starting a load test? Ø What are the three steps involved in load test planning? Ø What information should be collected about the application to be load tested? Ø What are the common testing objectives? Ø State the following testing objective in measurable terms: “Ensure that the new production hardware is not slower than the previous release” Ø Why is knowledge of the production environment necessary before a load test? Ø What are the factors that need to be considered for creating a scenario? Ø How do you choose a load test tool? Ø What are the requirements to set up a load test lab? Ø What are the main points in a load test checklist?
  36. 36. Load Testing Tools Part 3
  37. 37. Load Testing Tools ü Manual testing Limitations ü Benefits of Automation ü Tools used for Performance Testing ü Practice Questions
38. 38. Manual Testing Limitations (diagram: a coordinator directs testers generating load against the system under test, a Web server and a database server) Ø Do you have the testing resources (testing personnel, client machines)? Ø How do you coordinate and synchronize users? Ø How do you collect and analyze results? Ø How do you achieve test repeatability?
39. 39. Manual Testing Limitations ü Expensive, requiring large amounts of both personnel and machinery ü Complicated, especially coordinating and synchronising multiple testers ü Involves a high degree of organization, especially to record and analyse results meaningfully ü Repeatability of the manual tests is limited
40. 40. Benefits of Automation (diagram: a Controller on a Vuser host runs many virtual users against the Web server and database server, with an Analysis component) Ø Replaces testers with virtual users Ø Solves the resource limitations: runs many Vusers on a few machines Ø The Controller manages the virtual users Ø Analyze results with graphs and reports
  41. 41. Benefits of Automation Using Automated Tools ü Reduces personnel requirements by replacing human users with virtual users or Vusers. These Vusers emulate the behaviour of real users ü Because numerous Vusers can run on a single computer, the tool reduces the amount of hardware required for testing. ü Monitors the application performance online, enabling you to fine-tune your system during test execution. ü It automatically records the performance of the application during a test. You can choose from a wide variety of graphs and reports to view the performance data. ü Because the tests are fully automated, you can easily repeat them as often as you need.
  42. 42. Tools used for Performance Testing Open Source Ø OpenSTA Ø Diesel Test Ø TestMaker Ø Grinder Ø LoadSim Ø Jmeter Ø Rubis Commercial Ø LoadRunner Ø Silk Performer Ø Qengine Ø Empirix e-Load
  43. 43. OpenSTA Ø Developed in C++ Ø HTTP Load Test Application Advantages: Ø Open Source Software Ø A user-friendly graphical interface Ø The script capture from the browser Ø The monitoring functionality Drawbacks: Ø Only designed for Windows Ø Only for HTTP
44. 44. DieselTest Ø Software designed in Delphi 5 Ø For systems under an NT environment Ø For HTTP/HTTPS applications Advantages: Ø Open Source Ø The quality of the charts Ø Simple and fast to use Ø The logging functionality Drawbacks: Ø Manual editing of the tests is badly designed Ø The ambiguity of certain results Ø Distributed tests are impossible Ø Specific technology environment (Delphi, NT)
45. 45. TestMaker Ø Developed in Java Ø For HTTP, HTTPS, SOAP, XML-RPC and mail (SMTP, POP3 and IMAP) applications Advantages: Ø The possibility to build any kind of test agent Ø The power of Java programming with some Python simplifications Ø Open source Drawbacks: Ø Requires familiarity with the Jython scripting language and Java, and tests must be written from scratch Ø The monitoring tools are very basic, limited to response analysis Ø Must pay for distributed testing
46. 46. Grinder Ø Generic framework for load testing any kind of target system, with scenarios in Jython Ø Developed in Java Advantages: Ø Open Source Ø You can test everything with scripts in Jython Drawbacks: Ø Cumbersome deployment for distributed tests Ø Poor results and graphical interface
47. 47. LoadSim Ø LoadSim is open source software developed in Java, designed for HTTP distributed load testing Advantages: Ø Open Source Ø Generation of scripts Ø Each client can have a different configuration (user, script…) Drawbacks: Ø No graphical interface Ø Poor results Ø No graphical representation of results Ø No monitoring
48. 48. Jmeter Ø 100% Java desktop application Ø For Web, FTP, Java, SOAP/XML-RPC and JDBC applications Advantages: Ø Open Source Ø Distributed testing Ø Various target systems Ø Extensibility: pluggable samplers allow unlimited testing capabilities Drawbacks: Ø Chart representation is quite confusing Ø Terminology is not very clear Ø Necessary to start remote machines one by one Ø Remote machines must be declared in a properties file before starting the application
49. 49. Rubis Ø Provided with a load-test tool (designed for Rubis, but some parts of the code could be reused) and a monitoring system. Ø Developed in Java. Advantages: Ø Open Source Ø Monitoring capabilities Ø Chart representations and automatic generation of an HTML report Drawbacks: Ø Specific to the Unix environment and the Rubis application
  50. 50. Empirix eLoad Ø Accurate testing of the response times and scalability of web applications and web services Ø Recording in VBscript Advantages: Ø Can simulate hundreds and thousands of concurrent users Ø Monitoring capabilities and Charts representation Ø Reasonable Price Drawbacks: Ø Complex User Interface Ø Limitations in recording of complex scenarios
  51. 51. Questions to Review your Skills Ø What are the limitations of manual load testing? Ø Why tools are used for automating load test? Ø List 5 Open Source and 5 Commercial load test tools. Ø What are the disadvantages of LoadRunner? Ø Explain the following Load Test tools: Silk Performer, Qengine. Ø Give a detailed comparison between Empirix E-load and LoadRunner. Ø Which other tools are commonly used for load testing?
  52. 52. Using JMeter Part 4
53. 53. JMeter ü Introduction: What is JMeter? Why? ü Preparing tests: Step 1 Proxy server, Step 2 Organization, Step 3 Genericity, Step 4 Assertions ü Running tests: Non-GUI mode, Distributed testing ü Analyzing tests
54. 54. Introduction Ø Definition: Ø JMeter is an Apache Jakarta project that can be used as a load testing tool for analyzing and measuring the performance of a variety of services, with a focus on web applications. Ø Why? Ø JMeter can be used as a unit test tool for JDBC database connections, FTP, LDAP, Web Services, JMS, HTTP and generic TCP connections. JMeter can also be configured as a monitor, although this is typically considered an ad-hoc solution in lieu of advanced monitoring solutions.
55. 55. Proxy Server Role Ø Records the HTTP requests run by users. Ø Captures the exact HTTP requests an average user would send. Ø Records only what is meaningful. Ø Keeps the recording organized. Warning: Ø Doesn’t record HTTPS.
56. 56. Organization Ø Thread groups determine: Ø How many users will concurrently run the tests Ø How long between two launches of the test Ø How many times the tests will be run Ø Loop controllers determine, within a thread group: Ø How long between two launches of the same sampler Ø How many times the set of tests will be run
  57. 57. Organization Thread groups Loop controllers
58. 58. Organization Throughput Controller Ø Adds variable pauses during the test run to better simulate client behaviour. Ø Needed because the thread group doesn’t take into account that the server can take several seconds before responding.
59. 59. Genericity Ø Variabilization: Ø In order not to modify a test to run it on different machines Ø Example: user and password changing from one environment to another
60. 60. Genericity Ø HTTP Default Request Ø Allows you to set a default IP, port and path for all the HTTP requests contained in the scope Ø Gives you an easy way to run your test from one device to another just by changing the default address.
  61. 61. Genericity Ø Regular Expression extractor Ø If the data has to be used several times along the test Ø Like a sessionId for instance.
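What the Regular Expression Extractor does can be sketched with Python's `re` module. The response body and the `sessionId` field below are hypothetical, not taken from any real application:

```python
import re

# Hypothetical response body; the sessionId field and the pattern
# are assumptions for illustration only.
body = '<input type="hidden" name="sessionId" value="a1b2c3d4">'

# Capture group 1 extracts the dynamic value, just as an extractor would.
match = re.search(r'name="sessionId" value="([^"]+)"', body)
session_id = match.group(1) if match else None
print(session_id)

# The extracted value would then be re-injected into later requests,
# e.g. as a query parameter: "/account?sessionId=" + session_id
```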
62. 62. Assertions Ø Response assertion Ø To match a pattern in the response Ø The response code, for instance. Ø XPath assertion Ø Uses the DOM of the response to check whether an element appears Ø A search result, for instance. Ø Size assertion Ø To check whether the size of the response received matches the size expected Ø To verify that the file received is the right one.
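The assertion types above can be sketched as plain predicate functions. The names and sample body are illustrative, not JMeter's API:

```python
def response_assertion(status_code, expected=200):
    # Mirrors a response assertion on the response code.
    return status_code == expected

def contains_assertion(body, pattern):
    # Mirrors a pattern match against the response body.
    return pattern in body

def size_assertion(body, expected_bytes):
    # Mirrors a size assertion: the payload must have the expected size.
    return len(body.encode("utf-8")) == expected_bytes

body = "<html><h1>Search results</h1></html>"
print(response_assertion(200))
print(contains_assertion(body, "Search results"))
print(size_assertion(body, len(body)))  # body is ASCII, so bytes == chars
```

A sampler whose assertions all return true is reported as a pass; any false predicate marks the sample as failed in the results.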
63. 63. Running tests Ø Non-GUI Mode Ø Why? Ø The overhead of driving the test and updating the display is too high when running distributed tests. Ø How? Ø From the command line, for example: jmeter -n -t my_test.jmx -l log.jtl -H my.proxy.server -P 8000
64. 64. Distributed testing Ø Why? Ø To simulate a stressed environment with a lot of clients. Ø How? Ø Edit “remote_hosts=” in jmeter.properties Ø Start jmeter-server.bat on the host machines Ø Run jmeter.bat
65. 65. Analyzing Test Ø Aggregated graph Ø Gives all the statistics concerning the tests Ø May be recorded in a specified file for further processing (data mining) Ø Result tree Ø Gives, in tree form, all the sampler results, the requests, and the sampler data Ø May also be recorded in a specified file for further processing
  66. 66. Analyzing Test Aggregated graph Result tree
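The per-sampler statistics shown in the aggregated graph (samples, average, median, 90th percentile, min, max) can be sketched as follows; the latency values are made-up milliseconds:

```python
import statistics

def aggregate(latencies_ms):
    """Statistics like those in the aggregated graph, for one sampler."""
    ordered = sorted(latencies_ms)
    # Simple 90th-percentile pick: the value below which ~90% of samples fall.
    p90_index = max(0, round(0.90 * len(ordered)) - 1)
    return {
        "samples": len(ordered),
        "average": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "90th_pct": ordered[p90_index],
        "min": ordered[0],
        "max": ordered[-1],
    }

# Made-up response times in milliseconds for one sampler.
stats = aggregate([120, 95, 110, 300, 150, 105, 98, 130, 115, 102])
print(stats)
```

Recording the raw samples to a file (as both the aggregated graph and the result tree allow) means statistics like these can be recomputed offline for further processing.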