1) The document discusses how to ensure better software quality by evolving from testing toward intelligent quality management, presenting the IBM Rational test virtualization solution.
2) The IBM Rational test virtualization solution enables the creation of virtual services to simulate systems and dependencies during testing, reducing cost and risk.
3) By capturing and modeling system behavior, the solution enables continuous, integrated testing across different environments throughout the lifecycle.
Software is the invisible thread woven through systems, products, and services, helping companies bring new, smarter innovations to market. We see this in everyday devices like cell phones and automobiles, where the real value comes not from the device itself but from the software that makes it different from, or better than, your old phone or your last car. We see it in innovative new services delivered to customers over the web, and in software used to automate core business processes. The best and brightest companies use software to propel innovation, connecting customers, suppliers, systems, and a host of business modules into a single intelligent, adaptive network. When software is a critical component of a larger system that can connect to other systems - an automobile to a global positioning system (GPS), for example, or a smart grid - significant fiscal and societal impact can be realized. The convergence of physical devices and information technology opens up the possibility for all types of integrated systems that deliver exponential value to consumers and the public. Some examples:
- Smart electric grid
- iPod and iTunes
- Android and Google Maps
- Traffic management systems
- Fleet management systems
- Healthcare management systems
Whatever we are doing is not working
Key Message: Successful organizations understand they must innovate to improve software delivery capability, and that cost, complexity, and velocity are increasingly making today’s quality paradigm impractical.
Speaking Points: What is driving the need for change? Three primary factors have been developing over the past decade:
1. (Increasing cost of quality) With the rise in global labor wages, outsourcing and offshoring testing strategies as a way to drive down the cost of software development have reached their practical end of life and are no longer sufficiently changing the dynamics and cost of software quality.
2. (Increasing development complexity) Today’s applications and manufactured products are increasingly complex, comprising an unprecedented level of connectivity and dependency between systems, processes, and infrastructure. Whether deployed in traditional software development or cloud environments, businesses can create products, systems, and services that are increasingly instrumented, interconnected, and intelligent. While software is fueling this innovation and growth, the challenge of testing these composite, heterogeneous applications, products, and services while keeping pace with development teams has increased.
3. (Balancing quality and speed) Historically, businesses have had to balance their ability to deliver quality against speed and time to market. Over the past several years, software development teams have found new and innovative ways to drive down cost while increasing their flexibility and productivity through agile development and automated tooling. Test teams can no longer keep up with development’s increased agility and the velocity at which working software code is delivered to be tested.
Huge Test Lab Costs:
- Use of hardware-based virtualization or cloud-based resources provides only partial savings (20-30%)
- Installation and configuration of software is still very labor intensive
- Certain systems cannot leverage hardware virtualization, e.g. costly third-party services, mainframe applications, proprietary systems
Longer Cycle Time:
- Investment in UI test automation has proven to reduce cycle time for regression testing
- Testing new functions still requires an available environment in which to develop test scripts
- The time wasted waiting for a test environment severely reduces the ability to do proper acceptance testing
Higher Risk:
- Addressed through better collaboration between development and testing and better test planning, e.g. using Rational Quality Manager
- Too many “trivial” defects are still found late in the process by Quality Assurance teams
Key Message: The IBM Rational Test Virtualization Solution can help improve software quality management and testing to drive down the cost of software development, cut risk to the business, and reduce cycle time without compromising software quality.
Speaking Points:
1. (Drive Down Cost) The cost of software development is driven by the effort, hardware, and software needed to configure and deploy complex test environments. Virtualizing complex test environments, whether deployed in traditional software development or cloud environments, can help drive down cost.
2. (Reduce Risk) Big-bang integration issues discovered late in the development cycle increase risk to the project. Executing ongoing integration testing much earlier in the cycle helps development teams identify and resolve defects sooner.
3. (Improve Cycle Time) Increasing demand for the availability of complex test environments is negatively impacting development team velocity. Virtualizing services allows teams to reduce wait times and quickly deliver the necessary testing environments.
Each of these measures of success can help customers:
- Avoid project delays and costs associated with traditional test labs (drive down cost and improve cycle time)
- Test third-party services, complex heterogeneous environments, and applications through virtualization, which enables test clouds (drive down cost and improve cycle time)
- Identify and respond to defects earlier by testing against virtualized application and system components until the real ones become available (reduce risk and improve cycle time)
- Share test environments across the team, enabling parallel development (drive down cost and improve cycle time)
- Remove testing as a bottleneck by virtualizing unavailable services, enabling more iterative, agile development (reduce risk and improve cycle time)
See announce: http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS212-177
What is the IBM Rational Test Virtualization Solution? It focuses on a key problem most organizations face: the time and resources required to set up and manage test environments. Traditionally, people have been running around installing hardware, setting up application servers and database servers, installing application software, and configuring all of it. Not only is this very capital intensive, but as environments have become more and more complex, it is also a very error-prone process that typically involves a lot of scrap and rework. The IBM Rational Test Virtualization Solution enables organizations to address that problem by virtualizing complete stacks of software, hardware, and services, enabling developers and testers to stand up test environments in minutes rather than weeks, whenever they want, and in effect start their testing much earlier than has traditionally been possible. The net result: the IBM Rational Test Virtualization Solution can help organizations transform the way they deal with software quality by:
1. Better managing costs: reduce the hardware, software, and labor costs associated with maintaining complex test environments
2. Improving test cycle time: reduce wasted time spent waiting on the availability of, and setting up, test environments
3. Better managing delivery risk: by testing earlier, organizations can avoid late-stage integration issues
The IBM Rational Test Virtualization Solution consists of:
- Rational Test Workbench
- Rational Test Virtualization Server
- Rational Performance Test Server
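To make the idea of a virtual service concrete, here is a minimal sketch in Python: a lightweight HTTP stand-in that plays back modeled responses in place of a backend that is costly or unavailable in the test lab. The endpoint, payload, and all names here are invented for illustration; a product like Rational Test Virtualization Server does far more (recording, protocol support, data-driven behavior), but the core pattern is the same.

```python
# Minimal sketch of a "virtual service": an HTTP stand-in that plays
# back modeled behavior instead of calling the real backend.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class VirtualAccountService(BaseHTTPRequestHandler):
    """Serves canned responses modeled from the real system's behavior."""
    CANNED = {"/accounts/42": {"id": 42, "balance": 100.0}}

    def do_GET(self):
        body = self.CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet

def start_virtual_service(port: int = 0) -> HTTPServer:
    """Start the virtual service on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), VirtualAccountService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_virtual_service()
    port = server.server_address[1]
    with urlopen(f"http://127.0.0.1:{port}/accounts/42") as resp:
        data = json.loads(resp.read())
    assert data["balance"] == 100.0
    server.shutdown()
    print("virtual service responded as modeled")
```

Because the stand-in takes seconds to start rather than weeks to provision, each tester can spin up a private copy of the dependency on demand.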
We’re all familiar with running the unit tests the developers have written using Ant as part of the build process. If these tests fail, the build is considered broken and the build script generally stops. What we have been working on with many of our clients is how to revolutionize this technique so that tests written by testers feed into this process, rather than relying only on the unit tests developers write into their code. These tests have much more business value than a developer unit test, for the reasons we discussed before. This is technique #1: continuous and incremental integration testing. If we are going to test continuously, it’s no good executing these test cases manually; we are going to need automation at the UI layer. This is a fundamental change in role for some of our testers: they move from executing tests to feeding new tests into the test automation engine. Stephen Covey wrote that “a producer can invest one hour of effort and produce one unit of results, assuming no loss in efficiency. A manager, on the other hand, can invest one hour of effort and produce ten or fifty or a hundred units through effective delegation”. When we automate a test, we delegate its execution to the computer, freeing us to add more value elsewhere, perhaps concentrating on usability tests or building more automation. This is the only scalable way to be more agile without jeopardising quality. The trouble with this approach, though, is that it requires the UI to exist, which, as we have already discussed, can delay testing. Luckily, an automation mindset, coupled with modern approaches to building scalable application architectures, can provide the answer: we can move the automation backwards to the lower layers, catching problems sooner and with greater ease.
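Moving automation below the UI layer can be sketched as a test that exercises the exposed business interface directly instead of driving the screen. This is a minimal illustration in Python; `calculate_premium` is a hypothetical quote-engine rule invented for the example, standing in for a service that would normally sit behind a SOAP or REST endpoint.

```python
# Testing below the UI: call the service interface directly rather
# than scripting clicks. The business rule here is hypothetical.

def calculate_premium(age: int, base_rate: float) -> float:
    """Hypothetical quote-engine rule: drivers under 25 pay a 50% surcharge."""
    surcharge = 1.5 if age < 25 else 1.0
    return round(base_rate * surcharge, 2)

def test_young_driver_surcharge():
    assert calculate_premium(21, 100.0) == 150.0

def test_standard_rate():
    assert calculate_premium(40, 100.0) == 100.0

if __name__ == "__main__":
    test_young_driver_surcharge()
    test_standard_rate()
    print("service-layer tests passed")
```

Tests like these run in milliseconds inside the build, so a regression in the business rule is caught long before a UI exists to drive.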
In the past this was not possible, as applications were monolithic, but the modern componentized approach enables testers to take on this role because the interfaces are exposed and standard. At the same time, we need UI automation to become lighter weight. It is no longer acceptable to have to write code to automate the testing of a UI. Many modern application tools are moving away from developers writing code toward configuration, so why do we still expect testers to code in order to automate? I know many testers are big fans of open source tools, one reason being that they are considered free and thus easy to obtain, but that ignores the biggest source of cost in testing: the time people spend doing the testing, including the time taken to build and modify automated tests. Much scorn is directed at record-and-playback mechanisms for creating automated test scripts that cannot be easily maintained, pushing an application change into a category where making the change, despite it being the best thing for the end user, is deprioritized because it would break the automated tests. This is clearly wrong. We want creating an automated UI test to be as easy as running a manual test, and the result to be so easy to change later that we don’t blink. We cannot rely on gifted individuals choosing the one right approach to automation out of 50 wrong ones in order to make our agility possible. It has to be impossible to do automation badly. This combined approach allows us to isolate defects at an earlier stage in the development process and report them to developers in a more timely fashion, as they occur. Today we see IDEs doing continuous compilation to highlight syntax errors and other problems, all without anyone raising a defect.
Tomorrow we will see the results from continuous and incremental integration testing appear in the IDE in the same way, alerting developers in real-time to regression issues, allowing them to choose whether they need to quickly fix some code, fix a test case, or raise a work item for a larger piece of work to be prioritized. An interesting discussion for another time is whether “micro-defects”, defects that are known only to the developer writing the code, ought to be tracked for statistical interest.
We always want to be testing. Stub out interfacing components that are unavailable at the time, and re-introduce them when they become available. This is continuous integration testing at a “system” level. When new components are introduced, the automated test suites can be run as regression tests, controlling the risk of extra functionality being deployed into the test environment because that risk was already mitigated through earlier testing against stubs. This is an incremental and iterative approach to integration testing.
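The stub-then-replace pattern above can be sketched in a few lines of Python: the consumer is written against an interface, a stub stands in while the real component is unavailable, and the same regression suite runs unchanged once the real component arrives. All class and function names here are illustrative, not part of any product.

```python
# Incremental integration testing with stubs: the same suite runs
# against the virtual service now and the real one later.

class ExchangeRateService:
    """The interface the consumer depends on."""
    def rate(self, currency: str) -> float:
        raise NotImplementedError

class StubExchangeRateService(ExchangeRateService):
    """Stub standing in for the unavailable real service."""
    RECORDED = {"EUR": 1.10, "GBP": 1.30}  # modeled behavior

    def rate(self, currency: str) -> float:
        return self.RECORDED[currency]

def convert(amount: float, currency: str, service: ExchangeRateService) -> float:
    """Consumer code under test; it neither knows nor cares which implementation it gets."""
    return round(amount * service.rate(currency), 2)

def regression_suite(service: ExchangeRateService) -> None:
    # Re-run unchanged whenever a component is swapped in.
    assert convert(100.0, "EUR", service) == 110.0
    assert convert(10.0, "GBP", service) == 13.0

if __name__ == "__main__":
    regression_suite(StubExchangeRateService())
    print("integration tests passed against the stub")
```

When the real `ExchangeRateService` implementation becomes available, it is passed to the same `regression_suite`, so the risk of introducing it has already been mitigated by the earlier runs against the stub.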
Key Message: Green Hat’s technology is real, available today, and proven in the marketplace.
Speaking Points: Four examples of how Green Hat’s unique capabilities are driving real ROI:
Major telecom carrier
- Business challenge: Multiple channels used to register and service customers. The SOA environment featured B2B integration with channel partners. Early success meant production issues increased with additional projects.
- Solution: Combined automation of the integration platform with intelligent automation of the QA process. Green Hat provided a common assurance process across the SOA lifecycle. Consumers could instantly use requirements to virtualize applications or services, enabling testing even when services were unavailable.
- Results: Improved time to market: 30% increase in productivity. Reduced complexity/risk: 40% reduction in errors.
A leading global financial services firm
- Business challenge: The customer bought a next-generation payments system. The impact of integrating it was significant given the many disparate, legacy formats.
- Solution: Virtualized third-party systems that were otherwise unavailable for testing.
- Results: Reduced cost of labor: 10 days of manual testing down to just 10 minutes. Reduced risk: saved more than $7 million so far; “Project would have been impossible without the tool.”
Major US insurer
- Business challenge: Needed the flexibility to change its quote engine and customer service delivery more frequently without the massive financial burden associated with manual testing. A “rate filing” cost $500,000 each time (external professional services plus internal resources).
- Solution: An agile middleware solution was developed to match the legacy systems’ functionality, including new interfaces that enabled customer-facing employees to generate additional revenue from each policy. Once developed, test scripts were stored in GH Tester software, which enabled the team to quickly rerun tests and report results.
Daily validation was necessary to ensure the daily deployments did not affect the quote engine calculations. In addition to validation, the team had to regression test before and after each change.
- Results: Reduced cost of labor: user testing reduced by 95% to 2 hours, QA testing reduced by 90%, and total testing time per rate filing reduced by 3,500 hours at an estimated cost saving of $76,000. Improved time to market: simulation of quote engine transactions reduced by 94% to 2 hours, and rate filing validation reduced by 94% to 320 hours.
Global manufacturer acquires competitor
- Business challenge: Acquired a competitor and needed to migrate them off rented infrastructure onto the company’s standardized middleware platform. Regression testing was essential.
- Solution: GH Tester performed all required functions quickly and easily. Virtualization of unavailable systems while they migrated was critical.
- Results: Improved time to market: fully integrated in six months, two months early. Reduced risk: saved significant rental costs and dependencies on the 3rd-party-owned system.
Identify/Qualify Lantana opportunities in your territory:
- Complex applications with integration challenges (multiple technologies, legacy, packaged applications)
- MQ Series, TIBCO, Software AG, System z, SAP
Author Note: Mandatory Rational closing slide (includes appropriate legal disclaimer). Graphic is available in English only.