Empirical Research Methods for Software Engineering<br />Prepared by: Dr. Sarfraz Nawaz Brohi & Dr. Mervat Adib Bamiah<br />
Agenda<br />1. Introduction to Empirical and Experimental Software Engineering<br />2. Empirical Research Methods<br />2.1 Case Study<br />2.2 Experimental Research<br />2.3 Survey<br />2.4 Post-Mortem Analysis<br />3. Guidelines for Empirical Research in Software Engineering<br />4. Conclusion<br />
Experimental & Empirical Software Engineering<br /><ul><li>Experimental software engineering is a sub-domain of software engineering focusing on experiments with software systems (software products, processes, and resources).</li><li>Empirical software engineering is a field of research that emphasizes the use of empirical studies of all kinds to accumulate knowledge. Its methods include experiments, a variety of case studies, surveys, and statistical analyses.</li></ul>Why Empirical Study for Software Engineering?<br />Empirical studies (the use of experience and observation) have become important for software engineering research:<br /><ul><li>Too many methods and tools exist for each programmer or software house to determine the best choice of tools and methods by trial and error.</li><li>Empirical studies determine the differences among alternative software techniques, for example, an experiment on the quality and cost of a software product.</li></ul>
How Empirical Study Helps Software Engineering Research<br />
Case Study<br /><ul><li>A case study is a systematic description and analysis of an event, organization, or individual.</li></ul>Case Study<br />A case study determined the cause of the Ariane 5 crash:<br /><ul><li>An overflow occurred when a 64-bit integer was converted into a 16-bit integer in a program called the Inertial Reference System.</li><li>The overflow was not monitored and therefore caused the entire control system to stop, and the rocket exploded.</li><li>The program causing the overflow was not needed during flight, only during initialization (up to 9 s before launch time).</li><li>It was kept running for 50 s into the flight to avoid a re-initialization time of several hours in case of an aborted launch.</li><li>The software had been designed for the Ariane 4 rocket, where this particular overflow could not happen.</li><li>The error was therefore a software-reuse error, caused by missing specifications of the conditions under which the software worked correctly.</li></ul>Software Engineering Case Study on the Ariane 5 Crash<br /><ul><li>In software research, case studies are often used to demonstrate the functionality or capability of a new tool (an existence proof).</li>
<li>Case studies are also useful for describing and understanding rare events (such as disasters caused by software failures).</li><li>Case studies are limited, because the cause of a specific event cannot be determined with certainty. To establish cause and effect reliably, we need experiments.</li></ul>Experimental Research<br /><ul><li>The objective is to prove a software engineering theory, hypothesis, or product.</li><li>Experiments are also referred to as research-in-the-small because their scope is limited.</li><li>When experimenting, randomized trials are conducted.</li><li>For example, a comparison between various processors.</li></ul>Experimental Research<br />
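The Ariane 5 case study above came down to an unchecked narrowing conversion. A minimal sketch of that failure mode in Python (the actual flight software was written in Ada, and the sensor value below is purely illustrative, not the real reading):

```python
def to_int16_unchecked(x: int) -> int:
    # Simulates an unchecked 64-bit -> 16-bit conversion:
    # only the low 16 bits survive, reinterpreted as a signed value.
    v = x & 0xFFFF
    return v - 0x10000 if v >= 0x8000 else v

def to_int16_checked(x: int) -> int:
    # The guarded conversion that the reused code lacked for this variable.
    if not -32768 <= x <= 32767:
        raise OverflowError(f"{x} does not fit in a signed 16-bit integer")
    return x

# Hypothetical horizontal-velocity value: larger on Ariane 5 than Ariane 4.
horizontal_bias = 40_000
print(to_int16_unchecked(horizontal_bias))  # -25536: silently wrong
```

The unchecked version returns a plausible-looking but meaningless number; the checked version fails loudly, which is exactly the missing specification the case study identified.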
Survey<br /><ul><li>A survey collects data about some state of affairs by querying a representative sample of some population.</li><li>Example: “If the presidential election were held tomorrow, for whom would you vote?”</li><li>Surveys collect frequency data, but also information about reasons and preferences.</li><li>Example: “Why do you prefer a certain brand of car?” Surveys also test who holds certain preferences (male/female, age, ethnicity, income, location, etc.).</li><li>Surveys help us understand why a certain phenomenon occurred and increase our ability to predict it.</li><li>Question: “What has caused the most difficulty when trying to understand object-oriented software?”</li></ul>1. Missing or inadequate design documentation (16.8%)<br />2. Inheritance (15.5%)<br />3. Poor or inappropriate design (12.9%)<br />
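Frequency results like the percentages above are straightforward to tabulate from raw answers. A minimal sketch in Python; the response strings and their counts are invented for illustration, not data from the survey cited above:

```python
from collections import Counter

# Hypothetical raw answers to: "What has caused the most difficulty
# when trying to understand object-oriented software?"
responses = [
    "missing documentation", "inheritance", "missing documentation",
    "poor design", "inheritance", "missing documentation",
    "poor design", "other", "missing documentation", "inheritance",
]

counts = Counter(responses)
total = len(responses)
for answer, n in counts.most_common():
    # Report each answer with its absolute frequency and percentage.
    print(f"{answer}: {n} ({100 * n / total:.1f}%)")
```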
Post-Mortem Analysis<br />Post-mortem analysis is a research method for studying the past.<br />The basic idea behind post-mortem analysis is to capture the knowledge and experience from a specific case or activity after it has finished.<br />There are two types of post-mortem analysis:<br /><ul><li>A general post-mortem analysis, capturing all available information from an activity.</li><li>A focused post-mortem analysis for a specific activity, for example, cost estimation.</li></ul>Guidelines for Empirical Research in Software Engineering<br />Guidelines for empirical research are useful in the following cases:<br /><ul><li>The reader of a published paper.</li>
<li>The reviewer of a paper prior to its publication.</li><li>A meta-analyst wanting to combine information from different studies of the same phenomenon.</li><li>A journal editorial board.</li></ul>Guidelines for Empirical Research in Software Engineering<br />Kitchenham et al. have categorized the guidelines into the following sections:<br /><ul><li>Experimental context</li><li>Experimental design</li><li>Conduct of the experiment and data collection</li><li>Analysis</li><li>Presentation of results</li><li>Interpretation of results</li></ul>Guidelines for Experimental Context<br />Guidelines for experimental context are listed below:<br /><ul><li>Be sure to specify as much of the industrial context as possible. In particular, clearly define the entities, attributes, and measures that capture the contextual information.</li><li>If a specific hypothesis is being tested, state it clearly prior to performing the study, and discuss the theory from which it is derived, so that its implications are apparent.</li><li>If the research is exploratory, state clearly, prior to data analysis, what questions the investigation is intended to address and how it will address them.</li><li>Describe research that is similar to, or has a bearing on, the current research, and how the current work relates to it.</li></ul>
Guidelines for Experimental Design<br />Guidelines for experimental design are listed below:<br /><ul><li>Identify the population from which the subjects and objects are drawn.</li><li>Define the process by which the subjects and objects were selected.</li><li>Define the process by which subjects and objects are assigned to treatments.</li><li>Restrict yourself to simple study designs or, at least, to designs that are fully analyzed in the literature.</li><li>If you cannot avoid evaluating your own work, then make explicit any vested interests (including your sources of support), and report what you have done to minimize bias.</li></ul>Guidelines for Experimental Design<br /><ul><li>Avoid the use of controls unless you are sure the control situation can be unambiguously defined.</li>
<li>Justify the choice of outcome measures in terms of their relevance to the objectives of the empirical study.</li></ul>Guidelines for Conduct of the Experiment and Data Collection<br /><ul><li>Define all software measures fully, including the entity, attribute, unit, and counting rules.</li><li>For subjective measures, present a measure of inter-rater agreement, such as the kappa statistic or the intra-class correlation coefficient for continuous measures.</li><li>Describe any quality control method used to ensure completeness and accuracy of data collection.</li><li>For surveys, monitor and report the response rate, and discuss the representativeness of the responses and the impact of non-response.</li><li>For observational studies and experiments, record data about subjects who drop out of the studies.</li><li>For observational studies and experiments, record data about other performance measures that may be adversely affected by the treatment, even if they are not the main focus of the study.</li></ul>
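The kappa statistic mentioned above can be computed directly from two raters' labels over the same items. A minimal sketch of Cohen's kappa in Python; the module ratings are hypothetical, invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement for two raters over the same items,
    corrected for the agreement expected by chance alone."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical subjective quality ratings of ten code modules.
a = ["good", "good", "bad", "good", "bad", "good", "bad", "bad", "good", "good"]
b = ["good", "bad", "bad", "good", "bad", "good", "good", "bad", "good", "good"]
print(round(cohens_kappa(a, b), 3))
```

Values near 1 indicate strong agreement beyond chance; values near 0 indicate the raters agree no more often than chance, which would undermine any conclusion drawn from the subjective measure.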
Guidelines for Analysis<br /><ul><li>Specify any procedures used to control for multiple testing.</li><li>Ensure that the data do not violate the assumptions of the tests used on them.</li><li>Apply appropriate quality control procedures to verify your results.</li></ul>Guidelines for Presentation of Results<br /><ul><li>Describe or cite a reference for all statistical procedures used.</li><li>Present quantitative results as well as significance levels. Quantitative results should show the magnitude of effects and the confidence limits.</li><li>Present the raw data whenever possible. Otherwise, confirm that they are available for confidential review by the reviewers and independent auditors.</li><li>Provide appropriate descriptive statistics.</li><li>Make appropriate use of graphics.</li></ul>Guidelines for Interpretation of Results<br /><ul><li>Define the population to which inferential statistics and predictive models apply.</li><li>Differentiate between statistical significance and practical importance.</li></ul>
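The presentation and interpretation guidelines above can be illustrated together: report the magnitude of an effect with confidence limits, not a significance level alone, so readers can judge practical importance. A minimal sketch in Python using a normal-approximation interval (a t-based interval would be more appropriate for samples this small; the defect counts are hypothetical):

```python
import statistics

def mean_diff_ci(sample_x, sample_y, z=1.96):
    """Point estimate and approximate 95% confidence interval for the
    difference of two independent sample means (normal approximation)."""
    mx, my = statistics.mean(sample_x), statistics.mean(sample_y)
    # Standard error of the difference of two independent means.
    se = (statistics.variance(sample_x) / len(sample_x)
          + statistics.variance(sample_y) / len(sample_y)) ** 0.5
    diff = mx - my
    return diff, (diff - z * se, diff + z * se)

# Hypothetical defect counts per module under two inspection techniques.
old_technique = [12, 15, 11, 14, 13, 12, 16, 14]
new_technique = [11, 14, 12, 13, 12, 11, 15, 13]
diff, (lo, hi) = mean_diff_ci(old_technique, new_technique)
print(f"mean difference = {diff:.2f} defects, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Here the interval is wide and includes zero, so even a "significant" p-value elsewhere would not by itself demonstrate a practically important improvement.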