MC0084 – Software Project Management & Quality Assurance - Master of Computer Science - MCA - SMU DE
1) What is project management? Explain various activities involved in project management.
Project management is a systematic method of defining and achieving targets with optimized use of resources
such as time, money, manpower, material, energy, and space. It is an application of knowledge, skills,
resources, and techniques to meet project requirements. Project management involves various activities,
which are as follows:
● Work planning
● Resource estimation
● Organizing the work
● Acquiring resources such as manpower, material, energy, and space
● Risk assessment
● Task assigning
● Controlling the project execution
● Reporting the progress
● Directing the activities
● Analyzing the results
2) Describe the following with respect to Estimation and Budgeting of Projects: a) Software Cost Estimation
and Methods b) COCOMO model and its variations
a) Software Cost Estimation and Methods
I. Algorithmic Models: These methods provide one or more algorithms which produce a software cost
estimate as a function of a number of variables which relate to some software metric (usually its size)
and cost drivers.
II. Expert Judgment: This method involves consulting one or more experts, perhaps with the aid of an
expert-consensus mechanism such as the Delphi technique.
III. Analogy Estimation - This method involves reasoning by analogy with one or more completed projects to
relate their actual costs to an estimate of the cost of a similar new project.
IV. Top-Down Estimation - An overall cost estimate for the project is derived from global properties of the
software product. The total cost is then split up among the various components.
V. Bottom-Up Estimation - Each component of the software job is separately estimated, and the results
aggregated to produce an estimate for the overall job.
VI. Parkinson's Principle - Parkinson's principle ("Work expands to fill the available volume") is invoked to
equate the cost estimate to the available resources.
VII. Price to Win - The cost estimate developed by this method is equated to the price believed necessary
to win the job. The estimated effort depends on the customer's budget and not on the software
functionality.
Cost Estimation Guidelines
● Assign the initial estimating task to the final developers.
● Delay finalizing the initial estimate until the end of a thorough study.
● Anticipate and control user changes.
● Monitor the progress of the proposed project.
● Evaluate proposed project progress by using independent auditors.
● Use the estimate to evaluate project personnel.
● Computing management should carefully approve the cost estimate.
● Rely on documented facts, standards, and simple arithmetic formulas rather than guessing, intuition, personal memory, and complex formulas.
● Don't rely on cost estimating software for an accurate estimate.
b) COCOMO model and its variations
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model developed by
Barry Boehm. The model uses a basic regression formula, with parameters that are derived from historical project
data and current project characteristics. COCOMO II is the successor of COCOMO 81 and is better suited for
estimating modern software development projects. It provides more support for modern software development
processes and an updated project database. The need for the new model came as software development
technology moved from mainframe and overnight batch processing to desktop development, code reusability, and
the use of off-the-shelf software components. COCOMO consists of a hierarchy of three increasingly detailed and
accurate forms. The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of
software costs, but its accuracy is limited because it lacks factors to account for differences in project attributes
(cost drivers). Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally
accounts for the influence of individual project phases. Basic COCOMO computes software development effort
(and cost) as a function of program size, expressed in estimated thousands of lines of code (KLOC).
COCOMO applies to three classes of software projects:
● Organic projects - "small" teams with "good" experience working with "less than rigid" requirements
● Semi-detached projects - "medium" teams with mixed experience working with a mix of rigid and less
than rigid requirements
● Embedded projects - developed within a set of "tight" constraints (hardware, software, operational, ...)
The basic COCOMO equations take the form
Effort Applied = a_b (KLOC)^(b_b) [person-months]
Development Time = c_b (Effort Applied)^(d_b) [months]
People Required = Effort Applied / Development Time [count]
where the coefficients a_b, b_b, c_b, and d_b depend on the project class.
Basic COCOMO is good for a quick estimate of software costs. However, it does not account for differences in
hardware constraints, personnel quality and experience, use of modern tools and techniques, and so on.
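The three equations above can be sketched directly in code. This is a minimal sketch; the coefficient table holds Boehm's published Basic COCOMO values for the three project classes.

```python
# Basic COCOMO: Effort = a_b * KLOC**b_b, Time = c_b * Effort**d_b,
# People = Effort / Time. Coefficients per project class (Boehm, 1981).
COEFFS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    time = c * effort ** d        # chronological months
    people = effort / time        # average staffing level
    return effort, time, people
```

For example, `basic_cocomo(32, "organic")` yields roughly 91 person-months over about 14 months; the same 32 KLOC as an embedded project costs noticeably more effort, reflecting the tighter constraints.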
Intermediate COCOMO computes software development effort as a function of program size and a set of "cost
drivers" that include subjective assessment of product, hardware, personnel, and project attributes. This extension
considers four categories of cost drivers, each with a number of subsidiary attributes:
Product attributes
● Required software reliability
● Size of application database
● Complexity of the product
Hardware attributes
● Run-time performance constraints
● Memory constraints
● Volatility of the virtual machine environment
● Required turnabout time
Personnel attributes
● Analyst capability
● Software engineering capability
● Applications experience
● Virtual machine experience
● Programming language experience
Project attributes
● Use of software tools
● Application of software engineering methods
● Required development schedule
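In Intermediate COCOMO, each attribute above is rated and mapped to an effort multiplier; the product of all multipliers (the effort adjustment factor, EAF) scales the size-based estimate. A sketch, assuming Boehm's Intermediate a/b coefficients; the driver names and multiplier values shown are illustrative placeholders, not the full fifteen-attribute table:

```python
from math import prod

# Intermediate COCOMO: Effort = a * KLOC**b * EAF, where EAF is the product
# of the cost-driver effort multipliers (a nominal rating contributes 1.0).
COEFFS = {"organic": (3.2, 1.05), "semi-detached": (3.0, 1.12), "embedded": (2.8, 1.20)}

def intermediate_cocomo(kloc, mode, multipliers):
    a, b = COEFFS[mode]
    eaf = prod(multipliers.values(), start=1.0)
    return a * kloc ** b * eaf  # person-months

# Illustrative ratings (placeholder values, not Boehm's exact table):
drivers = {"required_reliability": 1.15,  # product attribute, rated high
           "memory_constraints":   1.06,  # hardware attribute
           "analyst_capability":   0.86}  # personnel attribute, rated high
```

Note how a capable analyst team (multiplier below 1.0) partially offsets the penalty for high reliability and tight memory, which is exactly the trade-off the cost drivers are meant to capture.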
3) What is project scheduling? Explain different techniques for project scheduling.
Project scheduling is concerned with the techniques that can be employed to manage the activities that need
to be undertaken during the development of a project.
Scheduling is carried out in advance of the project commencing and involves:
● identifying the tasks that need to be carried out;
● estimating how long they will take;
● allocating resources (mainly personnel);
● scheduling when the tasks will occur.
Once the project is underway control needs to be exerted to ensure that the plan continues to represent the
best prediction of what will occur in the future:
● based on what occurs during the development;
● often necessitates revision of the plan.
Effective project planning will help to ensure that the systems are delivered:
● within cost;
● within the time constraint;
● to a specific standard of quality.
Two project scheduling techniques will be presented, the Milestone Chart (or Gantt Chart) and the Activity
Network.
Milestone Charts - Milestones mark significant events in the life of a project, usually critical activities which
must be achieved on time to avoid delay in the project. Milestones should be truly significant and be
reasonable in terms of deadlines (avoid using intermediate stages).
Examples include:
● installation of equipment;
● completion of phases;
● file conversion;
● cutover to the new system
Gantt Charts - A Gantt chart is a horizontal bar or line chart which will commonly include the following
features:
● activities identified on the left hand side;
● time scale is drawn on the top (or bottom) of the chart;
● a horizontal open oblong or a line is drawn against each activity indicating estimated duration;
● dependencies between activities are shown;
● at a review point the oblongs are shaded to represent the actual time spent (an alternative is to represent actual and estimated by 2 separate lines);
● a vertical cursor (such as a transparent ruler) placed at the review point makes it possible to establish activities which are behind or ahead of schedule.
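The bar-per-activity layout described above can be sketched as a minimal text rendering; the activity names and time units below are invented for illustration.

```python
# Minimal text Gantt chart: one row per activity, '#' marks the scheduled
# span along a horizontal time scale. Activities are (name, start, duration)
# tuples; the values are invented for illustration.
def gantt(activities):
    width = max(start + dur for _, start, dur in activities)
    rows = []
    for name, start, dur in activities:
        bar = " " * start + "#" * dur
        rows.append(f"{name:<12}|{bar:<{width}}|")
    return "\n".join(rows)

plan = [("analysis", 0, 3), ("design", 3, 4), ("coding", 7, 6)]
```

Printing `gantt(plan)` shows one open bar per activity against a shared time scale; shading actual progress or drawing the review-point cursor would need extra state, but the core layout is already visible.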
Activity Networks - The foundation of the approach came from the Special Projects Office of the US Navy in
1958. It developed a technique for evaluating the performance of large development projects, which became
known as PERT - Program Evaluation and Review Technique. Other variations of the same approach are known
as the critical path method (CPM) or critical path analysis (CPA). The heart of any PERT chart is a network of
tasks needed to complete a project, showing the order in which the tasks need to be completed and the
dependencies between them.
EXAMPLE OF ACTIVITY NETWORK
The diagram consists of a number of circles, representing events within the development lifecycle, such as the
start or completion of a task, and lines, which represent the tasks themselves. Each task is additionally labelled
with its time duration. Thus the task between events 4 & 5 is planned to take 3 time units. The primary benefit is
the identification of the critical path: the path through the network whose total activity time is greater than that
of any other path, so that a delay in any task on the critical path delays the whole project.
Tasks on the critical path therefore need to be monitored carefully.
The technique can be broken down into 3 stages:
1. Planning:
a. identify tasks and estimate their durations;
b. arrange them in a feasible sequence;
c. draw the diagram.
2. Scheduling:
a. establish a timetable of start and finish times.
3. Analysis:
a. establish float;
b. evaluate and revise as necessary.
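The critical-path idea behind PERT/CPM can be sketched as a longest-path computation over a small task network; the task names, durations, and dependencies below are invented for illustration.

```python
# Critical path = the longest-duration path through the dependency network.
# Each task maps to (duration, list of predecessor tasks); values invented.
tasks = {
    "A": (3, []),
    "B": (5, ["A"]),
    "C": (2, ["A"]),
    "D": (4, ["B", "C"]),
}

def critical_path(tasks):
    finish = {}  # earliest finish time per task
    def earliest_finish(name):
        if name not in finish:
            dur, preds = tasks[name]
            finish[name] = dur + max((earliest_finish(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        earliest_finish(name)
    # Walk back from the latest-finishing task through the predecessor
    # that determined each task's start time.
    path = [max(finish, key=finish.get)]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=finish.get))
    return list(reversed(path)), max(finish.values())
```

Here the critical path is A → B → D (12 time units): C has 3 units of float, so only a delay on A, B, or D would push the project out.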
4) What is the role of mathematics in software development? Explain its preliminaries also.
Mathematics has many useful properties for the developers of large systems. One of its most useful properties
is that it is capable of succinctly and exactly describing a physical situation, an object or the outcome of an
action. Ideally, the software engineer should be in the same position as the applied mathematician. A
mathematical specification of a system should be presented, and a solution developed in terms of a software
architecture that implements the specification should be produced. Another advantage of using mathematics
in the software process is that it provides a smooth transition between software engineering activities. Not
only functional specifications but also system designs can be expressed in mathematics, and of course, the
program code is a mathematical notation – albeit a rather long-winded one.
The major property of mathematics is that it supports abstraction and is an excellent medium for modeling. As
it is an exact medium there is little possibility of ambiguity: Specifications can be mathematically validated for
contradictions and incompleteness, and vagueness disappears completely.
In addition, mathematics can be used to represent levels of abstraction in a system specification in an
organized way. Mathematics is an ideal tool for modeling. It enables the bare bones of a specification to be
exhibited and helps the analyst and system specifier to validate a specification for functionality without
intrusion of such issues as response time, design directives, implementation directives, and project
constraints. It also helps the designer, because the system design specification exhibits the properties of a
model, providing only sufficient details to enable the task in hand to be carried out. Finally, mathematics
provides a high level of validation when it is used as a software development medium. It is possible to use a
mathematical proof to demonstrate that a design matches a specification and that some program code is a
correct reflection of a design. This is preferable to current practice, where often little effort is put into early
validation and where much of the checking of a software system occurs during system and acceptance testing.
Mathematical Preliminaries
To apply formal methods effectively, a software engineer must have a working knowledge of the
mathematical notation associated with sets and sequences and the logical notation used in predicate calculus.
The intent of this section is to provide a brief introduction. For a more detailed discussion, the reader is urged
to examine books dedicated to these subjects.
Sets and Constructive Specification
A set is a collection of objects or elements and is used as a cornerstone of formal methods. The elements
contained within a set are unique (i.e., no duplicates are allowed). Sets with a small number of elements are
written within curly brackets (braces) with the elements separated by commas. For example, the set {C++,
Pascal, Ada, COBOL, Java} contains the names of five programming languages. The order in which the
elements appear within a set is immaterial. The number of items in a set is known as its cardinality. The #
operator returns a set's cardinality. For example, the expression #{A, B, C, D} = 4 implies that the cardinality
operator has been applied to the set shown with a result indicating the number of items in the set. There are
two ways of defining a set. A set may be defined by enumerating its elements (this is the way in which the sets
just noted have been defined). The second approach is to create a constructive set specification. The general
form of the members of a set is specified using a Boolean expression. Constructive set specification is
preferable to enumeration because it enables a succinct definition of large sets. It also explicitly defines the
rule that was used in constructing the set. Consider the following constructive specification example: {n : ℕ | n < 3 . n}. This specification has three components: a signature, n : ℕ; a predicate, n < 3; and a term, n. The signature specifies the range of values that will be considered when forming the set, the predicate (a Boolean expression) defines how the set is to be constructed, and, finally, the term gives the general form of the items of the set. In the example above, ℕ stands for the natural numbers; therefore, natural numbers are to be considered. The predicate indicates that only natural numbers less than 3 are to be included, and the term specifies that each element of the set will be of the form n.
Therefore, this specification defines the set {0, 1, 2}. When the form of the elements of a set is obvious, the term can be omitted. For example, the preceding set could be specified as {n : ℕ | n < 3}. All the sets that have been described here have elements that are single items. Sets can also be made from elements that are pairs, triples, and so on. For example, the set specification {x, y : ℕ | x + y = 10 . (x, y²)} describes the set of pairs of natural numbers that have the form (x, y²) and where the sum of x and y is 10. This is the set {(1, 81), (2, 64), (3, 49), . . .}. Obviously, a constructive set specification required to represent some component of computer software can be considerably more complex than those noted here. However, the basic form and structure remain the same.
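The constructive specifications above map directly onto set comprehensions. A sketch: since the natural numbers are infinite, the ranges below bound the candidates explicitly, and the pairs start x and y at 1 to follow the document's listed elements.

```python
# signature -> the range candidates are drawn from, predicate -> the `if`
# filter, term -> the expression before `for`. The upper bounds stand in
# for the (infinite) natural numbers.
small = {n for n in range(100) if n < 3}  # {n : N | n < 3 . n}

# {x, y : N | x + y = 10 . (x, y^2)}, with x, y >= 1 as in the listed pairs
pairs = {(x, y * y) for x in range(1, 10) for y in range(1, 10) if x + y == 10}

# The '#' cardinality operator corresponds to len():
assert len({"A", "B", "C", "D"}) == 4
```

Evaluating these gives `small == {0, 1, 2}` and pairs beginning (1, 81), (2, 64), (3, 49), matching the sets derived in the text.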
5) What is debugging? Explain the basic steps in debugging.
Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer
program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder
when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another. Many
books have been written about debugging (see below: Further reading), as it involves numerous aspects,
including: interactive debugging, control flow, integration testing, log files, monitoring (application, system),
memory dumps, profiling, Statistical Process Control, and special design tactics to improve detection while
simplifying changes.
Step 1: Identify the error.
This is an obvious step but a tricky one; a bad identification of an error can cause a lot of wasted
development time. Production errors reported by users are often hard to interpret, and sometimes the
information we get from them is misleading.
A few tips to make sure you identify the bug correctly:
See the error - This is easy if you spot the error yourself, but not if it comes from a user. In that case, see if
you can get the user to send you a few screen captures, or even use a remote connection to see the error for
yourself.
Reproduce the error - You should never say that an error has been fixed if you were not able to reproduce it.
Understand what the expected behavior should be - In complex applications it can be hard to tell what the
expected behavior should be, but that knowledge is essential to fixing the problem, so we will have to talk
with the product owner, check the documentation, etc. to find this information.
Validate the identification - Confirm with the person responsible for the application that the error is actually
an error and that the expected behavior is correct. The validation can also lead to situations where it is not
necessary or not worth it to fix the error.
Step 2 - Find the error.
Once we have the error correctly identified, it is time to go through the code to find the exact spot where the
error is located. At this stage we are not interested in understanding the big picture of the error; we are just
focused on finding it. A few techniques that may help to find an error are:
Logging - It can be to the console, to a file, etc. It should help you trace the error in the code.
Debugging - Debugging in the most technical sense of the word: turning on whatever debugger you are using
and stepping through the code.
Removing code - I discovered this method a year ago when we were trying to fix a very challenging bug. We
had an application which, a few seconds after performing an action, was causing the system to crash, but only
on some computers, and not always but only from time to time. When debugging, everything seemed to work
as expected, and when the machine crashed it happened with many different patterns. We were completely
lost, and then the removing-code approach occurred to us. It worked more or less like this:
We took out half of the code from the action causing the machine to crash and executed it hundreds of
times, and the application crashed. We did the same with the other half of the code and the application didn't
crash, so we knew the error was in the first half. We kept splitting the code until we found that the error was
in a third-party function we were using, so we decided to rewrite it ourselves.
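The "removing code" story above is essentially a binary search over suspect code. A minimal sketch, assuming a single culprit step and a hypothetical `triggers_bug` probe that stands in for actually running the halved program:

```python
# Binary search for the offending step: repeatedly run only the first half
# of the remaining suspects; if the bug still appears, the culprit is there,
# otherwise it is in the other half. Assumes exactly one culprit whose
# effect does not depend on the other steps.
def find_culprit(steps, triggers_bug):
    lo, hi = 0, len(steps)  # culprit lies somewhere in steps[lo:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if triggers_bug(steps[lo:mid]):
            hi = mid        # first half still crashes: narrow to it
        else:
            lo = mid        # first half is clean: culprit is in the rest
    return steps[lo]
```

With an intermittent crash like the one described, each `triggers_bug` probe would itself repeat the run many times before trusting a "no crash" answer, but the halving logic stays the same.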
Step 3 - Analyze the error.
This is a critical step. Use a bottom-up approach from the place the error was found and analyze the code so
you can see the big picture of the error. Analyzing a bug has two main goals: to check that there aren't any
other errors to be found around that error (the iceberg metaphor), and to understand the risks of introducing
any collateral damage with the fix.
Step 4 - Prove your analysis
This is a straightforward step. After analyzing the original bug you may have come up with a few more errors
that may appear in the application; this step is all about writing automated tests for these areas (it is better to
use a test framework such as one from the xUnit family).
Once you have your tests, you can run them, and you should see all of them failing. That proves that your
analysis is right.
Step 5 - Cover lateral damage.
At this stage you are almost ready to start coding the fix, but you have to protect yourself before you change
the code. Create or gather (if already created) all the unit tests for the code around where you will make the
changes, so that after completing the modification you can be sure you haven't broken anything else. If you
run these unit tests, they should all pass.
Step 6 - Fix the error.
That’s it; finally you can fix the error!
Step 7 - Validate the solution.
Run all the test scripts and check that they all pass.
6) What is a fish bone diagram? How is it helpful to the project management?
Fishbone diagram, a brainchild of Dr. Kaoru Ishikawa, is an analysis tool that provides a systematic way to look
at the potential factors causing a particular effect. It’s quite difficult to resolve complicated problems without
considering the cause-and-effect relationship between involved factors. It is referred to as ‘Fishbone diagram’
because the diagram resembles the skeleton of a fish. To facilitate easy identification of the key relationship
among various variables, Ishikawa grouped the causes into two major sets of categories: the 6Ms (for the
manufacturing industry) and the 4Ps (for the service industry).
The 6Ms
● Manpower, Machines, Methods, Measurements, Materials, Management/Money power
The 4Ps
● Policies, Procedures, People, Plant
When to Use Fishbone Diagram
We can apply Fishbone diagram in the following situations:
1. If the traditional ways of approaching the problem seem time consuming.
2. When the problem is too complicated for the team to identify the root cause.
3. When there are many potential causes of the problem.
However, the team members are free to modify these categories depending upon their subject matter and
project.
The following steps are involved in constructing a fishbone diagram:
1- State the Problem
It's the simplest step. Note down the problem your team is facing in detail. Identify when and where it
occurs and who is involved. After identifying the problem, take a sheet of paper and write down the problem in
a square box on the right-hand side of the page. Draw a straight horizontal line from the left side of the
paper to the problem box. Now this arrangement resembles a fish's head and spine.
2- Figure out All the Possible Factors Involved
Many people find it difficult to structure the complex thought process around a problem. Discuss with your
teammates and use the available tools such as flow charts and affinity charts to find as many possible
factors as you can. You can categorize the factors into the 6Ms, the 4Ps, or some other scheme based on the
nature of the problem under study.
In the fishbone analogy, for each category draw a slanted line with the arrow pointing towards the backbone.
3- Brainstorm and Identify the Root Cause
When brainstorming, strive to identify the major causes (categories) then discuss the secondary factors
within these categories to analyze their relevance with the problem. This way the team can concentrate on
one major cause at a time and look further into sub-causes if necessary.
All the major causes (categories) are drawn as fishbones and the secondary causes as bonelets in the
diagram. Your team now has a comprehensive list of potential causes. By discussion and the use of analytical
tools, the team members can decide what the root cause is and take appropriate action.
Role of Fishbone Diagram in Project Management
Fishbone diagrams primarily show the root causes of an event, e.g. quality failures. Therefore they are of
vital importance in project management, supporting the project quality plan, fault detection, and task
management. Proactive project managers apply fishbone diagrams for early planning, especially when
gathering factors, and to identify hidden factors that can play a significant role in the project. The diagram is
also used in mapping operations, business process modeling, and business process improvement.
Tips to Successfully Build a Fishbone Diagram
1. Make sure that all the team members agree on the problem statement prior to beginning.
2. Consider all the possible factors for causality and label them properly.
3. Split the overcrowded categories.
4. Merge the empty branches with others.
5. Study the root causes that are most likely to merit deeper investigation.