2. Research...
• The systematic application of a family of
methods employed to provide trustworthy
information about problems
• An ongoing process based on many
accumulated understandings and explanations
that, when taken together, lead to
generalizations about problems and the
development of theories
3. EXPERIMENTAL RESEARCH
• …the researcher selects participants and divides
them into two or more groups having similar
characteristics and, then, applies the treatment(s)
to the groups and measures the effects upon the
groups
• According to W.S. Monroe:
“Experimentation is the name given to the type of
educational research in which the investigator
controls the educative factors to which a child or
group of children is subjected during the period of
inquiry and observes the resulting achievement.”
5. Variables in Experimental Research
• …a concept (e.g., intelligence, height, aptitude) that can assume any one of a
range of values
Independent Variable:
• Experimental Variable, Cause, or Treatment
• The activity or characteristic the researcher believes
makes a difference
Dependent Variable:
• Criterion Variable or Effect
• Outcome of the study
• Difference in group(s) that occurs as a result of the
manipulation of the IV
Controlled variables:
• Variables held constant during the experiment; often
overlooked by researchers, yet careful control of them is
essential, since uncontrolled variables can confound the effect
of the independent variable on the dependent variable.
6. Validity
• Validity refers to the condition that observed
differences on the dependent variable are a
direct result of manipulation of the
independent variable, not some other variable.
• Internal validity
• External validity
7. Internal Validity
Campbell & Stanley (1971) identified 8 threats to internal
validity:
• History - unrelated external events occurring during the study
that affect the results; becomes more likely the longer a study runs.
• Maturation - physical/mental changes occurring in subjects over
time; more likely to occur when study is extended over a long
period of time.
• Testing (pretest sensitization) - result of higher scores on a
posttest due to participants having taken a pretest; unlike above,
more likely to occur when there are short intervals between
testing.
8. Contd…….
• Differential Selection of Subjects - differences
already present between two pre-formed groups
could account for differences in posttest results.
• Mortality (attrition) - occurs most often in long-
term studies; refers to participants who drop out
of a group potentially sharing some characteristic
that affects the significance of the study.
• Instrumentation - lack of consistency in measuring
instruments across the study; inconsistent data collection
leads to unreliable/invalid results.
• Statistical Regression - groups selected for extreme scores
tend to score closer to the mean on retesting, regardless of
the treatment.
• Selection-Maturation Interaction - pre-formed groups may
mature at different rates, mimicking a treatment effect.
9. External Validity
• The results of the study can be reconfirmed with other groups, in other settings, and at
other times (if the conditions are similar to those present in the experiment).
Bracht & Glass (1968) identified 6 threats to external validity:
Pretest-Treatment Interaction - participants react differently to a treatment because
they have been pretested; pretests may alert participants to the make-up of the
treatment; therefore, results can only be generalized to other pretested groups.
Multiple-Treatment Interference - the same participants receive more than one
treatment in succession; effects are carried over from the first treatment, making it
hard to determine the effectiveness of the second treatment.
Selection-Treatment Interaction - occurs when participants are not randomly
selected for the treatments they receive; can occur when participants are a pre-
formed group or an individual; limits the generalizability of the results.
10. Contd……
• Specificity of Variables - does not depend on the experimental design
chosen; threatens validity when a study is conducted:
o with a specific kind of subject;
o based on a particular definition of the independent variable; using specific
measuring instruments;
o at a specific time; and
o under a specific set of circumstances.
• Experimenter Effects - experimenter unintentionally affects the
implementation of the study’s procedures, the behavior of the participants,
or the assessment of participant behavior, thereby affecting the results of
the study.
• Reactive Arrangements - factors associated with how a study is conducted
influence the feelings and attitudes of the participants, affecting the
generalizability of the results.
11. Increasing internal and external
validity
• Increasing Internal validity
• Randomly select participants
• Randomly assign participants to groups
• Use a control group
• Increasing external validity
• Careful adherence to good experimental
practices
12. Experimental Design
• is the blueprint of the procedures that enables
the researcher to test hypotheses by reaching
valid conclusions about relationships between
the independent and the dependent variables.
13. Types of Experimental Design
• SINGLE – GROUP DESIGN (One-group method)
• It consists of comparing the growth of a single group under
two different sets of conditions – i.e., subjecting the
group successively to an experimental and to a control
factor for equivalent periods of time and then comparing the
outcomes.
• Procedure:
1. Test the group; introduce method A; test the group again;
and note the gains.
2. Allow for a period of transition.
3. Test the group again; introduce method B; test the group
once more; note the gains.
4. Compare the gains in steps 1 and 3.
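The single-group procedure above can be sketched as a small simulation. This is not part of the slides; the scores, effect sizes, and noise levels are invented purely for illustration.

```python
import random

random.seed(0)

def group_mean(scores):
    """Mean score of the group on a test (hypothetical data)."""
    return sum(scores) / len(scores)

# Hypothetical class of 30 students; treatment effects are invented.
baseline = [random.gauss(50, 10) for _ in range(30)]

# Step 1: pre-test, introduce method A (assumed +5 effect), post-test, note gain.
pre_a = group_mean(baseline)
after_a = [s + 5 + random.gauss(0, 2) for s in baseline]
gain_a = group_mean(after_a) - pre_a

# Step 3: after a transition period, repeat with method B (assumed +3 effect).
pre_b = group_mean(after_a)
after_b = [s + 3 + random.gauss(0, 2) for s in after_a]
gain_b = group_mean(after_b) - pre_b

# Step 4: compare the gains from steps 1 and 3.
print(f"Gain under A: {gain_a:.1f}, gain under B: {gain_b:.1f}")
```

Note that method B is applied to the already-treated group, so carry-over and maturation effects are confounded with B's gain; this is exactly the limitation the next slide lists.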
14. Advantages/Limitations
• Advantages:
• It permits the experiment to be conducted by a teacher in his
or her own classroom without assistance.
• It seems to make a fair attempt at equating the factors.
• Limitations:
• It fails to control many non-experimental variables.
• The students may not be equally motivated by the two methods,
nor the teacher equally effective and enthusiastic about both.
• Unless handled with great care, the one-group method may easily
give undue credit to the independent variable and overlook other
conditions causing the changes.
• It ignores carry-over effects.
• It ignores maturation.
15. PARALLEL – GROUP DESIGN
(Equivalent-group method)
• It is designed to overcome certain difficulties encountered in the
one-group design. Here the relative effects of two treatments are
compared on the basis of two groups which are equated in all
relevant aspects. The second group which is called the control
group serves as a reference from which comparisons are made. The
basic group design is as follows:
Experimental group      Control group
Pre-test                Pre-test
Experimental factor     Control factor
Final test              Final test
Comparison of gains
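The parallel-group layout can likewise be sketched as a simulation, with random assignment producing the equated groups. Again, this is an illustrative sketch, not from the slides; group sizes and effects are assumed.

```python
import random

random.seed(1)

def group_mean(scores):
    return sum(scores) / len(scores)

# Hypothetical pool of 60 students, randomly assigned to two equated groups.
pool = [random.gauss(50, 10) for _ in range(60)]
random.shuffle(pool)
experimental, control = pool[:30], pool[30:]

# Pre-test both groups.
pre_e, pre_c = group_mean(experimental), group_mean(control)

# Apply the experimental factor (assumed +6) and the control factor (assumed +1).
post_e = [s + 6 + random.gauss(0, 2) for s in experimental]
post_c = [s + 1 + random.gauss(0, 2) for s in control]

# Final test and comparison of gains.
gain_e = group_mean(post_e) - pre_e
gain_c = group_mean(post_c) - pre_c
print(f"Experimental gain: {gain_e:.1f}, control gain: {gain_c:.1f}")
```

Because the control group's gain serves as the reference, maturation and history affect both groups alike and drop out of the comparison.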
16. ROTATION-GROUP DESIGN
• When the experimental and control groups are only
approximately equivalent in relevant factors, it may be
possible to conduct the investigation by rotating the groups at
periodic intervals. It is commonly employed in situations
where a limited number of subjects is available.
Stage 1 Stage 2
Group A: IV1 Group A: IV2
Group B: IV2 Group B: IV1
• Thus the researcher applies the same independent
variables to different groups at different times during the
experiment.
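As an illustration (not from the slides, with invented baselines and effects), the rotation can be simulated: averaging each treatment's gain over both stages cancels the baseline difference between the approximately equivalent groups.

```python
import random

random.seed(2)

def group_mean(scores):
    return sum(scores) / len(scores)

def apply_treatment(scores, effect):
    """Hypothetical treatment adding `effect` points plus noise."""
    return [s + effect + random.gauss(0, 2) for s in scores]

# Two only-approximately-equivalent groups (note the different baselines).
group_a = [random.gauss(48, 10) for _ in range(25)]
group_b = [random.gauss(53, 10) for _ in range(25)]
IV1_EFFECT, IV2_EFFECT = 6, 2  # assumed treatment effects, illustration only

# Stage 1: Group A receives IV1, Group B receives IV2.
a1 = apply_treatment(group_a, IV1_EFFECT)
b1 = apply_treatment(group_b, IV2_EFFECT)

# Stage 2: the treatments are rotated between the groups.
a2 = apply_treatment(a1, IV2_EFFECT)
b2 = apply_treatment(b1, IV1_EFFECT)

# Each treatment's gain, averaged over both groups and stages.
gain_iv1 = ((group_mean(a1) - group_mean(group_a)) +
            (group_mean(b2) - group_mean(b1))) / 2
gain_iv2 = ((group_mean(b1) - group_mean(group_b)) +
            (group_mean(a2) - group_mean(a1))) / 2
print(f"IV1 gain: {gain_iv1:.1f}, IV2 gain: {gain_iv2:.1f}")
```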
17. Steps of Experimental Research
• Identifying, defining and delimiting the problem
• Reviewing the literature
• Formulating hypotheses and deducing the
consequences
• Drawing up the experimental design
• Defining the population
• Carrying out the study
• Measuring the outcomes
• Analyzing and interpreting the outcomes
• Drawing up the conclusion