2. CONTENTS
• Introduction
• Definition
• Factors affecting questionnaire survey
• Advantages of questionnaire survey
• Disadvantages of questionnaire survey
• Psychology of asking questions
• Steps in questionnaire survey design
• Testing methods
• Validity & reliability of questionnaire
• Conclusion
3. REFERENCES
• Forsyth BH, et al. Methods for Translating Survey Questionnaires. Paper presented to the American Association for Public Opinion Research, Montreal, Canada, May 2006.
• Kothari CK. Research Methodology: Methods and Techniques. New Age International, New Delhi.
• Trochim WMK. (2006). Survey research.
• Rea LM, Parker RA. Designing and conducting survey research: A comprehensive guide. John Wiley & Sons; 2014 Sep 9.
• Nardi PM. Doing survey research. Routledge; 2015 Nov 17, Chapter 4.
• ABS (2001). Pretesting in survey development: An Australian Bureau of Statistics perspective. Research Paper. Australian Bureau of Statistics, Canberra, Australia.
• ABS (2004a). Forms Design Standards Manual. Australian Bureau of Statistics. Available online at http://www.sch.abs.gov.au.
• European Statistical System Committee. European Statistics Code of Practice.
• Schubert A, Ahsbahs C. The ESCB Quality Framework for European Statistics. Austrian Journal of Statistics. 2015 Apr 30;44(2):3-11.
• Sæbø HV. Quality Assessment and Improvement Methods in Statistics: What Works? Statistika. 2014;94(4):5-14.
5. DEFINITION
• Question: a request for a specific piece of information to be obtained from respondents by researchers.
• Misinterpreting a question can lead to respondents answering a different question than the researcher intended.
6. • Response: an answer given to a request for information.
7. • Questionnaire: a research instrument for collecting data, used for gathering information from respondents.
8. • Postal (paper mail) surveys and Internet surveys are the two best-known forms of self-administered questionnaires.
9. FACTORS AFFECTING QUESTIONNAIRES
1. Length of the questionnaire.
2. Complexity of the questions asked.
3. Quality and design of the questionnaire.
4. Relative importance of the study as determined by the potential
respondent.
5. Extent to which the respondent believes that his responses are
important.
6. Timing of distribution of questionnaires for data collection.
10. ADVANTAGES
1. They are more cost effective to administer than personal (face-to-face)
interviews.
2. They are relatively easy to administer and analyse
3. Most people are familiar with the concept of a questionnaire
4. They reduce the possibility of interviewer bias
5. They are perceived to be less intrusive than telephone or face-to-face
surveys and hence, respondents will more readily respond truthfully to
sensitive questions
6. They are convenient since respondents can complete it at a time and place
that is convenient for them.
11. DISADVANTAGES
• When sent by hand, post, e-mail or the Web, the response rate tends to be low when:
the questionnaire is too long;
the questionnaire is complicated to complete;
the subject matter is not interesting to the respondent;
the subject matter is perceived as being of a sensitive nature.
• The researcher does not have control over who fills in the questionnaire, even though it may be addressed or delivered to the intended participant.
12. • Response rates in face-to-face and telephone interviews are in general higher than in self-administered surveys (Groves & Couper, 1998).
Groves RM, Couper MP. A conceptual framework for survey participation. In: Nonresponse in household interview surveys. 1998:25-46.
• Achieving a high response rate in self-administered surveys requires special efforts in the contact phase of the survey.
• For instance, in mail/Internet surveys an attractive questionnaire and cover letter, in combination with well-timed reminders, are necessary (Dillman, 1978, 2007).
Dillman DA. Mail and Internet surveys: The tailored design method, 2007 update with new Internet, visual, and mixed-mode guide. John Wiley & Sons; 2011 Jan 31.
13. PSYCHOLOGY OF ASKING QUESTIONS
• How research instruments shape respondents' answers.
• How these effects change as a function of respondents' age and culture.
• First address respondents' tasks, and subsequently discuss how respondents make sense of the questions asked.
• Next, review how respondents answer behavioural questions, and relate these questions to issues of autobiographical memory and estimation.
• Finally, address attitude questions and review the conditions that give rise to context effects in attitude measurement.
1. Sudman S, Bradburn NM, Schwarz N. Answering a survey question: Cognitive and communicative processes. In: Thinking about answers: the application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass Publishers. 1996:55-79.
2. Tourangeau R, Rips LJ, Rasinski K. The psychology of survey response. Cambridge University Press; 2000 Mar 13.
14. QUESTIONNAIRE DESIGN
• Questionnaire design, according to the Code of Practice, has to make sure that European Statistics "accurately and reliably portray reality".
Brancato G, Macchia S, Murgia M, Signore M, Simeoni G, Blanke K, Hoffmeyer-Zlotnik J. Handbook of recommended practices for questionnaire development and testing in the European statistical system. European Statistical System. 2006.
• The wording, accuracy, layout and structure of the questionnaire together determine whether it is valid and reliable.
16. LITERATURE REVIEW
SPECIFY SURVEY OBJECTIVES
CONCEPTUALISATION & OPERATIONALISATION
EXPLORING CONCEPTS: FOCUS GROUPS AND IN-DEPTH INTERVIEWS
DEFINITION OF A LIST OF VARIABLES AND A TABULATION PLAN
DECISION ON THE DATA COLLECTION MODE
WRITING AND SEQUENCING THE QUESTIONS
VISUAL DESIGN ELEMENTS
ELECTRONIC QUESTIONNAIRE DESIGN
CROSS-NATIONAL HARMONISATION OF QUESTIONS
17. LITERATURE REVIEW
• It should be applied at the very beginning of questionnaire development.
• Every questionnaire design should start with a review of the existing literature.
18. • Taking advantage of the findings from the work of other researchers is highly advisable.
• But all questions need to be verified, since findings from other surveys cannot guarantee that a question is appropriate in the new survey context (Fowler, 2002).
Freeman D, Dunn G, Garety PA, Bebbington P, Slater M, Kuipers E, Fowler D, Green C, Jordan J, Ray K. The psychology of persecutory ideation I: a questionnaire survey. The Journal of Nervous and Mental Disease. 2005 May 1;193(5):302-8.
19. Questions to consider during the literature review
• What similar surveys have been developed?
• What kind of testing was conducted, and what were the results?
• Which recommendations on the design are presented?
20. IMPORTANCE OF THE REVIEW ARTICLE
• It is not costly or time-intensive.
• It helps to get into the topic.
• It identifies basic problems.
• It helps the researcher to structure his/her work.
21. SURVEY OBJECTIVES
• "A prerequisite to designing a good survey instrument is deciding what is to be measured" (Fowler, 2002).
• Decisions should be made on the target population, the sampling design, the available resources and the preferred data collection mode.
• For new surveys or major redesigns of surveys, an intensive user-focused consultation should be done to identify the concepts required.
23. • In defining the aims there is one basic rule of questionnaire design:
• “Ask what you want to know, not something else” (Bradburn, 2004)
Bradburn NM, Sudman S, Wansink B. Asking questions: the definitive guide to questionnaire design--for
market research, political polls, and social and health questionnaires. John Wiley & Sons; 2004 May 17.
25. OPERATIONALISATION
• Scientific concepts have to be translated into observable variables in order to determine or measure the concept.
• This translation into observable variables is called operationalisation or measurement.
26. Two perspectives to bridge the gap between theory (concept) and observable variables (operationalisation or measurement):
i) Theory-driven approaches:
• Dimension/indicator analysis
• Semantic analysis
• Facet design methods
ii) Empirically driven approaches:
• Content sampling
• Symbolic interactionism
• Concept mapping
27. EXPLORING CONCEPTS
• Especially in new surveys it should be explored whether the concepts
and indicators the investigator is seeking are compatible with what
respondents have in mind.
• Qualitative methods - To get an idea of how respondents think
• 2 methods:-
1. Focus groups.
2. In-depth interviewing.
Groves RM. Survey errors and survey costs. John Wiley & Sons; 2004 Apr 30.
28. FOCUS GROUP
• Focus groups are composed of a
small number of target population
members guided by a moderator.
AIM:
a) To learn how respondents use terms
related to the topic,
b) How they understand concepts or
specific terminology, and
c) How they perceive questions
d) To pre-field test the questionnaires.
Krueger and Casey, 2000
29. IN-DEPTH OR QUALITATIVE INTERVIEWS
AIM: In-depth or qualitative interviews focus on:
• respondents' understanding of the concepts,
• how the respondents interpret certain questions,
• how they arrive at their answers.
• They are not based on a group discussion.
Legard R, Keegan J, Ward K. In-depth interviews. In: Qualitative research practice: A guide for social science students and researchers. 2003 Feb 18;6(1):138-69.
30. QUESTIONNAIRE SCHEME
• SCHEME: How to transfer reality into observable statistical concepts
• One suitable approach is based on “entity/relationship schemes” (or
“conceptual schemes”) (Chen, 1976).
• The basic structure of entity/relationship schemes consists of entities, the
logical links between the entities (relationships) & entities’ attributes.
31. • The entities are the concepts of interest for the survey; they are
represented in the scheme by rectangles.
• The relationships are the logical links between entities; they are
represented in the scheme by rhombi.
• The entity’s attributes are the characteristics to be known of each entity,
so they will constitute the questions of our questionnaire. They are
written in the scheme above the lines connected to the rectangles.
33. LIST OF VARIABLES
• The data required are to be collected
via questions and answers,
operationalised and digitised into
variables and values.
1. Background variables (e.g.
demographic variables)
2. Variables used to measure the survey
concepts
3. Technical variables (e.g. required for
weighting).
Belli RF, Lee EH, Stafford FP, Chou CH. Calendar and question-list survey methods: Association between
interviewer behaviors and data quality. Journal of Official Statistics. 2004 Jun 1;20(2):185.
34. TABULATION PLAN
• Define variables and values
• Develop a preliminary tabulation plan
35. MODE OF DATA COLLECTION
• It is important to note that the questionnaire should not be designed without a decision on the data collection mode.
• Standard data collection modes include face-to-face interviews, telephone interviews, and self-administered postal or Internet questionnaires.
36. Factors taken into consideration before selecting the data collection mode:
• Subject of the survey
• Complexity of the questionnaire
• Estimated interview length
• Characteristics of the target population
• Budget at disposal for the survey
38. WRITING QUESTIONS
Questions should be simple and precise
Phrasing that conveys the attitude/view of the investigator should be avoided
More than one stimulus per question should not be used
Long/complex, hypothetical and loaded questions should be avoided
Notions the respondents cannot understand should be avoided
Questions should be converted to the respondents' language without changing the sentence meaning
39. • During the writing process, the questionnaire designer should bear in mind the major effects derived from respondent behaviour that can introduce error:
1. Context effects
2. Memory effects
3. Sensitivity effects
4. Social desirability effects
5. Fatigue point
40. CONTEXT EFFECTS
• Context effects comprise all sorts of influences that other questions or information (instructions, section headings, etc.) might have on the respondent's interpretation of a question (Biemer and Lyberg, 2003).
• They arise in the comprehension and retrieval phases of the response mechanism (Smith, 1991).
• They can be reduced if the respondent considers the survey interview as a communication process.
41. RECALL OR MEMORY EFFECT
• One should be aware of possible bias when requiring information from respondents' long-term memory.
• It is the short-term (temporary) memory which is used extensively when completing questionnaires (ABS, 2004a).
• E.g. asking the mother of a 10-year-old child about a thumb-sucking habit the child had when he was very young.
42. SENSITIVITY EFFECTS
• Questions on topics which respondents may see as embarrassing or highly sensitive can produce inaccurate answers, as the content can be considered an invasion of their privacy.
• On embarrassing topics, especially in face-to-face interviews, responses will usually be more "acceptable" because of the risk of disclosure of the "true" answers (Tourangeau et al., 2000).
• E.g. information about a child's tooth decay may be sensitive to some parents.
43. SOCIAL DESIRABILITY
• A common way of reducing the social desirability effect without harming the respondent's cooperative disposition is telling him/her directly, in the instructions at the beginning of the questionnaire, that all answers are equally good and acceptable, or that there are no "good" or "bad" answers and that people have different opinions.
• Again, these problems can be due to a combination of factors, such as the personality, the education level and the social position of the respondent, as well as the conditions of the interview or the design of a self-administered questionnaire.
44. FATIGUE POINT
• Minor obstacles accumulate in the person's mind until a point is reached when it becomes too much and the person no longer cares about what goes on in the questionnaire. This point is known as the fatigue point, and its presence can introduce serious error into the data (ABS, 2004a), or can compromise the completion of the interview.
• It may be due to poor use of any questionnaire design element, such as language, question sequencing, length or layout.
46. Sequencing of questions should be self-evident to the respondent
Questions should be arranged according to subject, topic and logical groups
The sequence of questions should flow from less to more complex topics
The first question should be applicable to all respondents
At the end of the questionnaire, space for additional comments should be provided
The use of checks should be evaluated carefully
47. FACTUAL QUESTIONS
• Fact-based information required from the respondent.
• Two specific types of factual questions:
• Classification or demographic questions (e.g. age, sex, place).
• Knowledge questions: these test the respondent's knowledge about an event/disease.
BEHAVIOUR QUESTIONS
• These questions require information about the activity/behaviour of the respondent.
• E.g. "How many times does your child brush his teeth?"
• These questions require respondents to recall the event.
• Two types of recall error: omission errors and intrusion errors.
48. OPINION QUESTIONS
• These questions seek to measure subjective opinions ("Are you in favour of …?").
• Opinion questions are very sensitive to changes in wording, and it is impossible to check the validity of responses to opinion questions.
• Opinion questions have two basic components: an object and an evaluative dimension. The most common dimensions are agreement (approval or disapproval), truthfulness (true or false), assessment (good or bad), importance (important or not important), and intensity (minimum, maximum).
• Two of the most common opinion question formats are rating scales and rankings.
HYPOTHETICAL QUESTIONS
• These are the "What would you do if …?" type of questions.
• Hypothetical questions should be avoided because most people do not predict their behaviour very well, and many people respond to hypothetical questions based on their perceptions of the probability that events will occur.
49. QUESTION CATEGORIES BASED ON ANSWERS
Open questions (numeric open-end, text open-end):
• They allow respondents to answer in their own words.
• They are often used in pilot tests.
• E.g. "What is your age in years?", "What is your weight?", "What is your occupation?"
Closed questions (partially closed, limited choice, multiple choice, checklist):
• They provide respondents with a range of options with possible answers to choose from.
• The respondent only needs to choose the most appropriate answer.
50. CLOSED QUESTIONS
• Limited choice: choose one of two mutually exclusive answers (dichotomous question), e.g. "yes/no" answers.
• Multiple choice: choose from a number of response categories provided, from which only one should be selected.
• Checklist (or check-all questions): more than one answer can be chosen; choose all response categories that apply.
• Forced choice: the respondent is "forced" to give a yes/no answer for every category.
• Partially closed: provide a set of responses where the last alternative is "Other, please specify", followed by an appropriately sized answer box for respondents to answer.
51. BASED ON COGNITIVE CAPACITY OF THE RESPONDENTS
OPEN QUESTIONS
• Used only when the researcher does not know or cannot predict beforehand all the possible responses
• When the respondent’s answers are
considered to add value to the
survey objectives
• Respondents have to identify the
background and meaning of a
question by themselves
CLOSED QUESTIONS
• Reduces response burden of
respondent
• In addition to question itself the
answer categories or scales are
presented to the respondents
• In case of closed questions
respondents get a neutral frame
which is equal and obligatory for all
52. • Research in cognitive psychology has shown that the presence of response categories influences the answers: they help the respondents to clarify the question's meaning and to build a frame for an adequate response.
• The choice between open-ended and closed-ended questions depends on the level of knowledge the survey designers have of the survey subject.
• A good level of knowledge is needed to formulate response categories for closed-ended questions.
53. RESPONSE CATEGORIES
• It is important to ensure that responses are adequate, exhaustive and disjoint.
• Responses should be worded clearly and precisely.
• They should clarify the meaning of the question.
Factors considered in deciding response categories:
1. Number of response options
2. Order of response options
3. Special cases of response categories:
tables and matrices,
use of standard classification systems,
rating scales.
54. NUMBER OF RESPONSE OPTIONS
• It influences the quality of the data as both too few and too many categories
can cause errors.
• Too many can cause respondent fatigue and inattention, resulting in ill-
considered answers.
• Too few can cause difficulty in finding one which accurately describes the
situation.
• The standard advice has been to use five to nine categories (Schaeffer and
Presser, 2003).
55. ORDER OF RESPONSE OPTIONS
• The order of response options can introduce bias.
• Data quality is compromised when a question includes a large number of response options.
• The options presented first may be selected because they make an initial impact on respondents, or because respondents lose concentration and do not read the remaining options.
• The last options may be chosen because they are more easily recalled, particularly if respondents are given a long list of options.
• If some options are more socially desirable than others, these should go last to reduce bias.
• If possible, options should be presented in a meaningful order: whenever there is an inherent order in the list of options, this should be used.
56. USE OF STANDARD CLASSIFICATION SYSTEMS
• When available, standard definitions for concepts, variables and classifications should be applied; validating techniques should be used to look for possible difficulties in using classifications.
• When using an outdated classification, two errors occur:
• it may not be used by the respondent;
• some categories may not be interpreted correctly by the respondent.
• These can be avoided by adding the informal classification to the survey questionnaire.
57. RATING SCALES
• Type of ordered closed question; commonly used; it seeks to locate a
respondent’s opinion - the favourability of an item, the frequency of
behaviour etc. - on a rating scale with a limited number of points.
• Response scales can be characterised by:-
Type of labelling used (Verbal scales: Strongly Disagree, Disagree, Agree,
Strongly Agree or numeric/endpoint-labelled: 1, 2, 3, 4, 5)
Number of scale points (even or odd),
Dimensionality (bipolar or unipolar) and
Direction (ascending or descending).
Rank order scale
Pictorial scale
60. RANK ORDER SCALES.
• Opinion questions which ask the respondent to number the different options
in a question in order of importance.
• It should be avoided for two reasons:
• They are quite complicated to explain and respondents often have
difficulty completing them correctly.
• The output from ranking questions is quite difficult to deal with when you are
looking for a “winner” alone. ( Better option is to use a verbal scale)
61. "Don't know", "Don't remember", "Not applicable" categories
• Response categories which are directly related to the relevance issue:
• for example, when the researcher is aware that a substantial share of respondents have "no opinion", or that a particular question does not apply to a subset of the target population.
63. • CONTEXT AND SENSITIVITY EFFECTS: the questionnaire should be
supported by a presentation on survey objectives and a clear confidentiality
assurance
• MEMORY EFFECT: memory aids, retrieval cues and appropriate
reference periods should be used
• HYPOTHETICAL QUESTIONS: this type of question should be used with caution, particularly when concerning opinions and attitudes.
• RESPONSE CATEGORIES: there should be no overlapping among the
response categories and they should cover all possible answers; in
CATI/CAPI surveys, in general terms, the “Don’t know” option should be
included.
• ORDER OF RESPONSE OPTIONS: if possible, options should be
presented in a meaningful order
64. LANGUAGE
• Simple language for questions and instructions should be used
• Technical words should not be used
• Long sentences should be avoided
• Negative words should also be avoided
• Sentences should have clauses in chronological order
• Active voice is preferable to passive voice
• Ambiguous expressions, if necessary to be used, should be defined
• General, abstract and deductive questions should be avoided
• Validation techniques should be used to choose the most relevant wordings
for survey questions
65. DOUBLE-BARRELLED QUESTIONS
• A single question addressing more than one issue should not be used:
• it creates confusion and also problems in the analyses.
• Instead, a single piece of information should be asked about at a time.
Example of a double-barrelled question:
• "Do you believe COVID-19 precautionary training for pediatric dentists before examining children should be linked to the availability and interest of pediatric dentists?"
• Too much information
• Addresses more than one issue
• Creates confusion
66. LEADING QUESTIONS AND UNBALANCED
QUESTIONS:
• Questions may easily become leading or appealing or may contain
persuasive definition; therefore wording should be designed with
caution not to construct leading questions.
• Attitude questions should be balanced: the question should reflect both
sides of an opinion.
67. VISUAL DESIGN ELEMENTS
• Good “look and feel” for cognitive
functionality
• Immediate emotional responses
• Use as few matrices as possible. If
using matrices, reduce complexity,
build them consistently and regularly.
• Using natural mappings: Start the
questions in the upper left quadrant,
where interviewers and respondent
expect them.
68. • Standardize question patterns & establish consistency in the use of
symbols and graphical arrangements across the questionnaire.
• Highlight the answer space by providing a figure/ground composition.
• Use font size, brightness and colour to attract attention, if needed.
• Provide strong visual guides for changes in the pattern of questions and
skip instructions.
69. CROSS-NATIONAL HARMONISATION OF QUESTIONS
• TRANSLATION: of questions about attitudes and behaviour should
be integrated in a process of forward-translation, pretesting, revision,
and a final decision by bilingual and bi-cultural experts in
coordination with the whole team of translators and researchers.
• The translating person should act as agent between the culture
underlying the master copy and that underlying the target
respondents of their translation.
• In addition, it is very important to pretest the translation with
cognitive and quantitative methods.
70. • HARMONISATION: for the harmonisation of socio-demographic variables there are only a small number of truly harmonised measurement instruments.
The most important are:
• the WHO questionnaire for caries detection in children;
• the WHO questionnaire for caries detection in adults.
72. • Questionnaire testing is critical for identifying problems for both
respondents and interviewers with regard to, e.g. question wording and
content, order/context effects, and visual design.
• In the Recommended Practices, we distinguish two major categories of questionnaire testing methods:
1. Pre-field methods
2. Field methods
• This distinction is an analytical one, but it nevertheless reflects some principal and operational differences.
73. PRE-FIELD METHODS
• Pre-field methods are done in lab/hospital/clinic environment.
• Respondents are not interviewed at home, but in a testing environment
facilitating the use of specialised testing methods (e.g. a cognitive
laboratory).
• Only small parts of the questionnaire might be included.
• Additional questions can be added on how the respondents perceive the
questions.
74. • Pre-field methods are used to collect information on how respondents think when answering the questions.
• Often, the focus is on single questions rather than the whole questionnaire.
75. METHODS
• Informal tests: distribute a first draft of the questionnaire to colleagues and acquaintances to get feedback
• Pilot test: distribute questionnaire to 5 to 10 people to get feedback
• Expert groups: with a group of 5 to 6 people, the discussion is chaired by a
moderator; this is the only method which does not involve the respondents.
• Cognitive interviews: It is based on the assumption that verbal reports from
the respondents are a direct representation of their specific cognitive
processes elicited by the questions (Ericsson and Simon, 1993).
• Observational interviews
• Focus groups & in-depth interviews
76. COGNITIVE INTERVIEWS
• They are typically used after a questionnaire has been constructed based on focus groups and improved in expert groups.
• AIM: to obtain qualitative information on how questions are understood and answered by actual respondents.
• PROCEDURE: they consist of one-on-one in-depth interviews in which respondents describe their thoughts while answering the survey questions, or after having answered the questions.
• They are carried out in labs/hospitals/clinics or other suitable rooms and are recorded on video or tape for further analyses.
77. COGNITIVE INTERVIEW TECHNIQUES AND THEIR USES
• Think aloud or probe: evaluate the questionnaire generally; test several different question wordings against each other.
• Think aloud: investigate how respondents understand or interpret a question; investigate whether the way respondents retrieve information is triggered by a formulation; investigate whether respondents understand a question, or whether they interpret it the way you want them to.
• Concurrent think aloud: discover which parts of the questionnaire respondents read and how they move around from question to question.
• Paraphrasing: identify complex and/or confusing formulations in a question.
• Vignette classification / sorting: test whether certain item groups or classes you want to define are sensible.
78. OBSERVATIONAL INTERVIEWS
• Used to identify problems in the wording, question order, visual design, etc. of self-administered questionnaires.
• Also used to understand the time needed to complete the questionnaire.
• Respondent behaviour: to find out whether all the questions and instructions are read before answering.
• Observed cognitive processes: counting on fingers or writing calculations on the page are watched closely.
79. FIELD METHODS
• The field test is usually conducted during the data collection phase & includes
bigger sample sizes and allows quantitative analyses.
• Eg:- in the context of a pilot study, in conjunction with the actual data
collection, or in parallel to ongoing or recurring surveys.
• The focus is more on the complete questionnaire instead of individual
questions.
80. Methods
1. Traditional field test
2. Behaviour coding
3. Interviewer debriefings
4. Respondent debriefings
5. Follow-up interviews
6. Experiments
7. Three step test interview
81. TRADITIONAL FIELD TEST
• For interview surveys:- involves a small number of interviewers doing a
few interviews each followed by an interviewer debriefing session with the
researcher.
• For postal surveys:- involves posting the questionnaires to respondents and
reviewing the questionnaires that are returned.
82. BEHAVIOUR CODING
• Behaviour coding is a technique that consists of systematic
classification of interviewer/respondent interaction in order to
evaluate the quality of the questionnaire.
• This method can be conducted in the field as well as in the lab
84. INTERVIEWER DEBRIEFING
• Experienced, well-trained interviewers conduct the interviews and report back on the questionnaire problems outlined beforehand.
• Debriefing sessions are recorded for later analysis.
• Notes are taken during the debriefing, because evaluating the entire recording is time-consuming.
• When changes are made to the questionnaire as the result of testing, it is essential to conduct another test to evaluate the new questionnaire.
85. RESPONDENT DEBRIEFING
• It is usually done immediately after the interview.
• The debriefing mode need not match the survey mode: even if the survey was administered by interview, the respondent debriefing can be done using a self-administered questionnaire.
• Open-ended questions are employed to find out whether questions and concepts were well understood.
• It can be employed as a supplement to behaviour coding or item nonresponse analysis.
86. EXPERIMENTS
• The objectives of experiments are best determined by group discussion.
• The protocol of the experiment (hypotheses, study design, sample size, randomisation, type I and type II errors) should be determined beforehand.
• Increasing the number of factors evaluated increases the complexity and affects the feasibility of the experiment.
87. The Three-Step Test Interview (TSTI)
• It combines observational and interviewing techniques to identify how items are
interpreted, and whether problems occur during the completion of the questionnaire
• The TSTI encompasses three consecutive steps:
• Concurrent thinking aloud- To collect observational data
• Retrospective interview- To rectify the gaps in observational data
• Semi-structured interview- To elicit experience & opinion
• ‘‘The TSTI has been developed specifically as an instrument for discovering
problems that occur during the completion of self-administered questionnaires by
observing actual response behaviour’’
van der Veer K. The Three-Step Test-Interview (TSTI) for pre-testing self-completion questionnaires and for qualitative interviewing. Advances in International Psychology. 2013 Apr 22:112.
90. VALIDITY & RELIABILITY
Is the research investigation providing answers to the research questions for
which it was undertaken?
If so, is it providing these answers using appropriate methods and procedures?
91. VALIDITY
• Validity is concerned with the accuracy of our measurement.
• Validity is the extent to which a test measures what it claims to measure.
• It is vital for a test to be valid in order for the results to be accurately
applied and interpreted.
92. INTERNAL VALIDITY
• It evaluates the extent to which the study is free from flaws.
• It measures whether the questions can really explain the outcome we want to research.
• E.g. for NaF varnish used for caries prevention, we need to ask questions that help us identify factors that influence the use of the varnish.
• Here we look for a relationship between independent variables (e.g. dental caries) and the dependent variable (e.g. use of NaF varnish).
93. EXTERNAL VALIDITY
• This refers to the extent to which the results can be generalized to the
target population the survey sample is representing.
• As we all know, the way we ask questions will determine the answer we
get.
• In other words, the questions should represent how the target population talks and thinks about the issue under research, which often calls for the need to conduct exploratory qualitative research.
95. CONTENT VALIDITY
• The extent to which a questionnaire covers the content domain (the domain of aspects being measured).
• Whether the items and questions used in the survey cover the content addressed by the questionnaire.
• Content validation should be carried out while a test is being developed.
• Content validity should be checked by a panel, and thus it goes hand in hand with inter-examiner reliability (Kappa!).
• It is based on subjective logic; no definitive conclusion can be drawn.
96. Content Validity Ratio (CVR): Lawshe's method
• All the experts are asked to score every question according to the following pattern:
Necessary question = 1, Useful but not necessary question = 2, Not necessary = 3.
• Then the answers are computed according to the content validity ratio:
CVR = (nE − N/2) / (N/2)
where nE is the number of experts who selected the question as a necessary one, and N is the total number of experts.
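To make the computation concrete, here is a minimal Python sketch of Lawshe's CVR under the scoring pattern above; the panel ratings are hypothetical.

```python
# Minimal sketch of Lawshe's Content Validity Ratio (CVR).
# Ratings follow the scheme above: 1 = necessary,
# 2 = useful but not necessary, 3 = not necessary.

def content_validity_ratio(ratings):
    """CVR = (nE - N/2) / (N/2)."""
    n_total = len(ratings)                           # N: total experts
    n_essential = sum(1 for r in ratings if r == 1)  # nE: "necessary" votes
    return (n_essential - n_total / 2) / (n_total / 2)

# Hypothetical panel of 10 experts rating one question:
print(content_validity_ratio([1, 1, 1, 2, 1, 1, 3, 1, 1, 2]))  # 0.4
```

A CVR close to +1 means nearly all experts rate the item as necessary; items below the critical value for the panel size are candidates for removal.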
98. Content Validity Index (CVI)
• There are two types of CVIs:
1. the content validity of individual items (I-CVI);
2. the content validity of the overall scale (S-CVI).
• Researchers use I-CVI information to guide them in revising, deleting, or substituting items.
• I-CVIs tend to be reported only in methodological studies that focus on descriptions of the content validation process.
• The index most often reported in scale development studies is the S-CVI.
99. • CVI: the degree to which an instrument has an appropriate sample of items for the content being measured.
• I-CVI: content validity of individual items.
• S-CVI: content validity of the overall scale.
• S-CVI/UA: the proportion of items on a scale that achieve a relevance rating of 3 or 4 by all the experts (universal agreement).
• S-CVI/Ave: the average of the I-CVIs.
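A minimal sketch of the I-CVI and S-CVI/Ave computations, assuming each expert rates each item for relevance on a 4-point scale (ratings of 3 or 4 counting as relevant); the ratings below are hypothetical.

```python
# Minimal sketch of the item-level and scale-level content validity indices.

def i_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4 (relevant)."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def s_cvi_ave(item_ratings):
    """S-CVI/Ave: average of the I-CVIs across all items of the scale."""
    icvis = [i_cvi(r) for r in item_ratings]
    return sum(icvis) / len(icvis)

# Hypothetical ratings: 3 items, each judged by 5 experts.
items = [[4, 4, 3, 4, 2], [3, 4, 4, 4, 4], [2, 3, 4, 3, 3]]
print([round(i_cvi(r), 2) for r in items])  # [0.8, 1.0, 0.8]
print(round(s_cvi_ave(items), 2))           # 0.87
```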
103. • The Kappa statistic is a consensus index of inter-examiner agreement that adjusts for chance agreement.
• It is an important supplement to the CVI because Kappa provides information about the degree of agreement beyond chance.
• Evaluation criteria for Kappa values:
above 0.74: excellent;
between 0.60 and 0.74: good;
between 0.40 and 0.59: fair.
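As an illustration, a minimal Python sketch of Cohen's kappa for two raters making paired categorical judgements (e.g. relevant/not relevant per item); the ratings are hypothetical.

```python
# Minimal sketch of Cohen's kappa: agreement beyond chance for two raters.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over categories of p_a(category) * p_b(category).
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "yes", "no", "yes", "no", "no"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # 0.5 -> "fair" by the criteria above
```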
106. FACE VALIDITY
• Each question must have a logical link with the objective.
• Face validity can be established by one person.
• It is not validity in the technical sense, because it refers not to what is actually being measured but to what the instrument appears to measure.
• It has more to do with rapport and public relations than with actual validity.
107. CRITERION VALIDITY
• The extent to which a measuring instrument accurately predicts behaviour.
• The measure against which the instrument is compared is called the criterion.
• It is of two types:
1. predictive validity;
2. concurrent validity.
108. • When the focus of the test is on criterion validity, we draw an inference from test scores to performance.
• A high score on a valid test indicates that the test taker has met the performance criteria.
• Regression analysis can be applied to establish criterion validity.
• An independent variable could be used as the predictor variable, and a dependent variable as the criterion variable.
• The correlation coefficient between them is called the validity coefficient.
109. • The correlation coefficient tells the degree to which the instrument is valid based on the measured criterion.
• As an equation (Pearson's product-moment correlation): r = Σ(x − x̄)(y − ȳ) / √(Σ(x − x̄)² · Σ(y − ȳ)²)
• The symbol "r" denotes the correlation coefficient.
• A high positive "r" value shows a positive relationship between the instruments; a negative "r" value shows an inverse relationship.
111. As a rule of thumb, for absolute value of r:
0.00-0.19: Very weak
0.20-0.39: Weak
0.40-0.59: Moderate
0.60-0.79: Strong
0.80-1.00: Very strong.
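A minimal sketch of computing such a validity coefficient in Python, with hypothetical scores from an instrument and a criterion measure; interpretation follows the rule of thumb above.

```python
# Minimal sketch: Pearson correlation between instrument scores and a
# criterion measure (hypothetical data), used as a validity coefficient.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test_scores = [12, 15, 11, 18, 14, 16]
criterion = [25, 30, 24, 35, 29, 31]
print(round(pearson_r(test_scores, criterion), 2))  # ~0.99: very strong
```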
112. PREDICTIVE VALIDITY
• Applies when the test is used to predict future performance.
• E.g. caries activity tests: performance on these tests correlates with caries activity.
• E.g. measurement of sugar exposure for predicting caries development.
113. CONCURRENT VALIDITY
• Applies when the test is used to estimate present performance or a person's ability at the present time, not attempting to predict future outcomes.
• E.g. measurement of DMFT for caries experience.
• E.g. oral health awareness: long vs. short version of the awareness programme, with random sampling.
• Levels of agreement = correlation coefficient.
• Perfect agreement = coefficient of 1.
• Lack of agreement = coefficient of zero.
114. CONSTRUCT VALIDITY
• The most important type of validity.
• It assesses the extent to which a questionnaire accurately measures the theoretical construct it is designed to measure; it is also known as theoretical validity.
• The construct is called a latent variable.
• It is measured by correlating performance on the test with performance on a test for which construct validity has already been determined.
• E.g. a new index for measuring caries can be validated by comparing its values with a standard index (like DMFT).
115. • Another method is to show that scores on the new test differ across people with different levels of the outcome being measured.
• E.g. establishing the validity of a new caries index by applying it to different stages of dental caries and calculating its accuracy.
116. LATENT VARIABLES
• Most/all variables in the social world are not directly observable.
• This makes them ‘latent’ or hypothetical constructs.
• We measure latent variables with observable indicators, e.g.
questionnaire items.
• We can think of the variance of an observable indicator as being
partially caused by:
– The latent construct in question
– Other factors (error)
118. • Specifying formative versus reflective constructs is a critical preliminary
step prior to further statistical analysis.
• Specification follows these guidelines:
Formative
– Direction of causality is from measure to construct
– No reason to expect the measures are correlated
– Indicators are not interchangeable
Reflective
– Direction of causality is from construct to measure
– Measures expected to be correlated
– Indicators are interchangeable
120. FACTOR MODEL
• A factor model identifies the relationship between observed items and latent factors.
• For example, to study the causal relationships between maternal anxiety and child behaviour, one first has to define the constructs "maternal anxiety" and "child behaviour".
• To accomplish this step, one needs to develop items that measure the defined constructs.
121. CONSTRUCT, DIMENSION, SUBSCALE, FACTOR, COMPONENT
• A construct has dimensions.
• A scale has subscales.
• A factor structure has factors/components.
122. EXPLORATORY FACTOR ANALYSIS
• Exploratory factor analysis (EFA) is a statistical approach to determining the correlation among the variables in a dataset.
• This type of analysis provides a factor structure (a grouping of
variables based on strong correlations).
• EFA is good for detecting "misfit" variables
• An EFA should always be conducted for new datasets
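A minimal sketch of an EFA in Python on simulated questionnaire data; scikit-learn's FactorAnalysis is used here for illustration, though dedicated packages (e.g. factor_analyzer) provide more EFA diagnostics. The loading pattern and data are synthetic assumptions.

```python
# Minimal sketch of exploratory factor analysis on a respondent-by-item matrix.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))                     # two latent constructs
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = latent @ loadings.T + 0.4 * rng.normal(size=(200, 6))  # 6 items

efa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
# Item-by-factor loading matrix: items 1-3 group on one factor, 4-6 on the other.
print(np.round(efa.components_.T, 2))
```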
123. Communalities
• A communality is the extent to which an item correlates with all other
items.
• Higher communalities are better.
• If communalities for a particular variable are low (between 0.0-0.4), then
that variable will struggle to load significantly on any factor.
• Low values indicate candidates for removal after you examine the pattern
matrix.
124. Parallel analysis
• It is a method for determining the number of components or factors to
retain from factor analysis.
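A minimal sketch of Horn's parallel analysis, assuming a numeric respondent-by-item matrix like the `items` array in the EFA sketch above: factors are retained where the observed eigenvalues exceed those of random data of the same shape.

```python
# Minimal sketch of parallel analysis on a correlation matrix.
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    rng = np.random.default_rng(seed)
    real = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim_eigs = []
    for _ in range(n_sims):
        fake = rng.normal(size=data.shape)          # random data, same shape
        sim_eigs.append(np.sort(np.linalg.eigvalsh(
            np.corrcoef(fake, rowvar=False)))[::-1])
    threshold = np.percentile(sim_eigs, 95, axis=0)  # 95th-percentile eigenvalues
    return int(np.sum(real > threshold))             # eigenvalues above the threshold

# With the simulated `items` from the EFA sketch, this suggests retaining 2 factors.
```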
125. • Construct validity is made up of two important components:
1. Convergent validity: the items that are indicators of a specific construct should converge, i.e. share a high proportion of variance in common. Ways to estimate the relative amount of convergent validity among item measures include the average variance extracted (AVE), factor loadings and construct reliability (see below).
2. Discriminant validity: the extent to which a construct is truly distinct from other constructs.
126. Discriminant validity can be tested by examining the AVE for each
construct against squared correlations (shared variance) between the
construct and all other constructs in the model.
A construct will have adequate discriminant validity if the AVE
exceeds the squared correlation among the constructs (Fornell &
Larcker, 1981; Hair et al., 2006).
127. • Average Variance Extracted (AVE): the average squared factor loading.
• An AVE of 0.5 or higher is a good rule of thumb suggesting adequate convergence.
• An AVE of less than 0.5 indicates that, on average, more error remains in the items than variance explained by the latent factor structure imposed on the measure (Hair et al., 2006, p. 777).
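A minimal sketch of the AVE computation from standardized factor loadings; the loadings below are hypothetical values for a single construct.

```python
# Minimal sketch: Average Variance Extracted (AVE) for one construct.

def ave(loadings):
    """AVE = mean of the squared standardized factor loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

construct_loadings = [0.72, 0.81, 0.65, 0.78]  # hypothetical standardized loadings
print(round(ave(construct_loadings), 2))       # 0.55 -> adequate convergence (>= 0.5)
```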
128. FACTOR LOADINGS AND CONSTRUCT RELIABILITY
• Factor loadings: at a minimum, all factor loadings should be statistically significant.
• A good rule of thumb is that standardized loading estimates should be 0.5 or higher, and ideally 0.7 or higher.
• Construct reliability: construct reliability should be 0.7 or higher to indicate adequate convergence or internal consistency.
130. RELIABILITY
• The degree of consistency and accuracy with which an instrument measures the attribute it is designed to measure.
• It is the degree to which the questions elicit the same type of information each time we use them, under the same conditions.
• Reliability is also related to internal consistency, which refers to how different questions or statements measure the same characteristic.
131. TEST-RETEST METHOD
• Administration of a research instrument to a sample of subjects on two different occasions.
• The scores of the tool administered on the two occasions are compared using the correlation coefficient.
• The correlation coefficient reveals the magnitude and direction of the relationship between the scores generated by the research instrument on the two separate occasions.
• Interpretation of results:
+1.00: perfect reliability;
0.00: no reliability;
above 0.7: acceptable reliability.
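A minimal sketch of the test-retest computation, correlating hypothetical scores from two administrations of the same instrument.

```python
# Minimal sketch: test-retest reliability as the correlation between
# scores from two administrations (hypothetical data).
import numpy as np

occasion_1 = [20, 34, 27, 41, 30, 25, 38]
occasion_2 = [22, 33, 29, 40, 28, 26, 37]
r = np.corrcoef(occasion_1, occasion_2)[0, 1]
print(round(r, 2))  # close to +1.00 -> acceptable reliability (above 0.7)
```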
132. SPLIT-HALF METHOD
• Divide the items of a research instrument into two equal parts, grouping either odd-numbered and even-numbered questions or first-half and second-half item groups.
• Administer the two subparts of the tool simultaneously, score them independently, and compute the correlation coefficient between the two separate scores.
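A minimal sketch of the split-half computation with the Spearman-Brown correction (which adjusts for the halved test length); `responses` is a hypothetical respondent-by-item score matrix.

```python
# Minimal sketch: split-half reliability (odd vs. even items) with the
# Spearman-Brown correction for the halved test length.
import numpy as np

def split_half(responses):
    x = np.asarray(responses)
    odd = x[:, 0::2].sum(axis=1)        # totals over odd-numbered items
    even = x[:, 1::2].sum(axis=1)       # totals over even-numbered items
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)              # Spearman-Brown correction

responses = [[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 5, 5], [3, 3, 2, 3]]
print(round(split_half(responses), 2))  # ~0.94
```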
133. Internal-Consistency Reliability
• Overall degree of relatedness of all test items or raters.
• Also called reliability of components.
• Item-to-Item Reliability: The reliability of any single item on
average.
• Judge-to-Judge Reliability: The reliability of any single judge on
average.
134. CRONBACH'S ALPHA
• Used to evaluate the internal consistency of observed items; factor analysis is then applied to extract latent constructs from these consistent observed variables.
• A value above 0.90 means the questions are asking the same things.
• 0.7 to 0.9 is the acceptable range.
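A minimal sketch of Cronbach's alpha for a respondent-by-item matrix of hypothetical Likert responses.

```python
# Minimal sketch: Cronbach's alpha from item variances and total-score variance.
import numpy as np

def cronbach_alpha(responses):
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                           # number of items
    item_vars = x.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = [[4, 5, 3, 4], [2, 3, 3, 2], [5, 4, 4, 5],
             [3, 2, 2, 4], [4, 4, 5, 3]]
print(round(cronbach_alpha(responses), 2))   # 0.77 -> within the acceptable range
```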
136. POST-EVALUATION METHODS
• A set of analyses of the data, commonly performed, indirectly aimed at evaluating the quality of specific questions.
Analysis of item nonresponse rates: the questions that present the highest nonresponse rates are commonly investigated to search for the possible causes.
137. Analysis of response distributions: To compare different
questionnaire versions or for validation with external data, and usually
performed in conjunction with other methods, such as respondent
debriefing.
Analysis of the editing and imputation phase: The amount of edit
failures can suggest possible problems with given questions. Therefore
the evaluation of this phase should be exploited for preventing error in
next editions of the survey.
Re-interview studies: The results of reinterview studies, in general
estimating simple response variance or response bias, can be used to
explore hypotheses on problematic questions
138. CONCLUSION
• "A researcher also can falsely economize by using scales that are too brief in the hope of reducing the burden on respondents.
• Choosing a questionnaire that is too brief to be reliable is a bad idea no matter how much respondents appreciate its brevity.
• Respondents' completing 'convenient' questionnaires that cannot yield meaningful information is a poorer use of their time and effort than completing a somewhat longer version that produces valid data."
DeVellis (2003, pp. 12-13)
Questionnaires constitute the basis of every survey-based statistical measurement. They are by far the most important measurement instruments statisticians use to grasp the phenomena to be measured.
Errors due to an insufficient questionnaire can hardly be compensated at later stages of the data collection process.
Therefore, having systematic questionnaire design and testing procedures in place is vital for data quality, particularly for a minimisation of the measurement error.
A question is a sentence or word expressed to elicit information, which is interpreted (or misinterpreted), considered, edited, and mapped onto a set of response options.
A questionnaire refers to a device for securing answers to questions by using a form which the respondent fills in by himself.
Every survey needs a well-thought-out foundation: specifying the concepts to be measured. The specified concepts have to be translated, or in technical terms operationalised, into measurable variables in order to reduce errors.
Construct validity: the extent to which a measurement method accurately represents the intended construct.
This first step is conceptual rather than statistical; the concepts of concern must be defined and specified. On this foundation we place the four cornerstones of survey research. Only when these cornerstones are solid can high-quality data be collected for use in further processing and analysis.
The wording, structure and layout of all questionnaires must lead to valid and reliable results; the accuracy of the measurement is clearly the key requirement of the Code.
Design tasks: development of a conceptual framework; writing and sequencing the questions; making proper use of visual design elements; implementing electronic questionnaires technically.
Example: for a survey on the prevalence of dental caries in children, a question about the prevention of ECC won't be relevant when it is applied to dental caries in general.
User consultation can draw on informal communication with experts, expert group meetings, and key faculty who can provide concrete information.
Ask what relevant data are to be procured from the respondent; if the data obtained are not apt, they can be recorded again.
Operationalisation example: the occurrence of dental caries in young children.
• Dimension/indicator analysis: split the theoretical concept into subdomain concepts.
• Semantic analysis: check the linguistic usage and vocabulary of the topic, and the ambiguity of terms and concepts in questions.
• Facet design: identify the different perspectives or facets relevant for the topic (population, content facets of interest, response categories).
• Content sampling: a set of questions relevant for the topic is tested; the issue is discussed with a focus group.
• Symbolic interactionism: discuss with a group (focus group/in-depth interview).
• Concept mapping: map the concept using focus groups.
Focus group practicalities: the moderator may outline the purposes and basic rules of the discussion and reassure the participants on confidentiality; a session should preferably last 1-1½ hours; it can be tape-recorded, video-recorded or observed through a one-way mirror. Focus groups explore how people think and talk about a topic, and support creative thinking and checking of the concepts. Qualitative interviews for pre-testing purposes are rather rarely conducted.
The questionnaire scheme shows how to transfer reality into observable concepts. The basic structure of entity/relationship schemes consists of entities, the logical links between the entities (relationships), as well as the entities' attributes. The entities are the concepts of interest for the survey; the relationships are the logical links between entities; the entities' attributes are written in the scheme above the connecting lines. Through the questionnaire, reality is turned into figures.
Check the available definitions of variables. Factors taken into consideration: the number, the contents and the scope of the survey variables; the sensitivity of the questions; possible problems of information retrieval; the preferences of the target population.
1. Have written research objectives and do the preparatory work.
2. Obtain the most complete and accurate information possible.
3. Ensure that respondents fully understand the questions.
4. Ensure that respondents are not likely to refuse to answer.
5. The questionnaire should be organised and worded to encourage respondents.
6. Seek accurate, unbiased and complete information.
7. Make the questionnaire easy to answer.
8. Arrange it so that sound analysis and interpretation are possible.
9. Keep it as brief as possible to sustain interest.
The quality of the data collected from recall questions is influenced by the importance of the event for the respondent and by the length of time since the event took place.
it is during the process of reporting an answer to the interviewer that the respondents have to integrate the information retrieved in their memories into an appropriate format of communication. During this process the respondents may disclose only the impression they want to give of themselves. It is in this phase that problems connected with social desirability come out, a source of nonresponse and/or data distortion.
Sequencing notes: the questionnaire should follow a logical stream; questions should be arranged into logical groupings; the subject decides the grouping of the questions; the use of checks should be carefully evaluated against the increase of difficulties in controlling the interview; filters should be avoided for sensitive questions.
Intrusion errors refer to cases where information that is related to the theme of a certain memory, but was not actually part of the original episode, becomes associated with the event.
One can never be certain how valid any answer to a hypothetical question is, nor can one measure that probability. Hypothetical questions do not oblige anyone to anything, and it is much easier to agree with a statement than to go against it, especially if the statement is more socially acceptable to agree with. E.g. "What would you do if you had decay in your teeth?" Therefore, hypothetical questions should be avoided, or used only when referring to a hypothetical occurrence of a situation the respondent is familiar with.
The risks of closed-ended questions are forgetting some important categories and formulating overlapping response categories when a single response is asked for. This is one of the reasons why cooperation between survey methodologists and subject-matter experts in questionnaire development is extremely important.
Exhaustive: including or considering all elements or aspects; fully comprehensive.
The order of response options has a greater effect on data quality when a question includes a large number of response options
Ideally, a good response scale should: be easy to interpret by respondents, have a discrimination that fits the respondents’ perceptions, and cause minimal response bias.
In practice, determining a scale with a “certified” minimal respondent bias is a difficult task because each of those above-mentioned varieties produces specific effects on the responses.
Therefore it is important to state that scales – even neutral looking ones, like the numerical scales – are not at all “neutral”.
In general, from the scale respondents get "information" about: the distribution of "real" situations or behaviours, and their own position in this distribution.
In particular, the problem is with ranking only some of the items. When a respondent ranks every item, it is easy to assign a value to each item that can be assembled in some way, but when a respondent only ranks some items it is very difficult to decide how to value the items left blank. The interpretation of those items is unclear – are they equally unimportant, are they not applicable, and are they equivalent to those ranked last by other respondents or not?
The recommended way of measuring items that require a ranking is to ask respondents to rate each item individually, using a verbal scale rather than a numeric one similar to other rating questions.
For factual questions, these categories are kinds of item nonresponse. For opinion questions “Don’t know” or “Don’t remember” might be used by the respondent as a neutral answer, if this is not present among the responses’ options. The decision about whether to include or exclude these categories depends to a large extent on the subject matter. Excluding these options may not be a good idea, as respondents may be forced to give an answer when, for example, they really do not know what their attitude is to a particular subject, or they do not know the answer to a factual question that has been asked. When respondents are forced to develop an attitude on the spot, e.g. through a forced-choice rating scale, this attitude might be highly unreliable. This approach may be reasonable only when the researcher has good reason to believe that virtually all subjects have an opinion. In general terms, if the question is crucial for the survey these options should not be allowed. If the question is sensitive and not essential, the possibility of nonresponse might be considered
DOUBLE NEGATIVE: DO YOU AGREE OR DISAGREE WITH THE FOLLOWING
Questions with too much information should be avoided
When complicated questions are asked, containing several clauses and determinations, the respondents may give answers to questions which they have simplified.
Cognitive interviewing is a collection of different techniques for studying the comprehension stage, thought and answering processes of respondents during the interview using expanded or intensive interviewing approaches (Biemer and Lyberg, 2003). It is based on the assumption that verbal reports from the respondents are a direct representation of their specific cognitive processes elicited by the questions (Ericsson and Simon, 1993).
For cross-national comparability in European or international surveys, two further tasks are required.
The translations of the questions or questionnaires have to be functionally equivalent, i.e. the respondents in different countries must have the same understanding of the questions.
The demographic as well as socio-economic variables have to be harmonised through commonly accepted instruments.
Problems with question wording include, for example, confusion with the overall meaning of the question as well as misinterpretation of individual terms or concepts.
Problems with skip instructions may result in missing data and frustration of the interviewers and/or respondents.
Poor visual design can easily lead to confusion and inappropriate measurements also in interviewer-administered surveys.
In this context, the "laboratory" can be an office, clinic or hospital.
Follow-up probes are often used to elicit the reasons and motives for the respondent's behaviour.
This means that the interview is carried out in a way very similar to the subsequent fieldwork (regarding setting, lengths, choice and order of questions, etc.), and the majority of the conditions mirror the real survey situations.
Tape-recording is useful for reviewing unclear notes.
The entire process should be carefully documented throughout the testing.
Question designers and researchers must have a clear idea of potential problems.
Debriefing should be done immediately after the interview.
The survey mode does not have to be the same as the one used by respondent debriefing (i.e. even if survey is interviewer-administered, the mode of respondent debriefing can be self-administered).
Open-ended questions employing standardised probes can provide valuable information to indicate whether questions and concepts are well understood.
Respondent debriefing provides a useful supplement to other quantitative measures of quality, such as behaviour coding or item nonresponse analysis. Therefore, respondent debriefing should be conducted after behaviour coding or any test that highlighted possible problems.
The objectives for an experiment are best determined by a group discussion.
The protocol of the experiment describing its design, the study and the confounding variables, the implementation settings, and the statistical aspects should be prepared before its realisation.
The hypotheses should be expressed in measurable terms.
The design and the size of the sample should be decided in order to ensure the necessary power and representativeness of the experiment. Sometimes, practical issues can be considered more relevant than the statistical ones.
The type I and type II errors associated to the hypothesis testing should be evaluated in relation to the specific application.
Randomisation of assignment should be adopted.
Factorial designs serve when many elements need to be evaluated.
However, increasing the number of factors increases the complexity and affects the feasibility of the experiment.
The TSTI is a method for pre-testing self-completed questionnaires. Steps 1 and 2 are distinctive for the TSTI; step 3 is similar to cognitive and in-depth methods. The interpretation of the items and the thought process experienced by the respondent when filling out the questionnaire are put in a broader (social-biographical) context.
Validity and reliability are not always aligned. Reliability is needed, but not sufficient to establish validity.
We can get high reliability and low validity. This would happen when we ask the wrong questions over and over again, consistently yielding bad information.
Also, if the results show large variability, they may be valid, but not reliable.
In short, don’t assume reliability and validity, unless you design surveys that really measure what you want and do it consistently.
Although often discussed in the context of sample representativeness, we know that survey design also affects validity.
In other words, it depends on asking questions that measure what we want to measure.
People might have negative reactions to an intelligence test that did not appear to them to be measuring their intelligence
Internal validity evaluates whether any differences in measurement that occurred are due to the independent variable and nothing else.
In our example, assume we want to estimate the share of preference of our product in the hairstyling product category. To achieve this, we need to include other brands that represent this category, otherwise, we can’t extrapolate the results to the category as a whole.
Content validity uses logical reasoning and hence is easy to apply.
E.g. for a Content Evaluation Panel composed of fifteen members, a minimum CVR of 0.49 is required to satisfy the five-percent level.
A Kappa value of 0.8 or above is the criterion considered here.
Face validity: the extent to which a measuring instrument appears valid on its surface.
In general, an EFA prepares the variables to be used for cleaner structural equation modeling.
In the communalities table, identify low values in the "Extraction" column.
Essentially, parallel analysis works by creating a random dataset with the same numbers of observations and variables as the original data.
It is concerned with the consistency of measurement.
This is particularly important in satisfaction and brand tracking studies because changes in question wording and structure are likely to elicit different responses.
More detailed analysis can take into consideration the nonresponse in combinations of variables, and study if some nonresponse patterns are associated to certain respondents’ profiles.