Design: the 3 panels
1. Object selection: list of object types you can add to the test
• Questions (8 different types of question)
• Design items
• Questions from existing tests or library sections
2. Test tree: the current hierarchical list of objects already in the test
3. Object designer: the design tool for the object currently selected in the tree
Design: Object selection
1. New object selection panel (left)
A single click on an object inserts an instance of it into the Test tree, immediately after the currently selected item, or at the end of the tree if nothing is selected.
The new object is inserted just after the currently selected item in the tree.
Design: Test tree
3. Test tree panel (right)
Allows you to remove items from the test and to reorder the items in the tree.
Design: Object designer
2. Object designer panel (middle)
Tool to edit the properties and functionality of the object. Each question type has its own designer. Note the three tabs:
• Design view: the designer
• Student view: a raw representation of what the student will see
• Answer view: the correct answer
Design: Object designer
Design view: the designer
Student view: a raw representation of what the student will see. Notice, for example, the numbering and disposition.
Answer view: the correct answers
Question: Multiple Choice (MC)
Points: maximum score obtained when the answer(s) are correct
Difficulty: arbitrary question difficulty indicator
Numbering: numbering type displayed before the option buttons
Disposition: vertical, horizontal, or dropdown
Randomize: if checked, the proposed options are shuffled
Hints: hint to display for the question (see the "Show Hints" option in the "Presentation attributes")
Feedbacks: displayed after submission (see the "Show Feedbacks" option in the "Presentation attributes")
Question text: don't forget it!
Image: to illustrate the question
Question: Multiple Choice (MC) (continued)
Add as many options as needed and select the correct one.
Question: Multi-Select (MS)
Scoring: two ways of scoring the answers
1. All or nothing: the points are awarded only if all and only the correct options are checked by the student; otherwise the student gets 0 points.
2. Weighted: each correctly checked option receives partial credit; an equal measure of partial credit is deducted for each incorrectly checked option.
(partial credit = Points / #C, where #C is the total number of expected correct answers)
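The two Multi-Select scoring modes above can be sketched as follows; the function and parameter names are hypothetical, not part of Test Maker's actual API:

```python
def score_multi_select(points, correct, checked, mode="all_or_nothing"):
    """Score a Multi-Select answer (illustrative sketch only).

    points  -- maximum points for the question
    correct -- set of correct option ids
    checked -- set of option ids the student checked
    """
    if mode == "all_or_nothing":
        # Full points only when exactly the correct options are checked.
        return points if checked == correct else 0.0
    # Weighted: partial credit = Points / #C per correctly checked option,
    # with the same amount deducted per incorrectly checked option.
    partial = points / len(correct)
    hits = len(checked & correct)
    misses = len(checked - correct)
    # Clamping at 0 is an assumption; the slides do not say whether the
    # score can go negative.
    return max(0.0, partial * (hits - misses))

print(score_multi_select(10, {"a", "b"}, {"a", "b"}))              # 10
print(score_multi_select(10, {"a", "b"}, {"a", "c"}, "weighted"))  # 0.0
```

Note how, in the weighted example, one correct and one incorrect check cancel each other out.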
Question: True/False (TF)
Enter the text displayed for both options and select the correct one.
Question: Ordering (ORD)
Scoring: two ways of scoring the answers
1. All or nothing: the points are awarded only if all the items are correctly ordered; otherwise the student receives 0 points.
2. Weighted: for each item in the correct position, the student receives partial credit.
(partial credit of item i = Points * Wi / Wtotal, where Wi is the weight of item i and Wtotal is the sum of the weights of all items)
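As a sketch of the weighted formula above (names are illustrative, not the tool's API), each item sitting in its correct position contributes Points * Wi / Wtotal:

```python
def score_ordering(points, weights, correct_order, student_order):
    """Weighted Ordering score (illustrative sketch).

    weights       -- relative weight per item id
    correct_order -- item ids in the correct order
    student_order -- item ids in the order the student chose
    """
    w_total = sum(weights.values())
    score = 0.0
    for pos, item in enumerate(student_order):
        if correct_order[pos] == item:
            # An item in its correct position earns its share of the points.
            score += points * weights[item] / w_total
    return score

# Three items with the default weight 1; the student swaps "y" and "z".
print(score_ordering(6, {"x": 1, "y": 1, "z": 1},
                     ["x", "y", "z"], ["x", "z", "y"]))  # 2.0
```

The same Wi / Wtotal formula also applies to the weighted Matching (MAT) and Fill In the Blanks (FIB) scoring described later.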
Question: Ordering (ORD) (continued)
Add as many items as needed.
Specify the correct position of each item.
Optionally, set a relative weight for the item (default = 1).
Test interface for the Ordering question
Question: Matching (MAT)
Scoring: two ways of scoring the answers
1. All or nothing: the points are awarded only if all the items are correctly matched; otherwise the student receives 0 points.
2. Weighted: for each correct match, the student receives partial credit.
(partial credit of pair i = Points * Wi / Wtotal, where Wi is the weight of pair i and Wtotal is the sum of the weights of all pairs)
Question: Matching (MAT) (continued)
Add as many matching pairs as needed.
Optionally, assign a relative weight to each matching pair (default = 1).
To increase difficulty, you can add so-called "distractors", which are non-associated items (all distractors must be added to the same list).
Test interface for the Matching question
Question: Free Text (FT)
A Free Text question requires manual grading.
Instructor notes: information displayed to the instructor during the manual grading process.
Question: Short Answers (SA)
# required answers: number of expected answers
Scoring: two ways of scoring the answers
1. All or nothing: the points are awarded only if all the expected answers are entered; otherwise the student receives 0 points.
2. Weighted: for each correct answer entered, the student receives partial credit.
(partial credit = Points / #C, where #C is the total number of expected answers)
Question: Short Answers (SA) (continued)
Add as many correct answers as needed (at least # required answers).
Indicate whether the answer is case sensitive (default: no).
Optionally, enter alternate (variant) answers with their relative weights.
Question: Fill In the Blanks (FIB)
Scoring: two ways of scoring the answers
1. All or nothing: the points are awarded only if all the answers are entered correctly; otherwise the student receives 0 points.
2. Weighted: for each correct answer entered, the student receives partial credit.
(partial credit of answer i = Points * Wi / Wtotal, where Wi is the weight of answer i and Wtotal is the sum of the weights of all answers)
Question: Fill In the Blanks (FIB) (continued)
Add as many text blocks and blank blocks as needed.
Set a relative weight for each answer (default = 1).
Indicate whether the answer is case sensitive (default: no).
Optionally, enter alternate (variant) answers with their relative weights.
Test interface for the Fill In the Blanks question
Design item: Text block
Allows you to add additional text and images.
Test interface
Preview: see what the student will get
From this screen, start a test attempt in preview mode.
You may ask to track this "Preview" attempt in order to see it in the tracking and statistics reports.
Note: as soon as attempts are tracked, modifications to the questions in the test are restricted to minor changes. However, "Preview" attempts can be deleted (see the following point) so that the test can be updated again.
Tracked "Preview" attempts can be deleted.
Settings: General
Name: used to easily identify the test inside Test Manager; also used during publication.
Description: additional description for internal use.
Starting page: design of the test's starting page.
Settings: Presentation
Questions per page: best practice is 1 per page (0 means all the questions on the same page).
Order of questions: Preset = the order set at design time.
Show submit button on each page: or only when the last page has been reached.
Show Hints: (see next slide)
Show Feedbacks: feedback will be displayed after submission if "Show corrections" is checked (see the Submission tab).
Settings: Grading
Grading is the process of giving an overall grade to the test after its submission. It is based on the scores obtained for each question and uses one of the 4 available grading schemes. Grading can be automatic or manual.
Grade at submit: the test is automatically graded after submission. If this is not checked, grading must be done manually (for example, if some questions require manual scoring, the test is best graded manually).
Grading scheme: select the grading scheme to apply and the grade/value/percentage to pass, depending on the scheme.
Settings: Grading (continued)
Numeric: numeric scale (min and max values); value to pass. The test's global score is transformed according to the scale to determine the pass status.
Pass/Fail: the names of the two grades can be changed; percentage to pass.
Letters A-E: the limits can be changed; grade to pass.
Custom: like the previous scheme, but with custom-defined grades (maximum ten); grade to pass.
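As an illustration of the Numeric scheme described above (a hedged sketch; the function name and the linear rescaling are assumptions based on the slide text):

```python
def numeric_grade(score, max_score, scale_min, scale_max, pass_value):
    """Rescale a global test score onto a min..max grade scale and
    compare it to the value to pass (illustrative sketch)."""
    grade = scale_min + score * (scale_max - scale_min) / max_score
    return grade, grade >= pass_value

# 14 of 20 points, mapped to a 0..10 scale with 5 required to pass:
grade, passed = numeric_grade(14, 20, 0, 10, 5)
print(grade, passed)  # 7.0 True
```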
Settings: Attempts
Attempts allowed: maximum number of attempts allowed per student; default = 1.
(Note: CC publications allow only one attempt per Test event. LP publications allow multiple attempts per Training Plan, but only with the overall grading method "Best attempt".)
Delay before new attempt: in day(s) or hour(s).
Overall grading: if more than 1 attempt is allowed, how to compute the overall grade.
Warning: not all methods are necessarily available for every publication type, since they could lower an already attributed grade.
Settings: Restrictions
Backward allowed: whether backward navigation is allowed in the test.
Maximum Time: maximum time (in minutes) allowed to complete the test. The student is warned when the maximum time has elapsed.
Bonus Time: when the maximum time has elapsed, the bonus time allows the student to finish and submit the test before the definitive end.
Late submission: defines how a late submission (after Maximum + Bonus time) is handled:
- Allow normal submission: the student may submit the test normally; it is only flagged as "late".
- Auto submission: the test is automatically submitted, terminated, and flagged as "late".
- Enforce limit: the test is automatically terminated, not submitted.
Start and End date & time: not applicable in CC publications.
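The three late-submission modes can be sketched like this (a hypothetical model, not Test Maker's internals; the mode and flag names are assumptions):

```python
def handle_late_submission(mode, attempt):
    """Apply one of the three late-submission modes to an attempt
    that is past Maximum Time + Bonus Time (illustrative sketch)."""
    if mode == "allow_normal":
        attempt["submitted"] = True   # submitted normally...
        attempt["late"] = True        # ...but flagged as late
    elif mode == "auto_submit":
        attempt["submitted"] = True   # auto-submitted,
        attempt["terminated"] = True  # terminated,
        attempt["late"] = True        # and flagged as late
    elif mode == "enforce_limit":
        attempt["terminated"] = True  # terminated without submission
        attempt["submitted"] = False
    return attempt

print(handle_late_submission("enforce_limit", {}))
```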
Publication
To allow students to take a test, it must be published. A Publication defines settings for the particular environment where the test will run. These attributes are inherited from the Test settings and can be overridden.
Moreover, during publication, some actions are performed automatically to prepare the specific environment (for example, Test Categories are created or updated in the CC environment, or Catalog entries are created in the LP environment).
Publication: CC environment
The publication triggered the creation of a Test Category in the CC environment.
A test event can now be created with this category.
Publication: LP environment
Only one LP publication can be created for a test.
Specify the language for which the corresponding LP courses must be generated and set their names.
Once it exists, no new LP publication can be created for the test.
Score answers & Grade attempt
Grade: screen to select test attempts; it displays the most important information about grading.
Jump from here to modify the score of questions in an attempt.
Score answers & Grade attempt
Typically, a Free Text question requires manual scoring. The "Notes for instructor" appear here to help score the answer.
Updating the score of an answer automatically (re)grades the attempt.
Score answers & Grade attempt
For a publication with the "Grade at submit" attribute not checked, the "Grade attempt" icon appears for an attempt after submission. The attempt can be graded by updating the score of any question in it (see the previous slide) or, without modifying the scores, by clicking this icon.
Score answers & Grade attempt
For a CC publication: the "blue" line displays the grade and status stored in the CC environment.
For a publication with the "Grade at submit" attribute checked, the grade and status are automatically sent to the CC environment when the test is submitted.
If an attempt is (re)graded manually, the new/updated grade and status must/can be sent to the CC environment here.
Score answers & Grade attempt
For an LP publication: the "blue" line displays the score and status stored in the LP environment.
For a publication with the "Grade at submit" attribute checked, the grade and status are automatically sent to the LP environment when the test is submitted.
If an attempt is (re)graded manually, the new/updated score and status must/can be sent to the LP environment here.
Delete & Copy test
A test can be deleted, together with all its questions, if no results have been collected for it (including tracked "Preview" attempts).
A test can be copied, with all its questions, to create a new one. All questions are duplicated and can be maintained independently.
Questions Library
Test Maker allows you to create a repository (library) of questions, organized in sections.
A library section can be used as a random section in a test: a fixed number of questions will be randomly picked from the library section at runtime.
A section already used as a random section in a test can no longer be deleted.
Questions from the library can also be copied individually into a test.
Click here to add a new section to the library. Give it a name and populate it with questions.
Questions Library (continued)
Questions of a library section that have already been used in attempts can no longer be deleted.
Questions of a library section can be "deactivated": they will no longer be used at runtime. This is useful when a question has already been used in attempts but is now obsolete.
Random section
Click here to add a random section to a test.
• Give it a name
• Select the library section to attach
• Set the number of questions to pick randomly at runtime
• Set the points attributed to each question
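The runtime behaviour described above — pick a fixed number of questions at random from the attached library section, each worth the configured points — can be sketched as follows (hypothetical names, not the actual implementation):

```python
import random

def draw_random_section(library_questions, n_questions, points_each, rng=random):
    """Sample a fixed number of questions from a library section,
    each attributed the configured points (illustrative sketch)."""
    picked = rng.sample(library_questions, n_questions)
    return [(question, points_each) for question in picked]

# A library section of five questions; draw three, 2 points each.
section = ["Q1", "Q2", "Q3", "Q4", "Q5"]
for question, points in draw_random_section(section, 3, 2):
    print(question, points)
```

`random.sample` draws without replacement, so each attempt gets three distinct questions.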
Update of a test: restrictions
As soon as results have been collected for a test, some updates are no longer possible.
Update of a test: restrictions (continued)
Allowed: any update that does not affect the structure and scoring of the questions.
• Updating the question texts
• Updating the option texts in MC, MS, and T/F questions (but no option can be added or removed)
• Updating the item texts in ORD questions (but no item can be added or removed)
• Updating the choice and match texts in MAT questions (but no item or distractor can be added or removed)
• Updating the answer texts in SA and FIB questions (but no answer can be added or removed)
• Display aspects (disposition, numbering, randomize)
• …
Not allowed: any update that affects the structure and scoring of the questions.
• Adding or removing a question
• Adding or removing options, possible answers, items, …, depending on the question type
• Changing the type of a question
• Changing the points attributed to a question
• Changing the scoring mode ("Weighted" or "All or nothing") of a question, where applicable
• Changing the correct options, the correct order, the correct matching, …, depending on the question type
• …
Update of a test: restrictions (continued)
Best practice
1. Create a pilot version.
2. Test the pilot version with tracked or untracked "Preview" attempts inside Test Maker.
3. Tracked "Preview" attempts can always be deleted; the test is then fully updatable again.
4. Run "Preview" attempts outside Test Maker by mailing the test to a pilot target group.
5. Delete those tracked "Preview" attempts too; the test is then fully updatable again.
6. Repeat the previous steps until everything looks good.
7. Publish the test (V1).
8. Even after real results have been collected in the LP or CC environment, restricted updates are still possible, as shown previously.
9. If major structural changes become necessary, create a new version of the test, possibly starting from a copy of the obsolete one.