1. Automated Generation of Assessment Test Items from Text
Some Quality Aspects
Andrey Kurtasov
akurtasov@gmail.com
Vologda State University
Russia
AIST 2014 — Yekaterinburg, Russia
Andrey Kurtasov AIST 2014 Automated Generation of Assessment Test Items from Text 1 / 3
2. Significance of the Research Project
• Today's teaching process makes wide use of text resources that were
not originally intended as teaching aids and therefore contain no
test questions or exercises
• Developing assessment test items is a complex task that requires
a significant amount of teachers' time
• Automatic generation of test items is a promising NLP application
that has not been studied sufficiently in relation to the Russian
language
Previously¹ we developed a simple experimental system for generating
fill-in-the-blank test items. In this poster, we review the key factors that
affect the quality of generated test items and define the main directions
for future work.
Generation is performed in three steps. Let us review them from the
quality perspective in order to plan future work.
¹ Kurtasov, A. A System for Generating Cloze Test Items from Russian-Language Text.
In Proceedings of the Student Research Workshop associated with RANLP 2013.
3. What Affects the Quality of Test Items?
1. Text preprocessing (document triage & text segmentation)
These steps perform well with available tools. Manual segmentation features are
to be added to the UI for more flexibility.
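As a minimal sketch of the segmentation step (a naive rule-based splitter, not the actual tools the system uses):

```python
import re

def segment(text):
    """Naive sentence segmentation: split on sentence-final punctuation
    followed by whitespace and a capital letter (Latin or Cyrillic).
    Illustrative only; real tools also handle abbreviations, quotes, etc."""
    parts = re.split(r'(?<=[.!?])\s+(?=[A-ZА-Я])', text.strip())
    return [p for p in parts if p]

print(segment("Filtration separates solids. It uses a porous medium."))
```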
2. Segment Filtering
This step is similar to extractive text summarization. The main features for
scoring sentences by importance are sentence length, use of cue phrases,
a sentence's position in a paragraph, occurrence of frequent terms, and
occurrence of title words. Our plan: leverage an available summarization toolkit
(e.g., MEAD) and evaluate it using metrics such as precision and recall.
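The scoring features above could be combined as in the toy sketch below. The cue-phrase list and feature weights are hypothetical, not those of MEAD or the described system:

```python
from collections import Counter

# Hypothetical cue-phrase list; a real system would curate one.
CUE_PHRASES = {"in conclusion", "importantly", "note that"}

def score_sentence(sentence, position, doc_term_freq, title_words):
    """Toy linear combination of the importance features named above:
    length, cue phrases, paragraph position, frequent terms, title words.
    The weights are illustrative, not tuned."""
    words = sentence.lower().split()
    length = 0.1 * (min(len(words), 20) / 20)
    cue = 1.0 if any(c in sentence.lower() for c in CUE_PHRASES) else 0.0
    pos = 1.0 / (1 + position)  # paragraph-initial sentences score higher
    freq = 0.5 * sum(doc_term_freq.get(w, 0) for w in words) / max(len(words), 1)
    title = 0.5 * len(set(words) & title_words) / max(len(title_words), 1)
    return length + cue + pos + freq + title

doc = ["Filtration separates solids from fluids.", "The weather was fine."]
tf = Counter(w for s in doc for w in s.lower().split())
scores = [score_sentence(s, i, tf, {"filtration"}) for i, s in enumerate(doc)]
```

Here the first sentence outranks the second because it is paragraph-initial and contains a title word; evaluating such rankings against human-selected sentences yields the precision/recall figures mentioned above.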
3. Test Item Generation
Fill-in-the-blank questions are created by taking a sentence and replacing some
of its words with blanks. Problem: determining which words should be blanked
out to produce a useful question. Our plan: apply supervised machine learning
to classify question candidates. Features for classification include a word's
position in the sentence, its syntactic role, and whether or not the word is
a domain-specific term.
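As a stand-in for the planned classifier, the sketch below blanks out the first word found in a (hypothetical) domain-term list, which is the simplest version of the domain-term feature mentioned above:

```python
def make_cloze(sentence, domain_terms):
    """Blank out the first word matching a known domain term.
    A heuristic stand-in for the supervised classifier described above."""
    words = sentence.split()
    for i, w in enumerate(words):
        key = w.strip(".,;:!?")
        if key.lower() in domain_terms:
            words[i] = w.replace(key, "_____")
            return " ".join(words), key  # (question stem, answer key)
    return None  # no blankable candidate in this sentence

item = make_cloze("Filtration separates solids from fluids.", {"filtration"})
```

A learned classifier would replace the term-list lookup with a per-word decision based on the position, syntactic-role, and domain-term features.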
Other problems for future research: anaphora resolution, interrogative sentence
generation, and distractor generation for multiple-choice tests.