6. Related Work
Course Evaluation
● Braga, M. et al. 2014
● Cohen, P. A. 1981
● Greenwald, A. G. and Gillmore, M. G. 1997
● Marsh, H. W. and Roche, L. A. 1997
● Stark, P. B. and Freishtat, R. 2014
Perceived Learning & Education
● Eom, S. B., Wen, H. J., and Ashill, N. 2006
● Richardson, J. C. and Swan, K. 2003
● Swan, K. 2001
7. Demographics Questions
For MOOCs: Country, Gender, Age, Years of training, Reason for taking the course.
For IEOR 170: Major, Year, Number of other related courses taken, Interest in the subject, Reason for taking the course.
8. Quantitative Analysis Topics (QAT)
1. How would you rate the course so far in terms of technical difficulty?
2. How would you rate the course so far in terms of usefulness to your career?
3. How would you rate your enthusiasm so far for this course?
4. How would you rate your performance so far in this course?
5. How would you rate the effectiveness of course assignments so far in helping you develop your skills?
9. NLP Limitations in M-CAFE
Selecting a set of insightful, novel, and relevant ideas is hard.
Suggestions are often short and subject-specific.
10. Related Work
Collaborative Filtering
● Goldberg, K. et al. 2001
● Konstan, J. A. et al. 1997
● Pearson, K. 1901
● Sarwar, B. et al. 2001
● Yang, X. et al. 2014
Natural Language Processing (NLP)
● Adamopoulos, P. 2013
● Pang, B. and Lee, L. 2008
● Reich, J. et al. 2014
15. Participation
IEOR 170: 16 weeks, Jan–May 2015
● Student Count: 96
● QAT Rating Count: 424
● Idea Count: 270
● CF Rating Count: 2483
16. Quantitative Analysis Topics
Graph visualization of QAT rating changes over time.
Figure 2: Course difficulty rating over the first 10 weeks for IEOR 170.
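As a rough sketch of this kind of plot, a minimal matplotlib example; the weekly values below are hypothetical placeholders, not the actual IEOR 170 ratings behind Figure 2:

import matplotlib.pyplot as plt

# Hypothetical weekly mean difficulty ratings (placeholder values only;
# the real data comes from M-CAFE's QAT responses).
weeks = list(range(1, 11))
mean_difficulty = [4.2, 4.5, 4.1, 4.6, 4.8, 4.4, 4.7, 4.9, 4.6, 4.5]

plt.plot(weeks, mean_difficulty, marker="o")
plt.xlabel("Week")
plt.ylabel("Mean difficulty rating")
plt.title("Course difficulty, first 10 weeks (IEOR 170)")
plt.show()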
19. Wilson Score
Given a set of ratings for each idea, how should we rank the ideas?
We take the mean grade g and compute a 95% confidence interval for g using the standard error: g ± 1.96·SE(g). We then rank the ideas by the lower bound, g - 1.96·SE(g).
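A minimal Python sketch of this lower-bound ranking (illustrative only; the names and the handling of single-rating ideas are our own assumptions, not necessarily M-CAFE's implementation):

import numpy as np

def rank_by_lower_bound(ratings_per_idea):
    # ratings_per_idea: dict mapping idea id -> list of numeric ratings.
    # Returns idea ids sorted best-first by g - 1.96*SE(g).
    scores = {}
    for idea, ratings in ratings_per_idea.items():
        r = np.asarray(ratings, dtype=float)
        g = r.mean()
        # With a single rating the standard error is undefined;
        # treat it as infinite so the idea ranks last (our choice).
        se = r.std(ddof=1) / np.sqrt(len(r)) if len(r) > 1 else float("inf")
        scores[idea] = g - 1.96 * se
    return sorted(scores, key=scores.get, reverse=True)

# Example: idea "b" has a higher mean but fewer ratings, so it
# ranks below "a" once uncertainty is taken into account.
print(rank_by_lower_bound({"a": [4, 5, 4, 5, 4], "b": [5, 4]}))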
20. Since each participant rates only k ≪ N ideas, how do we choose which ideas to present?
Uncertainty Sampling!
For each idea i, the probability of exposure is P(i) ∝ SE(i), where SE(i) is the standard error of idea i's ratings.
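A minimal sketch of this exposure rule, assuming we draw k distinct ideas per participant with probability proportional to SE (function and argument names are hypothetical):

import numpy as np

def choose_ideas_to_show(standard_errors, k, rng=None):
    # standard_errors: dict mapping idea id -> SE(i).
    # Draws k distinct ideas with P(i) proportional to SE(i).
    rng = rng or np.random.default_rng()
    ideas = list(standard_errors)
    se = np.array([standard_errors[i] for i in ideas], dtype=float)
    p = se / se.sum()  # P(i) ∝ SE(i)
    picked = rng.choice(len(ideas), size=k, replace=False, p=p)
    return [ideas[j] for j in picked]

# Example: the idea with the largest standard error ("c") is the
# most likely to be shown.
print(choose_ideas_to_show({"a": 0.2, "b": 0.5, "c": 1.0}, k=2))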
21. CF performance assessment
There is no universal rule for how good an idea is, so we assess from specific perspectives:
Do CF-selected ideas have broad topic coverage?
Does CF select ideas of better quality in general?
Does the CF idea ranking agree with the instructor's ranking?
22. CF performance assessment
1. Chat forums.
2. Basics.
3. Javascript.
4. Additional time.
5. Additional exercises.
6. Security.
7. Update technology.
Figure 3: The number of comments for each topic in the top 20 comments for CS 169.2x.
23. CF performance assessment
Quality scoring metric:
1 - Not readable.
2 - Readable but unrelated to the course.
3 - Presents one idea about the course, but it is not a suggestion.
4 - Presents a suggestion with some reasoning.
5 - Presents a suggestion with reasoning and proposes a solution.
24. CF performance assessment
A suggestion with a quality score of 5:
Design patterns are hard to grasp without getting your hands dirty in a messy problem. I think using a quiz for that week instead of a challenging homework assignment was a mistake. I understand the concepts as abstract entities but would still have a hard time figuring out when and how to use them. I felt the same way about the Javascript week as well. A homework assignment doing JS and AJAX on the rotten potatoes example would have been ideal.
A suggestion with a quality score of 1:
Devise + Omniauth !!!
27. Conclusion
Developed a novel platform to generate timely feedback on course issues.
Motivated student participation in courses.
Highlighted valuable ideas using peer-to-peer collaborative filtering.
28. Future Work
Explore how sorting and presenting ideas based on factors such as time or novelty will affect participation.
Add topic tagging to organize suggested ideas.