Collaboration, communication, creativity, critical thinking and problem-solving are among the skills needed to study and work in the 21st century. As important as they are, evaluating, assessing and teaching them in a practical, scalable and efficient way remains a challenge not fully met by current pedagogical-technological practices. Multimodal Learning Analytics (MmLA), the processing and analysis of multiple sources of data to better understand and improve learning processes, has been proposed as a way to augment the natural capabilities of both instructors and students to provide and receive feedback in support of those skills. During this session, we will have a hands-on demo of two systems that automatically generate feedback on communication and collaboration skills; then we will explore the affordances that low-cost sensors and current advances in artificial intelligence provide for automatically recording and analyzing complex, face-to-face learning processes, such as those involved in the development of 21st-century skills. Finally, we will discuss and ideate practical MmLA tools that could be built to augment your current teaching and learning practices.
Presentation at NYU - November 2019.
4. Two options – 10 minutes
• OPAF – Oral Communication
• CLiF – Collaboration
5. Two options – 10 minutes
• Perform a flawed presentation
  • Do not look at the audience
  • Have a closed posture
  • Speak too softly
  • Introduce filled pauses
• Solve the “Lost at Sea” task
  • Be aware of the collaboration
  • Try to remember as much as possible
9. “Increasing the capability of humans to approach a complex problem situation, to gain comprehension to suit their particular needs, and to derive solutions to problems.”
— Douglas Engelbart, Augmenting Human Intellect (1962)
10. “Learning analytics is the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimizing learning and the environments in which it occurs.”
30. Setting requirements
• Non-intrusive
• Captures the same information as a human observer would
• Mainly participants’ faces and speech
• No cables
• Synchronized
• Easy to operate
• Records at least 2 hours
37. Step 2: Extract low-level modalities
• Audio:
• Direction of arrival
• Speech-to-text
• Prosodic features
• Speaker diarization
• Text:
• Word frequency
• Sentiment analysis
• Questions asked
• …
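Two of the text-side features listed above, word frequency and questions asked, can be sketched with plain Python. This is a minimal, hypothetical extractor operating on a transcript string; it is not the system demoed in the session, and the other modalities (speech-to-text, sentiment, prosody, diarization) require dedicated models or audio pipelines:

```python
import re
from collections import Counter

def extract_text_features(transcript: str) -> dict:
    """Toy extractor for two text modalities: word frequency and
    questions asked. Illustrative only; assumes a clean, punctuated
    transcript as input."""
    # Split into sentences at terminal punctuation.
    sentences = re.split(r"(?<=[.?!])\s+", transcript.strip())
    # Lowercased word tokens (letters and apostrophes).
    words = re.findall(r"[a-z']+", transcript.lower())
    return {
        "word_frequency": Counter(words),
        "questions_asked": [s for s in sentences if s.endswith("?")],
    }

features = extract_text_features(
    "Where do we start? I think the compass matters. Do you agree?"
)
```

On this sample, `features["questions_asked"]` recovers the two interrogative sentences and `features["word_frequency"]` counts each token, so simple turn-level statistics can be aggregated per speaker once diarization has assigned the turns.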
38. Step 3: Extract low-level modalities
• Digital Pen:
• Writing mechanics
• Text
• Sketch recognition
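A simple example of a writing-mechanics feature is pen speed over a stroke. The sketch below computes mean speed from timestamped (x, y, t) samples; the sample format is an assumption for illustration, since real digital-pen SDKs expose richer events (pressure, tilt, hover):

```python
from math import hypot

def stroke_speed(samples):
    """Mean pen speed over one stroke, given (x, y, t) samples.
    A hypothetical writing-mechanics feature: total path length
    divided by stroke duration."""
    # Sum Euclidean distances between consecutive samples.
    dist = sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1, _), (x2, y2, _) in zip(samples, samples[1:]))
    duration = samples[-1][2] - samples[0][2]
    return dist / duration if duration > 0 else 0.0

speed = stroke_speed([(0, 0, 0.0), (3, 4, 0.5), (6, 8, 1.0)])
```

Features like this, computed per stroke, can be aggregated into the kind of low-level writing-mechanics signals named on the slide before any higher-level text or sketch recognition is applied.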