New pub: Using Accessible Motion Capture in Educational Games for Sign Language Learning

Conference, Multimodal Learning Analytics
A new publication will be presented at the EC-TEL 2023 conference in Aveiro, Portugal: "Using Accessible Motion Capture in Educational Games for Sign Language Learning".

https://link.springer.com/chapter/10.1007/978-3-031-42682-7_74

Abstract: Various studies show that multimodal interaction technologies, especially motion capture in educational environments, can significantly improve and support educational purposes such as language learning. In this paper, we introduce a prototype that implements finger tracking and teaches the user different letters of the German fingerspelling alphabet. Since most options for tracking a user's movements rely on hardware that is not commonly available, a particular focus is placed on the opportunities offered by new computer-vision technologies, which achieve accurate tracking with consumer webcams. In this study, the motion capture is based on Google MediaPipe. An evaluation based on user feedback shows that…
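MediaPipe-style hand tracking yields 21 landmarks per frame, on top of which a fingerspelling trainer can check hand shapes. The sketch below shows the general idea; the landmark indices follow the MediaPipe Hands convention, but the letter rules, helper names, and sample coordinates are hypothetical illustrations, not the paper's actual classifier.

```python
# Toy landmark-based fingerspelling check (illustrative, not the paper's method).
# Landmark indices follow the MediaPipe Hands convention (0 = wrist;
# 8, 12, 16, 20 = index/middle/ring/pinky fingertips; 6, 10, 14, 18 = the
# corresponding middle joints). Coordinates are (x, y) in image space,
# where a smaller y means higher in the frame.

FINGERTIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIPS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}  # middle joints

def extended_fingers(landmarks):
    """Return the fingers whose tip lies above its middle joint."""
    return {
        name
        for name, tip in FINGERTIPS.items()
        if landmarks[tip][1] < landmarks[PIPS[name]][1]
    }

def guess_letter(landmarks):
    """Map the set of extended fingers to a fingerspelled letter (toy rules)."""
    up = extended_fingers(landmarks)
    if not up:
        return "A"  # fist: all fingers curled
    if up == {"index", "middle", "ring", "pinky"}:
        return "B"  # flat hand: all four fingers extended
    if up == {"index", "middle"}:
        return "V"
    return "?"      # pattern not covered by this toy mapping

# 21 hypothetical landmarks for a fist: every fingertip at joint height.
fist = [(0.5, 0.5)] * 21
print(guess_letter(fist))  # → A
```

In a live prototype, the landmark list would come from MediaPipe's per-frame hand detection rather than hard-coded values, and the rules would cover the full fingerspelling alphabet.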
New CfP: Multimodal and Immersive Systems for Skills Development and Education (BJET)

Special Issue
Call for Papers: Multimodal and Immersive Systems for Skills Development and Education

Guest editors:
- Daniele Di Mitri, DIPF, Germany
- Bibeg Limbu, TU Delft, The Netherlands
- Jan Schneider, DIPF, Germany
- Deniz Iren, Open University, The Netherlands
- Michail Giannakos, NTNU, Norway
- Daniel Spikol, University of Copenhagen, Denmark
- Roland Klemke, Open University, The Netherlands

Rationale: During the last decade, we have seen an enormous penetration of multimodal and immersive systems such as virtual reality, augmented reality, and motion-based systems. Such systems, along with rapidly evolving technological affordances (e.g., multimodal interaction, tactile feedback) powered by Artificial Intelligence (AI) and sensors, are attempting to redefine how we interact and learn with technology. This attempt has long-term implications for human-computer interaction and technology-enhanced learning, enabling new forms of personalised, contextual, and deliberate practice of skills…
New Pub: Multimodal Learning Experience for Deliberate Practice

Book chapter
A new book chapter has been published as part of the Multimodal Learning Analytics Handbook, published by Springer. While digital education technologies have made educational resources more widely available, the modes of interaction they implement remain largely unnatural for the learner. Modern sensor-enabled computer systems allow extending human-computer interfaces for multimodal communication, and advances in Artificial Intelligence allow interpreting the data collected from multimodal and multi-sensor devices. These insights can be used to support deliberate practice with personalised feedback and adaptation through Multimodal Learning Experiences (MLX). This chapter elaborates on the approaches, architectures, and methodologies of five different use cases that use multimodal learning analytics applications for deliberate practice.

Di Mitri, D., Schneider, J., Limbu, B., Mat Sanusi, K.A., Klemke, R. (2022). Multimodal Learning Experience for Deliberate Practice. In: Giannakos,…
Workshop @ JTELSS – Artificial Intelligence in Education and Multimodal Learning Experience

Workshop
At this year's JTEL Summer School in Halkidiki, Greece (see the previous blog post), Daniele Di Mitri and Jan Schneider, together with Prof. Roland Klemke and Dr. Bibeg Limbu, contributed to a mini-track on Artificial Intelligence in Education. The mini-track started with the session "Artificial Intelligence in Education, Multimodal Learning Experience and Ethics of AI (MAIED)", whose purpose was to provide an overview of the topics of AI in Education and Multimodal Learning Experiences. The workshop started with a lecture-style presentation by the presenters on AI in Education, multimodality, the theories behind Multimodal Learning Experiences, and application use cases.

https://twitter.com/dimstudi0/status/1529012671495868416

The workshop also included a pitch-style presentation of the PhD research by all the PhD candidates at the summer school involved in the field of AI in…
Fernando P. Cardenas-Hernandez joins the team

Multimodal Learning Analytics, Project, Team
Starting 1 July 2021, Fernando P. Cardenas-Hernandez joins the team as a doctoral researcher. He earned his Master's degree in Microsystems from the University of Freiburg. After his graduation, he worked as a software engineer in different companies. Some of his previous projects made use of microcontrollers, SBCs, and thermal and industrial cameras. He is currently involved in the MILKI-PSY project.
New Pub: Towards Automatic Collaboration Analytics for Group Speech Data Using Multimodal Learning Analytics

General education, Journal, Multimodal Learning Analytics, Open access, Publication
Collaboration is an important 21st-century skill. Co-located (or face-to-face) collaboration (CC) analytics gained momentum with the advent of sensor technology. Most of these works have used the audio modality to detect the quality of CC. CC quality can be detected from simple indicators of collaboration, such as total speaking time, or complex indicators, such as synchrony in the rise and fall of the average pitch. Most studies in the past focused on "how group members talk" (i.e., spectral and temporal features of audio, such as pitch) and not "what they talk about". The "what" of the conversations is more overt than the "how". Very few studies have examined what group members talk about, and these studies were lab-based, showing a representative overview of specific words as topic clusters…
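The two kinds of audio indicators mentioned above can be illustrated with a short sketch: a simple indicator (total speaking time per speaker, computed from diarised segments) and a complex one (pitch synchrony, here taken as the Pearson correlation of two speakers' average-pitch series). All function names and data values are invented for illustration, not drawn from the paper.

```python
# Illustrative collaboration indicators from hypothetical audio features.
from math import sqrt

def speaking_time(segments):
    """Total speaking time per speaker from (speaker, start_s, end_s) tuples."""
    totals = {}
    for speaker, start, end in segments:
        totals[speaker] = totals.get(speaker, 0.0) + (end - start)
    return totals

def pitch_synchrony(pitch_a, pitch_b):
    """Pearson correlation of two equal-length average-pitch series (Hz).

    Values near 1 mean the two speakers' pitch rises and falls together;
    assumes neither series is constant.
    """
    n = len(pitch_a)
    mean_a = sum(pitch_a) / n
    mean_b = sum(pitch_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(pitch_a, pitch_b))
    var_a = sum((a - mean_a) ** 2 for a in pitch_a)
    var_b = sum((b - mean_b) ** 2 for b in pitch_b)
    return cov / sqrt(var_a * var_b)

# Hypothetical diarised segments and per-window average pitch values.
segments = [("A", 0.0, 4.0), ("B", 4.0, 6.5), ("A", 6.5, 9.0)]
print(speaking_time(segments))  # → {'A': 6.5, 'B': 2.5}
print(round(pitch_synchrony([180, 190, 200, 195],
                            [120, 128, 136, 132]), 2))  # → 1.0
```

Capturing the "what" of the conversation would require an additional transcription step (e.g., automatic speech recognition) before topic analysis, which is exactly the gap the publication points at.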