November 11, 2021
Artificial Intelligence, General education, Multimodal Learning Analytics, Project meeting

On November 9, 2021, the first face-to-face meeting of the consortium of the BMBF-funded project “Multimodal Immersive Learning with Artificial Intelligence for Psychomotor Skills” (Milki-Psy) took place in Cologne. The DIPF, as an active member, was represented by Dr Daniele Di Mitri, Dr Jan Schneider, Gianluca Romano and Fernando P. Cardenas-Hernandez. The purpose of the meeting was to present the progress of each project partner and to propose and discuss possible solutions for the project’s two case studies: the running case and the robot case.

Running use case


The running case is developed in close cooperation with the German Sport University in Cologne. The partners are currently evaluating different setups for sensor tracking; the current version uses two Kinect Azure sensors in parallel. The aim is to detect and capture human poses and to devise feedback methods that allow learners to acquire a new motion sequence. These methods include, for example, skeleton visualization, visualization of instructions, audio feedback, and showing the user as a 3D animation or as a mirror reflection. The current challenge lies in modelling expert performance: whether to adopt a feature-based representation or supervised mistake detection. The discussion also covered the pros and cons of copying experts, the effects of feedback in running, comparisons with other psychomotor skills such as martial arts, and the evaluation of extreme movements.
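To make the feature-based option concrete, here is a minimal, hypothetical sketch (not the project’s actual pipeline): joint angles computed from 3D joint positions, such as those delivered by a Kinect-style skeleton stream, can serve as features, and a simple tolerance rule can flag a learner’s deviation from an expert pose. The joint names, coordinates, and tolerance value are illustrative assumptions.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c.

    a, b, c are 3D joint positions, e.g. hip, knee, ankle from a
    skeleton-tracking sensor (coordinates are assumed, not real data).
    """
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to avoid domain errors from floating-point rounding.
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

def flag_mistake(learner_angle, expert_angle, tolerance=15.0):
    """Rule-based feedback: flag a mistake if the learner's joint angle
    deviates from the expert's by more than a (hypothetical) tolerance."""
    return abs(learner_angle - expert_angle) > tolerance

# Example: knee angle from hip, knee and ankle positions (metres, made up).
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0)
learner_knee = joint_angle(hip, knee, ankle)
```

A supervised mistake-detection approach would instead learn such rules from labelled expert and learner recordings rather than hand-setting tolerances.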

Robot use case

The second use case, developed in collaboration with the Cobot Lab of the Cologne University of Applied Sciences, consists of following instructions for an assembly process with a collaborative robot (COBOT-Yumi from ABB). Some of the key aspects discussed here were the importance of a collaboration script before the assembly, communication during robot-collaboration scenarios, commonalities with the running case, how to achieve a more natural human-robot collaboration, and voice recognition as a way to improve the collaboration. The support of Augmented Reality (AR) was also considered, including cloud-based anchors, AR visualization of human movements, and gamification aspects.