Open Source 4 you

“Everything should be as simple as possible, but not simpler.” – Albert Einstein

How can you cleverly use data about the way students learn (learning analytics) to improve the design of education (learning design)? This is the question Marcel Schmitz asks himself in his doctoral research. From his background in ICT, he designed the board game Fellowship of the Learning Activities & Analytics (FoLA2) to provide insight into this process from a multidisciplinary perspective.

Learning analytics
The aim of this serious game is to enrich education on the basis of knowledge about learning behaviour. 'Learning analytics provide data about behaviour during an educational activity. They provide insight into how someone learns,' says knowledge technologist Marcel Schmitz. In this way, the learning activity can respond better to the student's needs.

Developing learning activity
This is done in the game by discussing each step in the development of a learning activity in advance. Schmitz: 'The strength lies in that discussion, because in this way participants consciously think about the use of data and educational technology in the design from the very beginning. For example, do you want to know whether education is activating enough? Or who takes the initiative during the lesson and who doesn't? What data do you need to find out, and how are you going to measure it? You ask these questions first, and only then do you design the education.'

Serious game
Marcel Schmitz also works as a lecturer at the Data Intelligence professorship at the ICT Academy of Zuyd University of Applied Sciences. The game, the serious game Fellowship of the Learning Activities & Analytics (FoLA2), is an important part of his PhD research at the Department of Online Learning and Instruction of the OU Faculty of Educational Sciences. His supervisor Hendrik Drachsler, Professor of Learning Analytics at the OU, will present the game during the online conference Learning Analytics and Knowledge (LAK) in Frankfurt from 23-27 March. The co-supervisor is Dr. Maren Scheffel, also specialised in learning analytics at the OU. Here is the promo video for LAK20:

Systematically
Cards in different colours per subject guide participants systematically and in a structured way through the different aspects of designing a learning activity. Participants take different roles, such as student and teacher, but also study coach or developer, and give each other feedback. They first choose the goal (or 'challenge'), then the pedagogical approach and the method of interaction (for example, between instructor and student, or among students).

Data on learning behaviour
The next step is to discuss which technological tools are needed (such as an app) and which data about the learning behaviour can be collected. Think for example of data on how actively students participate during the activity. Finally, various scenarios or learning designs can be developed, tried out and adapted if necessary. A digital version of the 'physical' game with cards will also be developed. This will make it easier to save a detailed scenario and adapt it if necessary.

Reference:
Schmitz, M., Scheffel, M., Bemelmans, R., & Drachsler, H. (2020). Fellowship Of The Learning Activity – Learning Analytics 4 Learning Design. https://doi.org/10.25385/zuyd.9884279
Multimodal Learning Hub: Smart devices with sensor capabilities are becoming increasingly popular, opening up opportunities to develop learning applications that can track and support learners in multiple learning scenarios. The development and research of these types of multimodal learning applications are slow and expensive. Currently, most of these applications are built from scratch and designed to support a very specific learning task. The Multimodal Learning Hub (LearningHub) attempts to address this issue. The LearningHub is a system designed to record multimodal learning experiences by collecting, integrating and storing data from customizable sets of generic sensor applications.
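The core loop of collecting and integrating timestamped records from several generic sensor applications can be sketched in a few lines of Python (the record fields and the merge-by-timestamp strategy are illustrative assumptions, not the LearningHub's actual data format):

```python
# Sketch: integrating records from several generic "sensor applications"
# into one time-ordered session log. Field names are illustrative only.
from heapq import merge

def integrate_streams(*streams):
    """Merge per-sensor record lists (each sorted by 'ts') into one
    session-wide list ordered by timestamp."""
    return list(merge(*streams, key=lambda r: r["ts"]))

kinect = [{"ts": 0.0, "sensor": "kinect", "joints": 25},
          {"ts": 0.5, "sensor": "kinect", "joints": 25}]
audio = [{"ts": 0.2, "sensor": "mic", "volume": 0.7}]

# Records are interleaved in temporal order across sensors.
session = integrate_streams(kinect, audio)
```

Merging by timestamp keeps the per-sensor streams intact while producing one session-wide, time-ordered log that later analysis can replay.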

Research Project

Github:

https://github.com/janschneiderou/LearningHub

Publications

  • Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018, September). Multimodal learning hub: A tool for capturing customizable multimodal learning experiences. In European Conference on Technology Enhanced Learning (pp. 45-58). Springer, Cham.
Presentation Trainer: Practice and feedback are two of the most important aspects of developing public speaking skills. Having a human tutor always available to provide learners with feedback on their presentation skills is not feasible. The Presentation Trainer (PT) is a system designed to help learners develop basic skills for public speaking. It allows learners to practice their presentation while receiving real-time feedback on their non-verbal communication. After practicing their presentation, the PT gives learners the opportunity to reflect on their performance.
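The kind of real-time check the PT performs on non-verbal behaviour can be illustrated with a minimal rule-based sketch (the feature names and thresholds below are invented for illustration; the actual PT works on Kinect skeleton data with its own rule set):

```python
# Sketch of rule-based real-time feedback on nonverbal behaviour.
# Feature names and thresholds are illustrative assumptions.
def feedback(frame):
    """Return at most one feedback message for a frame of tracked features,
    so the speaker is never flooded with simultaneous instructions."""
    if frame["hands_below_hips_s"] > 5.0:
        return "Use your hands to gesture"
    if frame["volume"] < 0.2:
        return "Speak louder"
    return None  # nothing to correct in this frame

msg = feedback({"hands_below_hips_s": 6.0, "volume": 0.5})
# -> "Use your hands to gesture"
```

Showing a single instruction at a time is a deliberate choice in this sketch: real-time feedback is only useful if the learner can act on it while speaking.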

Research Project

Github:

https://github.com/janschneiderou/PT20

Publications

  • Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2015, November). Presentation trainer, your public speaking multimodal coach. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (pp. 539-546). ACM.
  • Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2016). Can you help me with my pitch? Studying a tool for real-time automated feedback. IEEE Transactions on Learning Technologies, 9(4), 318-327.
  • Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2017). Presentation Trainer: what experts and computers can tell about your nonverbal communication. Journal of computer assisted learning, 33(2), 164-177.
  • Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2016, September). Enhancing public speaking skills-an evaluation of the Presentation Trainer in the wild. In European Conference on Technology Enhanced Learning (pp. 263-276). Springer, Cham.
  • Schneider, J., Börner, D., Van Rosmalen, P., & Specht, M. (2017, June). Do You Know What Your Nonverbal Behavior Communicates?–Studying a Self-reflection Module for the Presentation Trainer. In International Conference on Immersive Learning (pp. 93-106). Springer, Cham.
VR Presentation Trainer: Practice and feedback are two of the most important aspects of developing public speaking skills. Having a human tutor always available to provide learners with feedback on their presentation skills is not feasible. The Presentation Trainer (PT) is a system designed to help learners develop basic skills for public speaking. It allows learners to practice their presentation while receiving real-time feedback on their non-verbal communication. After practicing their presentation, the PT gives learners the opportunity to reflect on their performance. The VR Presentation Trainer enables learners to receive feedback from the PT and train their skills while being immersed in a Virtual Reality classroom.

Research Project

Github:

https://github.com/CanIALugRoamOn/VRPT

Literature:
Schneider, J., Romano, G., & Drachsler, H. (2019). Beyond Reality—Extending a Presentation Trainer with an Immersive VR Module. Sensors, 19(16), 3457.
Salsa Trainer: Dancing is an activity that positively enhances people's mood and consists of feeling the music and expressing it in rhythmic movements with the body. Learning how to dance can be challenging because it requires proper coordination and an understanding of rhythm and beat. The Dancing Coach (DC) is a generic system designed to support the practice of dancing steps, which in its current state supports the practice of basic salsa dancing steps. However, the DC has been designed to allow the addition of more dance styles. The first user evaluation of the DC consisted of user tests with 25 participants. Results from the user tests show that participants stated they had learned the basic salsa dancing steps, to move to the beat and to coordinate their body in a fun way. Results also point out some directions for improving future versions of the DC.

Literature:
Romano, G., Schneider, J., & Drachsler, H. (2019). Dancing Salsa with Machines: Filling the Gap of Dancing Learning Solutions. Sensors, 19(17), 3661.
The Booth: Events such as giving presentations, taking an exam, going to a job interview or participating in any kind of tournament are usually emotionally charged. To perform as well as possible in these types of events, it is important to also prepare for them emotionally. The Booth guides learners through a series of psychological exercises designed to reduce unhelpful feelings such as stress and anxiety, and to increase resourceful feelings such as joy, confidence and happiness.

Research Project

Github:

https://github.com/janschneiderou/theBooth

Publications

  • Schneider, J., Börner, D., van Rosmalen, P., & Specht, M. (2018). Do you Want to be a Superhero? Boosting Emotional States with the Booth. Journal of Universal Computer Science, 24(2), 85-107.
The CPR Tutor is a real-time multimodal feedback system for cardiopulmonary resuscitation (CPR) training. It detects mistakes using recurrent neural networks for real-time time-series classification.
The CPR Tutor is a Kinect-based application that works in conjunction with the Myo armband.
From a multimodal data stream consisting of kinematic and electromyographic data, the CPR Tutor automatically detects the chest compressions, which are then classified and assessed according to five performance indicators. Based on this assessment, the CPR Tutor provides audio feedback to correct the most critical mistakes and improve CPR performance. The CPR Tutor was trained using a Laerdal ResusciAnne QCPR manikin but can be used with any manikin.
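The pipeline of detecting individual chest compressions in a displacement signal and assessing each one against a performance indicator can be sketched as follows. A naive peak detector stands in for the Tutor's recurrent neural networks, and the 5-6 cm depth range follows common resuscitation guidelines; all values are illustrative:

```python
def detect_compressions(depth):
    """Find local maxima in a chest-displacement signal (cm); each peak is
    treated as one compression. A naive stand-in for the learned models."""
    return [i for i in range(1, len(depth) - 1)
            if depth[i] > depth[i - 1] and depth[i] >= depth[i + 1]]

def assess(depth_cm):
    """Classify one compression against a depth indicator (guideline: 5-6 cm)."""
    if depth_cm < 5.0:
        return "too shallow"
    if depth_cm > 6.0:
        return "too deep"
    return "ok"

signal = [0.0, 3.0, 5.5, 3.0, 4.2, 3.0, 6.8, 1.0]
peaks = detect_compressions(signal)          # indices 2, 4, 6
labels = [assess(signal[i]) for i in peaks]  # ['ok', 'too shallow', 'too deep']
```

The per-compression labels are exactly what a feedback policy needs: the most critical recurring mistake can then be turned into a single audio cue.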

Github:

https://github.com/dimstudio/CPRTutor

Publications

  • Di Mitri, D., Schneider, J., Trebing, K., Sopka, S., Specht, M., & Drachsler, H. (2020). Real-Time Multimodal Feedback with the CPR Tutor. In I. I. Bittencourt, M. Cukurova, & K. Muldner (Eds.), Artificial Intelligence in Education (AIED 2020) (pp. 141–152). Cham, Switzerland: Springer. https://doi.org/10.1007/978-3-030-52237-7_12
The Visual Inspection Tool (VIT) is a web-based tool developed in JavaScript and HTML5, which allows the visual inspection and annotation of multimodal datasets encoded in the MLT-JSON data format. In the VIT, the expert can load session files one by one to triangulate the video recording with the sensor data. The user can select and plot individual data attributes and inspect visually how they relate to a video recording. The VIT is also a tool for collecting expert annotations. In the case of the CPR Tutor, the annotations were given as properties of every single chest compression.
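Programmatically, the triangulation the VIT offers interactively amounts to selecting one data attribute inside an annotated time span. A minimal sketch (the JSON layout below is a simplified illustration, not the exact MLT-JSON schema):

```python
# Sketch: relating a sensor attribute to an annotated time span.
# The JSON structure is a simplified stand-in for MLT-JSON.
import json

session = json.loads("""{
  "frames": [{"t": 0.0, "accel_y": 0.1}, {"t": 0.5, "accel_y": 2.3}],
  "annotations": [{"start": 0.4, "end": 0.6, "label": "compression"}]
}""")

def attribute_in(session, name, start, end):
    """Values of one data attribute inside a given time span."""
    return [f[name] for f in session["frames"] if start <= f["t"] <= end]

span = session["annotations"][0]
values = attribute_in(session, "accel_y", span["start"], span["end"])
```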

Github:

https://github.com/dimstudio/visual-inspection-tool

Publications

  • Di Mitri D., Schneider J., Specht M., Drachsler H. (2019) Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge - LAK19 (pp. 51–60). New York, NY, USA: ACM. DOI: 10.1145/3303772.3303776
MOBIUS is a smartphone-based system for remote tracking of citizens' movements. By collecting smartphone sensor data such as accelerometer and gyroscope readings, along with self-report data, the MOBIUS system allows classifying the users' mode of transportation. With the MOBIUS app, users can also activate GPS tracking to visualise their journeys and travelling speed on a map. The MOBIUS app is an example of a tracing app that can provide more insight into how people move around in an urban area. To test its validity, we ran a user study collecting data from multiple users. The collected data were used to train a deep convolutional neural network architecture that classifies the transportation modes with a mean accuracy of 89%.
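Before a convolutional network can classify transportation modes, the raw sensor stream is typically cut into fixed-size, overlapping windows. A minimal sketch of that preprocessing step (window length and overlap are illustrative choices, not MOBIUS's actual parameters):

```python
# Sketch: slicing a raw accelerometer stream into overlapping windows,
# the usual preprocessing before a CNN classifies each window.
def windows(samples, size, step):
    """Slice a sample list into overlapping fixed-size windows."""
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

accel = list(range(10))              # stand-in for accelerometer magnitudes
w = windows(accel, size=4, step=2)   # 4 windows of 4 samples each
```

Each window then becomes one training example, with the self-reported transportation mode as its label.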

Github:

https://github.com/khaleelasyraaf/Mobius_Client
https://github.com/HansBambel/Mobius_server

Publications

  • Di Mitri, D., Asyraaf Mat Sanusi, K., Trebing, K., & Bromuri, S. (2020). MOBIUS: Smart Mobility Tracking with Smartphone Sensors. In Proceedings of the EAI Conference S-Cube.
Serene is a tool to support self-regulated learning, both in research and in everyday practice.

Planning
Serene supports the planning process by offering a template that is especially suited to the creation of learning goals. Learning goals have to be associated with a point in time, and learners are asked to also create sub-plans for how they aim to achieve the goals.

We are currently working on features to analyze the written goals via NLP and give direct feedback during the creation (e.g. keeping them SMART).
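As a flavour of what such feedback could look like, here is a deliberately naive rule-based sketch of a SMART check (the actual NLP-based feedback in Serene is still work in progress; these regex checks are illustrative only):

```python
# Sketch: naive rule-based hints on whether a written learning goal
# looks "SMART". Real NLP feedback is more sophisticated; this is a toy.
import re

def smart_hints(goal):
    hints = []
    if not re.search(r"\d", goal):
        hints.append("Add a measurable quantity (Measurable)")
    if not re.search(r"\b(by|until|before)\b", goal, re.IGNORECASE):
        hints.append("Add a deadline (Time-bound)")
    return hints

hints = smart_hints("Get better at statistics")
# Both checks fail for this goal, so two hints are returned.
```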


Monitoring
Monitoring of learning in Serene is done via an interface that
  • asks the learner for the progress on their tasks
  • asks them for the reasons that affected their learning

Learners can therefore directly connect their goal achievement performance with these reasons, giving them better insight into why they perform particularly well or badly.

Reflection
The reflection tab has several visualizations that help the learner reflect on their progress.

The screenshot shows an example of two possible visualizations; others can be easily activated.

We are currently working on functionality that will give individualized recommendations to learners based on their goal achievement habits.

Interaction Logging Plugin: This plugin provides extensive logging of user interactions with the Moodle learning management system. By default, Moodle only provides logs for high-level interactions with the site, such as completing an activity. However, for research on learning behavior, fine-grained data is often required. In the current iteration, the plugin logs the following user interactions:

  • Scrolling on a page
  • Mouse movement
  • Clicking on elements
  • Video and audio element interactions (play, pause, seek)
  • Text highlighting and copying
  • Page resize
  • Changes in checkbox selection
  • Text input changes

For all interactions, the current page, timestamps and user IDs are logged. To preserve privacy, the user IDs can be hashed before leaving the learning management system. With these interaction data, behavioral indicators for the research of learning phenomena can be composed. For example, the scrolling indicators and timestamps could be used to infer whether a page has been fully read or only skimmed. The plugin is installed via the regular Moodle plugin installation process.

Github:

https://github.com/EducationalTechnologies/interaction-logging-plugin
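The read-versus-skimmed inference mentioned above can be sketched from logged scroll events (event shape and thresholds here are illustrative assumptions, not the plugin's actual log format):

```python
# Sketch: composing a "fully read vs. skimmed" indicator from logged
# scroll events. Event shape and thresholds are illustrative only.
def read_or_skimmed(events, min_seconds=60.0, min_depth=0.95):
    """events: list of (timestamp_s, scroll_fraction) tuples for one page,
    ordered by time; scroll_fraction is 0.0 (top) to 1.0 (bottom)."""
    if not events:
        return "skimmed"
    duration = events[-1][0] - events[0][0]
    depth = max(frac for _, frac in events)
    return "read" if duration >= min_seconds and depth >= min_depth else "skimmed"

label = read_or_skimmed([(0.0, 0.1), (30.0, 0.5), (90.0, 1.0)])
# Long dwell time plus full scroll depth -> "read".
```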
The TIILA project wants to give agency to users by allowing them to decide on the Learning Analytics that will be done with their data. With this explicit approach, we intend to raise users' data literacy, teach them privacy awareness and enlighten them about the dangers and opportunities of learning analytics. The German sociologist Niklas Luhmann defined trust as a way to cope with risk, complexity, and a lack of system understanding. Following this, we believe that users will at some point feel the urge to investigate the Learning Analytics system in place. We want to allow for this by providing automated trust-building features. In our opinion, such features encourage a gain of trust in the system, leading to a rise in commitment and engagement with Learning Analytics. The hypothesis of this project is that this will ultimately improve the overall impact of the Learning Analytics.
 
The infrastructure consists of three core engines and possibly a variety of research project engines docking in.
Open Learning Analytics Indicator Repository (OpenLAIR) is a learning analytics tool that helps course designers, teachers, students and educational researchers to make informed decisions about the selection of learning activities and LA indicators for their course design or LA dashboard.

OpenLAIR's frontend consists of a dashboard. This dashboard provides an interface that filters the list of indicators and their metrics based on learning design activity. The information presented by OpenLAIR is the result of a literature review, in which we harvested and analyzed learning analytics papers from the last ten years (2011-2020) and extracted from them learning design and learning analytics activities, learning analytics indicators and metrics. The tool is based on the framework below.



The reference framework is based on LD and LA elements. Both LD and LA start with a learning objective; in LD, the objective can be a learning event or can lead to one. This then leads to learning activities. In LD, to fulfill a learning activity, a learning task is required, with or without support (such as learning materials), which leads to learning outcomes. In LA, learning activities in a learning environment lead to the generation of log data that forms metrics, and metrics help create indicators for LADs. The learning outcome in LD can be shown or presented via LA indicator(s) for selected LD-LA activities.

The OpenLAIR dashboard contains learning events, learning activities, indicators, and metrics. A learning event is a learning or teaching event that occurs during a learner's or a teacher's activity. Leclercq and Poumay identified eight learning events: create, explore, practice, imitate, receive, debate, meta-learn, and experiment. To get familiar with OpenLAIR, it is recommended to take the tour it provides by clicking on 'start tour' in the top right corner.
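The lookup the dashboard performs, filtering indicators by learning event, can be sketched as a small data structure (the eight events are from Leclercq and Poumay as cited above; the indicator entries are invented examples, not OpenLAIR's real repository):

```python
# Sketch: filtering indicators by learning event, as the OpenLAIR
# dashboard does. Indicator entries are invented examples.
EVENTS = ["create", "explore", "practice", "imitate",
          "receive", "debate", "meta-learn", "experiment"]

INDICATORS = [
    {"name": "forum posts per week", "metric": "count", "events": ["debate"]},
    {"name": "quiz attempts", "metric": "count", "events": ["practice"]},
]

def indicators_for(event):
    """Names of all indicators applicable to one learning event."""
    return [i["name"] for i in INDICATORS if event in i["events"]]

result = indicators_for("debate")
```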

To access OpenLAIR use the following link: OpenLAIR
Edutex evolved at the beginning of the pandemic out of a desire to discover more about the physical context of learners in distance education without losing track of data protection. With this desire as the focus, Edutex was implemented as a redesign and re-development of TIILA. In order to support learners in the near future in their custom physical learning environment and in their individual home learning processes with the help of adaptive interventions, we decided to integrate commodity smartphones and smartwatches. In current studies, we use the Edutex Android smartphone and smartwatch apps to combine their sensor data and questionnaire data obtained on the devices with learning management system data. We are currently exploring artificial intelligence methods, including time series analysis, to analyze the resulting multi-modal data stream and, in the future, provide just-in-time adaptive interventions in teacher dashboards or on learners' smart wearables.
This figure shows the software architecture design of Edutex. The design differentiates between the client-side part and the server-side part. The client-side comprises modules for data acquisition and data usage. The server-side encapsulates the processing logic for data acquisition and data usage as well as data curation, data analysis, and storage.
Literature
Ciordas-Hertel, G.-P., et al. (2021). Mobile Sensing with Smart Wearables of the Physical Context of Distance Learning Students to Consider Its Effects on Learning. Sensors, 21(19), 6649. https://doi.org/10.3390/s21196649

If you are interested in the projects that triggered the development of our products, navigate to the research projects page.