Bachelor/Master theses on Educational Technology and Learning Analytics

Research and development projects in Educational Technologies. The following projects can be taken up either as seminar work or as a Bachelor or Master thesis.

Within the research field of Educational Technologies we have a broad range of projects. Generally, we distinguish between ‘Research’ (RES) and ‘Development’ (DEV) projects.

  • RES: Research projects are supposed to deliver a scientific manuscript that describes the state of the art of the research, formulates research questions, and collects or reuses a dataset in order to discuss the results and answer the research questions. These projects normally require less programming work. The reports can be written in German or English.
  • DEV: Development projects do not require a written report, but are supposed to deliver working prototypes that are well documented.

In some cases, a project can be both a RES and a DEV project. Furthermore, the projects can also be conducted in teams, but then each member of a group needs a clear task description and a demonstrable outcome.

In a course setup, all projects are presented in a colloquium at the end of the semester. Bachelor students only need to present their tools, while Master students also need to show the contribution of their work to the body of knowledge in Educational Technologies during the project presentation. Interested? Please contact Prof. Dr. Drachsler (drachsler (at) em.uni-frankfurt.de).

  • Publication Harvesting for Learning Analytics

Task: To create a repository of learning analytics (LA) publications, we want to gather all publications that feature learning analytics and categorize them. The terminology within LA varies considerably, so it is not easy to find all related publications via web search.
We therefore want to evaluate whether the automatic extraction of publications (e.g. using OXPath) can help in this endeavor.

Expected results
      • An overview of existing approaches for:
        • automatic extraction of publications
        • categorization of their contents
      • A prototypical implementation that uses scientific search engines (e.g. Google Scholar) to find publications related to a certain topic and clusters them according to their contents
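The clustering step can be illustrated with a minimal sketch, assuming plain TF-IDF vectors and a greedy similarity grouping (a real prototype would more likely use a library such as scikit-learn, fed by abstracts harvested e.g. via OXPath; the abstracts below are made-up examples):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute a sparse TF-IDF vector (a dict) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                       # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: tf[t] * math.log((1 + n) / (1 + df[t])) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def group_by_similarity(docs, threshold=0.15):
    """Greedy single-link grouping: a doc joins the first group that
    contains at least one sufficiently similar member."""
    vecs = tfidf_vectors(docs)
    groups = []                          # each group is a list of doc indices
    for i, v in enumerate(vecs):
        for g in groups:
            if any(cosine(v, vecs[j]) >= threshold for j in g):
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

abstracts = [
    "learning analytics dashboards for teachers",
    "teacher dashboards in learning analytics",
    "protein folding with deep neural networks",
]
groups = group_by_similarity(abstracts)
```

With these toy abstracts, the two dashboard papers end up in one group and the unrelated paper in another; the similarity threshold would of course need tuning on real data.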

This thesis can be handled by more than one person.
Contact Person: Daniel Biedermann, biedermann(at)dipf.de

  • A Math Learning Application to Detect Dyscalculia

Task: Dyscalculia is probably a widespread condition that is rarely diagnosed correctly. Sometimes the diagnosis is abused to obtain easier grades, but often it simply goes undiagnosed. The problem is to differentiate dyscalculia from math difficulties that are not related to it and simply stem from a lack of motivation or insufficient practice. The goal is to look into the literature, find which indicators of dyscalculia exist, and determine how they could be integrated into a general-purpose math learning environment. The idea is to create an application that would benefit every math learner (for example, regular practice of the multiplication table) but which would identify error patterns that are unique to dyscalculic students.
For a Bachelor’s thesis, this topic may be shared among several students.

Expected results
      • An overview of the error patterns that dyscalculic learners exhibit.
      • A concept that explains which tasks and assignments could be used so that both general learners and dyscalculic learners can use the same program.
      • A prototypical application where these concepts are implemented.
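One way such error-pattern detection could look is sketched below. The pattern names and rules here are illustrative assumptions only; the actual indicators must come from the literature review:

```python
def classify_error(a, b, answer):
    """Heuristically label a learner's answer to a * b.

    The pattern names below are illustrative placeholders, not
    validated indicators of dyscalculia.
    """
    correct = a * b
    if answer == correct:
        return "correct"
    # digit transposition: e.g. 7 * 6 -> 24 instead of 42
    if sorted(str(answer)) == sorted(str(correct)):
        return "transposed-digits"
    # table-neighbour slip: answer from an adjacent row/column of the table
    neighbours = {(a + d) * b for d in (-1, 1)} | {a * (b + d) for d in (-1, 1)}
    if answer in neighbours:
        return "table-neighbour"
    if answer == a + b:
        return "operation-confusion"     # added instead of multiplied
    return "other"

labels = [classify_error(7, 6, ans) for ans in (42, 24, 48, 13)]
```

Logging such labels per learner over many exercises would give the error-pattern statistics the concept above asks for.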

Contact Person: Daniel Biedermann, biedermann(at)dipf.de

Multimodal Learning Analytics Topics

  • Using smartphones for Multimodal Learning Analytics

Task: Smartphones embed multiple sensors such as accelerometers, GPS, microphones, cameras, etc. For this project the task is to develop an application that uses the data obtained from the smartphone's sensors to record multimodal learning experiences. Examples of learning experiences are: dancing, gymnastics, martial arts, playing a musical instrument, public speaking, etc. The student can decide what type of learning task they would like to record. The student also needs to make some recordings in order to test the developed application. For research purposes it would be preferable if the recordings include both experts and novices performing the learning tasks.

Expected results
      • Development of a program able to read data from a smartphone's sensors.
      • Development of a smartphone library class that can connect to our multimodal recording tool (TCP and UDP socket connection).
      • Design of the set-up for recording the learning task
        • Which specific learning task will be recorded
        • Which characteristics of this learning task can and should be recorded
        • Which sensors of the smartphone are needed for the recording (we can provide additional sensors, such as a Kinect, MYO armband, Leap Motion, etc., to improve the recording)
        • How the user should carry the smartphone while recording
      • Creating a set of multimodal learning recordings.
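The connection to the recording tool could be sketched as follows. The JSON-lines wire format used here is an assumption, since the tool's actual protocol is not specified in this description:

```python
import json
import socket

def encode_reading(sensor, timestamp_ms, values):
    """Serialize one sensor sample as a JSON line (assumed wire format)."""
    record = {"sensor": sensor, "t": timestamp_ms, "values": values}
    return (json.dumps(record) + "\n").encode("utf-8")

def send_reading(sock, sensor, timestamp_ms, values):
    """Push one sample over an already-connected TCP socket."""
    sock.sendall(encode_reading(sensor, timestamp_ms, values))

# Loopback demonstration; in the real app the socket would connect
# to the multimodal recording tool instead.
client, server = socket.socketpair()
send_reading(client, "accelerometer", 1234, [0.01, -0.02, 9.81])
received = json.loads(server.makefile().readline())
client.close()
server.close()
```

A newline-delimited text protocol like this keeps the receiving side trivial to parse, which matters when samples arrive at sensor rates.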

Contact Person: Dr. Jan Schneider, Schneider.Jan [at] dipf.de

  • Training Presentations with Virtual and/or Augmented reality

Task: We have a tool called the Presentation Trainer that is designed to help people practice their presentations while receiving feedback. It uses the Kinect V2 to track the voice and posture of the learner and, based on that, gives the learner some basic instructions. The task for this project is to create an application for the Microsoft Hololens that shows a virtual audience. The application should be able to display the feedback produced by the Presentation Trainer. The student should also conduct some tests with the application to investigate the user experience.

Expected results
      • Development of an application for the Microsoft Hololens that is able to display a virtual audience and the feedback produced by the Presentation Trainer.
        • A plus would be to make the audience react based on the feedback (e.g. appear distracted or sleepy when the user is doing something wrong, and interested when the user is doing something right)
      • Conduct some user tests with participants to investigate the user experience
        • A plus would be to conduct an experiment comparing feedback on a screen (the current version of the PT) with feedback on a virtual reality device.

Contact Person: Dr. Jan Schneider, Schneider.Jan [at] dipf.de

  • Inspection Tool for Multimodal Recordings of learning experiences

Task: Multimodal data is generally noisy and difficult to interpret and analyse. For this project the student will develop an application able to open multimodal recordings, in which users can manually annotate specific sections of the recordings and save these sections in files that can be used for later analysis and/or machine learning. Finally, the annotations and sections should be stored in a learning record store.

Expected results
      • Development of a tool with the following features:
        • Opening multimodal recordings
        • Plotting multimodal data
          • Recorded Values
          • Derivatives
          • Different frame rates
        • Playing videos and audios included in the recordings
        • The user should be able to select and tag sections of the recordings
          • The raw selected data should be saved in some file format (e.g. CSV) that can later be opened and used by statistical software (R, Excel) and machine learning libraries.
          • The tags together with the selections should also be stored in a learning record store.
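Two of the listed features, derivatives at a given frame rate and exporting a selected section, can be sketched in a few lines (CSV is used as the export format per the description; the frame rate and signal here are made-up examples):

```python
import csv
import io

def derivative(samples, frame_rate_hz):
    """Finite-difference derivative of a uniformly sampled signal."""
    dt = 1.0 / frame_rate_hz
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def export_section(samples, start, end, out_file):
    """Write samples[start:end] as CSV rows (frame index, value)."""
    writer = csv.writer(out_file)
    writer.writerow(["frame", "value"])
    for i in range(start, end):
        writer.writerow([i, samples[i]])

# Made-up example: a position-like signal recorded at 10 fps.
signal = [0.0, 0.5, 2.0, 4.5]
vel = derivative(signal, frame_rate_hz=10)

# Export the user's selected section (frames 1-2) for R/Excel.
buf = io.StringIO()
export_section(signal, 1, 3, buf)
```

In the tool, `buf` would be a file on disk, and the same frame range would also be sent, with its tag, to the learning record store.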

Contact Person: Dr. Jan Schneider, Schneider.Jan [at] dipf.de

  • Holo-dubbing

Task: You are sitting in a meeting in which a foreign language is spoken. You would like to participate in the meeting and voice your opinions, but you lack the language skills to do so. Augmented reality and speech recognition technologies can help in such situations. Several NLP services can now perform automatic, fast and reliable translations in real time. Combined with wearable augmented reality devices like the Microsoft Hololens headset, this makes it possible to create an automatic, immersive dubber that works in real time.

Expected results
      • A UWP Visual Studio application able to capture spoken English through a microphone input and transform it into subtitles;
      • The app can make use of one of the existing NLP and speech recognition libraries (Microsoft, Google, Amazon, Watson…);
      • An MS Hololens will be made available to test the application in real-time scenarios;
      • Additional features can be discussed, such as:
        • Further languages support (German, Dutch, etc.)
        • Real-time affect detection

Contact Person: Daniele Di Mitri, daniele.dimitri [@] ou.nl

  • Multimodal chess-playing

Task: The popular game of chess is an interesting learning scenario for investigating the true meaning of expertise in cognitively intense tasks. In artificial intelligence, the game of chess is usually treated as a search problem: find the optimal move, taking into account the opponent's reactions in all possible configurations. Humans, however, are not able to keep track of all the combinatorial possibilities, and for this reason they adopt a search approach that relies much more on heuristics and tactics. The scope of this multimodal application is to untangle the strategies of the players by means of sensor data and multimodal learning analytics. The nature of this task is highly explorative. The multimodal application should be able to correlate the decisions taken by the players (i.e. a move on the chess board) within a particular state of the game with the observed sensor data. The analysis can also look at the different strategies adopted by different players and reason about the differences.

Expected results:

      • Capture play sessions with one sensor among the Emotiv Insight EEG headset, the Empatica E4 wristband, or an eye-tracking device
      • Correlate the sensor data with players’ moves and board configurations
      • Analyse recurrent patterns in the sensor data and peculiarities of each individual player
      • Possible extension:
        • Compare two or more players
        • Scale to multiple integrated sensors

Contact Person: Daniele Di Mitri, daniele.dimitri [@] ou.nl
