
ChatGPT and other GenAI tools are said to be good for learning. But does their usage really empower learners, or does it overwhelm them instead? Studies from Highly-Informative Learning Analytics (HILA) programs show how complex the effects of such AI tools can be. While dashboards can potentially improve students’ learning outcomes, AI feedback can be helpful for some students and demotivating for others, depending on their feedback literacy.
In a recent presentation at the IWM Lectures, Hendrik Drachsler argues that we need more research into Didactical Intelligence – a framework for understanding when, how and for whom AI and Learning Analytics truly improve learning, and when they do not. Technology alone doesn’t guarantee better outcomes; its success depends on thoughtful integration into pedagogy. He therefore presents the Highly-Informative Learning Analytics research platform, which delivers educational interventions across diverse contexts, from schools to higher education. It supports a wide range of AI-enhanced learning activities and enables the EduTec group to systematically gather empirical evidence on the effects of these AI agents under relatively standardized conditions.
Hendrik begins his presentation by revisiting Skinner’s Teaching Machines, which introduced the concept of programmed instruction. These machines promised individualized learning through a simple, transparent chain of effects: learners could progress at their own pace, receive immediate feedback, and engage with material structured in small, manageable steps. It was a vision of personalized education long before digital platforms existed. Fast forward to today, and the conversation has shifted to AI and tools like ChatGPT. Recent literature reviews suggest that ChatGPT can support learning effectively, offering explanations, examples and even scaffolding for complex topics. But as Hendrik emphasizes, the story isn’t that simple.
Dashboards that visualize progress can improve student outcomes – but AI-generated feedback is a mixed bag. Sometimes it motivates learners; other times, it discourages them. The difference often comes down to feedback literacy: how well students understand and use feedback. For AI-generated feedback to truly benefit students, it needs to be understandable, transparent and trustworthy for all students alike. To reach this goal, we need to guide our research towards more Didactical Intelligence.
We acknowledge that AI offers exciting possibilities, but also that it raises critical questions. Working on these challenges is an opportunity to shape a future of education that empowers all students to reach their full potential.
