New pub: A Human-centric Approach to Explain Evolving Data
A recent study led by Gabriella Casalino at the University "Aldo Moro" of Bari, Italy, in collaboration with Daniele Di Mitri, highlights the importance of transparency and explainability in machine learning models used in educational environments. As we embrace the technological shift driven by AI in education, it is imperative to address the ethical considerations surrounding AI applications in educational settings. At the forefront of this study is DISSFCM, a dynamic incremental classification algorithm that uses fuzzy logic to analyze and interpret students' interactions within learning platforms. By offering human-centric explanations, the research aims to deepen stakeholders' understanding of how AI models arrive at…
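DISSFCM belongs to the fuzzy c-means family of classifiers, in which each data point receives a degree of membership in every cluster rather than a single hard label. As an illustration only (not the paper's implementation), here is a minimal pure-Python sketch of the standard fuzzy c-means updates; the function names, the toy 1-D data, and the fuzzifier value `m=2.0` are all assumptions made for the example:

```python
def fcm_memberships(points, centers, m=2.0):
    """Fuzzy membership of each point in each cluster:
    u[i][j] = 1 / sum_k (d_ij / d_kj)^(2/(m-1))."""
    u = []
    for x in points:
        d = [abs(x - c) for c in centers]
        if min(d) == 0.0:
            # point coincides with a center: crisp membership
            row = [1.0 if dj == 0.0 else 0.0 for dj in d]
        else:
            row = [1.0 / sum((dj / dk) ** (2.0 / (m - 1.0)) for dk in d)
                   for dj in d]
        u.append(row)
    return u

def update_centers(points, u, m=2.0):
    """Recompute each center as the membership-weighted mean."""
    centers = []
    for j in range(len(u[0])):
        num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
        den = sum(u[i][j] ** m for i in range(len(points)))
        centers.append(num / den)
    return centers

# Toy 1-D data with two loose groups; alternate the two updates.
points = [0.0, 0.2, 0.4, 5.0, 5.2, 5.4]
centers = [0.0, 5.0]
for _ in range(10):
    u = fcm_memberships(points, centers)
    centers = update_centers(points, u)
```

Because memberships are graded, a borderline student interaction can be reported as, say, 60% one behavioural profile and 40% another, which is the kind of interpretable output the human-centric explanations in the study build on.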