New Pub: GRIPF at TSAR 2025 Shared Task: Towards controlled CEFR level simplification with the help of inter-model interactions
Language learners make the fastest progress when reading texts that match their proficiency level. But most real-world texts are too hard, and adapting them manually is time-consuming. So the big question is: can AI automatically simplify texts to a specific CEFR level without losing meaning?

We explored exactly this in the TSAR 2025 Shared Task, where systems had to rewrite advanced English texts (B2+) down to easier levels such as A2 or B1. Our team submitted two different approaches: EZ-SCALAR and SAGA.

EZ-SCALAR works like an expert panel of AI models. Two large language models (GPT-5 and Claude) each produce their own simplification, critique each other's output, refine their versions, and then a final "judge" model picks the best result. An extended version, EZ-SCALAR Lex, adds something extra: a vocabulary check using EFLLex, a…
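The panel workflow behind EZ-SCALAR (draft, cross-critique, refine, judge) can be sketched in a few lines of Python. This is a minimal illustration under assumed interfaces, not the submitted system: `Model` is a hypothetical prompt-to-completion callable standing in for GPT-5 or Claude, and all prompt wordings are invented for the example.

```python
from typing import Callable, List

# Hypothetical interface: a model maps a prompt string to a completion string.
Model = Callable[[str], str]

def ez_scalar(text: str, target_level: str,
              simplifiers: List[Model], judge: Model,
              rounds: int = 1) -> str:
    """Sketch of the expert-panel loop: independent drafts, mutual
    critique, refinement, then a judge model selects the final output."""
    # 1. Each model produces its own simplification of the source text.
    drafts = [m(f"Simplify to {target_level}: {text}") for m in simplifiers]

    for _ in range(rounds):
        # 2. Each draft is critiqued by the *other* model on the panel.
        critiques = [
            simplifiers[(i + 1) % len(simplifiers)](f"Critique: {d}")
            for i, d in enumerate(drafts)
        ]
        # 3. Each model refines its own draft using the feedback it received.
        drafts = [
            m(f"Refine your draft.\nDraft: {d}\nFeedback: {c}")
            for m, d, c in zip(simplifiers, drafts, critiques)
        ]

    # 4. The judge picks the best refined draft (here: replies with an index).
    choice = judge("Pick the best draft by index:\n" +
                   "\n".join(f"{i}: {d}" for i, d in enumerate(drafts)))
    return drafts[int(choice.strip())]
```

With real API-backed callables plugged in for `simplifiers` and `judge`, the same loop structure applies; EZ-SCALAR Lex would additionally filter or flag vocabulary against a graded word list before the judge step.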


