How Can We Preserve Students’ Thinking and Agency in an Age of AI?
In short:
The popularisation of free AI tools increases the urgency of finding new ways to teach and learn.
There is plenty of room for innovation within current curricula and schemes of work.
As AI becomes more agentic, let’s make sure students retain their own agency.
Practically, this means more group projects, problem-based learning, and personalisation, and fewer exams.
As usual, there has been rapid progress in AI adoption and utility. ChatGPT is now the sixth most visited site in the world, the UK and US refused to sign an AI agreement, and the importance of AI companies has been elevated to the Presidential level in both the US and China. It was impossible to miss the news about DeepSeek’s R1 model, and Europe’s champion has also emerged in Le Chat by Mistral AI. The real news, however, is that AI’s paywall is gone: competition has forced companies to make state-of-the-art AIs free for teachers and students to access.

That means we should now assume that every take-home assignment involves AI use, and tell students clearly that while we encourage their exploration of new technology, we still expect them to be able to do the work without it.
A common refrain among AI-savvy teachers is that they are constrained by public exam authorities and curriculum bodies: current curricula simply don’t incentivise students to use AI ethically, because students often achieve the highest scores by letting AI write the entire essay or think for them. This “enfeeblement” was explicitly named by the Center for AI Safety as a major AI risk, so it should be a priority for teachers and school administrators to figure out how to guide and monitor students’ AI use so that they do not cede their own agency and critical thinking.

There are both technical and pedagogical answers to the issue. Technically, schools could give students education-tuned versions of AI that are instructed not to give the answer directly, replacing it with scaffolding questions that help students learn. Khan Academy was an early mover in this field, and it is becoming easier for forward-thinking schools to set up their own AI interfaces using open source software, so that they retain full control over students’ and teachers’ AI data.
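For schools running their own interface, the “scaffolding instead of answers” behaviour can be approximated with a system prompt that leads every conversation. The prompt wording and the build_messages helper below are a minimal sketch under that assumption, not any vendor’s actual product:

```python
# A minimal sketch of a "tutor mode" system prompt for a self-hosted,
# OpenAI-compatible chat interface. The prompt text and helper function
# are illustrative assumptions, not an existing product's configuration.

TUTOR_SYSTEM_PROMPT = (
    "You are a tutor. Never give the final answer directly. "
    "Instead, ask one scaffolding question that moves the student "
    "a single step closer to solving the problem themselves."
)

def build_messages(student_question: str) -> list[dict]:
    """Assemble the message list sent to the model on each turn,
    with the tutoring instruction always in the system slot."""
    return [
        {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
        {"role": "user", "content": student_question},
    ]

messages = build_messages("What is the derivative of x**2?")
print(messages[0]["role"])  # the system prompt always leads the conversation
```

Because the school controls the interface, students cannot simply remove or override the system prompt the way they could in a consumer chatbot.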
Pedagogically, teachers should assign more group projects and allow more personalisation in homework. Group projects put students in a position to negotiate and reach resolutions, which makes them great for developing agency and related soft skills like effective communication. Letting students assert themselves more in homework similarly benefits agency: for example, assigning a position paper instead of a research report, or asking them to propose what type of experiment they want to run. Presentations and viva voce exams also let us move away from high-stress, high-stakes written exams. Doing so would work well enough for now, until further research and experimentation guide us towards reformed curricula designed with AI and the future of work in mind.
Perspective: Education as a field moves slowly and cautiously, as we must when change affects the lives and futures of so many. However, because AI is now free and accessible like never before, we need to move with uncharacteristic swiftness to make sure students and teachers remain in charge and resilient against AI dependency.
What’s Next? Lots of AI literacy training, paired with mechanisms to monitor and track usage among teachers and students. Whether it’s declaration forms, regular surveys, or something more sophisticated, it is simply useful to know whether and how AI is being used in your school. Administrators can further shape and guide AI use by selecting and paying for specific tools for teachers and students, which goes a long way towards establishing transparency and accountability.
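As a rough sketch of how declaration-form data might be tallied into a class-level signal, assuming a minimal record format (the field names and purpose categories are hypothetical):

```python
from collections import Counter

# Hypothetical declaration-form records: each student notes whether and
# how they used AI on an assignment. Field names are assumptions.
declarations = [
    {"student": "A", "used_ai": True,  "purpose": "brainstorming"},
    {"student": "B", "used_ai": False, "purpose": None},
    {"student": "C", "used_ai": True,  "purpose": "editing"},
]

# Share of students who declared any AI use on this assignment.
usage_rate = sum(d["used_ai"] for d in declarations) / len(declarations)

# Breakdown of declared purposes among students who used AI.
purposes = Counter(d["purpose"] for d in declarations if d["used_ai"])

print(f"{usage_rate:.0%} declared AI use")
print(purposes.most_common())
```

Even a spreadsheet works for this; the point is that a simple, regularly collected record is enough to see whether AI use is concentrated in drafting, editing, or wholesale answer generation.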