July '25 Update from Good Future Foundation
In this edition:
Connecting, Learning and Getting Inspired at the Festival of Education
Calling All Educators: Contribute to our Research on AI and Disinformation
New Podcast Episode: Approaching AI with Cautious Optimism at Watergrove Trust
AI in Action: Why AI’s ‘Context Window’ Changes Everything in Marking Season
“But Does AI Harm Critical Thinking?”
In short:
A widely-shared MIT study does not say that AI harms critical thinking—but it does challenge assumptions about written work as a proxy for effort or learning.
In both academia and industry, assessments are shifting: live challenges, multimedia deliverables, and AI-native outputs are becoming more common.
Treat AI use like open-book tests: useful in some subjects, but not a universal default.
Last month, a much-discussed MIT Media Lab paper set off alarms: “AI harms critical thinking!” the headlines read. But the actual study said something more subtle, as Wharton professor Ethan Mollick summarised:
“52 students wrote essays. 1/3 used ChatGPT. They remembered their essay less at the time. 4 months later, 18 people came back, and the ChatGPT group were still less engaged in their essay.”
That’s not a smoking gun for AI harming thinking, but it does raise a deeper issue: writing is no longer reliable proof of thinking.
From p.137 of the MIT paper:
“The most consistent and significant behavioral divergence between the groups was observed in the ability to quote one's own essay. LLM users significantly underperformed in this domain…
…Correct quoting ability, which goes beyond simple recall to reflect semantic precision, showed the same hierarchical pattern: Brain-only group > Search Engine group > LLM group.”
It is not surprising that students are less able to quote their essays when an AI had saved them from labouring over sentence structures and phrasing. The assumption that writing equals care, effort, or skill is collapsing before our eyes. That goes beyond education: think of job applications, letters of recommendation, lesson plans, quarterly reviews.
What replaces writing as proof?
Recruiters are less interested in polished cover letters or portfolios, and more interested in what candidates can do in real time. Live challenges are more popular than ever, and companies are experimenting with AI analysis of interview transcripts, screen recordings, and even a candidate’s AI usage history: what did they ask the model? How did they edit or build on its suggestions? The goal is to assess not just the final output but the process, which is far harder to fake.
In academia, the response varies:
Raise the bar: Some professors now require students to use AI by default, then raise expectations. Work done with AI should be more ambitious, better researched, more nuanced.
Design smarter tasks: Assignments can be harder to plagiarise when they include references to in-class discussions, and require self-reflection.
Switch modalities: Interview exams, presentations, and group projects remain robust. Q&A, peer critique, and interactive discussion are still difficult for AI to fake, especially in real time.
Where this is heading
Educators are still exploring what AI-era assessment might look like. I hope that within a decade, we will start to see fully AI-native assessments in standardised testing, tasks that measure how well a student uses, critiques, and improves AI outputs. But we’re not there yet.
For now, a perspective many educators find useful is this: AI-assisted work is like an open-book test. It has its place. It makes more sense for some subjects than others. But most of the time, we still want students to demonstrate what they can do without assistance.
Not because we don’t think they should use AI, but because we need to measure what students can achieve with and without AI.
Connecting, Learning and Getting Inspired at the Festival of Education
It was an exhilarating 2 days for Good Future Foundation to be part of the Festival of Education for the first time! Our marquee became a hub for meaningful discussions and connections with more than 150 teachers during the event in early July.
Engaging Conversations and Knowledge Sharing
Our trustee, Advisory Council members, Student Council representatives, and AI Quality Mark school partners gathered in our space to connect with educators on pressing AI topics. The conversations were rich and practical, covering ethical implementation, digital wellbeing for future-ready students, combating misinformation, bridging educational gaps, and building effective AI strategies and implementation plans through the AI Quality Mark. We're deeply grateful to our speakers who contributed their wisdom, expertise, and energy throughout the event. These sessions were recorded and will soon be accessible through our teacher-focused community platform.
Launch of Good Future AI and Teacher Community Platform
A highlight was the launch of our free-forever Good Future AI tool, which helps teachers get started with using GenAI for specific tasks. We’ve prioritised the Department for Education’s product safety guidance above all else, so that data privacy and ease of use are at the heart of the tool.
We’ve also unveiled our online teacher community platform, built in partnership with Royal Grammar School High Wycombe, which allows educators to share resources and experiences around the use of AI.
Both initiatives support our teachers and schools going through the AI Quality Mark process and receiving our professional development support. We’ll be sharing more about these exciting developments in our next newsletter!
Preparing the Next Generation on Responsible Use of AI
The summer break came alive with our week-long residential AI Summer Programme, a partnership with the National Mathematics and Science College. From 21-27 July, we welcomed a brilliant group of students aged 14-17 who share a passion for AI and received 100% bursaries to participate in this intensive week of learning and discovery.
The programme immersed students in a curriculum focused on ethical and responsible AI applications in our world. Through a blend of targeted lessons, interactive workshops, and collaborative projects, students didn’t just learn about AI but experienced it. They visited Bletchley Park where they connected with the historical roots of computing, and made trips to London’s Apple HQ and the Salesforce AI Centre to witness AI’s role in industry. Beyond structured lessons, students also participated in activities including debates on AI in student assessment, creative sessions with AI art tools, and collaborative challenges to build knowledge bots.
Voices from the Programme
The impact of the week is best described by the participants themselves. One student shared her reflection on her future:
Even if AI becomes faster, smarter, or more efficient, it can’t replace my hunger to learn, desire to build, and ability to imagine. I no longer want to choose a career just because AI might not take it, I want to pursue what excites me, because that’s what makes me, me.
Another student highlighted their key takeaways:
My three key takeaways are: first, the fact that AI is extremely powerful and that positive outcomes depend on how you use it. Secondly, the days out were amazing, visiting places where AI is growing and expanding within companies. Thirdly, the evening activities were very fun and helped those who were less sociable socialise.
The transformative experience wasn’t just for students. Our educators left inspired as well. One teacher shared how the programme has reshaped their perspective on integrating AI into regular teaching practices:
Very much so and I’ve gained plenty of new ideas and platforms to signpost to my students. Got me thinking about the future of careers guidance too and how best to navigate students around the uncertainty of the working world of the future.
A Heartfelt Thank You
This programme would not have been possible without the incredible support of our partners. We extend our heartfelt thanks to:
MCC Digital for providing iPads for every student
Skriva for donating styluses to bring ideas to life
Salesforce and Apple for hosting our industry visits
Our teachers and guest speakers who shared their invaluable time and expertise.
Our amazing students whose curiosity, enthusiasm, and willingness to explore AI made this programme truly special. Your engagement and reflections on your learning were the heart of this experience.
We’re incredibly proud of this programme’s success and are already looking forward to developing and enhancing this experience to support more young people in becoming grounded, confident, and responsible digital citizens in their use of AI.
Exciting Updates on AI Quality Mark
Meet Anneliese, Project Manager Heading the AI Quality Mark
As our AI Quality Mark community grows to over 400 schools and trusts, we're also scaling up our support. We are excited to announce that Anneliese Smith has joined us as the new Project Manager dedicated to this initiative.
Many of you may have already connected with Anneliese, who has hit the ground running since joining our team in July. Bringing 15 years of experience in education and edtech, and as a former Head of Digital Learning, she’s perfectly positioned to guide schools on their AI implementation journeys.
If you haven't yet been in touch with her and would like to discuss your school's AI journey, please contact Anneliese at quality-mark@goodfuture.foundation.
Celebrating Excellence: GEMS Winchester School – Dubai Achieves Gold AI Quality Mark
A huge congratulations is in order for GEMS Winchester School - Dubai (WSD), which has officially become the first school outside the UK to achieve the Gold AI Quality Mark. This achievement reflects the school’s comprehensive and strategic approach to embedding AI across all facets of school life.
Local news outlets have spotlighted WSD’s success, recognising its exemplary commitment to responsible AI integration. Matthew Lecuyer, CEO and Executive Principal at GEMS Winchester School, captured the significance:
This accolade isn’t just a recognition of where we are today; it’s a launchpad for where our learners, staff, and wider community are heading.
The school’s leadership team, led by Leena Atkins (Head teacher and Whole School AI Integration Lead), Alicia Ramsay (Director of Learning and Teaching), and Swati Nirupam (Head of Computing), has created a model of AI integration that encompasses not just teaching and learning, but also school operations, safeguarding, and community engagement. Our assessor was particularly impressed by the shared vision of the entire team at WSD for leveraging technology to provide the very best for their students.
More about the AI Quality Mark
We've recently updated the AI Quality Mark page on our website with more details about the assessment process, criteria, and benefits for participating schools. The expanded FAQ section addresses common questions about implementation, timeframes, and support resources. If you're interested in learning more about how your school can embark on this journey, we invite you to explore these resources at https://www.goodfuture.foundation/ai-mark
Calling All Educators: Contribute to our Research on AI and Disinformation
With just a few prompts, generative AI now creates hyper-realistic images, audio, and video, making it harder than ever to distinguish fact from fiction. We also know that our students scroll through a sea of algorithmically-selected content daily. So how can schools and educators support them with the awareness and skills to navigate this complex landscape?
To help answer this question, our Advisory Council member Thomas and Student Council member Yoyo are now working with us on research into the impact of AI and disinformation on school communities. To ensure our findings are meaningful and reflect the realities of the classroom, we invite you to share your perspective. Your input will directly help us develop practical and relevant resources to support teachers and schools. Please take 5 minutes to contribute your insights through the survey below. We also encourage you to share this with other educators in your network.
New Podcast Episode: Approaching AI with Cautious Optimism at Watergrove Trust
Our latest Foundational Impact episode, recorded during Watergrove Trust’s AI workshop, features Dave Leonard (Strategic IT Director) and Steve Lancaster (AI Steering Group member) sharing their “cautiously optimistic” approach to AI implementation. Through leadership support and voluntary staff participation in their AI working group, they’ve fostered trust-wide commitment to responsible AI use while prioritising staff wellbeing.
Why AI’s ‘Context Window’ Changes Everything in Marking Season
In short:
Most AI tools can only “read” the equivalent of a few essays at a time.
Premium versions with larger reading capacity exist, but they still require teachers to juggle file sizes and prompts.
Once that limit is sidestepped, AI can run high-quality analytics that uncover growth trends, surface learning gaps, identify recurring misconceptions, and give teachers a data-rich map of what to focus on next.
The semester is nearly over. Your Year 10 History class has written four essays this term, over 100,000 words in total. Wouldn’t it be nice if the end-of-term audit didn’t have to rely on spot checks, and if an AI could analyse the entire corpus to identify learning trends and recurring misconceptions, so you could generate revision notes and quizzes tailored to your class’s needs?
Sadly, when you upload those essays to ChatGPT, it either refuses, truncates half the text, or forgets your instructions halfway through. The culprit is not your prompt, it is the model’s context window.
Think of the context window as the whiteboard in the model’s head. Everything you have typed, plus everything it has written, must fit on that board for it to think. When the board is full, earlier lines get wiped and coherence slips. Context is measured in tokens (1 token ≈ 0.75 English words): free chatbots are usually limited to a 4-8K token context (roughly 3,000-6,000 words), while paid tiers support considerably more.
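To make that arithmetic concrete, here is a minimal sketch in Python of the back-of-the-envelope estimate. The 8,000-token budget below is an assumed free-tier figure for illustration only, not a quote of any particular product’s limit:

```python
# Rough token arithmetic, using the rule of thumb that 1 token ≈ 0.75 English words.
# The 8,000-token budget is an assumed free-tier figure, purely for illustration.

def estimate_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Convert a word count into an approximate token count."""
    return round(word_count / words_per_token)

corpus_words = 100_000       # e.g. four essays each from a whole class over a term
free_tier_window = 8_000     # assumed upper end of a free chatbot's context window

tokens = estimate_tokens(corpus_words)
print(f"~{tokens:,} tokens needed, roughly "
      f"{tokens / free_tier_window:.0f}x an {free_tier_window:,}-token window")
```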
Your 100,000-word corpus weighs in at over 130,000 tokens, and that’s before you ask the AI any questions about your students’ performance. Add in variables like prompt quality and model intelligence, and a use case that worked brilliantly for one teacher can fall apart completely for another. Improving teachers’ AI literacy is surely the first step towards addressing these frustrations, as is developing school-centric tools that abstract away these technical requirements so that teachers don’t have to worry about the details in the first place.
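For the curious, here is a rough sketch of the kind of workaround such tools apply behind the scenes: estimate each essay’s token cost, group the essays into batches that fit a chosen budget, analyse each batch with the same marking prompt, then merge the per-batch summaries. The folder name and the 100,000-token budget are illustrative assumptions, not a reference to any specific product:

```python
# A sketch of batching a term's essays under an assumed context budget.
# Folder name and budget are illustrative; plug in your own values.
from pathlib import Path

WORDS_PER_TOKEN = 0.75      # same rule of thumb as above
CONTEXT_BUDGET = 100_000    # assumed paid-tier window, leaving room for the prompt itself

def estimated_tokens(text: str) -> int:
    """Approximate the token cost of a piece of text from its word count."""
    return round(len(text.split()) / WORDS_PER_TOKEN)

def batch_essays(folder: str, budget: int = CONTEXT_BUDGET) -> list[list[Path]]:
    """Group essay files into batches whose combined size stays under the budget."""
    batches: list[list[Path]] = []
    current: list[Path] = []
    used = 0
    for path in sorted(Path(folder).glob("*.txt")):
        cost = estimated_tokens(path.read_text(encoding="utf-8"))
        if current and used + cost > budget:
            batches.append(current)
            current, used = [], 0
        current.append(path)
        used += cost
    if current:
        batches.append(current)
    return batches

# Each batch can then be sent to the model with the same analysis prompt,
# and the per-batch reports merged in a final summarisation pass.
for i, batch in enumerate(batch_essays("year10_history_essays"), start=1):
    print(f"Batch {i}: {len(batch)} essays")
```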
Here’s a report and prompt I used to analyse essays from an ESL class, with the AI instructed to format its analysis around the Content-Language-Organisation framework used in the Hong Kong Diploma of Secondary Education (HKDSE). The English teachers I worked with told me they are looking forward to automating a lot of repetitive paperwork and reporting with similar approaches, although the OCR technology needed to digitise students’ handwritten work isn't reliable enough yet.
Prompt Used