Sep '25 Update from Good Future Foundation
We hope you’re having an incredible start to the new school year! As classrooms come alive with new energy, we’re delighted to share our September newsletter filled with relevant AI developments and practical ideas from educators in different settings. Here’s a quick look at what’s inside this edition:
Celebrating AI Quality Mark Achievements!
We’re delighted to see more and more schools considering how to respond to artificial intelligence, including where and how the technology should be addressed or integrated across different areas of school life.
As the new school year begins, we’re proud to announce that six schools and multi-academy trusts have already achieved their AI Quality Mark awards! Many more have met with our Project Manager, Anneliese, to learn about the framework and accreditation process.
The global reach of our initiative continues to expand, with two non-UK recipients among our newest awardees: Instituto Monseñor Dillon in Argentina (Progress Award) and Rashid and Latifa School in Dubai (Silver Award). We’re also celebrating Tring School, which has advanced from the Bronze status it earned earlier this year to Silver in the new school year, a wonderful example of continued growth and commitment!
Other new members of our growing AI Quality Mark community include Mount Kelly (Bronze Award), Fairview Community Primary (Progress Award), and LEO Academy Trust, which earned Silver status across its network of nine schools.
To provide insight into the accreditation experience, we’ve invited Cheryl Shirley, Director of Digital Learning at LEO Academy Trust, to share their journey:
Exclusive Opportunities for AI Quality Mark Schools
As sponsors of the upcoming STEM Primary Conference organised by The Science Hub and the Next Generation Schools Conference 2026 organised by Big Education, we’re pleased to offer AI Quality Mark schools complimentary tickets to both events. These conferences provide excellent opportunities to learn best practices and discover innovative teaching strategies from other schools while networking with like-minded educators. If you’re interested in attending either event, please email us to secure your free tickets.
Is the AI Quality Mark right for your school?
Whether your school is just beginning to explore AI or has already implemented significant AI initiatives, the AI Quality Mark framework provides valuable structure for learning, planning, and reviewing your approach. The framework offers clear guidance while allowing flexibility to address your school’s unique needs and priorities.
To learn more about how the AI Quality Mark could benefit your school, please book a call with Anneliese at quality-mark@goodfuture.foundation
Debunking Myths
In short:
Adoption is broad and fast. OpenAI’s usage study shows gender parity and much faster growth in low-income countries; Anthropic reports U.S. workplace use roughly doubling in two years. (OpenAI, Anthropic)
Reliability got a concrete path forward. OpenAI’s 5 Sept paper argues that current training and evals reward guessing; aligning rewards with calibrated uncertainty and abstention would cut bluffing. (OpenAI)
Oversight is intensifying. The FTC opened an inquiry into “companion” chatbots; the U.S. Senate held a hearing after parental testimonies of harm. (FTC, US Senate)
Hello, and welcome back to the new school year. I wonder how many educators out there share my experience of encountering a vocal minority who still undersell AI, either by pretending it doesn’t exist or by listing its flaws and pointing to “Achilles’ heels” like the environmental costs of training models or societal risks like job displacement. These concerns are valid and fair to surface, but on their own they are incomplete and dangerous, because students, mass media, market participants, and the rest of society are certainly not waiting to adopt AI. If we want educators to make good decisions about AI, the least we need is up-to-date facts. Here are four September-fresh stories, with links and explanations, to help you engage in positive discussions about AI.
1) Adoption is mainstream—and changing shape
As of 1 September, SimilarWeb ranks ChatGPT as the 5th most visited website in the world (up from 6th in August), ahead of popular sites such as X.com, Reddit, and Wikipedia. This popularity is a sustained trend, and it’s hard to argue that AI is just a fad.
OpenAI’s usage study released on 15 September analysed 1.5 million conversations and shows that early demographic gaps are closing: users with typically feminine names accounted for only 37% of ChatGPT users in Jan 2024, but 52% by July 2025. Usage growth in lower-income countries also ran at 4× the rate of higher-income countries, and this will no doubt continue as AI companies hand out subscriptions for free and launch affordable tiers.
“Patterns of use can also be thought of in terms of Asking, Doing, and Expressing. About half of messages (49%) are “Asking,” a growing and highly rated category that shows people value ChatGPT most as an advisor rather than only for task completion. Doing (40% of usage, including about one third of use for work) encompasses task-oriented interactions such as drafting text, planning, or programming, where the model is enlisted to generate outputs or complete practical work. Expressing (11% of usage) captures uses that are neither asking nor doing, usually involving personal reflection, exploration, and play.” (OpenAI)
Anthropic’s September Economic Index paints a similar picture of rapid and widespread adoption: 40% of U.S. employees report using AI at work, up from 20% two years ago. This rate of growth is unprecedented:
“Historically, new technologies took decades to reach widespread adoption. Electricity took over 30 years to reach farm households after urban electrification. The first mass-market personal computer reached early adopters in 1981, but did not reach the majority of homes in the US for another 20 years. Even the rapidly-adopted internet took around five years to hit adoption rates that AI reached in just two years.” (Anthropic)
Why this matters. Like it or not, adoption is happening with or without school policies. If one truly believes that AI is a net negative for education or society, that is all the more reason to test and understand it intensively and to spell out where we do and don’t want students to use AI (I’m personally partial to separating assessments into 75% pure-human and 25% AI-augmented tracks). Schools must build staff AI literacy so choices are intentional, not incidental.
2) Hallucinations: we know how to fix them now
On 5 September, OpenAI published “Why Language Models Hallucinate.” The reason is actually quite simple:
“Hallucinations persist partly because current evaluation methods set the wrong incentives… Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say “I don’t know.”” (OpenAI)
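To put rough numbers on the multiple-choice analogy (the scoring scheme here is our own illustration, not the paper’s): suppose a model is only 25% confident in an answer. Graded on accuracy alone, a guess is worth 0.25 points in expectation while “I don’t know” is worth 0, so guessing always wins. Under a scheme that awards +1 for a correct answer, 0 for abstaining, and −1 for a wrong one, the same guess is worth 0.25 × 1 + 0.75 × (−1) = −0.5, so the model does better by admitting uncertainty unless it is more than 50% confident.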
The fix is also simple in principle: new training methods will also reward abstention, expressions of uncertainty, and asking for clarification. Doing so turns reliability from an open risk into a trainable model capability. The latest models such as GPT-5 are already much less prone to hallucination than earlier models, and I wouldn’t be surprised if hallucination stopped being a problem a year from now (fingers crossed!).
Why this matters. Tools that cite sources by default, expose confidence, and ask clarifying questions before answering will unlock myriad new school-safe use cases. In truth, even if model progress were to stop today, society would still have decades of work ahead discovering how to make good use of this technology across every dimension. A student’s education is a decades-long effort, and it’s only fair that we plan for their future based on where technologies and societal trends are heading.
3) Safety moved from rhetoric to oversight
Regulators are reacting to real-world incidents, tragic cases, and rising concern about emotional dependency. On 11 Sep, the FTC issued compulsory orders to Alphabet, Meta, OpenAI, Snap, X.AI, and Character.ai, asking how they measure and mitigate risks to children and teens. On 16 Sep, the U.S. Senate Subcommittee on Crime & Counterterrorism held a hearing where parents and experts testified about harms attributed to chatbots.
It’s about time this happened, and these efforts will give rise to laws and frameworks that enable safer and more responsible application of AI technologies. OpenAI posted a special note on teen safety, freedom, and privacy, acknowledging problems within current laws and pledging to increase protection for under-18 users. Clear regulations and measures can help build trust and accountability, which should ultimately boost adoption.

Why this matters. Schools don’t get an opt-out. The real choice is engage and govern versus ignore and let usage proceed without guard-rails. Put AI on the risk register; enable data controls; define age-appropriate red lines; and teach students to disclose when they have used AI. This protects students and staff while adoption grows.
4) AI and the acceleration of progress
This one came out a bit earlier, but it’s too exciting and eye-catching not to share: George Church—the Harvard geneticist and pioneer behind many biotech breakthroughs such as gene sequencing and DNA synthesis—said in an interview he “wouldn’t be surprised” if ageing as a natural cause of death is solved around 2050. This is based on how AI is greatly compressing the time needed to discover new drugs and design new treatments, which means we can engineer billions of years’ worth of evolutionary progress “in one afternoon”. Industry is already moving AI-designed drugs into trials, and pharma is standing up shared AI platforms for discovery.
Why this matters. It’s not enough to debunk myths about AI; we also need to tell hopeful stories so people see that it can be a force for good. AI has won International Math Olympiad gold medals and underpinned Nobel Prize-winning science, and it’s incredible that students can use essentially the same models to help themselves learn and work smarter. Given current trends in model progress, not using AI (at least a little bit) will soon make as much sense as a nearsighted person rejecting glasses because they don’t want to be reliant on them.
Bonus myth-watch: “95% of Gen-AI pilots fail?”
A widely shared MIT report claiming that virtually all GenAI pilots deliver zero return made August headlines. I would take it with a pinch of salt, not least because the same lab is researching and promoting its own AI agents framework. Below, I let Ethan Mollick’s criticism of the paper speak for itself:
Try this in September
Start where you dislike the work. Pick one task you dread and let AI handle the grunt work while you keep the judgement.
Add this to your prompts: “If anything is unclear or missing, ask me clarifying questions before answering.” It nudges models towards the behaviour that reliability research is now rewarding.
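If you or your IT team build small classroom tools on top of a model API, the same nudge can be baked into every request as a system message. Here’s a minimal sketch assuming the official OpenAI Python client; the model name is illustrative, so swap in whatever your school has approved:

```python
# Minimal sketch: attach the "ask clarifying questions first" nudge to every call.
# Assumes the official OpenAI Python client (pip install openai); the model name
# below is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLARIFY_FIRST = (
    "If anything is unclear or missing, ask me clarifying questions "
    "before answering."
)

def ask(question: str) -> str:
    """Send a question with the clarifying-questions instruction attached."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use your school-approved model
        messages=[
            {"role": "system", "content": CLARIFY_FIRST},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Plan a 40-minute Year 5 lesson on equivalent fractions."))
```

Putting the instruction in the system message means it applies to every question without anyone retyping it; in an ordinary chat interface, pasting the sentence at the top of your prompt achieves the same effect.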
Style Studio: Paper-to-Palette with Image-to-Image AI
In short:
We use AI to make lessons more interesting, interactive, and personalised, without drifting from what we actually want students to learn.
Students restyle the same paper sketch into multiple movements, then judge fidelity using textbook features.
Tool-agnostic: Google Gemini (Nano Banana), Adobe Firefly, or any school-approved image-to-image option.
1) What this aims to achieve
We turn “style” from abstract labels into observable features. Students start with their own sketch, generate side-by-side versions, and practise naming what they see using precise vocabulary. The point is analysis and language, not pretty pictures. By holding composition constant and varying style, students notice patterns, argue about accuracy, and justify claims with evidence.
2) What happened in the lesson
3) Reflection
What worked. Personalisation lifted engagement; students cared about improving their piece. Keeping composition constant made differences legible, which pushed more precise vocabulary and firmer critique.
Prompts. Keep them short. Always include “keep composition” and name the movement. Add one or two hallmark features only; avoid long lists. If a prompt misfires, just try again or nudge with a single clarifying phrase (e.g., “broken colour and visible brushwork” for Impressionism; “fractured planes and multiple viewpoints” for Cubism).
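To make this concrete, a complete prompt can be as short as: “Restyle this sketch as Impressionism. Keep composition. Broken colour, visible brushwork.” or “Restyle this sketch as Cubism. Keep composition. Fractured planes, multiple viewpoints.” (These examples are ours; adjust the movement and hallmark features to your scheme of work.)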
Access, control, and safety. Start with tools your school already approves so work stays in your managed environment. Gemini is free via Google Classroom, and Canva for Education is also free for K-12 and includes image generation. Upload sketches only, not faces or personal photos.
Bonus: print and share. The best output can be used for hallway displays, portfolios, or a parent newsletter strip.
Connecting and Learning Together on our Teachers Community Platform
Our online teacher community is growing with new partnerships and resources. We’re excited to announce a collaboration with Royal Grammar School High Wycombe, launch new bite-sized AI CPD for busy teachers, and share exclusive recordings from the Festival of Education. All are available for you now on the platform.
Announcing our Partnership with Royal Grammar School High Wycombe on our Online Teachers Community Platform
Our teachers community platform continues to evolve with a newly confirmed partnership with The Royal Grammar School, High Wycombe! The platform serves as a trusted space for educators exploring AI in education, where they can access reliable resources, confidently ask questions, and find inspiration through stories and case studies from other educators implementing AI responsibly in their classrooms. We hope the platform becomes teachers’ go-to destination for ongoing learning, practical implementation ideas, and connecting with educators who share the same commitment to a thoughtful, considered approach to using AI to enhance education.
Please come join us and be part of the conversation!
Bite-Sized Learning for Busy Teachers
Looking for quick, practical ways to discuss responsible AI use with your students? Good Future Foundation has partnered with STEM Learning UK to create AI CPD Drops—short, practical learning modules designed for educators on the go.
Based on focus groups and surveys with educators, we’ve developed five specialised CPD drops: three for primary and two for secondary educators. Each module offers ready-to-use classroom activities for teaching responsible AI use. Complete all five sessions to receive a certificate and digital badge.
Our first video is now available on our community platform! Delivered by Alex More, an experienced teacher and EdTech consultant, this session explores how to help students develop digital fluency to thrive in an AI-infused world. It includes activities to help primary students understand AI concepts like bias, transparency, and consent while building critical thinking skills for both supervised classroom environments and unsupervised online spaces. Sign up today to access these resources!
Catch Up on What You Missed at the Festival of Education
Earlier this summer, Good Future Foundation took part in the Festival of Education, where we facilitated engaging conversations with more than 150 educators. We’ve captured these valuable exchanges and uploaded them to our community platform for you to access.
Featured sessions include:
“Capability, Conscience and Courage: A New Approach to Ethical Digital Childhood” by Laura Knight
“What does the world of work look like for Gen Alpha?” by Emily and Alec from GFF Student Council, facilitated by Xiu Ting
“AI Literacy and Disinformation” by Jim Knight and Daniel Emmerson
“Doing Good and Doing Well: A New Ethic for Inclusive, Equitable and Sustainable Impact” by Jim Knight, Ana R., and Daniel Emmerson
“What If We Designed Schools for Human Flourishing” by Emily and Alec from GFF Student Council, facilitated by Laura Knight and Daniel Emmerson
New Episodes on Foundational Impact Podcast
Our September podcast features conversations with two experienced educators for the new school year. Listen to Alex More discuss “Preserving Humanity in an AI-enhanced Education,” where he explores generational differences in AI perception and balancing technology with human connection. Then join Matthew King from Brentwood School in “Creating a Culture of AI Literacy Through Conversation,” as he shares how meaningful dialogue, rather than rigid policies, has shaped the school’s successful approach to school-wide AI integration.


Highlights from these conversations are now accessible on YouTube, while complete episodes remain available across all major podcast platforms.