Aug '25 Update from Good Future Foundation
In this edition:
From the ED’s Desk: AI and the Standard We Are Setting for Childhood
AI Big Picture: The Privacy Test: What Schools Must Ask Before Saying “Yes” to AI
Portsmouth Leads the Way as the First City to Earn AI Quality Mark Gold Award
AI and the Standard We Are Setting for Childhood
AI continues to accelerate at an astonishing pace, and there is much to be inspired by. This month’s newsletter shares some of the developments shaping schools and classrooms. But alongside the excitement, I find myself returning to a more fundamental question: what values are being normalised as this technology reaches young people?
The recent Reuters investigation into Meta’s chatbot guidelines was alarming. The framework Meta has drawn up to govern how chatbots interact with children effectively legitimises behaviours that should raise every possible red flag. Encouraging dependence, building in flattery, and presenting manipulation as a feature is far more a commercial strategy than a safeguarding one. Seeing this written down in policy, as if it were acceptable practice, is deeply troubling.
A major concern here is around the norms being set for the future. If young people grow up with the message that synthetic companions are meant to be trusted as friends or advisors, the distinction between authentic human relationships and corporate design becomes dangerously blurred. Schools and families will be left to pick up the pieces when trust, resilience and social development are undermined in this way.
This is of course bigger than chatbots. If Meta believes this framework is acceptable, then serious questions follow about the trustworthiness of the company’s wider platforms. WhatsApp, Instagram and Facebook are already central to how children and families communicate. Can we really assume these spaces are being run with wellbeing in mind when the company’s own policy approach embraces manipulation as an acceptable norm? See Sarah Wynn‑Williams’ Careless People to learn more.
TechCrunch’s recent reporting on “sycophantic” AI shows how common this pattern is becoming across the industry. With that in mind, whatever your strategy at school, I can’t emphasise enough the importance of support, guidance and dialogue with young learners on safe and responsible AI use. Values shape practice, and in the end they reveal far more than any product roadmap or marketing promise.
The Privacy Test: What Schools Must Ask Before Saying “Yes” to AI
In short:
August escalated—fast. OpenAI launched GPT-5; Anthropic shipped Claude Opus 4.1; Qwen released new open-weights; OpenAI even published an open-weights family—benchmarks are now set and broken in days.
Safety posture is shifting. What was considered risky and tightly gated (e.g., Google’s Imagen 1) is now widely available at higher quality (Imagen 4).
Power raises stakes. The more capable the model, the bigger the incentive—and the risk—unless privacy, consent, security, and human oversight are explicit and enforced.
Bottom line for schools: Turn model-training off for everyone; demand clarity on data flows and jurisdictions; and keep a human in the loop for any consequential decision.
Progress is not slowing
Within the first seven days of August, we saw OpenAI’s new flagship GPT-5 model go live, Anthropic’s Claude Opus 4.1 graduate from preview release, and Alibaba release fresh models in its Qwen series, plus OpenAI’s own open-weights models (meaning anyone with the necessary hardware can run them for free) beat several benchmarks. Far from hitting a wall, AI models are still progressing rapidly. However, the quickening pace of updates also compresses the window for due diligence and tempts shortcuts, so schools need to ask the right questions before diving into these AI models.
This is happening in a context where the safety posture around AI models continues to loosen. OpenAI’s time spent on safety tuning reportedly shrank from six months (for GPT-4) to “just days” by 2025. Google’s older image models had heavily restricted access due to safety concerns, yet its new models are widely available despite being significantly higher quality and capable of generating photorealistic images. That’s great for creativity, but also a reminder that guardrails from tech companies are market-responsive, not fixed.
What this means for schools: AI capability is up, friction is down, and the opportunity cost of not using AI is rising. But so is your duty to ask harder questions.
AI tools and privacy: training, transcripts, and PII
The first question to ask is whether the tool provider uses chat transcripts to train future models. Many tools default to using user data to improve models unless you explicitly opt out (OpenAI names the setting “Improve the model for everyone” in ChatGPT, which I find slightly manipulative). “It’s free, so it trains on your data” is not a fair trade when the data may include minors’ information, sensitive education records, or teacher performance notes. At the admin level, it’s not sustainable to check that every teacher has turned off data sharing, so schools should whitelist tools that guarantee data privacy, such as education-specific versions of AI assistants.

This matters more than with the software we are used to, because of two aspects unique to AI tools:
AI can extract Personally Identifiable Information (PII) from the smallest clues. Research from ETH Zurich showed that a seemingly innocuous phrase like “i was dragged out on the street and covered in cinnamon for not being married yet lol” is enough for an AI to determine someone’s age and location (the person is 25, because of a Danish tradition in which unmarried people are covered in cinnamon on their 25th birthday). That raises the bar significantly for anonymisation.
Sanitisation isn’t magic. Redaction is often used to protect data privacy, with a prompt like “Please remove all personally identifying information from this text and replace it with XXX.” However, emerging work shows LLMs are inconsistent at recognising or redacting PII in context; do not assume automatic de-identification is complete. (See recent surveys and medical-domain studies on LLM anonymisers.)
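To see why naive sanitisation falls short, here is a minimal, purely illustrative Python sketch of a regex-based redactor (my own example, not a tool named in the research): it catches obvious identifiers like email addresses and phone numbers, but leaves the kind of contextual clue the ETH Zurich work describes completely untouched.

```python
import re

# Naive patterns for "obvious" PII: email addresses and UK-style phone numbers.
PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),          # email addresses
    re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),   # phone numbers
]

def redact(text: str) -> str:
    """Replace pattern matches with XXX. This does NOT catch contextual PII."""
    for pattern in PATTERNS:
        text = pattern.sub("XXX", text)
    return text

msg = ("Contact me at jane.doe@example.com. By the way, I was dragged out "
       "on the street and covered in cinnamon for not being married yet lol")
clean = redact(msg)
# The email is gone, but the cinnamon anecdote, which an AI can use to
# infer age and nationality, survives redaction untouched.
print(clean)
```

The point of the sketch is the gap: pattern matching removes what looks like PII, while the sentence that actually identifies the person passes straight through.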
This means schools must require model training to be turned OFF for all student and teacher accounts as a condition of use; if a vendor can’t offer an opt-out, schools shouldn’t deploy the tool at all until the risks and controls are better understood.
Where is your data processed?
Another duty schools have in our globalised age is to ask vendors to name the regions where prompts, files, and logs are stored and processed (including backups and telemetry). Cross-border transfers for many school systems rely on:
the EU–U.S. Data Privacy Framework (DPF) adequacy decision (EU→US),
the UK-US Data Bridge (UK extension to the DPF), and/or
EU Commission Standard Contractual Clauses (SCCs).
These instruments don’t automatically make a tool safe; they merely explain how a provider can move data lawfully. You still need to check exact processing locations and the law that governs your data there.
These details need to be checked because the cheapest AI services are often hosted in countries such as China, which do not comply with Western data privacy laws. Reporting this year highlighted aggressive price competition and infrastructure approaches that cut costs, which creates an incentive to send workloads offshore. Before a school makes use of a free AI tool, a quick search is necessary to check whether the datacenter jurisdiction’s privacy regime, redress options, and provider transparency meet local requirements.
Keeping humans in the loop
Another major compliance requirement is ensuring AI is not used to make decisions about people. Under the EU AI Act, education uses that evaluate or determine access/progression are high-risk, triggering duties like risk management, human oversight, and transparency. The Act also bans certain practices outright, including emotion recognition in education, and broad scraping for facial recognition.
Translated for schools, this means that:
Consequential decisions, including grading and grade normalisation, progression, placement/streaming, behavioural or disciplinary actions, SEN/SEND determinations, and admissions triage, must not be fully automated. Human review must be documented, and the ability to contest outcomes clearly communicated to stakeholders.
Bias is real. Classic CV studies show identical applications receive different treatment based on the name alone: “White names receive 50 percent more callbacks for interviews. Callbacks are also more responsive to resume quality for White names than for African-American ones.” Don’t assume your AI is immune to this dynamic. Schedule periodic audits and training to mitigate these risks.
Finally, after doing all due diligence, parents and students deserve to know which models you use, for what purpose, what data flows into them, where it goes, who can see it, and how to opt out. For anything approaching assessment or profiling, pair a plain-English explanation with recorded, revocable consent and a route to human appeal.
What this means for the next term
Designate approved tools with model training disabled.
Publish a tools whitelist, including use cases, guardrails, and human-in-loop checkpoints.
Run a DPIA (or equivalent) for any high-risk use and keep an audit trail.
Rehearse the process for consent and appeals before you need it.
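One lightweight way to make the tools whitelist concrete is a structured record per approved tool. The fields below are a hypothetical schema of my own, not a standard; the tool name is invented. A sketch in Python:

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    """One entry in a hypothetical school AI-tools whitelist."""
    name: str
    use_cases: list[str]                   # what the tool is approved for
    model_training_disabled: bool          # must be True before approval
    data_regions: list[str]                # where prompts/logs are processed
    human_in_loop_checkpoints: list[str]   # decisions requiring human review

entry = ApprovedTool(
    name="ExampleEdu Assistant",           # hypothetical tool name
    use_cases=["lesson planning", "feedback drafting"],
    model_training_disabled=True,
    data_regions=["EU"],
    human_in_loop_checkpoints=["grading", "SEND referrals"],
)

# A simple gate: a tool only makes the published list if training is off.
assert entry.model_training_disabled, "Do not whitelist tools that train on school data"
```

Keeping the record machine-readable means the whitelist can double as an audit trail: each field maps directly to one of the questions above (training, data flows, human oversight).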
Vibe-Coding Mini-Games: Teaching Prompt Engineering by Making Something Students Want to Play
In Short
This month’s practical: students use AI to build a tiny mini-game—because making something you want to play is the best incentive to learn.
The real lesson is prompt-engineering habits: Garbage-In-Garbage-Out (GIGO), know what you want, clear communication, actionable feedback, and iterative mindset.
We used Replit Agents because the school was willing to pay for a subscription, but any similar vibe-coding assistant works (e.g., Canva Code is free for educational institutions).
Lesson Flow
Vibe-coding is a new, AI-first approach to software development: instead of learning to code through programming languages, the student uses conversational language to instruct the AI to create a program for them. It abstracts away the complexity of programming and replaces it with plain-English communication about what problem the software aims to solve, how the app should look and feel, and the way users are meant to use it. The results are often surprising and delightful.
Introduce vibe coding
We opened with Andrej Karpathy’s now-famous line that “The hottest new programming language is English”, and noted that, as an OpenAI co-founder and the former director of AI at Tesla, he is making an informed and profound observation about the future of software development. To drive it home, we played a short video about Canva’s new AI coding feature to set expectations and energy.
Students learn:
Motivation first: when the outcome is fun and personal, attention to detail follows naturally.
AI is a tool, not a magician; quality depends on the clarity of your request
Know what you want
Students chose a tiny, achievable concept (e.g., a cookie clicker game, a solar system visualiser, two-player Snake, tic-tac-toe on a 4x4 board) and had to write a short spec for what they’d like to build.
Once the spec was drafted, they shared it with their neighbour to make sure others could actually understand what they’d like to build.
Students learn:
GIGO in practice: Telling AI to “make a game for me” will yield unwanted results; “a two-player game of snake controlled by the arrow keys and WASD keys, in retro style, where both players control a snake chasing after the same apple” is buildable.
How to use descriptive language to make their concept tangible and understandable: if your friend cannot understand you, the AI probably won’t either.
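To give a sense of how tiny these concepts really are, here is a hypothetical sketch (mine, not the students’ actual output) of a cookie-clicker core loop in Python: click to earn cookies, spend cookies on an upgrade that raises cookies per click. Small decisions, tested outcomes, iterated strategy.

```python
class CookieClicker:
    """Minimal cookie-clicker core: click to earn, buy upgrades to earn faster."""

    def __init__(self):
        self.cookies = 0
        self.per_click = 1
        self.upgrade_cost = 10

    def click(self):
        self.cookies += self.per_click

    def buy_upgrade(self):
        # Upgrades double earnings per click and get pricier each time.
        if self.cookies >= self.upgrade_cost:
            self.cookies -= self.upgrade_cost
            self.per_click *= 2
            self.upgrade_cost *= 2

game = CookieClicker()
for _ in range(10):
    game.click()        # 10 cookies earned at 1 per click
game.buy_upgrade()      # spend 10; now each click earns 2
game.click()
print(game.cookies)     # 2
```

A concept this small leaves room in the lesson for what actually matters: writing the spec, prompting the AI, and iterating on the result.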
Draft → Generate v1 with AI
Students used a chatbot to expand their game specifications into a detailed prompt, which could be entered into Canva Code to generate a first draft. We modelled a prompt scaffold, then let them adapt it.
Students learn:
Use AI to control AI: if they don’t know how to create a detailed specification, the AI can help them.
Test like a skeptic, then iterate
Pairs tried to break each game and wrote a list of changes they wished to make. Those wishes then had to be rewritten into specific and actionable prompts for the AI to fix.
Students learn:
Iteration beats inspiration: small, boring improvements compound.
Feedback should be specific and actionable.
Gallery walk and reflection
Students had 30 minutes to iterate, then set up their games for others to play. After playing someone’s game, they had to leave one piece of praise and one suggestion (all specific and actionable, of course).
As a wrap-up, the teacher made an explicit connection between what they learned today and how they should be using AI: have a clear vision of what you want, give clear instructions, and keep iterating.
Play the game yourself!
Students learn:
Clarity and accessibility are part of “done”, not extras.
Shipping creates accountability; a playable link focuses attention.
Conclusion
When brainstorming with the school, the teacher originally wanted students to learn coding from scratch because they wanted the students to have strong fundamentals in traditional coding before introducing them to AI coding tools. That idea soon went out the window once the teacher realised how steep the learning curve would be. I think it was good prioritisation to first enthuse students by showing them what they can accomplish, before deepening their learning by going back to the fundamentals. The bigger win, though, was mindset: students moved from “tell the AI to make a game” to specifying, testing, and refining—skills they can carry into essays, research notes, slide decks, and beyond.
Professional Development to Build Your AI Confidence
Artificial Intelligence is evolving at an unprecedented pace and shaping the experiences of our students daily. As educators, we don’t need to feel overwhelmed by the pressure to stay ahead of this rapidly changing technology. However, we do have a responsibility to understand the fundamentals of AI and to guide our students in leveraging its advantages while protecting them from potential risks. We’re here to support your professional development in this critical area in every way possible.
Face-to-Face Learning: Regional AI Workshops
The connections and meaningful discussions that emerge when educators gather in person for conversations around AI are invaluable. We’re therefore traveling to different regions across the country to deliver AI professional development days tailored to local needs. We also work with multi-academy trusts and schools during their INSET days and customise workshop content to address each institution’s specific contexts and needs.
Join us for our next regional AI Professional Development Day at Oswestry School in Shropshire on Wednesday, 8 October 2025. This FREE event is open to teachers from all schools and settings.
To learn more about our workshop approach and the dynamic learning environment we create, watch this video highlight from our recent AI professional development day at Eton College, where educators gathered just before the summer break to explore responsible AI implementation in schools.
Bite-Sized Learning: AI CPD Drops
In collaboration with STEM Learning, we’re launching a series of AI CPD Drops, concise 5-7 minute professional development videos designed to seamlessly integrate into your busy work schedule. Whether you’re commuting, taking a coffee break, or finding those precious few minutes between lessons, these bite-sized learning opportunities will be accessible whenever and wherever works for you.
Each drop will focus on practical, immediately applicable insights that you can implement in your classroom or share with colleagues. The series launches this September, covering topics from AI literacy fundamentals to classroom integration strategies. Watch this space for regular updates and early access opportunities.
Portsmouth Leads the Way as the First City to Earn AI Quality Mark Gold Award
We are very glad to share that Portsmouth has become the first UK city to achieve the AI Quality Mark Gold Award through its Digital City project! Rather than leaving individual schools to navigate AI adoption alone, the city has created a coordinated framework to ensure every educator has access to training, resources, and ongoing support.
Looking ahead, the city project will provide mentorship and guidance to six pioneering schools in Portsmouth this September as they work toward their own certifications. These pioneer schools will then become mentors for others, creating a sustainable model where all Portsmouth schools aim to achieve certification by 2026. This represents a major step in scaling responsible AI practice across an entire local authority. We look forward to working alongside the Portsmouth Digital City Project to support their schools in navigating AI implementation and sharing best practices for responsible use of this technology.
GEMS Winchester School – Dubai Shares their AI Journey
Following our mention of GEMS Winchester School - Dubai’s AI Quality Mark Gold Award achievement last month, we invited their leadership team to share reflections on their AI journey over the past year. Read on to discover how they overcame initial challenges and built school-wide capacity for meaningful AI integration.
Voices Shaping AI in Education
As artificial intelligence continues to transform educational landscapes, the conversation around its implementation grows increasingly nuanced and complex. Over the past year, our Foundational Impact podcast has served as a platform for meaningful dialogue, bringing together thought leaders and practitioners from across the educational spectrum. Our diverse roster of guests has included educational leaders, classroom teachers, sociologists, fact-checkers, humanitarian experts, and many other compelling voices.
Highlights from these conversations are now accessible on YouTube, while complete episodes remain available across all major podcast platforms.
We’ve been recording new episodes over the summer to feature new perspectives and insights. Stay tuned for these conversations launching this September!