23andMe / AI / Data
AI promises to ease administrative tasks for teachers, but at what cost to student data privacy and intellectual property?
Yom Fox, Principal at Georgetown Day School in Washington DC, and I caught up at the Institute of Global Learning conference in Miami last month. We talked through the news of 23andMe filing for bankruptcy and the bigger question it raises: what happens to sensitive personal data when companies go under? That conversation quickly led us to another topic we're both passionate about: the growing use of AI tools in UK and US schools and the serious concerns around student data collection.
If a company like 23andMe can file for bankruptcy and leave millions of people wondering what happens to their genetic information, what does that mean for the edtech companies we are trusting with student data?
Yes, companies like OpenAI offer clear privacy policies, stating they don't use customer data for training by default, employ encryption standards, and retain information for limited periods. Similarly, 23andMe implemented robust protections including SSL encryption, opt-in research programs, and compliance with international privacy frameworks. They even stated that any prospective buyer should comply with their privacy policy.
And yet, despite all of that, 23andMe still suffered a major data breach in 2023 that exposed sensitive personal information for millions of users, showing that strong policies on paper don’t always translate to strong protections in practice.
In my work at Good Future Foundation, I'm learning more about the ways schools are adopting AI solutions that promise to save teachers time on marking and grading. But what happens to all that student information if these companies change hands or close down?
This is especially concerning because these tools collect personal information about children. Pupils may be writing essays and completing activities whose content is uploaded to these systems daily, building up valuable data profiles.
These profiles could contain insights into a child's learning patterns, emotional responses to challenges, subject preferences, and even indicators of neurodiversity or learning difficulties that haven't been formally diagnosed. Such detailed information, if misused, could affect a student years later.
For instance, writing samples processed through AI systems could theoretically be used to build personality assessments that might influence future employment screening algorithms, especially as more companies adopt AI recruitment tools that analyse candidates' writing patterns.
Of course, there are also positive aspects to collecting specific information, such as building personalised learning profiles that adapt content and differentiate work for each student. But there are considerations that need addressing…
While UK schools must comply with GDPR under the supervision of the Information Commissioner's Office, the US has no single law or enforcement body offering the same protections. Many teachers tell me - and, as Yom shared, the same is true in the US - that they don't fully understand what data the technology they use is gathering, where it's stored, or how long it's kept. This is totally fair - when have folks had the time to learn about this since AI bombarded its way into our lives?
Before bringing AI tools into classrooms, school leaders need clear security protocols and commitments about data handling. This is particularly important for applications built on OpenAI's API or similar services: despite their current policies, an acquisition or regulatory change could alter how student data is handled in future. If there are processes in place to protect against this, what are they and how can they be verified?
Better privacy protection methods totally exist. Techniques like Differential Privacy can analyse data without exposing any individual's information, by adding carefully calculated "noise" to the data while still preserving its statistical usefulness. (Apple Differential Privacy Overview, Harvard Privacy Tools Project)
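To make that concrete, here is a minimal sketch of the Laplace mechanism, the building block behind many differential privacy systems. The scenario, the scores and the function name are hypothetical and purely illustrative: a school wants to report how many students scored above a threshold without revealing any one student's result.

```python
# A minimal sketch of the Laplace mechanism used in differential privacy.
# The data and function name here are hypothetical, for illustration only.
import numpy as np

def private_count(values, threshold, epsilon=1.0):
    """Count how many values exceed a threshold, with Laplace noise added.

    A count query has sensitivity 1 (adding or removing one student changes
    the count by at most 1), so noise drawn from Laplace(scale=1/epsilon)
    gives epsilon-differential privacy for this single query.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: scores from a class assessment.
scores = [62, 75, 81, 58, 90, 77, 66, 84]
print(private_count(scores, threshold=70, epsilon=0.5))
```

The released number is slightly noisy, but across a whole school or trust the statistics stay useful, while no single child's record can be inferred from the output.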
I'm also working with a school in Oxford that created a simple checklist for vetting new educational technology; they now require all vendors to answer their data privacy questions in writing before purchasing.
When we choose technology for our classrooms, we're making decisions that can impact students' privacy for years to come. The 23andMe situation reminds us that even established companies with millions of users and strong privacy policies can face uncertain futures and security challenges.
Asking direct questions about data practices often reveals how seriously a company takes these concerns. Our students may not yet grasp the full implications of their digital footprint, but we should spend more time thinking about how we can make the best choices on their behalf.
Last but not least, I want to thank Yom for her continued input, inspiration and thought leadership as we navigate these complex challenges together!