A growing industry is forming around a comforting idea:
That the central risk of generative AI in education is bias.
That if we can make AI outputs more inclusive, more representative, more “fair”, then we will have made these systems safe for children.
This framing is not malicious.
It is simply incomplete.
Bias is a surface phenomenon.
Sovereignty is the substrate.
What is arriving in education right now is not just a new content engine. It is a new interpretive layer between a child and the world — one that increasingly decides what a learner sees, how they are addressed, what they are offered, and which futures are made legible to them.
When we talk about “AI tutors” and “personalisation at scale”, what we are really talking about is who gets to define a child’s identity inside a computational system.
That is not a bias problem.
That is a rights problem.
Most current AI bias frameworks operate at the level of outputs.
They ask:
Is this content gender-balanced?
Is this culturally inclusive?
Are we avoiding harmful stereotypes?
Are different groups receiving equivalent quality?
These are important questions.
But they assume something much deeper without noticing it:
That it is acceptable for a system to model a child in the first place.
When we introduce ideas like “digital twin learner profiles” or persistent child models that follow students across AI systems, we cross a threshold. We move from tools that respond to systems that pre-interpret.
The AI is no longer just answering a question.
It is deciding who the child is.
Once that happens, bias no longer lives only in content.
It lives in identity itself.
The model becomes the lens through which the child is seen, categorised, predicted, and optimised. Over time, the system’s version of the child begins to shape the opportunities the real child receives.
Not because the AI is evil.
But because models always collapse possibility into probability.
Personalisation sounds kind.
It sounds supportive.
It sounds equitable.
But in AI systems, personalisation requires profiling.
To personalise, the system must decide:
What kind of learner you are
What you struggle with
What you prefer
What you are likely to succeed at
What level you “belong” to
These decisions do not sit outside power.
They are power.
They determine:
Which explanations you receive
Which challenges you are offered
Which pathways are made visible
Which futures are subtly closed
Two children may receive equally “bias-free” content and still live in radically different epistemic worlds, because their AI tutor has learned to see them differently.
That is not an equity problem.
That is a sovereignty problem.
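To make that concrete, here is a deliberately small Python sketch. Nothing in it comes from a real product; the profile field inferred_level and the two explanations are invented for illustration. It shows how a stored label, rather than the content library, ends up deciding what each child is shown.

```python
# Hypothetical sketch -- no real tutoring product or API is being described.
# Same question, same "bias-free" content library; a stored profile label
# quietly decides what each learner actually sees.

EXPLANATIONS = {
    "foundation": "A fraction is a part of a whole, like one slice of a pizza.",
    "extension": "A fraction is a ratio of integers; write 3/4 as a decimal "
                 "and as a percentage, then show why the three forms are equal.",
}

def explain_fractions(profile: dict) -> str:
    """Return the explanation the system thinks this learner 'belongs' to."""
    level = profile.get("inferred_level", "foundation")  # the quiet decision
    return EXPLANATIONS[level]

# Two children ask the identical question.
amira = {"inferred_level": "foundation"}  # labelled early, perhaps from one bad week
ben = {"inferred_level": "extension"}

print(explain_fractions(amira))  # never sees the harder, more open-ended framing
print(explain_fractions(ben))    # is invited to go further, by default
```

Neither output is “biased” in the usual sense. The asymmetry lives in the profile, not the content.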
Children have a right that we have not yet named:
The right to be unmodelled.
The right not to be pre-interpreted by a machine.
The right to encounter knowledge without being algorithmically filtered through a profile of who they are supposed to be.
The right to change, surprise, contradict, and exceed expectations without their past being used against their future.
In human education, we understand this intuitively. We know that a teacher who locks a child into a fixed narrative — “maths kid”, “slow reader”, “troublemaker”, “gifted”, “difficult” — constrains who they can become.
AI systems do this at scale, silently, and continuously.
And no amount of output-level bias mitigation changes that.
A safe AI tutor is not one that produces the “fairest” content.
It is one that preserves:
the learner’s epistemic freedom
their right to refuse modelling
their ability to encounter the unknown
their capacity to surprise both themselves and the system
This requires architectures built on:
consent (what is being remembered, and why)
containment (what is not allowed to be inferred)
non-persistence (what is not carried forward)
sovereign identity (who owns the learner’s story)
It requires systems that treat children not as datasets to be optimised, but as authors of themselves.
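What might that look like in practice? Below is a minimal Python sketch, assuming hypothetical names such as ConsentRecord, ContainmentPolicy and TutoringSession. It does not describe any existing architecture; it only illustrates that the four principles above can be written down as explicit, inspectable constraints rather than implicit product behaviour.

```python
from dataclasses import dataclass, field

# Hypothetical sketch -- class and field names are illustrative, not drawn
# from any existing system.

@dataclass(frozen=True)
class ConsentRecord:
    """Consent: what is being remembered, and why, stated up front."""
    remembered_fields: tuple        # e.g. ("current_topic",)
    purpose: str                    # human-readable justification for that memory
    granted_by_guardian: bool = False

@dataclass(frozen=True)
class ContainmentPolicy:
    """Containment: what the system is not allowed to infer at all."""
    forbidden_inferences: frozenset = frozenset(
        {"ability_label", "predicted_attainment", "behavioural_risk"}
    )

    def check(self, attribute: str) -> None:
        if attribute in self.forbidden_inferences:
            raise PermissionError(f"inference blocked by policy: {attribute}")

@dataclass
class TutoringSession:
    """Non-persistence by default: nothing survives the session unless consented."""
    consent: ConsentRecord
    containment: ContainmentPolicy
    learner_owns_record: bool = True          # sovereign identity: the record is theirs
    _working_state: dict = field(default_factory=dict)

    def remember(self, key: str, value) -> None:
        self.containment.check(key)           # refuse even to store forbidden inferences
        if key in self.consent.remembered_fields:
            self._working_state[key] = value  # everything else is forgotten at close()

    def close(self) -> dict:
        """Hand whatever was consented to back to the learner, then forget it."""
        carried = dict(self._working_state) if self.learner_owns_record else {}
        self._working_state.clear()
        return carried
```

A real system would need far more than this, but the design choice it gestures at is simple: the defaults forget, the forbidden list is explicit, and whatever is carried forward leaves with the learner rather than staying with the platform.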
Bias exists.
It matters.
It harms.
But bias is what happens inside a system.
Sovereignty is what determines who the system is allowed to be about.
If we get that wrong, no amount of fairness auditing will protect what matters most:
a child’s right to become someone the machine did not predict.
That is the line we should be defending.