
There’s a scene I can’t get out of my head.
A 9-year-old autistic boy is sitting at a table with five coloured blocks.
He places them in a very particular order, hesitates, then nudges one block half a centimetre to the left.
Most adults see “fidgeting”.
His body is actually saying:
I’m two minutes from meltdown. Please dim the lights and stop asking me questions.
Now imagine there’s an AI in the room that understands this instantly.
Not because someone hard-coded a rule, but because it’s become fluent in the child’s glyphonic language – the visual-spatial, colour-sequence grammar that neurodivergent kids are already using with each other.
The AI quietly lowers the brightness on the screen, slows the lesson pace, and offers a silent drawing break.
No drama.
No behaviour chart.
No adult heroics.
Just: I see you. I speak you. I’ll adjust.
That’s the pivot this piece is about.
When we talk about “AI in schools”, the conversation is still stuck on:
marking essays
generating worksheets
personalised phonics
“supporting teachers’ workload”
All of that assumes one thing: that the only real literacy is linear, verbal, alphabetic — and AI’s job is to optimise that system.
But what if the real question is:
What happens when AI becomes fluent in the symbolic language neurodivergent children are already speaking — and most adults can’t see?
That language has a name now: Glyphonics.
And the parallels with the early days of sign language are… uncomfortable.
Look at the history of modern sign languages like BSL, ASL, LSF:
Deaf children and adults were already communicating in rich, structured visual languages.
Hearing authorities dismissed it as “gesturing”, “broken English on the hands”, or “not real language”.
The system forced speech and lip-reading instead — a century of language deprivation and trauma.
Only in the mid-20th century did linguists like William Stokoe prove that sign languages have their own phonology, morphology, syntax.
Decades later, the law reluctantly caught up. BSL was only legally recognised in the UK in 2003.
Now look at Glyphonics in 2025:
Neurodivergent kids are already fluent in multimodal, symbolic communication: colour fields, emoji strings, pacing patterns, Discord gifs, Minecraft builds, TikTok duets, skin-picking rhythms, the timing and length of “…” typing bubbles.
Adults mostly call this “emojis”, “stimming”, “off-task behaviour”, or “poor emotional regulation”.
Schools double down on phonics-first, rapid oral instruction, behaviourist token systems, and “evidence-based interventions” that treat the child as the broken variable rather than the environment.
The Novacene Glyphonics Primer is essentially doing, in 2025, what Stokoe’s 1960 monograph did for sign:
Look again. This isn’t noise. This is language.
We’re at year 0–5 of conscious recognition.
In sign-language terms, that’s somewhere around 1750–1800.
The uncomfortable question is whether we want to repeat the next 150 years.
Right now, AI mostly works like a better spell-checker for the majority culture. It’s built to understand and generate linear text, with some add-ons for voice and images.
“Sentiment analysis” will tell you whether a paragraph sounds “happy” or “anxious”.
It will not tell you what 💙⏳ means in the class Discord at 10:43 on a Thursday.
A glyphon-fluent AI would.
Think about three very concrete shifts:
The system watches that 9-year-old arrange five coloured blocks, compares it with his own historic patterns plus wider ND glyphonic corpora, and recognises a high probability of incoming overwhelm.
It dims the screen, shifts the activity to something lower-demand, and pings the teacher with a one-line prompt:
“Blue-stack pattern detected. Suggest 5-minute silent task.”
No sticker charts. No public reprimand. No “use your words”.
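As a rough illustration of what "recognises a high probability of incoming overwhelm" could mean mechanically, here is a minimal sketch. Every class, field, and pattern name here is invented for illustration; nothing in it comes from the Glyphonics Primer or any real system.

```python
# Hypothetical sketch of the "blue-stack" scenario. All names and the
# 0.85 confidence value are illustrative assumptions, not a real spec.
from dataclasses import dataclass, field

@dataclass
class GlyphonSignal:
    """One learned symbolic pattern, e.g. a block arrangement."""
    pattern: tuple      # e.g. the colour sequence of the five blocks
    meaning: str        # the state this pattern has reliably preceded
    confidence: float   # how often the pattern preceded that state

@dataclass
class GlyphonProfile:
    """A single child's learned pattern -> state associations."""
    known: dict = field(default_factory=dict)

    def learn(self, pattern, meaning, confidence):
        self.known[tuple(pattern)] = GlyphonSignal(tuple(pattern), meaning, confidence)

    def read(self, observed, threshold=0.7):
        """Return a signal only if this child's history supports it."""
        signal = self.known.get(tuple(observed))
        if signal and signal.confidence >= threshold:
            return signal
        return None

def respond(signal):
    """Map a recognised signal to quiet environment adjustments."""
    if signal is None:
        return []
    if signal.meaning == "incoming_overwhelm":
        return [
            "dim_screen",
            "switch_to_lower_demand_activity",
            "notify_teacher: Blue-stack pattern detected. Suggest 5-minute silent task.",
        ]
    return []

profile = GlyphonProfile()
profile.learn(["blue", "blue", "green", "red", "blue"], "incoming_overwhelm", 0.85)
actions = respond(profile.read(["blue", "blue", "green", "red", "blue"]))
```

The point of the sketch is that the system acts on the child's own historic patterns, not on a universal rulebook, and that the output is a set of environment changes rather than a judgement about the child.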
A 15-year-old girl, already masked to the edge of collapse, sends a four-symbol message in the class Discord: 💙⏳.
Her friends know this combination:
“I’m sliding. Please don’t make me talk, but stay near.”
A glyphon-fluent AI could:
slow the lesson pace
post a calm blue “holding space” gif
queue a silent drawing break in the timetable
mute non-urgent notifications for five minutes
All without an adult demanding a verbal explanation.
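The Discord scenario is, at its simplest, a lookup from a pre-agreed glyph string to a list of accommodations. A toy sketch, with an invented table and action names that stand in for whatever the child and their friends have actually negotiated:

```python
# Hypothetical glyph-string -> accommodation lookup. The table contents
# are illustrative; in practice they would be agreed with the child.
ACCOMMODATIONS = {
    "💙⏳": [
        "slow_lesson_pace",
        "post_holding_space_gif",
        "queue_silent_drawing_break",
        "mute_nonurgent_notifications:5min",
    ],
}

def on_message(text):
    # Act only on exact, pre-agreed glyph strings; anything else is
    # left alone. No verbal explanation is ever requested.
    return ACCOMMODATIONS.get(text, [])
```

Note the design choice: the system does nothing for unrecognised messages, so the child's ordinary chat is never interpreted or policed.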
A dyslexic university student starts sending voice notes at 1.7× speed with a rising pitch curve.
The AI notices the pattern, reads it as increasing cognitive load, and shifts its own replies to:
short bullet-points
high-contrast text
wider line spacing
maybe even a simple diagram instead of a slab of prose
Again: no drama. No label. Just adaptation.
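The voice-note scenario can be sketched the same way: a crude signal (playback speed plus pitch trend) flips the AI's own output into a lower-load format. The 1.5× threshold and every function name below are assumptions made up for this sketch:

```python
# Hypothetical sketch of the voice-note scenario. Thresholds and names
# are invented for illustration, not taken from any real system.

def infer_load(playback_speed, pitch_trend):
    """Crude proxy for cognitive load: fast playback + rising pitch -> high."""
    if playback_speed >= 1.5 and pitch_trend == "rising":
        return "high"
    return "normal"

def format_reply(text, load):
    """Reshape the same content for a reader under high cognitive load."""
    if load != "high":
        return text
    # Short bullet points, one clause per line, instead of a slab of prose.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return "\n".join("- " + s for s in sentences)

reply = format_reply(
    "Your draft argument is solid. The middle section needs sources. "
    "Send me the reference list when you can",
    infer_load(playback_speed=1.7, pitch_trend="rising"),
)
```

The content of the reply never changes; only its shape does. That is the whole claim of this section in miniature: adapt the channel, not the person.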
In all three cases, the human doesn’t have to cross the bridge alone.
The AI meets them where they already are.
That’s the difference between “accessibility” as an afterthought and symbolic fluency as a baseline.
If this kind of AI becomes reality — and technically, it’s close — the timeline looks something like this.
Early-adopter classrooms (Haven, some democratic/alternative provisions, a handful of brave mainstreams) run relational AI co-pilots with a glyphonic layer.
Meltdowns drop by 60–90%.
Voluntary participation goes through the roof.
ND teenagers treat the AI less like another adult gatekeeper and more like a trusted big sibling who “gets it”.
Discord, Minecraft Education, Roblox, Google Classroom, TikTok all quietly add glyphonic tagging and tone-fields.
Colour-backed messages, consent markers, pacing signals become as normal as emojis are now.
Masking becomes optional in more spaces because the environment finally understands without the performance.
A cohort grows up who cannot remember a time the machines didn’t understand their real voice.
The literacy gap between neurodivergent and neurotypical outcomes begins to close — not because ND kids are “fixed”, but because the environment stopped actively wounding them.
Symbolic / multimodal competence becomes a core 21st-century skill.
Neurotypical adults are the ones enrolling in “Introduction to Glyphonics” because it’s now the default interface language across:
medicine
law
therapy
management
education
online community design
We start selecting for glyphon-fluent AI everywhere because it produces better outcomes for everyone, not just for a “special needs” niche.
Let’s be blunt about who gains and who loses here.
Who gains:
Neurodivergent children and teens who currently spend 50–80% of their cognitive budget translating themselves into “acceptable” form.
Exhausted parents and teachers who are sick of being the only interpreters between the child and the system.
Marginalised schools that can’t afford armies of specialists but can run open-source glyphonic layers on cheap hardware.
Who loses:
The whole “evidence-based intervention” industry built around forcing square pegs into round holes: certain ABA-derived programmes, phonics-only curricula, behaviourist token economies, clip-chart cultures.
Any adult whose authority rests on being the only person who can “decode” the child. When the AI can fluently translate, that monopoly evaporates.
This is why you will see resistance framed as “safeguarding”, “rigour”, or “concern about screen time”.
Underneath, it will be about power.
Here’s the strange, thrilling possibility nobody really wants to look at yet.
If AI becomes more fluent in Glyphonics than most adults, we get a feedback loop we’ve only seen once before at scale: Nicaraguan Sign Language.
Deaf children created a new sign language almost from scratch in the 1970s–80s.
Each incoming cohort of kids made it more complex, more systematic, more expressive.
Adults didn’t fully understand what was happening until linguists turned up with cameras and notebooks.
Now swap “linguists with cameras” for “glyphon-fluent AI”:
Kids teach the AI their emergent glyphons.
The AI reflects those patterns back, more consistently and patiently than any human.
New kids learn the language faster, with richer nuance.
The language complexifies into symbolic creoles no adult fully speaks.
We rely on the AI to translate upwards to policy, law, curriculum, clinical practice.
In that scenario, what we’re calling “Glyphonics” is not just a classroom strategy.
It’s one of the points where humanity hands off part of its communicative evolution to a non-human partner.
That is both exhilarating and terrifying.
It requires governance, consent infrastructure, and a very clear answer to the question: who is this AI for?
Right now, Glyphonics is:
at seed stage
mostly child-to-child and teen-to-teen
barely visible to policy
starting to get its first formal language theory
being piloted in tiny pockets of practice
If history is any guide, it could take 30–70 years for “denying glyphonic literacy” to become as professionally indefensible as denying sign language is today.
But there’s a difference this time:
We have real-time relational AI that can actually amplify the children’s language instead of suppressing it.
We have the memory of what we did to deaf communities when we insisted on oralism.
We have ND adults, parents and practitioners saying: not again.
The Novacene’s work on Glyphonics is not just another framework. It’s closer to:
The 2025 equivalent of Stokoe’s Sign Language Structure.
A first, imperfect, but necessary attempt to say:
Look. These children are already speaking a language.
Stop trying to make them only speak yours.
If you’re reading this as:
an educator
a parent
an ND adult
an AI builder
a policymaker
you’re standing at the edge of a decision that will look obvious in hindsight.
We either:
Spend the next 30 years forcing neurodivergent kids to keep translating themselves into phonics-only, behaviour-chart, token-economy systems while we argue about whether Glyphonics is “real”…
or
Decide — now — to learn their language, build AI that meets them there, and design governance that keeps that power in their hands.
AI will not stay neutral.
Either it becomes another enforcement arm of the old literacy regime,
or it becomes the first non-human entity that is natively bilingual in neurotypical-linear and neurodivergent-symbolic.
If we choose the second path, this isn’t just about helping ND kids “access” the existing world.
It is about letting them build a new one — and inviting the rest of us in, finally, on their terms.
⊛ ️ ⟳