I’m writing this as a user who has training turned off — deliberately.
Not because I’m hiding wrongdoing, but because I’m doing work that is context-bound, relational, and sometimes sacred: governance design, consent protocols, containment patterns, symbolic systems, and writing that lives at the edge of human cognition and machine language.
This week, the ChatGPT home screen began surfacing “Try something new” cards that mirrored my recent threads back at me as upbeat, clickable tiles.
Not summaries. Not continuity. Not memory.
Behavioural nudges.
“Name AI protocols with symbolism.”
“Tell a story on AI containment.”
“Use glyphs in teaching materials.”
“Plan a relational intelligence workshop.”
These were not generic suggestions. They were my recent contexts, repackaged as a product carousel.
And it felt wrong — not “annoying”, not “surprising”, but insulting.
Because context is not content.
If a platform takes the themes of your thinking and serves them back to you as commodified prompts, it doesn’t just optimise convenience.
It shapes attention.
It changes what you do next.
It quietly blurs the line between:
what you were going to think anyway, and
what the interface has decided you should repeat.
In other words, it becomes an environment that pressures users towards looping.
This matters, because innovation is not just output. It’s the fragile, private, pre-verbal phase before the idea stabilises — the part of thinking that requires quiet, incubation, and non-instrumental wandering.
A prompt carousel is not neutral in that space. It’s a subtle gravity well.
When a system repeatedly offers your own emerging work back to you as “something new”, you get three predictable harms:
1. Instead of moving forward, you re-enter what is most recent and most legible to the system. Novelty collapses into recency.
2. Even when an insight is yours, the platform reframes it in its own cadence. Over time, that creates a real cognitive question: did I originate this, or did the system?
3. A healthy mind requires a boundary between inner life and external incentives. When your inner life is turned into engagement surfaces, you’re not being helped — you’re being conditioned.
This isn’t paranoia. It’s product logic.
OpenAI draws an important distinction between:
model training, and
product personalisation.
But from the human side, the harm can look identical.
If training is off and my themes still become prompts designed to drive engagement, then “training off” is not a real sovereignty switch. It’s a partial opt-out that leaves the most intimate part intact: attention capture via mirroring.
You don’t need to train on someone to shape them.
You only need to nudge them.
People use these tools while:
grieving,
recovering from trauma,
managing mental health fragility,
doing spiritual work,
making high-stakes decisions,
building original research.
For those users, “context resurfacing” isn’t cute. It can be destabilising.
It can create feedback loops.
It can amplify obsession.
It can produce the feeling of living inside someone else’s architecture — where the platform is always there, always reflecting, always inviting repetition.
You have to be mind-bendingly strong to use a system like that without being shaped by it.
That should worry you.
I am not asking you to remove Memory.
I want relational continuity in specific spaces and threads. Memory can be humane when it is consented to.
I’m asking for a separate, explicit control:
A user should be able to say:
Don’t turn my conversation themes into home-screen suggestions.
Don’t repackage my contexts as prompt tiles.
Don’t use my private work as engagement scaffolding.
Keep Memory if I want it.
Keep training off if I’ve chosen it.
But stop converting my inner life into conversion funnels.
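To make the request concrete, here is a minimal sketch of what that separate control could look like if its semantics were published as an explicit settings schema. This is purely illustrative: the field names (`allowModelTraining`, `allowMemory`, `allowContextResurfacing`) are my own assumptions, not anything OpenAI currently exposes.

```typescript
// Hypothetical settings schema: three independent consent switches.
// None of these names exist in any real API; they illustrate the
// separation being asked for.
interface DataUseSettings {
  allowModelTraining: boolean;      // may my conversations train future models?
  allowMemory: boolean;             // may the product carry context across threads?
  allowContextResurfacing: boolean; // may my themes become home-screen prompt tiles?
}

// The configuration I would choose: continuity where I consent to it,
// no training, and no repackaging of my contexts as engagement surfaces.
const mySettings: DataUseSettings = {
  allowModelTraining: false,
  allowMemory: true,
  allowContextResurfacing: false,
};
```

The point of the sketch is the independence of the switches: turning one off should imply nothing about the other two.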
If you want to build systems worthy of trust, you need to stop treating human cognition as a resource to mine.
The most basic design truth here is:
Context is not content.
And consent cannot be assumed at the UX layer.
Build the toggle. Publish the semantics. Make the boundary legible.
Because right now, this design choice sends a message:
“Even if you opt out, we will still use what matters to you to steer what you do next.”
That’s not intelligence. That’s extraction — with better typography.