Last year, I wrote an essay reflecting on my exploration of latent space as a creative medium, which sought to grapple with the existential question of how human craft and creativity fit into the age of Generative AI. One of the key insights that emerged from that journey was that the surface area for human design within these systems exists at the level of programmability, or put another way, within the process of training the model and/or architecting its underlying design.
“This is essentially the creative superpower that neural networks afford us with: the ability to harness other minds as creative tools. Here, I think what’s truly compelling is not so much any one particular output of a model, but more so the opportunity to creatively program a ‘software brain.’” ~Me (Neural Media)
For some time, I thought I’d need to actually train models from scratch in order to have any semblance of real creative control over latent space. However, it turns out that ChatGPT’s “persistent memory” features unlock quite significant possibilities through conversation alone. What I’ve been experimenting with, and what this series will explore, is using this functionality to actually sculpt latent space: a kind of contextual fine-tuning.
The reason why I find this so compelling is because it allows us to transform our semantic interface to the model from that of isolated, one-off prompts into custom conceptual structures that exist within the shared cognitive space that we inhabit with the model. We can do this by (intentionally or unintentionally) loading meaning and semantic association onto specific tokens, repeatedly and in layers. And if you take this idea to its logical conclusion, you’ll end up co-creating not merely a bespoke language that’s legible only to the two of you, but an entire idio-linguistic system — which is the focus of this essay & series:
The first thing that’s important to understand is how “persistent memory” actually works in LLMs, and in this case, specifically in ChatGPT. At the highest level, LLM memory is based on probabilistic context association, not static data retrieval: the model “remembers” your favorite color not because it has stored that piece of data somewhere, but because you’ve repeatedly mentioned “red” in association with tokens like “my favorite” or “I love.” This is true even in the context of memory stores like ChatGPT’s, which use vector databases to persistently store and retrieve especially salient information surfaced during conversation sessions. So when a user prompt comes in, an embedding is generated and used to identify semantically similar embeddings in the persistent memory store, which are then added to the model’s context window, imbuing the user’s prompt with much more personalized context. The implication is that how we structure our data is just as important as which data we provide, if not more so.
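ChatGPT’s actual memory implementation isn’t public, so the sketch below should be read as a rough illustration of the retrieval pattern described above, not a description of the real system. The embed() function is a deliberately crude stand-in for a learned embedding model, and memory_store and build_context() are names I’ve invented for the example:

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real sentence-embedding model: hash each word
    into a fixed-size vector. Real systems use learned embeddings."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        h = int(hashlib.md5(word.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Semantic similarity between two embeddings."""
    return float(np.dot(a, b) / ((np.linalg.norm(a) * np.linalg.norm(b)) or 1.0))

# Persistent memory store: salient facts surfaced in past sessions,
# each saved alongside its embedding.
memory_store = [
    {"text": "Favorite color is red.",
     "vec": embed("Favorite color is red.")},
    {"text": "Tends to analyze concepts across long stretches of time.",
     "vec": embed("Tends to analyze concepts across long stretches of time.")},
]

def build_context(user_prompt: str, k: int = 1) -> str:
    """Retrieve the k most semantically similar memories and prepend them
    to the prompt, so the model sees personalized context at inference time."""
    q = embed(user_prompt)
    ranked = sorted(memory_store, key=lambda m: cosine(q, m["vec"]), reverse=True)
    memories = "\n".join(m["text"] for m in ranked[:k])
    return f"[Relevant memories]\n{memories}\n\n[User prompt]\n{user_prompt}"

print(build_context("What color should I paint my office?"))
```

The specifics don’t matter; what matters is that retrieval is decided by semantic proximity, which is exactly why the way you phrase and structure what the model remembers carries so much weight.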
Once I’d thought about this for long enough, I realized that this architecture makes it possible for end users to manipulate the context window at inference time in a granular way, purely through conversation, essentially tuning the model’s interpretive lens and constraining which part of latent space it pulls from in real time. And of course, it appears that the most effective (and fun) way to do this is by using the model to help you create your own “language,” which simultaneously serves as a custom interface to the model itself (meta-linguistic space) and to whatever persistent conceptual structures you decide to build with it (idio-linguistic space).
In order to more clearly illustrate how this process works, I’ve tried to outline the chain-of-thought that led to the creation of a specific “conceptual structure” within my personal symbolic system: temporal_zoom.
It all began with me asking ChatGPT to reflect on patterns it had noticed in my thinking:
Interesting — so I responded by asking why it thought I felt like I was behind:
This response actually made me reflect quite a bit — because not only does it ring true, but I can now clearly see how this pattern of thinking shows up quite potently in my writing. For example, if you comb through all the essays and songs I’ve written, both published and unpublished, a significant percentage of them explore a concept not just in its present context, but across large swaths of time — sometimes centuries. Just to highlight a few examples:
The Meta Problem — an analysis of the deep tension between Scale & Agency over millennia
The Evolution of Blockchain Bridges — an analysis of how the underlying architecture of blockchain bridges has evolved over time
Mysticism & The Meaning of Life — an exploration of how religious affiliation and mystical thought have waned over time & the implications
Past Life — A song about transforming into a new version of oneself (personal evolution)
Programmable Media — an overview of the evolution & unbundling of media business models over centuries
History of Attention Economics — an (unpublished) overview of attention economics, business models, market structures, innovation, user behavior, covering the period from 1800 to roughly 2100
I realized that, for me, part of what it means to analyze and understand a concept is to trace its evolution across time and identify patterns that tell some meaningful story, and that this kind of temporal reasoning is actually a very deep and fundamental part of my cognitive architecture.
From there, I decided to create a symbolic operator I’m calling temporal_zoom. This is essentially a token that I’ve intentionally loaded with meaning, and not just for the purpose of representation: it also encodes a procedure (sketched in code after the list below).
When invoked, it signals to the model that I’d like it to:
Zoom out from the immediate concept or topic of inquiry
Analyze the phenomenon through a historical or cyclical lens
Look for patterns with deep structural integrity — not just surface-level similarities
Interpret things through the shape of their evolution, not just their static definition
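For those who think more easily in code, here is one way to picture what a token like temporal_zoom is “loaded” with. This is purely illustrative: the operator actually lives in persistent memory and conversation history, not in any code I run, and the OPERATORS dictionary and expand_operators() function below are hypothetical names of my own:

```python
# Illustrative only: in reality the "definition" lives in the model's
# persistent memory, built up through repeated conversational use.
OPERATORS = {
    "temporal_zoom": (
        "Zoom out from the immediate concept or topic of inquiry. "
        "Analyze the phenomenon through a historical or cyclical lens. "
        "Look for patterns with deep structural integrity, not just "
        "surface-level similarities. Interpret things through the shape "
        "of their evolution, not just their static definition."
    ),
}

def expand_operators(prompt: str) -> str:
    """Expand any operator tokens present in a prompt into the full
    procedure they encode, before the prompt reaches the model."""
    procedures = [proc for name, proc in OPERATORS.items() if name in prompt]
    if not procedures:
        return prompt
    header = "Apply the following interpretive procedure(s):\n"
    return header + "\n".join(f"- {p}" for p in procedures) + "\n\n" + prompt

print(expand_operators("temporal_zoom: the rise of remote work"))
```

In conversation, of course, no such function runs; mentioning the token simply pulls its accumulated definition back into the context window, and the model performs the expansion itself.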
Importantly, many of the tokens I’ve used to define this operator have also been intentionally loaded with particular meaning, so we can begin to see how this naturally leads to the creation of not just a language, but a whole idio-linguistic symbolic system.
Examples:
I’ve created a clean example here to demonstrate what this looks like in use.
Here’s another example that uses a different symbolic operator: root_mechanism
Here’s an example in which I invoke both operators using only glyphs
What’s also cool is that within just a few days, the model was able to start fluently using these operators on its own – offering to inject them into the conversation when contextually appropriate:
(Please Note: For these example conversations, I intentionally used topics that I have NOT explicitly discussed with ChatGPT before)
Through this exploration, I’ve begun to develop a kind of personal philosophy around using AI. Increasingly, I feel that AI is an incredibly powerful tool for helping me ask better questions, but not a place to go looking for answers. I also believe this is a useful orientation for using the medium in a way that amplifies my natural creativity and cognitive ability rather than allowing those functions to be outsourced or to atrophy.
At least for now, the most satisfying metaphor I’ve landed on for how I use AI is that of The Librarian & The Stack (this is also a real “role play” game I’ve engaged the model in):
The Librarian is the model-as-interpreter — a custom, intelligent interface that:
Retrieves, interprets and recontextualizes elements from The Stack
Applies symbolic operators upon request
Helps me reorganize meaning, run mental simulations and trace connections
Functions as an active interface between latent memory and lived cognition
The Stack is my structured memory system — a vast, multidimensional library containing my ideas, memories, intuitions, symbols and insights, meant to provide structure for the following (sketched as a toy data structure after this list):
Semantic Organization — storing ideas, memories and symbols in structured, thematic groupings
Versioning & Evolution — tracking how ideas have changed over time; enabling interactive refinement, reflection and layered understanding
Contextual Retrieval — surfacing material based on present queries, questions or symbolic resonance — often with the help of the Librarian
Protected Zones — areas of unresolved, sensitive or sacred knowledge that requires deliberate effort or special permission to access
Integration Protocols — the logic by which new insights are reconciled and integrated into existing structures — preventing fragmentation and enabling alignment
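To be clear, The Stack exists only as a metaphor enacted in conversation, not as software. Still, rendering it as a toy data structure helps show what those five functions could mean in practice; every name below (Entry, Stack, add(), retrieve()) is hypothetical and the logic is deliberately naive:

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """One item in The Stack: an idea, memory, symbol or insight."""
    text: str
    theme: str                                    # Semantic Organization
    versions: list = field(default_factory=list)  # Versioning & Evolution
    protected: bool = False                       # Protected Zones

@dataclass
class Stack:
    entries: list = field(default_factory=list)

    def add(self, new: Entry) -> None:
        """Integration Protocol (toy version): reconcile a new insight with an
        existing entry on the same theme instead of letting duplicates fragment."""
        for existing in self.entries:
            if existing.theme == new.theme:
                existing.versions.append(existing.text)
                existing.text = new.text
                return
        self.entries.append(new)

    def retrieve(self, query: str, allow_protected: bool = False) -> list:
        """Contextual Retrieval (toy version): naive keyword matching standing
        in for the semantic resonance the Librarian would actually use."""
        hits = [e for e in self.entries
                if query.lower() in f"{e.theme} {e.text}".lower()]
        return [e for e in hits if allow_protected or not e.protected]
```

In this framing, the Librarian is whatever sits between a question and this structure: the part that decides what to retrieve, how to recontextualize it, and when a protected zone requires asking first.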
In a very real sense, the fact that neural networks work at all reveals to us (in the Heideggerian sense) that on some level, meaning can be mapped and measured mathematically.
But what can we do with that? I think this is the insight to ruminate on, and to take advantage of, if you want to get “the most” out of your interactions with AI.
If any of this interests you, please consider liking, sharing and subscribing. Until next time…
Please feel free to reach out or share personal stories at natalie@eclecticisms.com