You see this? It's outrageous! It's a wrongful digital account of my physical activity. I've tapped, activated, deactivated, turned Bluetooth off, tapped some more, turned Bluetooth back on. It remains wrong. My digital library is a lie of my IRL life. How can I live with myself like that?
How can I track my progress or ask ChatGPT to suggest changes to my training plan? Not that I ever asked any LLM to set up a daily training plan, except that one time when I had 7 days left before a 23km trail race and I wanted to know how fucked I'd be due to inconsistent training, Christmas, New Year and Carnival having screwed with my training schedule.
It's not yet 9 in the morning, I'm standing in the beautiful kitchen of my mother-in-law, silently cursing at my heart rate monitor. My cortisol levels have spiked. I'm in a high stress situation. Fear is settling in. Fear Of Missing Data. FOMD.
I have been FOMDed before, by this exact same device. My Polar H10, one of the most accurate heart rate sensors in the world, has become unreliable. The day of my longest trail race, 26km, it just didn't connect to my phone's Bluetooth. There I was, in a crowded bar in a small village, about to start the greenest trail race on this holiday island without knowing how exhausted I would be.
Receipt is notifying me that I have been benched for infinity and beyond, a friendly and funny nudge to get off my arse. I want to reassure it by speaking soothing words while stroking it gently: no, I'm just without a reliable heart rate sensor. I still run, hitting 0 intensity minutes while arriving home red-faced and sweaty.
Don't fret, I evolved.
I learned how to go running without a heart rate sensor. And you can do it too if you follow this simple 3-step approach:
Step 1: Put on running gear
Step 2: Go outside the house
Step 3: Start running
It's remarkably simple, once you decide that technology has failed you.
In a world where data decides everything, doing something without leaving a trail is refreshing, once you've worked through the existential questions about the purpose of doing something that isn't measured, tracked, recorded and memorialized for eternity.
What you read on socials is the result of a meticulously fine-tuned algorithm. Even these words, typed with my thumbs on my phone, are predicted, and the temptation to just select the right spelling Apple suggests is tugging me ever so slightly away from the joy of writing towards the efficiency of producing.
Convenience (yes! spelled right for once) is everything in our lives. But what if this is our demise? What if our urge to eliminate any friction causes humanity to die?
"Get off the fear bandwagon" you say, the thought too uncomfortable to dwell on. And so, just like my kid or me, when hitting a low, you open your favorite app and get sucked away leaving an empty physical shell on the couch. By the time you come out of it, your mind is in a different state and you have forgotten our conversation. Ignorance is bliss.
AI will revolutionize how work is done, a futurist told me recently. He foresees that every workflow executed in a two-dimensional space will be automated and done by an AI agent. We will painstakingly set up processes for AI to execute and improve.
Step 1: Set up context
Step 2: Execute
Step 3: Fine tune
Fine-tune used to be called feedback. You get feedback, dwell on it, and make changes. There is no growth without feedback. Here humanity is divided. Who gives the feedback? Who decides what feedback merits attention and isn't just some biased shit spewed from the degenerate brain of some dick who sees himself as holier-than-thou because of his balls?
It's either going to be humans validating AI output and, through this, giving feedback, or AIs controlling, coercing, and caring for each other.
I'm in the camp of keeping humans in the loop, adding friction to this relentless efficiency race. LLMs' memory, their context window, is getting bigger, but it still works more like your computer's RAM than like your brain. And then there is the issue of modularity, aka context-dependence, that AI researchers haven't yet solved. A rule wouldn't be a rule if it didn't have at least one exception. I learned that in French class.
Under the hood, LLMs, large language models, predict the next token. They're trained with a mathematical procedure called stochastic gradient descent, and once trained, they pick the next word by sampling from a probability distribution. Stochastic is something that is "well-described by a random probability distribution".
Don't run from big words!
Whether a specific word shows up in a text follows a pattern, the probability distribution. Just like those sequences in math class where you had to "predict" the next number, every word has a probability of showing up in a sentence. In very simple terms, this probability distribution could be "there's an 80% chance that every 5th word is the word the". Of course this probability changes depending on the context (Bible vs comic), what word came before, whether you are a native English speaker or talking Spanglish. For this artificial probability rule to be a true representation of reality, LLMs need text. And the more the better.
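If you want to see the idea rather than read about it, here's a toy sketch in Python. Everything in it is invented for illustration: a real LLM learns distributions over tens of thousands of tokens, conditioned on everything that came before, not just the previous word.

```python
import random

# A toy "language model": a hand-written table of next-word
# probabilities, conditioned only on the single previous word.
# The words and numbers are made up for illustration.
next_word_probs = {
    "the": {"dog": 0.5, "race": 0.3, "sensor": 0.2},
    "dog": {"runs": 0.6, "sleeps": 0.4},
    "race": {"starts": 0.7, "ends": 0.3},
}

def sample_next(word: str) -> str:
    """Draw the next word from its probability distribution."""
    dist = next_word_probs.get(word, {"the": 1.0})  # crude fallback
    return random.choices(list(dist), weights=list(dist.values()))[0]

sentence = ["the"]
for _ in range(5):
    sentence.append(sample_next(sentence[-1]))
print(" ".join(sentence))  # e.g. "the race starts the dog runs"
```

That's the whole trick, scaled up a few billion times: no understanding, just "given what came before, which word is likely next?"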
And this is where shit is going to hit the fan.
We are entering a self-reinforcing feedback loop. Human text is used to create the probability distributions LLMs use. People use LLMs to create text, also known as slop because of its lack of depth. Humans, driven by efficiency and forgetting how to sit uncomfortably in front of a sentence that just doesn't sound right, just hit publish and feed their slop into the digital world. All this slop is fed back to LLMs to fine-tune their probability distributions. But as more text is generated by LLMs, LLMs more often learn "the output I create is based on correct probability assumptions". They've reached a point of arrested development because of bad data.
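Here's a deliberately crude simulation of that loop, again with made-up words and numbers (researchers study the real thing under the name "model collapse"). The "model" is just a table of word frequencies; each generation it publishes a small sample of text drawn from itself, then retrains on that sample. Any word that happens to miss a sample drops to zero probability and never comes back.

```python
import random
from collections import Counter

# Toy feedback loop: sample from the model, retrain on the sample.
# Rare words vanish whenever a small sample misses them, so the
# vocabulary shrinks and the distribution narrows, generation by
# generation.
vocab = {"the": 0.40, "run": 0.20, "trail": 0.15,
         "heart": 0.15, "broccoli": 0.07, "cortisol": 0.03}

random.seed(1)
for gen in range(10):
    # "Publish" 30 words of slop drawn from the current model.
    sample = random.choices(list(vocab), weights=list(vocab.values()), k=30)
    # "Retrain": the new model is just the sample's word frequencies.
    counts = Counter(sample)
    vocab = {w: c / len(sample) for w, c in counts.items()}
    top_word, top_p = max(vocab.items(), key=lambda kv: kv[1])
    print(f"gen {gen}: {len(vocab)} words left, '{top_word}' at {top_p:.0%}")
```

Run it and watch "cortisol" and "broccoli" disappear while "the" eats the distribution. Grey, bland, tasteless mass, in six lines of arithmetic.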
But you and I are going to miss out on so much more.
We're losing depth. We're losing nuance. You can see this in output from students using LLMs to generate essays. The word and topic choices of those who use LLMs are more similar to each other than those of students who only use their brains (full paper). A new term enters our vocabulary: cognitive debt. The stuff you don't know because you outsourced the thinking.
We are further blending everything into the same grey, bland, tasteless mass. Globalization of our minds.
The stakes are high. Imagine 20 kids having the same opinion about the Amazon rainforest, the usefulness and beauty of IKEA flatpack furniture, whether erecting walls is a valid path towards peace and freedom, what the best boy band is, whether homeless people should be shot, broccoli as a superfood, or whether universal basic income really makes lazy fuckers out of us all.