
We live in curious times: never have we produced so much value without realizing it. Every time we ask a chatbot a question, correct a wrong answer, ask for an image to be redone, or better describe what we want, we are feeding a machine that learns from our minds. The catch is that all this cognitive effort goes uncompensated. On the contrary: after freely offering this precious material, we still have to pay to access the refined result. It’s as if we mined gold for free and then had to buy the finished jewel in the shop window.
This provocation raises a question: what if every prompt we typed were recognized as a cognitive asset? What if, instead of giving away value without return, we were paid for the invisible contribution we add to artificial intelligence?
Training artificial intelligence models is expensive. In 2023 alone, OpenAI is estimated to have spent more than US$100 million training advanced versions of GPT. Google is said to have spent even more on Gemini, between cloud infrastructure and data. But what really makes these models useful is not just supercomputers or sophisticated algorithms. It's human participation.
A central concept in this process is RLHF (Reinforcement Learning from Human Feedback). In simple terms, it means that people interact with the model, evaluate responses, correct mistakes, and indicate what's better. This human refinement is what makes the difference between a robot that responds coldly and an assistant that seems to understand our intentions.
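A minimal sketch, in TypeScript, of the kind of preference data an RLHF pipeline collects: a human compares two candidate answers to the same prompt, and the comparison becomes a training record for a reward model. The names and fields here are illustrative assumptions, not any lab's actual schema.

```typescript
// Hedged sketch: the preference-pair data behind RLHF.
// A rater sees two candidate answers to one prompt and marks
// which one they prefer; reward models are trained on large
// collections of exactly these comparison records.
// All names below are illustrative, not a real lab's schema.

interface PreferencePair {
  prompt: string;     // what the user asked
  chosen: string;     // the response the human preferred
  rejected: string;   // the response the human passed over
  raterId: string;    // who gave the feedback
  timestamp: number;  // when the comparison was made
}

// Turn a single human comparison into one training record.
function recordComparison(
  dataset: PreferencePair[],
  prompt: string,
  responseA: string,
  responseB: string,
  preferredA: boolean,
  raterId: string
): void {
  dataset.push({
    prompt,
    chosen: preferredA ? responseA : responseB,
    rejected: preferredA ? responseB : responseA,
    raterId,
    timestamp: Date.now(),
  });
}

// Example: one everyday interaction becomes one training signal.
const dataset: PreferencePair[] = [];
recordComparison(
  dataset,
  "Summarize this contract in plain language.",
  "Here is a plain-language summary: ...",
  "CONTRACT SUMMARY. CLAUSE 1: ...",
  true, // the human preferred the plainer answer
  "rater-042"
);
```

Notice that nothing in this record distinguishes a paid annotator from an ordinary user picking the better of two regenerated answers: the signal is the same.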
It is true that companies already pay large sums to access licensed datasets, and that thousands of workers are hired for annotation tasks, including RLHF. These professionals correct responses, classify outputs, and refine models in an organized and compensated way.
But, in parallel, there is another immense group: the common user. Millions of people who, when interacting with chatbots in their daily lives, make adjustments, give feedback, reformulate questions, and unknowingly provide valuable data that also helps models improve. This mass of contributions receives nothing in return.
In other words: while some companies and professionals are paid, billions of interactions from ordinary users remain invisible and free.
A prompt may look like just a sentence typed into a text box. But for an AI, it’s much more. It’s a unit of information that carries context, intention, creativity, emotion. It’s a fragment of the human mind transformed into language.
And if prompts have a real impact on the quality of AI, why are they still treated as disposable, without exchange value? We are looking at a cognitive asset that has not yet been formally recognized.
Let’s call a cognitive asset any human contribution that creates value in training or refining artificial intelligence. It’s not just raw data (photos, old texts, browsing histories), but intentional interaction: questions, corrections, comments, reformulations.
This asset has unique characteristics:

- It's scarce: each person thinks in a singular way, with unique experiences and intuitions.
- It's valuable: it directly improves the performance of billion-dollar systems.
- It's measurable: it can be recorded, quantified, and even compared (a sketch of what such a record could look like follows below).
- It's invisible: in the current model, there is no recognition or reward.
In other words, prompts are like nuggets of gold we hand over for free to digital mining giants. The companies collect them, refine them, turn them into bars, and then sell them back to us, now at a price.
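To make "measurable" concrete: here is a minimal sketch of how a single contribution could be captured and quantified. The record format and the deliberately naive scoring rule are assumptions made for illustration, not an existing or proposed standard.

```typescript
// Hypothetical shape of one recorded cognitive contribution.
// The fields and the scoring weights below are assumptions
// made for illustration, not an existing standard.

interface CognitiveAsset {
  contributorId: string;
  kind: "question" | "rating" | "reformulation" | "correction";
  content: string;
  createdAt: number;
}

// A deliberately naive valuation: corrections and reformulations
// tend to carry more training signal than raw questions, so they
// get higher placeholder weights. Real pricing would be harder.
function score(asset: CognitiveAsset): number {
  const weights: Record<CognitiveAsset["kind"], number> = {
    question: 1,
    rating: 2,
    reformulation: 3,
    correction: 5,
  };
  return weights[asset.kind];
}

// Example: a user's correction is recorded and quantified.
const correction: CognitiveAsset = {
  contributorId: "user-123",
  kind: "correction",
  content: "The treaty was signed in 1648, not 1848.",
  createdAt: Date.now(),
};
console.log(score(correction)); // 5
```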
The hypothesis is simple: every time a user interacts with AI, they generate value. That value could be recognized in the form of micropayments, credits, royalties, or even profit-sharing. Just as artists are paid when their songs are streamed, why shouldn't we be paid when our inputs are used to train or improve models?
The impacts would be profound:

- Redistribution of value: The AI economy today concentrates billions in a few companies. Paying users would mean distributing wealth more fairly, since everyone participates in the construction.
- Valuing time and creativity: What is currently seen as "casual use" would be reclassified as work. A good idea, a creative formulation, a useful correction: all of that would have recognized value.
- New forms of digital income: In a world where automation threatens jobs, micropayments for cognitive contributions could become a way to supplement income. Every conversation with AI would become not only personal learning but also a source of revenue.
- Cultural shift: We would stop seeing ourselves merely as "users" and begin to assume the role of co-creators of artificial intelligence.
Technology already offers clues. The logic of Web3 and smart contracts shows it's possible to record authorship, distribute royalties, and track contributions transparently. Imagine:

- Each prompt registered as a small unit of authorship on-chain.
- Models recognizing which inputs were used in their refinements.
- Smart contracts automatically distributing micro-compensations whenever that knowledge is reused.
This is not a distant utopia: similar systems already exist for NFTs (digital art), RWAs (tokenized real-world assets such as real estate), and music (automatic royalties). What's missing is applying the same reasoning to what we call a cognitive asset.
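As a thought experiment, here is a minimal sketch of the royalty logic such a smart contract might encode, written as plain, self-contained TypeScript rather than deployed chain code. The registry shape, the equal split, and the per-run payment are all illustrative assumptions, not an existing protocol.

```typescript
// Hedged, self-contained model of the royalty logic a smart
// contract could encode. No real chain or protocol is used here;
// the registry shape and the payout policy are assumptions.

interface PromptRecord {
  id: string;
  authorAddress: string; // who would receive royalties
  contentHash: string;   // the prompt itself stays off-chain
  registeredAt: number;
}

class PromptRegistry {
  private records = new Map<string, PromptRecord>();
  private balances = new Map<string, number>(); // address -> accrued units

  // Register authorship: only a hash goes on the ledger,
  // mirroring how NFT metadata is usually referenced.
  register(id: string, authorAddress: string, contentHash: string): void {
    this.records.set(id, {
      id,
      authorAddress,
      contentHash,
      registeredAt: Date.now(),
    });
  }

  // When a model provider reports that registered prompts were
  // reused in a refinement run, split the payment among their
  // authors. Equal split is a placeholder policy.
  distribute(usedPromptIds: string[], totalPayment: number): void {
    const authors = usedPromptIds
      .map((id) => this.records.get(id)?.authorAddress)
      .filter((a): a is string => a !== undefined);
    if (authors.length === 0) return;
    const share = totalPayment / authors.length;
    for (const author of authors) {
      this.balances.set(author, (this.balances.get(author) ?? 0) + share);
    }
  }

  balanceOf(address: string): number {
    return this.balances.get(address) ?? 0;
  }
}

// Usage: two registered prompts are reused in one training run.
const registry = new PromptRegistry();
registry.register("p1", "0xAlice", "hash(p1)");
registry.register("p2", "0xBob", "hash(p2)");
registry.distribute(["p1", "p2"], 0.10); // 10 cents for the run
console.log(registry.balanceOf("0xAlice")); // 0.05
```

Even this toy version exposes the hard parts: how to weight contributions by usefulness rather than splitting equally, how to verify which prompts a provider actually used, and how to price a "run" at all.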
These are real issues without simple answers. But the central point is not to offer a perfect system right now; it's to open the debate. Today, the asymmetry is total: we give away for free and then pay to get it back. Some companies and workers are compensated, but the vast crowd of ordinary users that sustains AI still goes unrecognized.
Maybe it’s time we start demanding alternatives.