ChrisF | Starholder
Editor’s note: This is a conceptual exercise initiated to show new patterns of systems-based thinking and expand the design space of AI x Crypto. There is no project planned. Paper is for demonstration and educational purposes only.
By @ChrisF_0x, @awrigh01, @poof_eth
Modern digital products are largely built around capturing human attention – a scarce resource treated as currency in the traditional internet economy. From social media feeds to streaming platforms, success is measured by eyeballs and engagement time, monetized via advertising or subscriptions. This attention economy model has dominated for decades, but we are at an inflection point. Advances in AI and blockchain are enabling a shift from products as static, finished outputs for humans toward products as dynamic, computational “seeds” that grow and generate value through algorithmic processing. In this new paradigm, tokenized compute consumption – not human attention – becomes the primary unit of value in digital systems.
At the core of this thesis is a simple but radical idea: Investing compute into a product yields compounding improvements, akin to earning “interest” on an initial principal. We call this computational interest, referring to the accruing value that results when AI-driven processes continually refine and expand a product’s capabilities. Instead of one-and-done content or software releases, creators will publish parameterized seeds – essentially code or model blueprints – that require further token-backed computation to fully realize their potential. As these seeds are fed with processing power (paid for in tokens), they iteratively improve or generate richer outcomes over time. The product’s value grows in proportion to the compute invested, rather than the user attention captured at launch.
This white paper presents a visionary and technical framework for such a token-based computational value creation system. We explore how token consumption (e.g. spending crypto-tokens for AI processing) can be harnessed as the metric of value, supplanting click-throughs and watch time. We introduce key building blocks – Reasoning Rights, Computational Arbitrage, Intelligence Middleware, Computational Catalysts – that enable this ecosystem to function. The design of a native token called Catalyze (CTZ) is detailed, including its role in allocating AI reasoning capacity, rewarding optimization, powering parametric product models, and yielding ongoing returns. We then outline a technical vision for implementing this on existing blockchain infrastructure (EVM-compatible chains or specialized networks like Bittensor’s TAO), and rank several high-impact application domains (e.g. code optimization, algorithmic trading, adaptive UI/UX) ideally suited for early deployment of this paradigm. Finally, we discuss how economic value flows back to the original seed creators, aligning incentives for innovation. Diagrams and tables are provided to compare the traditional attention economy with the emerging compute economy, to visualize token flows across AI compute layers, and to illustrate a sample lifecycle of a seed growing via computational interest.
Our aim is to educate and inspire frontier technologists, builders, and operators. This is both an explanatory resource and a strategic call-to-action. By the end, we hope to convey not only how such a token-driven compute economy could work in practice, but why it represents a new paradigm of creation – one where value is unlocked by algorithms and tokens fueling continuous growth, rather than by mere human attention. The age of AI-native products demands new economic thinking; here we propose one possible blueprint for that future.
In order to appreciate the proposed shift, it’s important to contrast the attention-centric model of value with a compute-centric model. In the traditional attention economy, user attention is the commodity: platforms compete for finite hours in the day that a person can focus on content (Attention economy - Wikipedia). The design of products, therefore, optimizes for stickiness, virality, and engagement above all. Value extraction often comes indirectly (e.g. showing ads to the captive audience or harvesting user data). This model has limitations – human attention is capped and easily fatigued – and has led to well-documented downsides (information overload, clickbait content, etc.).
In an AI-driven compute economy, the focus shifts from capturing attention to performing useful computation. Here, token consumption (compute spend) is the direct measure of value generation – each useful processing cycle expended (and paid for in tokens) contributes to improving a service or solving a problem. Rather than battling for user screen time, products compete to execute more complex or more numerous computations that deliver tangible improvements. The “consumer” of content in many cases might not even be a human end-user, but an AI agent or another program. As AI researcher Andrej Karpathy noted, “It’s 2025 and most content is still written for humans instead of LLMs. 99.9% of attention is about to be LLM attention, not human attention.” (Have humans passed peak brain power? | Martin Signoux). In other words, AI agents and models are poised to become the dominant consumers of digital content and services. This foreshadows a world where machine attention (processing cycles) far outstrips human attention, and where optimizing for AI consumption becomes critical.
To clarify this new paradigm, Table 1 compares key aspects of the attention economy versus a tokenized compute economy:
| Aspect | Attention-Based Economy | Tokenized Compute Economy |
| --- | --- | --- |
| Unit of Value | Human attention (time, clicks, engagement) (Attention economy - Wikipedia) | Compute consumption (AI processing cycles paid in tokens) |
| Core Scarce Resource | User focus and cognitive bandwidth (limited hours per day) | Processing power & reasoning capacity (GPU/TPU, model time) |
| Product Design Goal | Captivating outputs to maximize user retention and virality | Parametric seeds that spur ongoing algorithmic computation |
| Monetization Model | Indirect (advertising, subscriptions, data monetization) | Direct (token fees for compute, “yield” from computational work) |
| Value Growth Driver | Network effects, viral spread, more users = more attention | Compute investment – more processing yields a better product (Navigating the High Cost of AI Compute \| Andreessen Horowitz) |
| Optimization Focus | A/B tests, UI tweaks to increase clicks and views | Algorithmic improvements, model tuning to increase output quality |
| Example Products | Social media apps, video streaming platforms, news sites | Decentralized AI services, autonomous trading bots, self-optimizing software |
Table 1. Attention Economy vs Tokenized Compute Economy. In the attention model, value is tied to engaging finite human attention (Attention economy - Wikipedia). In the compute model, value scales with the amount of useful computation performed. AI “attention” (processing) becomes the dominant currency (Have humans passed peak brain power? | Martin Signoux), and tokens directly incentivize computational work rather than eyeballs.
Crucially, AI-native systems have shown that adding more computation directly improves the product, in a way that adding more user attention does not. Empirical observations in the AI industry confirm this: “The generative AI boom is compute-bound. It has the unique property that adding more compute directly results in a better product.” (Navigating the High Cost of AI Compute | Andreessen Horowitz). Unlike traditional software where returns on additional R&D or hardware eventually diminish, modern AI models (like large language models) keep improving with more training data and bigger compute budgets. This linear (at times super-linear) relationship between compute and quality is a driving insight. It means that to increase the value of an AI-powered product, one can literally pour more computation (and thus more tokens) into it. For example, doubling the parameter count of a model or running an algorithm for more iterations can yield a measurably better outcome, akin to a factory that produces higher-quality goods by simply running longer or with more resources.
Equally important is the impending scale of machine-to-machine interaction. When AI agents consume content or call APIs, they do so tirelessly and at volumes no human could. An AI service might make thousands of model inferences or transactions per second – something only possible when the “customer” is another algorithm. This leads to a future where much of the demand for digital products comes from AIs, not humans. In economic terms, the demand curve shifts: instead of competing for 8 billion humans each with limited hours, one could be serving 8 trillion AI agent requests with practically unlimited appetite for computation. The limiting factor becomes compute supply and efficiency, not human population or interest. Already, we see early signs of this in web traffic patterns – bots and AI scrapers often constitute a majority of internet traffic. Rather than fight this as a problem, the compute economy embraces AI consumption as the new source of value.
In summary, the paradigm shift is from designing products to hold human attention as long as possible, to designing products to spark as much useful computation as possible. Value is no longer what users do with the product, but what AI algorithms do with it. A well-designed computational product invites continual AI processing – learning, optimizing, iterating – which in turn continuously enhances the product. This cyclical growth through compute is what we mean by Creation in the Token Economy: creation of value via tokens fueling computation.
What does a product look like in this new model? We introduce the concept of a parameterized computational seed. Rather than a polished, final application delivered to an end-user, a seed is more like a proto-product: it encapsulates an initial state, a set of parameters or rules, and is designed to be incomplete or under-optimized on purpose. This is not a bug but a feature – the seed requires further computation to bloom into its full potential. In practice, a seed could be an AI model with moderate training that can be fine-tuned further, a piece of software with adjustable configurations that can be optimized, or a dataset that can be continually enriched. The key is that it’s parametric (its behavior can be adjusted/improved by changing parameters or running additional processing) and it is intentionally not fully optimized upfront.
The core thesis is that these seeds, once deployed, will attract token-backed compute from the network or users, which unlocks more value over time. We liken this to depositing money in a bank and earning interest – except here the “principal” is an initial product state and the “interest” is the improvement gained from computation invested. Thus, computational interest is generated. Just as financial interest compounds, if a seed’s improvements make it more useful, it will draw even more usage (and further compute investment), creating a positive feedback loop. A product that starts as a small seed might, after a series of token-fueled compute cycles, become far more sophisticated and valuable than its initial version.
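To make the compounding dynamic concrete, here is a minimal Python sketch of a seed whose quality improves with each round of token-funded compute and whose improvements attract more compute the following round. The `simulate_seed` function, its growth constants, and the diminishing-returns exponent are illustrative assumptions, not parameters of any specified protocol.

```python
# Toy model of "computational interest": each round, token-funded compute improves
# a seed's quality (with diminishing returns), and the improvement attracts more
# compute the next round. All constants are illustrative assumptions.

def simulate_seed(quality=1.0, compute_ctz=100.0, rounds=8,
                  returns_exponent=0.7, attraction=0.5):
    for r in range(1, rounds + 1):
        improvement = 0.01 * compute_ctz ** returns_exponent  # quality gained this round
        quality += improvement
        compute_ctz *= 1 + attraction * improvement           # better seed -> more demand
        yield r, quality, compute_ctz

for rnd, q, c in simulate_seed():
    print(f"round {rnd}: quality={q:.3f}, compute attracted for next round={c:.1f} CTZ")
```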
It’s useful to illustrate how a seed’s lifecycle might play out in such a system. Figure 1 shows an example lifecycle of a parametric product seed evolving through computational investment:
Figure 1: Sample seed evolution lifecycle. A creator publishes a new parametric seed (left), such as a partially trained model or algorithm. The seed then attracts compute investment – token-funded processing (middle left) contributed by network participants or users aiming to improve or use the seed. This processing transforms and grows the seed’s capabilities (middle right), for instance by refining the model’s accuracy or generating richer output. As the seed’s performance improves, it yields valuable outcomes or rewards (right), which can be distributed to both the contributors (as usage results or token rewards) and back to the seed’s creator. The presence of these yields then encourages further interest and token investment (bottom arrow looping back), either by re-investment from earlier participants or by attracting new users who see the improved value. Over multiple cycles, the seed accrues computational interest, compounding its value through each round of compute funding.
In such a lifecycle, the initial creator of the seed essentially plants a digital asset that can grow autonomously. Instead of trying to launch a “finished” product to immediately capture users, the creator releases something akin to an evolving smart contract or AI agent that others in the ecosystem have an incentive to run and improve. The end-users of the product may only see the final evolved form (for example, a highly accurate AI service), but under the hood, a marketplace of token-driven computation has been at work to get it there.
One real-world analogy to this model is the practice of open-source software that improves over time with community contributions. However, previously it was hard for original developers to capture the monetary value of community improvements (beyond goodwill or donations). In a tokenized compute system, by contrast, improvements are tied to token flows, so there are built-in mechanisms to reward those improvements, including the original seed authors. We will detail the economic design for that in later sections (see Value Capture for Seed Creators), but at a high level, one can think of the seed as having a form of on-chain royalty or dividend: as the seed generates activity and tokens change hands to pay for compute, a fraction can be routed back to the creator automatically. This ensures the creator benefits from the computational interest their seed accrues, much like an investor earning interest on a principal or an artist receiving royalties on subsequent performances of their creation (Chain Insights — How Can NFT Creators Enforce Royalty Fees Once and For All? | by Chain | Medium).
It’s important to note that not all computational work is valuable – the system should encourage useful computation (that which improves quality or achieves goals) and discourage waste. This is where careful token incentive design comes in (discussed in the next section). By aligning token rewards with measurable improvements or successful task outcomes, the network can channel compute power toward productive ends (like training better models, finding more efficient algorithms, etc.) instead of pointless number-crunching. In essence, computation becomes a new form of work that is rewarded in tokens only if it produces something of merit (similar to how Bitcoin miners do work but only get rewarded if they find a valid block, ensuring their work secures the network (TAO Token Economy Explained. In January 2021, the first Bittensor… | by Opentensor Foundation | Medium)).
The notion of computational interest also implies that some products may become autonomous value accrual vehicles. Imagine deploying an AI agent seeded with a strategy that allows it to continuously improve itself (within bounds) using profits it earns. For instance, an algorithmic trading bot might start with modest abilities, use initial token funding to trade and learn, then reinvest part of its profits into accessing more computing power or buying better data, thereby improving its trading strategy iteratively. Over time it could grow its capital and capabilities without much human intervention – the tokens fueling its compute act like reinvested interest. This blurs the line between user and product: the product itself behaves like an investor of compute. While such autonomous agents need guardrails, they exemplify the transformative potential when products are not static tools but self-improving entities in a token economy.
To summarize, parameterized seeds are the new unit of product design, meant to be catalyzed by further computation. They unlock the ability for continuous improvement post-deployment, funded by tokens. This approach aligns naturally with the state of AI technology, where models are rarely “done” – they can always be fine-tuned with more data or made more efficient with more optimization. It also resonates with trends in DevOps (constant iteration) and Web3 (community-driven upgrades). By explicitly baking in the expectation of post-launch compute investment, we create a structure where everyone is incentivized to contribute resources to make the product better, knowing that those contributions are rewarded and that the product’s value can grow unbounded rather than stagnating after an initial launch spike.
Building such a computationally-driven token economy requires several foundational primitives – core concepts or components that enable the overall system to function. We introduce four key primitives here, which form the vocabulary for our framework: Reasoning Rights, Computational Arbitrage, Intelligence Middleware, and Computational Catalysts. Each plays a distinct role in balancing the supply and demand of compute, aligning incentives, and ensuring the system’s outputs remain useful and coherent.
In a world of scarce AI compute and potentially infinite tasks, there must be a mechanism to decide who gets to use the available reasoning capacity of the network and for what purposes. Reasoning Rights refers to a tokenized entitlement to invoke AI reasoning or computation on the network. You can think of it as the right to “ask” the collective AI network to work on your problem. In practical terms, this could be implemented as a certain amount of CTZ tokens that need to be staked or spent to get your task processed. Holding tokens thus gives you access – much like owning arcade tokens gives you the right to play games, or holding cloud credits lets you run servers.
Reasoning Rights ensure that the network’s compute resources are allocated to those who value it most, as evidenced by their willingness to pay or stake tokens. This not only prevents abuse (spam or endless requests with no cost), but also creates a market for reasoning where complex or urgent tasks might command more tokens. If the system is built on a blockchain like Bittensor or Ethereum, the concept is analogous to gas fees or staking for access, but specifically framed around AI reasoning capacity. In Bittensor’s design, for example, the native token TAO serves as both a reward and as a credential for accessing the network’s AI resources (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher). Similarly, in our CTZ system, a user or agent would present tokens to “unlock” a certain amount of compute or a certain level of AI service.
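As a minimal sketch of how Reasoning Rights could gate access, the Python below escrows CTZ when a task is submitted and releases it to the compute provider only on completion. The `ReasoningRights` class, its method names, and the refund-on-failure rule are hypothetical; an actual implementation would live in a smart contract or protocol node rather than a Python dictionary.

```python
# Sketch of Reasoning Rights as token-gated access to compute. Hypothetical names;
# a production version would be an on-chain contract, not an in-memory ledger.

class ReasoningRights:
    def __init__(self):
        self.balances = {}   # address -> CTZ balance
        self.escrow = {}     # task_id -> (requester address, staked CTZ)

    def deposit(self, address, amount):
        self.balances[address] = self.balances.get(address, 0.0) + amount

    def submit_task(self, task_id, address, required_ctz):
        """Stake CTZ to claim reasoning capacity; rejected if underfunded."""
        if self.balances.get(address, 0.0) < required_ctz:
            raise ValueError("insufficient CTZ to acquire reasoning rights")
        self.balances[address] -= required_ctz
        self.escrow[task_id] = (address, required_ctz)

    def settle(self, task_id, provider, success=True):
        """Release escrowed CTZ to the provider on success, refund the requester otherwise."""
        requester, staked = self.escrow.pop(task_id)
        target = provider if success else requester
        self.balances[target] = self.balances.get(target, 0.0) + staked

rights = ReasoningRights()
rights.deposit("alice", 10.0)
rights.submit_task("task-1", "alice", required_ctz=4.0)
rights.settle("task-1", provider="gpu-node-7", success=True)
print(rights.balances)   # alice keeps 6.0 CTZ; gpu-node-7 earns the 4.0 CTZ it worked for
```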
Additionally, Reasoning Rights could be programmable and tradable. For instance, a seed creator could bundle a certain amount of reasoning execution into their product for promotion (like free credits), or users could trade their priority access if they don’t need it (imagine a marketplace for AI compute slots). In decentralized networks, we might even see futures or options on reasoning (buying rights to X amount of compute next week, anticipating demand spikes). The overarching purpose, however, is to make access to intelligence a governed resource. It creates a clear economic signal: if a task is worth doing, it’s worth paying tokens for. And conversely, if you hold tokens, you have a claim on the network’s thinking power.
Over time, one could envision Reasoning Rights NFTs – perhaps non-fungible tokens that grant specific privileges, like the right to a certain model’s output stream or a time-window of exclusive reasoning on a subset of the network. This could tie into governance as well: holders might influence what kinds of tasks the network prioritizes. But at its simplest, Reasoning Rights convert raw compute into a quantified, token-managed asset.
Markets tend toward efficiency via arbitrage, and a computation-focused economy is no different. Computational Arbitrage refers to the practice of finding differences in the “cost vs. value” of compute across the system and exploiting them for profit or efficiency. In essence, if there are multiple ways or places to perform a computation, arbitrageurs will seek the most cost-effective method that still earns the token reward. This is crucial for the network’s health: it drives down costs, balances load, and signals where resources should flow.
For example, suppose two seeds offer similar functionality (say two different algorithms competing to solve the same problem). If one seed currently yields higher token rewards per unit of compute than the other (perhaps because it’s undervalued or less saturated), miners or compute providers will flock to the higher-paying one. This will increase competition there, possibly driving down its rewards over time, while the neglected one might increase rewards to attract help. The system equilibrates such that tasks equalize in reward-to-cost ratio, assuming free movement of compute resources. This is analogous to miners in blockchain choosing to mine whichever coin is most profitable relative to its difficulty and price – switching whenever one offers a better return on their hash power.
We already see hints of this in Bittensor’s network, where “miners mainly join the network when they find computational arbitrage opportunities, especially when mining rewards exceed computational costs.” (Bittensor has several serious flaws. Is it doomed to fail? | 金色财经 on Binance Square). In other words, participants continuously compare the token rewards vs. their electricity/hardware costs, and shift their resources accordingly. Our framework formalizes this behavior: Computational Arbitrageurs are first-class actors. They could be independent nodes or services that monitor various seeds and tasks, and allocate GPU power to wherever the token yield per FLOP (floating-point operation) is highest. In doing so, they maximize their profit and simultaneously ensure the network’s overall efficiency (no part of the network remains overpaid or underutilized for long).
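A sketch of that behavior in Python: the arbitrageur ranks open tasks by CTZ reward per unit of estimated compute cost and only takes work above a margin floor. The task list, the per-GPU-hour cost, and the margin threshold are all illustrative assumptions.

```python
# Sketch of a computational arbitrageur: point capacity at the task whose
# CTZ reward per unit of compute cost is highest, subject to a margin floor.

tasks = [
    {"id": "finetune-translator",  "reward_ctz": 120.0, "est_gpu_hours": 10.0},
    {"id": "optimize-sort-kernel", "reward_ctz": 40.0,  "est_gpu_hours": 2.0},
    {"id": "label-verification",   "reward_ctz": 5.0,   "est_gpu_hours": 1.5},
]

GPU_HOUR_COST_CTZ = 3.0   # what an hour of compute costs this operator, in CTZ terms
MIN_MARGIN = 1.2          # only take work that pays at least 20% over cost

def best_task(open_tasks):
    profitable = []
    for t in open_tasks:
        cost = t["est_gpu_hours"] * GPU_HOUR_COST_CTZ
        margin = t["reward_ctz"] / cost if cost else float("inf")
        if margin >= MIN_MARGIN:
            profitable.append((margin, t["id"]))
    return max(profitable) if profitable else None

print(best_task(tasks))   # -> the task with the best reward-to-cost ratio
```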
Computational arbitrage also encourages a form of competitive optimization. If there are two methods to achieve the same result, perhaps one is more compute-heavy but straightforward, another is clever and efficient. An arbitrageur might prefer the efficient method if the token reward is the same, because it costs them less in actual computation – effectively earning a higher margin. Over time, the system will favor techniques (and seeds) that deliver the most “bang for the buck” in terms of token reward vs. compute spent. This dynamic incentivizes innovation: if you can devise a way to accomplish a task with half the compute, you can arbitrage the difference until rewards adjust.
We can expect specialized roles to emerge: some actors will specialize in training models cheaply (perhaps in regions with lower electricity or on idle hardware) to supply the network with compute at lower cost, while others might specialize in solving niche tasks that have higher payouts. Middleware (discussed next) can also engage in arbitrage by routing tasks to the cheapest competent provider. All these actions are guided by token signals – CTZ becomes the universal accounting unit for compute value, allowing direct comparison of very different tasks on a single scale (token per operation or per result).
One thing to guard against is perverse incentives or exploitation. Pure arbitrage without guardrails could lead to participants gaming the metrics (producing superficially acceptable results with minimal compute to farm tokens, for instance). This is why verification mechanisms and quality assessments are needed (similar to how Bittensor uses validators to score the quality of model outputs (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher)). Arbitrageurs must be rewarded only when the results meet the expected quality or accuracy. This might involve Proof-of-Compute or Proof-of-Result schemes – cryptographic proofs or statistical checks that the claimed computation was actually done correctly. By coupling those checks with rewards, we ensure arbitrage drives useful work, not cheat strategies.
In summary, Computational Arbitrage acts as the market mechanism balancing the compute economy. It treats compute as a commodity that can flow to its best use. This ensures no single seed or task gets irrationally overpriced relative to others, and it pushes everyone toward more efficient algorithms. In economic terms, it’s akin to arbitrage in financial markets equalizing prices of identical goods. Here the “goods” are computational tasks, and their prices are denominated in CTZ tokens.
As tasks become complex and involve multiple AI systems or services, there arises a need for an orchestration layer – an intermediary that can break down tasks, route subtasks to the right modules, and integrate the results. We refer to these as Intelligence Middleware Layers: they are the glue and conductor that coordinate various AI components to fulfill a higher-level goal. In our token economy, these middleware actors are crucial and are given incentives (via tokens) to optimize and streamline computation, acting sort of like miners for meta-tasks.
Consider a scenario where an end-user’s request (or a seed’s requirement) is not a single computation but a workflow – e.g., “monitor my website and automatically improve any slow code each day.” This might involve multiple steps: analyzing performance logs, identifying code bottlenecks, rewriting code, testing it, deploying it. Different AI services or models might handle each step. An Intelligence Middleware service could take on this request, decompose it, and figure out how to fulfill it using the available seeds and compute providers. It might call a log-parsing AI, then a code generation AI, then a verification AI, etc. In doing so, it incurs costs (it has to spend tokens to use those services), but it also can charge the original requester (or get rewarded by the protocol for completed objectives). Essentially, the middleware is an intelligent agent or pipeline manager that adds value by connecting simpler components into a complex solution.
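A rough sketch of such an orchestrator follows, with made-up service names and fees: it decomposes the request into steps, pays each hypothetical sub-service in CTZ, and quotes the requester its total cost plus a margin.

```python
# Sketch of an intelligence-middleware pipeline. Service names and fees are
# invented for illustration; each step would really be a token-paid call to
# another seed or compute provider.

STEPS = [
    ("parse-performance-logs", 2.0),   # (hypothetical service, CTZ fee per call)
    ("locate-bottlenecks",     3.5),
    ("rewrite-slow-code",      8.0),
    ("run-regression-tests",   4.0),
]

def run_pipeline(payload, margin=0.10):
    total_cost = 0.0
    for service, fee_ctz in STEPS:
        payload = f"{service}({payload})"   # stand-in for the sub-service's output
        total_cost += fee_ctz
    quote_to_requester = total_cost * (1 + margin)
    return payload, total_cost, quote_to_requester

result, cost, quote = run_pipeline("site-telemetry-2025-03")
print(result)
print(f"middleware paid {cost} CTZ to sub-services, quotes {quote:.2f} CTZ to the user")
```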
The need for such orchestration is already being discussed in the context of AI agents. As one observer put it, “AI agents won’t be the ones coordinating how they interact with one another... We’ve seen agents spiraling into a rabbit hole... The Conductor (or AI Orchestrator) is needed to direct this symphony of agents.” (Thinking of AI Agents? You Need A Conductor! | HackerNoon). This conductor is exactly what our Intelligence Middleware does. It ensures that autonomous AI agents or modules stay on track and don’t waste resources on tangents or redundant efforts. In a token context, the middleware has a strong incentive to be efficient: any unnecessary calls to other services cost tokens, which cuts into its profit. Therefore, it will use strategies like caching results, reusing computations, or choosing the most cost-effective service (tying back to computational arbitrage) to minimize expense while meeting the task requirements.
We can envision multiple layers of middleware, hence the plural “Layers”. There might be low-level middleware focusing on optimizing single API calls (like a smart load balancer that picks the fastest/cheapest node for a given model inference). Then higher-level middleware might chain entire workflows or provide domain-specific orchestration (e.g., an “Intelligence Middleware” specialized in financial analysis tasks versus one for UI/UX adaptation tasks). Each layer could take a small cut of tokens for the value it adds. For instance, a middleware that figures out how to answer a complex query by splitting it into simpler queries (and thereby uses less total compute) could earn an optimization reward – essentially a share of the tokens saved. The token design can include explicit middleware rewards to encourage such behavior: e.g., if a middleware finds a way to complete a task using 100 CTZ worth of compute when naive methods would have taken 150 CTZ, the protocol could award part of the 50 CTZ saved to the middleware as a bonus.
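The optimization bounty just described can be expressed in a few lines; the 50% saver share is an assumption used only to mirror the 150-vs-100 CTZ example above.

```python
# Sketch of a middleware optimization bounty: if an orchestrator completes a job
# below the protocol's naive cost estimate, it keeps a share of the CTZ saved.
# The 50% share is an illustrative assumption, not a proposed protocol constant.

def middleware_bonus(naive_cost_ctz, actual_cost_ctz, saver_share=0.5):
    saved = max(0.0, naive_cost_ctz - actual_cost_ctz)
    return saver_share * saved

# Example from the text: a 150 CTZ job done for 100 CTZ saves 50 CTZ;
# with a 50% share the middleware earns 25 CTZ on top of its base fee.
print(middleware_bonus(150, 100))   # 25.0
```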
Another role of middleware is maintaining state and memory across transactions. Individual compute providers might be stateless (they just process input to output). But a middleware can keep track of longer-term goals or conversational context, acting as the “memory” of an AI product. This is very important for iterative improvement seeds: the middleware can remember past computation results and feed them into future ones, effectively implementing the feedback loop needed for computational interest to accumulate. It can also manage checkpoints (snapshots of a seed’s state) and decide when to branch a seed (creating a new variant) if multiple approaches are worth exploring. In doing so, middleware can become an “Intelligence broker” – connecting those who need reasoning done with those who can do it, and ensuring quality control.
From a technical perspective, Intelligence Middleware could be implemented as a network of bots or smart contracts. For instance, a smart contract might hold logic to automatically route a request to a list of services and aggregate the result, releasing payment conditioned on a satisfactory outcome. Alternatively, a decentralized network of human or AI curators could serve as the middleware layer (somewhat like how validators work in Bittensor, judging the quality of results and forming consensus (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher)). The design may vary, but the consistent idea is that middleware adds meta-intelligence – it’s intelligence about using intelligence. It doesn’t directly solve the end problem but knows how to utilize those who can.
In our token economy, we explicitly reward middleware via CTZ tokens for their contributions. This could be through fees (a middleware charges a fee to the user for handling the job) or through protocol-level rewards for efficiency or correctness. This ensures that the often invisible work of orchestration is financially incentivized. Just as internet infrastructure like routers and servers underpin online services (and are paid via hosting fees, bandwidth charges, etc.), intelligence middleware will underpin the AI token economy and must have a business model. By allocating tokens to this layer, we encourage the emergence of robust “middleware companies” or autonomous agents that specialize in coordinating AI – effectively becoming the AI DevOps of this ecosystem.
The final primitive is more abstract but vital: Computational Catalysts. These refer to elements in the system that trigger disproportionate amounts of computation from a small input – similar to how a tiny chemical catalyst can spark a large reaction. In our context, a computational catalyst could be a particularly well-designed seed or incentive structure that unleashes a cascade of productive compute activity. Another way to think of it: catalysts are the levers or multipliers that accelerate the growth of computational value.
One straightforward example of a catalyst is a challenge or competition injected into the network. Imagine the creators of the system post a standing bounty (with tokens) for achieving a certain goal, like “improve this AI model’s accuracy on dataset X by 5%” or “find an algorithm that sorts 10% faster than the current best” (much like DeepMind’s AlphaDev discovered a 70% faster sorting algorithm (AlphaDev discovers faster sorting algorithms - Google DeepMind)). This bounty acts as a catalyst: it might inspire multiple teams or AI seeds to try various approaches, thereby generating a lot of computation (people training models, running experiments) in pursuit of the prize. The presence of the reward catalyzes activity that wouldn’t have happened otherwise. In doing so, it potentially leads to a valuable result (the improved model or algorithm) that benefits the whole network thereafter.
Another type of catalyst is a protocol update or new primitive that suddenly opens a rich vein of opportunity. For example, if a new type of seed is introduced (say a generative art seed that can create NFT graphics via compute), it could catalyze a flurry of compute investment by users trying to generate novel art and sell it, thereby using a lot of GPU time and tokens. Or consider a scenario where the network provides a small initial compute subsidy to any new seed – this could catalyze a burst of seed creations and initial runs, bootstrapping the system’s growth (a form of “growth hack” for the network).
In essence, computational catalysts in our framework are akin to high-yield opportunities or triggers that encourage participants to pour in tokens and compute. They are important to discuss because they ensure the system isn’t purely reactive or linear. Catalysts can create phase changes – sudden growth spurts or shifts in network behavior that move the ecosystem to a higher level of capability or activity.
One can also interpret the seeds themselves as catalysts – hence the name Catalyze (CTZ) for our token. Each seed is a catalyst for computation: a small piece of code or model that, once released, might incite thousands of times its own complexity in further processing. For instance, a 100 KB piece of code might trigger teraflops of computation in optimization runs. The CTZ token’s role is to “catalyze” value creation by unlocking reasoning and rewarding results. Holders of CTZ essentially fuel these reactions. The more CTZ spent on a seed, the more that seed catalyzes compute, and (if designed well) the more value emerges.
It’s worth noting that catalysts can be positive-sum or negative-sum. A positive catalyst leads to net gain (the compute invested yields an outcome more valuable than the cost). A negative catalyst could be a trap (lots of compute spent for little gain, perhaps due to hype or misaligned incentives). The governance of the ecosystem and reputation systems can help identify which catalysts are worthwhile. Ideally, over time the community learns what kinds of seeds or challenges produce fruitful computation (like scientifically or economically useful results) and which do not, allocating tokens accordingly.
In summary, Computational Catalysts are the sparks that ignite computational processes in the network. They ensure the system remains dynamic and can accelerate rapidly when needed. By recognizing and naming this primitive, we remind system designers to include mechanisms that spur activity – whether via rewards, novel features, or strategic subsidies. In chemical terms, we don’t want the reaction (value creation) to proceed slowly; we want to lower the activation energy so that many reactions happen and the ecosystem thrives. CTZ tokens, seeds, and bounties all serve as catalysts that reduce friction and invite more participants to contribute their compute and ideas.
With these four primitives – Reasoning Rights, Computational Arbitrage, Intelligence Middleware, and Computational Catalysts – we have the conceptual toolkit to design the token economy. Next, we will see how these ideas coalesce into the Catalyze (CTZ) token design and how CTZ coordinates the various actors and processes we’ve described.
At the heart of this system lies the Catalyze (CTZ) token – the fuel and currency of the computational economy. CTZ is more than just a medium of exchange; its design encodes the rules and incentives that make the whole framework function. In this section, we break down the CTZ token’s roles and mechanics in detail:
- Allocating Reasoning Capacity (CTZ as “gas” for AI compute)
- Middleware Rewards for Optimization (incentivizing the intelligence layer)
- Parametric Product Interest Models (how seeds accrue value via CTZ)
- Computation Yield and Yield Farming (earning “interest” on compute investments)
- Hybrid Attention–Market Integration (interfaces with human attention markets, where needed)
Each of these facets addresses a specific aspect of the value flow and control in the system. We describe each in turn below.
The most direct function of CTZ is to serve as the metering unit for computation. Much like how Ethereum’s gas limits and gas prices govern how much computing a smart contract can use, CTZ governs how much “reasoning” or AI processing one can invoke. To execute a task in the network, a user or seed must pay a certain amount of CTZ, proportional to the computational resources required (CPU/GPU time, memory, energy) and possibly the complexity or priority of the task. These spent tokens are then distributed to those providing the compute power and other services (miners, model hosts, etc.), minus any protocol fees.
This mechanism turns CTZ into a consumption token: its value is intrinsically linked to demand for AI compute. If a certain AI service becomes popular or a certain seed requires massive processing, more CTZ will be spent, effectively “burning” tokens in exchange for work done (or transferring them to compute providers, who may later sell or reuse them). This is akin to a utility token model, but in a more tangible way: CTZ effectively measures how much AI “brainpower” you can buy. In economic terms, CTZ represents the opportunity cost of using the network’s finite compute – spending CTZ on one task means that compute can’t be used elsewhere, enforcing a trade-off.
A robust design might include an oracle or benchmark for how much computation a single CTZ can buy (e.g., 1 CTZ = X operations on a standard model). However, since different tasks have different values, a free market approach is better: tasks will advertise bounties or required fees in CTZ and providers will choose tasks accordingly. If a task is underpriced in CTZ for its difficulty, no one will pick it up; if overpriced, many will compete to do it, possibly lowering the effective cost via arbitrage.
Importantly, CTZ could be designed with a burn mechanism to create a deflationary pressure as usage increases. For example, a percentage of CTZ spent on each task could be burned (permanently removed from supply), while the rest goes to providers. This mimics how some blockchain systems burn gas fees to reward all token holders indirectly (as in Ethereum’s EIP-1559 fee burn). The effect is that heavy use of the network (lots of computation) makes remaining CTZ more scarce, potentially increasing its value. This aligns the incentives of token holders with network usage: the more the platform is used for AI reasoning, the more valuable CTZ becomes.
Beyond just metering, CTZ staking might be required for certain persistent allocations of compute. For instance, if someone wants to reserve a high-memory model or keep a large model loaded in VRAM for quick usage, they might stake CTZ to that provider as a retainer. This is analogous to cloud computing reservations (like reserving an AWS instance) but done via token escrow. The staked CTZ ensures the provider is compensated for keeping resources ready, and if the user fails to utilize it, perhaps some portion is still given to the provider for their opportunity cost.
By using CTZ to allocate reasoning capacity, we also create a robust pricing signal for AI capabilities. Over time, one could see a “cost curve” in CTZ for different levels of model intelligence or different services. For example, a highly advanced reasoning task (a GPT-5-level, multi-step response) might cost 0.1 CTZ per query, whereas a simpler classification might cost 0.001 CTZ. This helps both users and providers decide where to spend their tokens or effort. If one CTZ can be used to get 10 answers from a basic model or 1 answer from a super-intelligent model, that ratio informs the user’s choice and the market’s direction (perhaps training better basic models if the super model is too expensive, etc.).
To summarize, CTZ serves as the “gas and oil” of the ecosystem – it lubricates and limits the use of compute. It ensures finite AI resources are allocated to those who pay, it compensates those who provide the resources, and it creates a feedback loop where rising demand translates to rising token value (benefiting participants). By tying CTZ issuance or burning to actual computational work, we achieve a kind of proof-of-work 2.0: instead of hashing meaningless numbers, work in this network is meaningful AI computation, and CTZ is both the reward and the payment for it.
As discussed earlier, Intelligence Middleware layers play a crucial role in orchestrating tasks and improving efficiency. The CTZ token design explicitly rewards these contributions by allocating a portion of token flow to middleware functions. The goal is to incentivize optimizations that reduce overall compute costs, improve success rates, and coordinate complex workflows.
One way to implement this is through a fee split or rebate mechanism. For any given task, we can imagine the total CTZ spent is divided among: (a) the compute providers who did the heavy lifting, (b) the middleware/orchestrator that coordinated the task, (c) potentially the seed owner (royalty, which we cover later), and (d) optionally a burn or treasury tax. For example, out of 100 CTZ spent on a complex task, maybe 85 CTZ go to various low-level compute executors (GPUs running models), 5 CTZ go to the middleware agent that structured the solution, 5 CTZ go back to the seed creator as a reward for providing the base algorithm, and 5 CTZ are burned or sent to a community pool. The exact numbers are tunable, but the key is the middleware gets a cut.
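A sketch of that split in Python, using the illustrative 85/5/5/5 numbers from above; the compute share is further divided among providers in proportion to the work each reported.

```python
# Sketch of the per-task fee split described above (the percentages are tunable
# examples): compute providers, middleware, seed creator, and a burn/treasury share.

SPLIT = {"compute": 0.85, "middleware": 0.05, "creator": 0.05, "burn": 0.05}

def split_fee(total_ctz, provider_work):
    """provider_work: dict of provider address -> share of the compute performed."""
    assert abs(sum(SPLIT.values()) - 1.0) < 1e-6
    payouts = {"middleware": total_ctz * SPLIT["middleware"],
               "creator": total_ctz * SPLIT["creator"],
               "burn": total_ctz * SPLIT["burn"]}
    compute_pool = total_ctz * SPLIT["compute"]
    total_work = sum(provider_work.values())
    for provider, work in provider_work.items():
        payouts[provider] = compute_pool * work / total_work
    return payouts

print(split_fee(100.0, {"gpu-node-a": 3.0, "gpu-node-b": 1.0}))
# -> roughly 63.75 CTZ to gpu-node-a, 21.25 to gpu-node-b, plus 5/5/5 to the other roles
```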
Why give tokens to middleware? Because without coordination, many tasks would either fail or use far more compute than necessary. A smart middleware can reduce redundancy (ensuring the same sub-task isn’t computed twice) and choose near-optimal approaches. In essence, it saves tokens for the user. It’s fair that it earns some of those saved tokens as a reward. This can be structured as a performance bounty: the protocol could estimate a default cost for a job (say by a naive approach) and if a middleware completes it for less, it earns a fraction of the difference. This creates a direct financial incentive to innovate in orchestration strategies. In DeFi analogies, this is similar to how arbitrage bots or liquidators earn profits by making markets more efficient – here the middleware earns by making computation more efficient.
We also need middleware to handle verification and quality control, which is critical in a trustless environment. Validators in Bittensor, for example, evaluate the quality of model outputs and thereby influence reward distribution (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher). In our design, middleware or separate validator nodes could verify results of tasks (did the algorithm find a correct solution? does the improved model actually perform better?). If they identify poor results or cheating, they can penalize or withhold tokens from the offending compute nodes. For doing this policing work, validators/middleware should earn CTZ. This might come from a small tax on each task that funds the validation process. Alternatively, validators could stake CTZ and earn inflation rewards for maintaining network integrity (like proof-of-stake blockchains do). Either way, CTZ must fund the governance and reliability layer – ensuring the outputs have value and that bad actors can’t easily game the system.
Another nuance is middleware competition. There could be multiple middleware services vying to handle a given user request, each with their own strategy. Perhaps they even “bid” by saying how much CTZ they need to complete it. The user (or an automated broker) might pick the one that offers the best trade-off of cost vs. reputation (success probability). In such a case, market forces again decide the middleware’s reward – they effectively quote their fee. We anticipate middleware offerings becoming a vibrant part of the ecosystem, with some advertising specialized expertise (e.g., “We handle code optimization tasks with 20% fewer tokens than others, proven by track record!”). CTZ facilitates this by being the common unit in which all such offers are made and settled.
The CTZ design could also reserve some tokens for continuous R&D of middleware algorithms. For example, a portion of block rewards (if CTZ has inflation) might go into a fund that middleware devs can draw from if they demonstrate improvements to the network’s overall efficiency. This would be akin to a grant or retroactive public goods funding model, recognizing that better middleware benefits everyone (lower costs, faster turnaround).
In conclusion, the CTZ tokenomics explicitly bakes in rewards for the “brain between brains” – the intelligence middleware. By allocating fees or bounties to orchestrators and verifiers, the system ensures there’s always an incentive for skilled coordination. This in turn keeps the compute economy optimized and trustworthy. The end result should be that any given problem is solved with as little waste and as much reliability as possible, thanks to those middleware actors who are properly compensated in CTZ.
One of the most innovative aspects of this framework is how products (seeds) themselves can be modeled in token-economic terms. We introduce the idea of Parametric Product Models that accrue “interest” from compute investment. Essentially, each seed can be thought of as having its own micro-economy or even a tokenized representation that tracks the value it accumulates over time.
Imagine each seed is associated with a sort of score or token balance internally, which increments as the seed gets more compute or improvements – this is its computational interest account. When someone runs computation to improve that seed, a portion of CTZ (as set by the token design) is credited to the seed’s “account” or to its creator. This is analogous to a bank account earning interest: the more and longer you invest in it, the more it pays out. In practice, this could be implemented via smart contracts that automatically distribute a tiny fraction of the tokens spent on a seed’s tasks to the seed’s creator or to a contract representing the seed.
For example, suppose you create a seed (maybe an AI model that translates English to French). You deploy it and specify (or the protocol enforces) a royalty rate of say 2%. Now, whenever someone uses CTZ to run or fine-tune your translation model – effectively investing compute to generate translations or improve accuracy – 2% of those CTZ are sent to your address as the seed creator. Over time, as many people use and improve this model, you accumulate CTZ. This is the interest on the seed you planted. The more the seed is used (i.e., the more interest it generates), likely the better it becomes (because usage presumably involves fine-tuning or augmenting data, etc.), which in turn attracts more usage. So your “interest payments” grow – a compounding effect.
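A minimal sketch of that royalty stream, assuming the 2% rate from the example and a hypothetical usage curve that grows as the seed improves:

```python
# Sketch of seed royalty accrual: every time CTZ is spent to run or fine-tune a
# seed, a fixed royalty fraction is credited to the seed creator's account.
# The 2% rate mirrors the example above; the usage figures are made up.

ROYALTY_RATE = 0.02

creator_balance = 0.0
monthly_usage_ctz = [500.0, 800.0, 1300.0, 2100.0]   # usage growing as the seed improves

for month, spent in enumerate(monthly_usage_ctz, start=1):
    royalty = spent * ROYALTY_RATE
    creator_balance += royalty
    print(f"month {month}: {spent:.0f} CTZ spent on the seed, "
          f"{royalty:.1f} CTZ to creator, cumulative {creator_balance:.1f} CTZ")
```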
From the user’s perspective, this 2% might just be seen as part of the service fee. They pay for usage and know a small cut goes to the original developer, which is fair (much like artists get royalties). Interestingly, this aligns incentives towards quality seeds – a developer who releases a highly useful seed can earn ongoing income, which motivates creators to contribute their best ideas to the network (rather than hoard them or sell them one-off). It’s a sustainable model for open innovation, similar to how NFT royalties aimed to reward creators on each resale (How NFT royalties work: Designs, challenges, and new ideas - a16z crypto). Here it’s not resale, but reuse.
These parametric product tokens could even be fractionalized or sold. As a seed creator, I might tokenize ownership of my seed’s future interest (like issuing shares). Investors could buy a stake, giving me upfront capital, and then they receive a portion of the future CTZ interest that seed generates. This is analogous to financing a project by selling equity or a revenue share. It could accelerate development: if I have a promising AI model idea, I could raise CTZ by selling half the “interest rights” to fund training the initial version. Once deployed, those investors will get half of the subsequent interest yield. All of this can be managed by smart contracts on an EVM-compatible chain, making it transparent and trustless.
Now, one must consider: do all seeds get interest from compute, or only if they explicitly set it? Perhaps the protocol should enforce a default creator royalty on all seeds (to guarantee creator incentive). However, this could be adjustable or waivable. A creator might choose to waive their interest (setting a 0% royalty), perhaps to attract more usage via lower cost – effectively a competitive move. Alternatively, they might set a higher royalty if they believe their product is unique enough that users will pay a premium. This starts to look like setting a “tax rate” on your digital product’s usage. The market will judge the right level; too high and someone might fork your seed or make a competing one with a lower rate.
Parametric product models with accruing interest also hint at algorithmic governance of products. If a seed is accruing a lot of value, perhaps that indicates it should be further supported or replicated. If it’s not accruing much, it might be deprecated. The protocol could automatically highlight “top-earning seeds” as success stories to learn from. It’s a bit like how app stores show top grossing apps. Here we’d have top CTZ-accruing seeds, which likely correlate with those delivering the most computational value to users.
There’s also an analogy to yield-bearing assets in DeFi. In DeFi, you can deposit tokens into a yield farm and you get more tokens over time. Here, if you “hold” a seed (as its creator or investor), you get yield in CTZ as it’s used. If someone else wants that position, perhaps you could sell the seed’s NFT (which entitles the holder to future interest). This creates a dynamic market for ownership of productive algorithms, without necessarily owning the algorithm’s IP (since it might be open-source). It’s more owning the right to earn from its usage.
We should note that interest accrual to creators must be balanced so as not to overly burden usage with fees. Likely these rates are single-digit percentages. Also, the protocol might reduce them over time – for example, after a seed has been out for a year, maybe the creator royalty automatically tapers off, encouraging them to publish updates or new seeds (to keep innovating). Or perhaps not – that’s a design choice.
In summary, parametric product models that accrue interest turn every deployed algorithm or AI model into a sort of micro-investment opportunity. Creators effectively “invest” by creating the seed, and then earn a passive return in CTZ as others contribute compute to it. This aligns well with our notion of computational interest and ensures that those who spark value (the creators) are not left out of the value flow. The CTZ token contract can facilitate this by automatically splitting fees and maintaining records of seed ownership and royalty rates.
If “computational interest” is the concept, computation yield mechanics are the concrete implementation of how returns on compute are calculated and distributed. We draw inspiration from yield farming and proof-of-stake rewards in the crypto world to design how participants who “stake” compute or tokens into the system get rewarded.
One basic mechanic could be analogous to liquidity mining in DeFi: those who contribute scarce resources (here, computational power or optimized algorithms) earn newly minted CTZ tokens in proportion to their contribution. For instance, the network could have an inflation schedule where X CTZ per day are minted to reward active participants. These could be split among active compute providers and perhaps seed creators proportionally to their work. This is similar to how Bittensor continuously issues TAO tokens (one token every 12 seconds, halving over years) to miners and validators who contribute machine intelligence (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher). Each block in Bittensor shares a reward between those roles (TAO Token Economy Explained. In January 2021, the first Bittensor… | by Opentensor Foundation | Medium). We can adopt a similar approach: a continuous “drip” of CTZ goes to those currently doing valuable compute (like training models, answering queries) and those verifying results. This provides a baseline yield for being an active node in the network.
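As a sketch (not Bittensor’s actual emission code), the snippet below mints a fixed per-block CTZ reward and splits it between compute providers and validators in proportion to scored contribution; the emission amount and the 50/50 split are assumptions for illustration.

```python
# Sketch of a continuous emission "drip": each block mints a fixed CTZ reward and
# splits it between active compute providers and validators by scored contribution.

BLOCK_REWARD_CTZ = 1.0
PROVIDER_SHARE, VALIDATOR_SHARE = 0.5, 0.5

def distribute_block(provider_scores, validator_scores):
    """Scores stand in for the protocol's measure of useful work / correct validation."""
    payouts = {}
    for pool, share in ((provider_scores, PROVIDER_SHARE),
                        (validator_scores, VALIDATOR_SHARE)):
        total = sum(pool.values()) or 1.0
        for node, score in pool.items():
            payouts[node] = payouts.get(node, 0.0) + BLOCK_REWARD_CTZ * share * score / total
    return payouts

print(distribute_block({"miner-1": 0.7, "miner-2": 0.3}, {"validator-1": 1.0}))
```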
On top of that, there’s the concept of performance-based yield. If you stake CTZ on a particular seed (effectively backing that seed’s growth), you might earn a higher yield if that seed becomes popular or yields a lot of value. This would be akin to curation markets or prediction markets: you’re betting tokens on which seeds will be productive, and if correct, you earn a return. This creates a role for analysts or curators who allocate their CTZ to promising projects (seeds) and thereby fund their compute. If the project succeeds (lots of usage), the staked CTZ could earn a share of the royalty or some reward pool. It’s somewhat speculative but also accelerates the right seeds getting resources early. The protocol could, for example, match stake with grants (“if you stake 100 CTZ on this new seed, the network will also reward that seed with 100 CTZ from the treasury for compute”). Those who staked early might get a one-time issuance of new CTZ if the seed hits certain milestones (like reaching a certain accuracy or usage count).
Computation yield can also refer to the efficiency gains that come from investing compute. For instance, if a seed’s performance improves by 10% after a compute campaign of 1000 CTZ, one could say it yielded a 10% “ROI” in terms of quality. While not directly a token mechanic, the system could measure such improvements and possibly give bonus CTZ to those involved for achieving a big jump (sort of like a prize for a breakthrough). This encourages focusing compute where it has high marginal benefit, not just brute forcing where there’s little left to gain (diminishing returns).
We might incorporate a notion of compute bonds: you lock some CTZ to fund a long-running computation (like training a large model). Once the computation is successfully completed and yields a verifiable result (model reaches target metrics), you get your stake back plus some interest in CTZ. If it fails (model didn’t converge), maybe you lose some or all. This shares risk and reward of heavy compute jobs. Participants who are confident in their approach would stake their own CTZ to show they believe the yield will be achieved.
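A compute bond’s settlement logic is simple enough to sketch directly; the 10% interest and 50% slash figures are placeholders, not proposed parameters.

```python
# Sketch of a "compute bond": CTZ is locked to fund a long-running job and is
# returned with interest only if the result verifies; otherwise part is slashed.

def settle_compute_bond(stake_ctz, verified, interest_rate=0.10, slash_rate=0.5):
    if verified:
        return stake_ctz * (1 + interest_rate)   # principal back plus computational interest
    return stake_ctz * (1 - slash_rate)          # failed/unverified job forfeits part of the stake

print(settle_compute_bond(1000.0, verified=True))    # paid out with interest
print(settle_compute_bond(1000.0, verified=False))   # partially slashed
```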
To maintain system equilibrium, these yields have to be backed by either inflation or fees. We can’t create value from nothing – either new CTZ are minted (diluting supply slightly, akin to inflationary financing which is fine if it drives network growth), or the yields are paid from fees that others have paid in (which is sustainable if the network is being actively used). A combination is likely: early on, inflationary rewards (bootstrapping phase); later on, primarily fee-based yields (mature phase).
A concrete example: Suppose a particular optimization task in the network has a yield farming program – say optimizing a popular open-source model. The protocol sets aside 1000 CTZ per week for 4 weeks to distribute to anyone who contributes to that optimization (through a defined metric like test accuracy improvement or computational work done). People rush in to train that model. At the end of the period, those tokens are split among contributors in proportion to their contributions. Each effectively earns a yield on the CTZ or compute they spent – perhaps 20% over that month. After that, the program ends, but the model is now much better and may be generating usage fees that yield ongoing royalties. This is analogous to how DeFi protocols incentivize liquidity early on.
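A sketch of that program’s payout math, with made-up contributors: the weekly pool is split pro-rata by measured improvement, and each participant’s effective yield is their reward relative to what they spent on compute.

```python
# Sketch of a time-boxed optimization program: a fixed weekly CTZ pool is split
# pro-rata by measured contribution. All figures are illustrative.

WEEKLY_POOL_CTZ = 1000.0

def weekly_yields(contributions):
    """contributions: address -> (share of measured improvement, CTZ spent on compute)."""
    total_share = sum(share for share, _ in contributions.values())
    results = {}
    for who, (share, spent) in contributions.items():
        reward = WEEKLY_POOL_CTZ * share / total_share
        results[who] = {"reward_ctz": round(reward, 1),
                        "yield_on_spend": round(reward / spent, 2)}
    return results

print(weekly_yields({"team-a": (0.6, 2500.0), "team-b": (0.4, 1200.0)}))
# -> team-a earns 600 CTZ (~24% of its spend), team-b 400 CTZ (~33%)
```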
In short, computation yield mechanics ensure that participants see a clear return for providing what the network needs (be it compute power, improvements, or capital stake). It takes the abstract idea of “computational interest” and turns it into quantifiable APY-like metrics and token flows. Over time, the expectation of yield should drive more rational capital allocation: tokens and compute will flow to where the yield is highest until it balances out (thanks to computational arbitrage). Ideally, the highest yields will be on the most impactful projects, indicating the network’s values are aligned with its incentives.
While the vision is largely about AI and compute, we cannot completely ignore the human element – after all, humans set goals and benefit from the outcomes. A hybrid attention-market integration means the system selectively interfaces with traditional attention economy markets when beneficial, creating a bridge between human-driven value and compute-driven value.
One example of this hybrid model could be AI services that have human-facing outputs. Suppose one of the application domains is adaptive media content creation. The seed might generate personalized news articles for users. Ultimately, a human reads that article (attention). We could integrate a mechanism where if human users spend time or give high ratings to the AI-generated content, that feedback loops into the token economy: perhaps the content seed gets a bonus or the system charges an advertiser for that attention and converts it to CTZ to reinvest in improving the content generator. In other words, when human attention is captured as a side-effect of these computational products, we should not waste that economic value – we can channel it back as tokens. This could be done via microtransactions or by platform models. For instance, an AI-driven game that adapts itself (seed that uses compute to generate levels) might still sell in-game items or subscriptions to players (human market), but a portion of that revenue is used to buy CTZ and feed it into the game’s AI to further improve it (a closed loop of human->token->compute).
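As a toy illustration of that human->token->compute loop, the following sketch converts a slice of human-market revenue into CTZ and credits it to a seed’s compute budget. The reinvestment share, CTZ price, and budget ledger are assumptions for the example, not a specified mechanism.

```python
def reinvest_attention_revenue(revenue_usd: float, reinvest_share: float,
                               ctz_price_usd: float, compute_budgets: dict,
                               seed_id: str) -> float:
    """Convert a slice of human-market revenue (subscriptions, ads, item sales)
    into CTZ and credit it to the seed's compute budget."""
    ctz_bought = (revenue_usd * reinvest_share) / ctz_price_usd
    compute_budgets[seed_id] = compute_budgets.get(seed_id, 0.0) + ctz_bought
    return ctz_bought

budgets = {}
# A game earned $5,000 this week; 10% is swapped for CTZ at $10 and fed back to its AI.
print(reinvest_attention_revenue(5000, 0.10, 10.0, budgets, "adaptive-game"))  # 50.0
```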
Another integration point is using attention economy platforms as data or evaluation sources. Human attention can be used to guide what computations are valuable. For example, if we see a particular AI service’s outputs trending on social media, that’s a signal to allocate more compute there. The network could have oracles that pull in metrics from Web2 platforms (trending topics, popular app usage stats) and use them to adjust CTZ reward allocations. Essentially, it’s letting the attention economy tell us where AI might usefully focus compute. This hybrid approach prevents the token-compute economy from being too insular and solving problems no one cares about. It injects a bit of human preference into the loop in a quantitative way.
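One way such oracle signals could be folded in, sketched below, is to blend protocol-set reward weights with normalized attention metrics, with a cap (influence) on how far human signals can move the allocation. The domains, weights, signal sources, and blending rule here are illustrative assumptions, not a fixed design.

```python
def adjust_reward_weights(base_weights: dict, attention_signals: dict,
                          influence: float = 0.25) -> dict:
    """Blend protocol-set reward weights with normalized attention-economy signals
    (e.g. trending-topic or app-usage metrics pulled in by oracles).
    `influence` caps how much human attention can steer compute allocation."""
    total_signal = sum(attention_signals.values()) or 1.0
    blended = {
        domain: (1 - influence) * w
                + influence * attention_signals.get(domain, 0.0) / total_signal
        for domain, w in base_weights.items()
    }
    norm = sum(blended.values())
    return {d: v / norm for d, v in blended.items()}

weights = adjust_reward_weights(
    base_weights={"code_opt": 0.5, "trading": 0.3, "adaptive_ui": 0.2},
    attention_signals={"adaptive_ui": 900, "trading": 100},  # adaptive UI is trending
)
print(weights)  # adaptive_ui's share rises; the others shrink proportionally
```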
In terms of token mechanics, CTZ might be listed and traded on broader crypto markets, meaning its price could be influenced by both speculation (like any token) and by the health of the underlying compute economy. If some application domain crosses over into mainstream success, demand for CTZ could spike. Conversely, interest from the crypto community (perhaps seeing CTZ as “the next big AI token”) could bring more capital into the system that funds more compute. We should design for this by ensuring transparency and perhaps some governance, so human investors can steer the system through CTZ ownership if they see fit. CTZ might have governance votes (common in many tokens) for decisions like adjusting reward rates or funding certain seeds from a community pool. Thus, human-led governance (attention of token holders) intersects with the automated economy.
Finally, selective integration means we don’t force human involvement where it’s not needed. Most of the heavy lifting should remain automated and token-driven. But where humans naturally appear – as end-users of a product, as external investors, or as providers of rare human feedback (like labeling data for a tricky AI task) – the system should accommodate that and reward it in CTZ if possible. For instance, a human domain expert who provides a crucial insight or evaluation that helps an AI seed could be tipped or paid in CTZ for their contribution. This way, even human knowledge can be tokenized to an extent and folded into the creation process.
The result of a well-designed hybrid integration is a symbiotic relationship: the AI-token economy delivers better and more customized products to human markets, and in return, the attention and money from those markets feed back to strengthen the AI-token economy. Over time, as Karpathy’s quote suggests, AI attention might dwarf human attention, but humans remain the North Star for defining value – the AIs are ultimately optimizing things that matter to us (whether directly or indirectly). Ensuring a feedback channel from human value to token incentives will keep the system relevant and grounded.
Having outlined the conceptual and economic design, we turn to the technical realization. How can we actually build this tokenized compute economy? We propose a vision using either EVM-compatible blockchains (e.g., Ethereum and its Layer 2 networks, or other chains that support smart contracts) or specialized AI-focused networks like Bittensor (TAO) (Bittensor Paradigm) (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher). A combination of both might in fact yield the best results: using an EVM chain for its mature smart contract ecosystem and DeFi integration, while using networks like TAO for heavy-duty AI compute consensus.
At a high level, the architecture is layered as follows:
Layer 1: Blockchain & Smart Contracts – This is the settlement and coordination layer. All CTZ token transactions, staking, reward distribution, and seed metadata (like ownership, royalty rates) live here. An EVM chain (Ethereum, Polygon, etc.) could host the CTZ token contract and a suite of smart contracts for the marketplace of tasks, staking pools, and reward algorithms. We’d implement things like: a TaskRegistry contract where a user can post a task with attached CTZ, a SeedRegistry with all seeds (perhaps represented as NFTs) linking to their code or model hash, and a RewardManager contract that executes the tokenomics rules (splitting fees, minting new CTZ to participants, etc.). EVM compatibility ensures we can integrate with wallets, exchanges, and other services easily. It also allows using established standards (ERC-20 for CTZ, ERC-721 or ERC-1155 for seed NFTs, etc.), and possibly even existing governance tooling (like Snapshot for voting or multi-sigs for treasury). A minimal sketch of this registry and reward bookkeeping appears just after the layer descriptions below.
Layer 2: Intelligence Middleware & Off-Chain Compute – The actual AI computations (model training, inference, data processing) typically cannot run on-chain due to their intensity. Instead, they run off-chain on participants’ hardware. Intelligence Middleware acts here to connect on-chain intent to off-chain execution. A likely approach is to use a network of nodes (similar to how oracle networks like Chainlink operate) that listen for task assignments from the blockchain, execute them, and then return results or proofs back on-chain. These nodes would be the miners/validators in Bittensor’s analogy, but orchestrated through our own protocol rules. We could adapt a decentralized compute marketplace like Golem, or pair a modular data-availability layer like Celestia with a dedicated execution layer, to handle scheduling of compute tasks. Or we could use Bittensor’s existing substrate-based chain as a complementary system: e.g., tasks and rewards are posted on Ethereum, but actual AI model consensus is reached on Bittensor, then checkpointed back to Ethereum for payouts in CTZ. This hybrid approach leverages Bittensor’s expertise in decentralized AI training (with its Yuma consensus and subnets (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher)) while keeping the financial layer on EVM for broad accessibility.
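To ground the Layer 1 description (as referenced above), here is a minimal in-memory sketch of the bookkeeping that a SeedRegistry, TaskRegistry, and RewardManager would enforce. It is written in Python purely for illustration – the real contracts would be Solidity or similar – and the field names, royalty basis points, and settlement flow are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Seed:
    seed_id: int
    owner: str
    artifact_hash: str      # IPFS/Arweave content hash of the model or code
    royalty_bps: int        # creator royalty in basis points, e.g. 200 = 2%

@dataclass
class Task:
    task_id: int
    seed_id: int
    poster: str
    budget_ctz: float       # CTZ escrowed for the task
    settled: bool = False

@dataclass
class Protocol:
    """In-memory stand-in for SeedRegistry / TaskRegistry / RewardManager contracts."""
    seeds: dict = field(default_factory=dict)
    tasks: dict = field(default_factory=dict)
    balances: dict = field(default_factory=dict)

    def register_seed(self, seed: Seed):
        self.seeds[seed.seed_id] = seed

    def post_task(self, task: Task):
        self.tasks[task.task_id] = task   # CTZ assumed already escrowed

    def settle_task(self, task_id: int, provider: str):
        """RewardManager logic: split escrowed CTZ between creator royalty and provider."""
        task = self.tasks[task_id]
        seed = self.seeds[task.seed_id]
        royalty = task.budget_ctz * seed.royalty_bps / 10_000
        self.balances[seed.owner] = self.balances.get(seed.owner, 0.0) + royalty
        self.balances[provider] = self.balances.get(provider, 0.0) + task.budget_ctz - royalty
        task.settled = True

p = Protocol()
p.register_seed(Seed(1, owner="alice", artifact_hash="Qm...", royalty_bps=200))
p.post_task(Task(42, seed_id=1, poster="dave", budget_ctz=100))
p.settle_task(42, provider="node-7")
print(p.balances)  # {'alice': 2.0, 'node-7': 98.0}
```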
Communication between these layers can be achieved via bridges or oracles. For example, once a compute node finishes a task, it might submit a transaction to the blockchain with the result’s hash and a claim for reward. Other validators can verify the result off-chain, or the claim can be backed by an optimistic or zero-knowledge proof. If valid, the RewardManager releases CTZ to the node. This could use an optimistic execution model (as in optimistic rollups): assume the result is correct unless someone challenges it within X blocks by providing a counter-proof. If challenged, run a dispute resolution (perhaps a quorum of validators rechecks the work, or a smaller sub-computation is performed on-chain where feasible).
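A toy version of that optimistic flow, with a challenge window and bond slashing, could look like the sketch below. The window length, bond size, and dispute handling are illustrative assumptions rather than fixed protocol parameters.

```python
from dataclasses import dataclass

CHALLENGE_WINDOW_BLOCKS = 100  # hypothetical dispute window

@dataclass
class ResultClaim:
    provider: str
    result_hash: str
    stake: float              # CTZ bonded by the provider
    submitted_at_block: int
    invalidated: bool = False
    finalized: bool = False

def challenge(claim: ResultClaim, current_block: int, recompute_matches: bool) -> str:
    """Optimistic flow: anyone may challenge within the window by re-running the job
    (or supplying a fraud/zk proof). A disproved result slashes the provider's bond."""
    if current_block > claim.submitted_at_block + CHALLENGE_WINDOW_BLOCKS:
        return "window closed"
    if recompute_matches:
        return "challenge rejected; claim stands"
    claim.invalidated = True
    claim.stake = 0.0          # bond slashed (burned or paid to the challenger)
    return "claim invalidated; provider slashed"

def finalize(claim: ResultClaim, current_block: int) -> bool:
    """If the window passed without a successful challenge, the reward can be paid."""
    if not claim.invalidated and current_block > claim.submitted_at_block + CHALLENGE_WINDOW_BLOCKS:
        claim.finalized = True
    return claim.finalized

c = ResultClaim("node-7", result_hash="0xabc...", stake=50, submitted_at_block=1_000)
print(finalize(c, current_block=1_101))  # True: unchallenged claim pays out
```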
Figure 2 illustrates a simplified view of the token and compute flow across these layers:
Figure 2: Token Flow Across Compute Layers. The diagram depicts the high-level architecture. An End-User or a seed triggers a request, which is handled by a Parametric Product Seed (smart contract or on-chain reference to a model) that defines the task parameters. This in turn engages the Intelligence Middleware layer, which orchestrates the distributed compute off-chain. The middleware breaks the work into distributed jobs and assigns them to available Compute Providers (miners with hardware). CTZ tokens flow downward to pay for these jobs (either pre-staked or as payments for results), while results and proofs flow upward. The Compute Providers return the results (with proof-of-compute) to the middleware, which integrates them into an optimized solution for the seed. The seed contract then produces the final output for the end-user or the next stage of processing. In parallel, token flows reward the middleware (for orchestration) and possibly the seed owner (as royalty) from the CTZ spent. This layered approach ensures the blockchain handles transactions and value, while off-chain networks handle heavy computations, with CTZ tokens bridging the two.
A crucial component in making this work on existing infrastructure is EVM extensions or Layer-2 solutions to reduce gas costs. Many interactions (especially microtransactions for each compute sub-task) would be too expensive on Ethereum L1. We likely need a dedicated Layer 2 rollup or sidechain for CTZ that is optimized for high-frequency events. Perhaps a rollup that uses zk-proofs for verifying compute could be developed: e.g., a zero-knowledge proof that a certain model was trained to X accuracy could be verified on-chain, allowing rewards to be released accordingly without posting all intermediate data. Projects like StarkNet or zkSync might be leveraged for such proofs of compute integrity.
On the other hand, Bittensor’s TAO network offers an alternative: it’s a purpose-built blockchain for AI with its own consensus and token. We could conceive of CTZ being interoperable with it, or even minted on Bittensor itself as a subnet token. Bittensor’s approach to subnets (Comprehensive Analysis of the Decentralized AI Network Bittensor - ChainCatcher) allows specialized clusters (e.g., one for vision models, one for language, etc.), each potentially with its own dynamic token (dTAO). One idea: use TAO as the low-level reward for raw compute (since that mechanism is already in place), then have CTZ as an overlay token that encapsulates higher-level logic and cross-subnet orchestration. CTZ could be exchangeable with TAO at some rate, or earned by converting TAO outputs into more user-facing value. Frontier tech builders may appreciate a discussion of how to integrate with Bittensor – for example, using TAO’s proof-of-intelligence mechanisms to validate our tasks.
Data availability and storage is another consideration. Seeds (AI models) can be large (gigabytes). Storing those on-chain is infeasible; instead we use content-addressed storage like IPFS or Arweave for model weights, code, and datasets. The seed’s smart contract would store just a hash or pointer to the model on IPFS. Compute nodes retrieve it, run it, maybe update it (if fine-tuned) and put a new version on IPFS, updating the pointer via the seed contract. This way the heavy content lives off-chain but is verified by cryptographic hashes on-chain.
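The pattern is simple: only a content hash and a pointer live on-chain, while the bulky artifact lives off-chain. A minimal sketch follows, using SHA-256 in place of an IPFS CID and a single-owner update rule as simplifying assumptions.

```python
import hashlib

class SeedPointer:
    """On-chain part of a seed: just the owner and the current content hash.
    The model weights themselves live on IPFS/Arweave, keyed by this hash."""
    def __init__(self, owner: str, initial_blob: bytes):
        self.owner = owner
        self.content_hash = hashlib.sha256(initial_blob).hexdigest()
        self.version = 1

    def update(self, caller: str, new_blob: bytes):
        """After off-chain fine-tuning, an authorized caller pins the new weights
        off-chain and records the new hash on-chain."""
        if caller != self.owner:
            raise PermissionError("only the seed owner (or its contract) may update")
        self.content_hash = hashlib.sha256(new_blob).hexdigest()
        self.version += 1

seed = SeedPointer("alice", b"model-weights-v1")
seed.update("alice", b"model-weights-v2")
print(seed.version, seed.content_hash[:12])
```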
Security is paramount. Attacks could include bogus results, model theft, or denial-of-service by spamming tasks. Our design mitigates these through incentive alignment: nodes that provide bad results don’t get paid and lose reputation, while spamming tasks costs CTZ (due to reasoning-rights pricing). We might also implement stake-slashing: compute providers and middleware operators stake some CTZ and can lose it if they are caught cheating (for example, an on-chain dispute finding that they falsified a result). This creates a trustless yet secure environment, similar to how proof-of-stake validators operate.
Finally, a note on the front-end: since this is intended as a public PDF and web platform white paper, there would likely be a dashboard or DApp where one can browse available seeds, ongoing tasks, yields, and so on, all interacting with the smart contracts. Builders reading this could envision creating such front-ends, where seed creators upload models, users trigger jobs, and providers offer their hardware, all mediated by CTZ under the hood.
In summary, the technical architecture uses a combination of on-chain smart contracts (for economics and coordination) and off-chain compute networks (for actual AI tasks). Whether using Ethereum & rollups or a specialized chain like TAO, the key is ensuring interoperability so that value can seamlessly flow as tokens while computation flows across devices globally. The approach is modular: one could swap out the compute layer (say use a cloud provider or a decentralized network) without changing the economic logic on-chain, because CTZ and the contracts abstract that away. Conversely, CTZ could hook into other ecosystems (for example, integrate with DeFi by allowing CTZ to be borrowed/lent, enabling leveraging compute investments). This flexible, layered architecture is both ambitious and feasible given today’s blockchain and AI technology trends.
Identifying the right initial application domains is critical for proving out this tokenized compute paradigm. We target digital-only domains where value can be generated entirely through computation, and where rapid, iterative improvement yields significant rewards. Below is a ranked list of five application domains best suited for early deployment, along with why they are ideal candidates:
Autonomous Code Optimization and Software Improvement – Rank 1. This domain is highly suitable because it directly deals with self-improving systems. An AI can refactor code, search for more efficient algorithms, and apply patches continuously. The value is immediate: faster or more efficient code saves money and improves performance. We already have proof that AI can discover better algorithms – e.g., DeepMind’s AlphaDev found sorting algorithms “up to 70% faster” than humans had achieved (AlphaDev discovers faster sorting algorithms - Google DeepMind). In a token economy, a Code Optimization Seed could take in any software project and continuously profile and improve it, earning CTZ based on the performance gains delivered. Companies or open-source projects would stake CTZ for improvements, and optimizers (AI or human-assisted AI) would compete to earn it by delivering better code. Because code is purely digital and improvements can be verified quantitatively (e.g., runtime reduced by X%, memory by Y%), it’s an excellent testbed. Success here not only yields direct economic benefits (cost savings) but also can produce components (optimized code) that feed into other domains. A minimal sketch of how such verified gains could translate into CTZ payouts follows this ranked list.
Algorithmic Trading and Financial AI Agents – Rank 2. Finance has been a playground for algorithms for decades. In this domain, an AI agent makes high-frequency trading decisions, portfolio optimizations, or crypto market-making strategies. It is entirely digital, and results are directly monetary (profits/losses). This suits our model because significant compute can be thrown at finding even slight edges in the market, which then translate into real value. An algorithmic trading seed could allow traders to invest CTZ to run massive simulations or optimizations of strategies, with rewards coming from actual trading gains shared back as CTZ (perhaps via tokenized profit-sharing agreements). The reason this is ranked high is the immediate willingness to pay – traders will gladly spend on computation if it gives them an edge. Also, the data for training (market data) is abundant and digital. One caveat: connecting to real financial systems requires careful bridging (potentially via oracles for price data). But early on, this could be tested in crypto markets natively, where CTZ might even be used as a stake to run certain automated strategies on DEXes. The competitive nature of trading will push the arbitrage and optimization aspects of our system to the limit, which is great for battle-testing it.
Adaptive UI/UX and Personalization Engines – Rank 3. This domain focuses on improving user interfaces or user experiences automatically by analyzing user interaction data and adapting software on the fly. While it interfaces with human users, it is digital-only in the sense that the product (a website or app) is software that can self-modify. An adaptive UI seed might, for example, use AI to rearrange a website’s layout to better suit each user’s preferences, or adjust difficulty in a game based on player behavior. This is well-suited because it generates a lot of subtle computation – A/B testing different variants, reinforcement learning from user clicks, etc. Human attention is involved here as the thing to optimize (so it’s a bit hybrid), but that’s fine because we can measure success by improved engagement or conversion rates and feed that back into the token rewards (the “hybrid integration” we discussed). Companies currently spend vast effort on manual UI optimization; automating it with AI that continuously learns could be very valuable. The token model could be that a site owner stakes CTZ and tasks the system to improve a KPI (say sign-up rate). The AI tries many tweaks (compute intensive if done in a high-dimensional space) and once it finds improvements, part of the value (like increased revenue) can be converted to CTZ to reward the process. The reason it’s rank 3 is that it’s slightly harder to directly monetize than trading or code (value is indirect via user satisfaction), but it’s an area where even small improvements are worthwhile at scale (think of giants like Amazon fine-tuning their checkout process, a small UX change can yield millions more in sales).
Autonomous Cloud/DevOps Optimization (AIOps) – Rank 4. Managing cloud infrastructure and IT systems is complex, and AI can be used to optimize resource allocation, auto-scale services, detect anomalies, and even orchestrate microservices better than static rules. This domain is digital and has clear metrics (cost of infrastructure, uptime, latency, etc.). An AIOps seed could, for example, observe the usage patterns of servers and reconfigure them or redistribute loads in a data center to save energy or improve performance. The token incentive could be tied to cost savings: for instance, if the AI reduces the monthly cloud bill by 10%, a portion of that savings is used to buy CTZ and pay the AI (and its seed creator). This domain is attractive because companies spend billions on cloud bills and IT operations, and even a small percentage improvement is big money. It leverages computational interest: an AI that continuously monitors and tweaks systems will perform better over time as it learns the patterns of usage. It’s ranked slightly lower at 4 mostly because it often requires integration with legacy systems and careful safety checks (you don’t want an AI crashing servers in an attempt to optimize). But as a contained experiment (perhaps on a smaller-scale cloud or a specific subsystem), it can show quick wins. Importantly, it’s entirely within the digital realm of server configurations and software deployments, so it fits the criterion.
Automated Cybersecurity and Threat Detection – Rank 5. In cybersecurity, AI can analyze network traffic, user behavior, and code to detect intrusions or vulnerabilities. This is a race against malicious actors, and heavy compute (for analyzing millions of events, running attack simulations, etc.) is often needed. A security agent seed could continuously mutate and test software for weaknesses (fuzzing) or scan logs for anomalies in real-time, improving its detection rules through machine learning. The value of preventing a breach is enormous, though hard to quantify. A token model could involve insurance-like pools: companies stake CTZ for an AI security service that guarantees a certain level of protection, and the AI network earns those tokens by fulfilling that guarantee (with penalties if a breach occurs that it should have caught). This domain is digital because all attacks and defenses happen in software space. It’s ranked 5 because, while very important, it is reactive (you only see value when something is prevented, which is hard to measure) and adversarial (attackers might also target the AI). However, it’s a perfect candidate to demonstrate computational arbitrage: the AI can run far more checks and analyze far more data than a human team could, and token incentives could crowdsource security expertise (people writing detection plugins as seeds, etc.). If executed well, any detected threat and fix could be immediately propagated across the network (seeds sharing learned immunities), acting like an immune system. Early deployment could focus on something like smart contract security auditing – an AI that continuously checks deployed Ethereum contracts for vulnerabilities (there’s demand for this in the crypto world itself).
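To make the rank-1 domain concrete (as referenced above), here is a minimal sketch of paying an optimizer CTZ for a verified runtime reduction, capped by the stake a project posted for improvements. The reward-per-percentage-point rate and the cap rule are illustrative assumptions, not a specified fee schedule.

```python
def speedup_reward(baseline_runtime_s: float, optimized_runtime_s: float,
                   staked_ctz: float, reward_per_pct: float = 0.5) -> float:
    """Pay an optimizer CTZ in proportion to the verified runtime reduction,
    capped by what the project owner staked for improvements."""
    if optimized_runtime_s >= baseline_runtime_s:
        return 0.0
    pct_gain = 100.0 * (baseline_runtime_s - optimized_runtime_s) / baseline_runtime_s
    return min(staked_ctz, pct_gain * reward_per_pct)

# A benchmarked hot path drops from 120s to 84s (a verified 30% reduction);
# the project staked 25 CTZ and pays 0.5 CTZ per percentage point of speedup.
print(speedup_reward(120.0, 84.0, staked_ctz=25.0))  # 15.0
```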
These five domains are by no means exhaustive, but they represent a mix of areas where our paradigm likely outperforms traditional approaches. In each case, success would demonstrate that focusing on token-fueled compute leads to faster or better outcomes than focusing on human attention or static software releases. Moreover, these domains are not siloed – improvements in one (say code optimization) directly benefit others (the algorithms discovered could be used by the trading bots or the security agents). So there is a compounding effect: as each domain’s seeds generate computational interest, the knowledge and tools they produce can be shared.
In initial deployments, it’s wise to start with controlled environments – perhaps private testnets or consortia for a given domain. For example, partner with a software company to use CTZ incentives internally for code optimization and document the efficiency gains. Or run a pilot in algorithmic crypto trading with a few strategies to show profitability. Each success story will attract more developers to publish seeds in that domain and more token holders to stake, creating a network effect.
Over time, one can imagine vertical-specific communities forming around each domain within the platform, analogous to subreddits or DAOs: a Code Optimization DAO, an Algo Trading DAO, etc., each curating and governing their seeds and maybe issuing domain-specific tokens that interface with CTZ. This would further drive participation by experts in those fields.
In conclusion, focusing on these digital-only, high-impact areas provides the best chance to validate the token economy concept. They have clear metrics to optimize, strong demand for improvement, and purely computational value generation. Early wins in these areas will pave the way for expanding the paradigm to even broader scopes (potentially including physical domains eventually, though that introduces new complexities). But as a starting point, the above domains offer fertile ground for “computational seeds” to grow and showcase the power of this new paradigm.
A cornerstone of this paradigm is ensuring that those who create the initial seeds – the innovators and developers – are fairly rewarded as their creations evolve and generate value. Traditional tech often struggles with this (e.g., open source maintainers under-appreciated while big companies profit from their code). Our token economy explicitly addresses it through built-in value capture mechanisms for seed creators. We touched on some of these earlier (like computational interest royalties); here we consolidate and expand on how value flows back to creators, and how it can be distributed in a sustainable, equitable way.
Creator Royalties on Compute Usage: As described, whenever a seed is used or improved via token-backed compute, a small percentage of the CTZ spent is allocated to the seed’s creator. This is automatically enforced by the smart contract governing the seed (Chain Insights — How Can NFT Creators Enforce Royalty Fees Once and For All? | by Chain | Medium). If Alice deploys a new AI model seed, every time anyone runs that model (directly or as part of a pipeline) and pays CTZ, Alice gets, say, 1-5% of that payment. This model is analogous to NFT royalties but applied to usage rather than resale (How NFT royalties work: Designs, challenges, and new ideas - a16z crypto). The distribution can be immediate (Alice’s balance increments in real-time as usage happens) or periodic (accumulated and paid out daily/weekly). The crucial effect is that creators have ongoing income as long as their seed remains valuable in the network. They need not constantly intervene or sell services manually; the protocol handles it. Over time, a portfolio of seeds can become a significant revenue stream – empowering independent developers and small teams to focus on creation while the network works for them. A sketch of such a usage-based royalty split (combined with the fractional ownership described below) appears after these mechanisms are laid out.
Initial Token Grants or Retroactive Rewards: Sometimes a seed might require significant upfront work (e.g., training a large model, which costs a lot of compute before deployment). To encourage building such seeds, the ecosystem can offer grants or retroactive rewards. A grant could be given in CTZ to fund the initial compute or development time, possibly via a community pool or foundation. Retroactive rewards mean that if a seed becomes very successful (judged by, say, total CTZ spent on it in the first 3 months), the creator gets a bonus lump sum of CTZ as a “thank you” from the network for creating a high-impact seed. This could be algorithmic or decided by governance vote. It parallels programs in some crypto communities where developers of widely-used protocols were rewarded after the fact (e.g., Uniswap’s UNI token airdrop was partly to give back to early contributors/users). In our case, a portion of CTZ inflation or ecosystem fund could be reserved to reward top seed creators periodically.
Ownership Tokens for Seeds (Creator DAOs): A seed could be launched as a mini-project with multiple contributors. In that case, one can issue sub-tokens or fractional ownership of the seed’s future CTZ flows. This turns a seed into a micro-DAO or community project. For instance, a group of 5 researchers co-create a new AI model. They mint an NFT or a set of tokens representing ownership, split among themselves, maybe even with some sold to fundraise. The CTZ royalty from that seed is then automatically split to the holders of those tokens (which could be encoded in the seed’s contract – e.g., it could reference an ownership registry to distribute the royalty). This way, collaboration is facilitated and everyone’s share is transparent. If someone wants to exit, they can sell their ownership stake without affecting the seed’s operation. This mechanism allows creative teams to form around seeds and ensures fair splits – all enforced by code. It could also involve community: perhaps early adopters or testers of the seed get a small ownership token as well, aligning incentives for them to help improve it.
Economic Moats for Creators: One might worry: if a seed is fully open (code and model accessible), could someone copy it and redeploy without the original royalty, undercutting the original? To counteract that, several factors help maintain the creator’s edge: (1) Network effects and reputation – the original seed might already have usage and integration, so a copycat would have to attract users away. (2) The protocol could discourage identical duplicates by, for example, downranking seeds that are mere clones without improvement (maybe via governance or a curation layer). (3) If someone does copy and eliminate the royalty, they might lower costs a bit, but they also lose the creator’s ongoing support/expertise. A savvy user might stick with the original knowing the creator has incentive to keep updating it (since they earn from it), whereas a clone has no guaranteed maintenance. That said, competition can happen – if someone truly improves the seed and redeploys, then they become a creator in their own right (with their version’s royalty). This is akin to forks in open source – healthy competition but typically the innovative edge stays with those pushing the frontier. The token incentives ideally foster continuous innovation by creators to stay ahead, rather than complacency. If you invented a popular seed, you’d be motivated to keep enhancing it (or release a v2) to retain users, because clones could nip at your heels. The perpetual royalty gives you resources to do so.
Distribution of Value to Support Ecosystem: Creators might also choose to distribute some of their earned value to others, which could further strengthen the system. For instance, a successful seed creator might allocate a portion of their royalties to those who supplied training data or who helped with testing. This can be done programmatically: maybe the seed contract says 3% goes to creator, 1% goes to data providers (with a list of addresses or another contract managing that). Another scenario is creators “pay it forward” by reinvesting CTZ into new seeds or bounties, effectively becoming patrons in the ecosystem. This voluntary distribution, while not enforced, could become a norm or be encouraged through social consensus (akin to open source ethos but with real currency). The existence of token rewards could ironically encourage more sharing: since creators are rewarded by the system, they might feel less need to hoard their breakthroughs and can afford to support others.
Transparency and Trust: All flows to creators are recorded on-chain. If you use a seed, you could see exactly what cut goes to whom. This transparency builds trust – users won’t feel gouged because they know the fee structure upfront and see it executed exactly. It’s also flexible: creators can adjust their fee (maybe lowering it if they got enough or raising if they think it’s warranted), but such changes can be made transparent and possibly subject to user acceptance (users could get notified of a royalty change and choose to switch seeds if they dislike it, applying pressure to keep fees reasonable).
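Pulling the royalty and fractional-ownership mechanisms above together, the following sketch splits a single usage payment among ownership-token holders, optional data providers, and compute providers. The basis-point parameters and the even split among data providers are assumptions for illustration only.

```python
def split_usage_payment(payment_ctz: float, royalty_bps: int,
                        ownership: dict, data_provider_bps: int = 0,
                        data_providers: dict | None = None) -> dict:
    """Split a CTZ payment for one seed invocation: a usage royalty goes to the
    seed's ownership-token holders (pro-rata), an optional slice to data providers,
    and the remainder pays the compute providers."""
    payouts = {}
    royalty = payment_ctz * royalty_bps / 10_000
    total_shares = sum(ownership.values())
    for holder, shares in ownership.items():
        payouts[holder] = payouts.get(holder, 0.0) + royalty * shares / total_shares
    data_cut = payment_ctz * data_provider_bps / 10_000
    if data_providers and data_cut:
        per = data_cut / len(data_providers)
        for dp in data_providers:
            payouts[dp] = payouts.get(dp, 0.0) + per
    payouts["compute_providers"] = payment_ctz - royalty - data_cut
    return payouts

# 3% royalty split 60/40 between two co-creators, plus 1% to a data provider.
print(split_usage_payment(1000, royalty_bps=300,
                          ownership={"alice": 60, "bob": 40},
                          data_provider_bps=100, data_providers={"lab": 1}))
# {'alice': 18.0, 'bob': 12.0, 'lab': 10.0, 'compute_providers': 960.0}
```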
In effect, the token economy aligns incentives so that creators, users, and compute providers are in a positive-sum game. Creators want to publish great seeds because they’ll earn from widespread use. Users want to use seeds because they get better service through AI, and they accept creators earning a slice as fair compensation. Compute providers just care about total CTZ they earn, which is unaffected by who gets the small royalty – it just slightly increases the cost users pay, which they’re fine with if the product is worth it.
To illustrate with a hypothetical: Suppose a developer publishes a seed “AutoCoder” that uses AI to optimize code (domain rank 1 scenario). She sets a 2% royalty. The seed quickly gains traction – in a month, 10,000 CTZ worth of compute has been spent by various projects using AutoCoder to improve their software. She earns 200 CTZ from that activity. If CTZ were trading say at $10, that’s $2,000 earned in a month passively. Meanwhile, those using it might have saved much more in engineering costs or improved performance, so it’s a win-win. Now she can use some of that CTZ to maybe pay for further research to upgrade AutoCoder (ensuring it stays the best and people keep using it), and maybe stake some CTZ to back another newcomer’s seed she finds promising – thus acting like an investor too. Over time, her initial effort of making a great seed pays back continuously, potentially far exceeding what she could have made by selling the software outright once.
Preventing Centralization: One must consider whether a few creators could dominate and capture outsized value (like Big Tech in attention economy). The difference here is the low barrier to entry – anyone can introduce a competing seed, and usage can shift quickly if it’s better or cheaper. The token system is open. So while some may become “rockstar” creators, there’s always the threat of new innovators, keeping them in check. Additionally, governance could impose caps or progressive decentralization of very fundamental seeds (for example, if a seed becomes part of critical infrastructure, the community might vote to reduce its royalty over time, while compensating the creator in a lump sum – speculative, but mechanisms exist if needed).
In summary, the design ensures creators capture a fair slice of the value pie they help bake, distributed automatically as their seeds flourish. This transforms the traditional model of value distribution: instead of creators needing to monetize via ads, subscriptions, or selling out to big companies, they can plug into this network and directly earn from usage. It could usher in a golden era of independent AI and software creators, analogous to how YouTube or app stores enabled independent content creators, but with far more equity since the terms are coded and not controlled by a central platform taking a huge cut. We envision a future where someone can publish a brilliant AI model from their home and that model itself becomes a revenue-generating “micro-business” through tokens. This democratizes the innovation economy and keeps it sustainable by continuously funding those who push it forward.
We are on the cusp of a paradigm shift in how digital products are conceived, built, and monetized. “Creation in the Token Economy” heralds a move away from designing for clicks and fleeting attention, towards designing for endless loops of algorithmic improvement fueled by cryptographic tokens. In this white paper, we articulated a framework where parameterized seeds – malleable, self-evolving product kernels – continuously grow in value via computational interest, paid for and measured in tokens rather than eyeballs. We explored the primitives that make this possible (Reasoning Rights, Computational Arbitrage, Intelligence Middleware, Computational Catalysts) and detailed how the Catalyze (CTZ) token interweaves incentive streams to align all participants in this ecosystem.
The implications of this new paradigm are profound. Products are no longer static offerings; they become living entities in a digital ecosystem, economically motivated to learn, adapt, and improve autonomously. Users cease to be targets for monetization, and instead become beneficiaries of ever-improving services – perhaps even investors and collaborators in those services via tokens. Developers and creators gain new avenues to profit from innovation without ceding control to ad-driven gatekeepers, receiving due rewards as their creations flourish in the network (Chain Insights — How Can NFT Creators Enforce Royalty Fees Once and For All? | by Chain | Medium). Infrastructure providers find efficient markets for their compute resources, directed to the most valuable tasks through token signals (Bittensor has several serious flaws. Is it doomed to fail? | 金色财经 on Binance Square). In short, the token economy can turn what used to be friction (the cost to run AI, the effort to update software) into the engine of a new value cycle.
This white paper is both an educational resource and a strategic call-to-action. To technologists, entrepreneurs, and builders at the intersection of AI and Web3: the tools and conditions are ripe to start building this future. Large language models and AI services are here and improving. Blockchains and tokens have matured to a point where complex economic behaviors can be encoded on-chain. The cost of compute remains a gating factor for AI – which is exactly why a market-based approach to harness every bit of useful compute is needed (Navigating the High Cost of AI Compute | Andreessen Horowitz). By aligning incentives properly, we can unlock an unprecedented scale of collaboration between humans and machines: think of it as Decentralized Autonomous R&D.
Consider this a blueprint – not a finished design, but a starting plan. Many challenges remain to be solved through actual implementation and experimentation: for example, hardening the security of off-chain computation, fine-tuning the tokenomics parameters, and ensuring that the pursuit of token rewards remains correlated with delivering real-world value (avoiding Goodhart’s-law traps where proxies diverge from goals). These require the collective intelligence of the community. We envision launching prototypes in one or two of the high-priority application domains (such as an Autonomous Code Optimizer DAO or an algo-trading network running on CTZ) to gather data and iterate on the concept.
To that end, we invite collaborators and pioneers across disciplines:
AI Researchers & Engineers: Help design seeds or middleware that can learn and adapt – your innovations can now be directly monetized and scaled on-chain. For instance, if you’ve developed a novel model training trick, embedding it in a tokenized service could amplify its impact and funding.
Blockchain Developers & Economists: Assist in building the smart contract infrastructure and refining the game theory. Challenges like oracle design for verifying compute, or creating stable reward loops, are in need of clever solutions. There’s space for new DeFi-like constructs (futures on compute, stake-for-service models) to emerge.
Frontier Entrepreneurs: Identify niche problems that this paradigm can solve better than existing methods and drive real adoption there. Whether it's a startup offering “AI-as-a-service” via CTZ or a platform integrating these tokenized AI services into mainstream products, there are first-mover advantages to seize.
Investors & Visionaries: Support this direction with capital and insight. Bootstrapping such an ecosystem might require initial funding of development and seeding the token economy with liquidity and usage. By backing these efforts, you’re not just investing in a project but potentially in the next foundational layer of the digital economy.
We also encourage engaging with existing communities like SingularityNET, Fetch.ai, Ocean Protocol, and Bittensor, which have been exploring elements of AI marketplaces, agent economies, and tokenized compute. Our paradigm is aligned in spirit with these, and collaboration or at least mutual learning could accelerate progress. For instance, Bittensor’s success and challenges in incentivizing decentralized model training (Bittensor has several serious flaws. Is it doomed to fail? | 金色财经 on Binance Square) provide invaluable lessons we can incorporate (and we have cited some along the way). By learning from such projects, we can iterate faster and avoid known pitfalls.
Ultimately, the vision is grand: a self-sustaining computational economy that drives continuous technological advancement. One where attention as we know it becomes a niche currency (perhaps 99.9% of “attention” indeed becomes AI attention (Have humans passed peak brain power? | Martin Signoux)), and the predominant currency is intelligence – measured in reasoning operations, bought and paid by tokens. This might sound like science fiction, but as we’ve shown, all the pieces are here in 2025 to start assembling this reality.
In closing, imagine a not-too-distant future scenario: A developer releases a “seed” for autonomous scientific research – an AI that can form hypotheses and run virtual experiments. They release it into the token economy. Thousands of CTZ tokens are allocated as the AI scours data, collaborates with other seeds (some focusing on simulations, some on literature review), and over months it churns out discoveries – perhaps even a potential new material or drug candidate – which are then validated and patented (with smart contracts ensuring credit and token rewards to all contributing seeds). The developer who started it sees their small initial idea blossom into something far beyond their own capacity, supported by a whole network of computation and earning them and many others ongoing rewards. Humanity gains new knowledge and solutions faster than ever, propelled by an army of tireless digital researchers that themselves grow smarter each day. This is the promise of creation in the token economy: a new paradigm where innovation is automated, incentivized, and democratized at scale.
It’s an ambitious path, but it’s one worth pursuing. Let us move from vision to reality – one computational seed at a time. The call to action is clear: join us in building this new paradigm, and together, let’s catalyze the next great wave of digital value creation.