<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>0xjacobzhao</title>
        <link>https://paragraph.com/@0xjacobzhao</link>
        <description>Crypto x AI | ex-@ArweaveSCP @Mirana @OKX_Ventures @Indodax | ENTJ/INTJ</description>
        <lastBuildDate>Wed, 15 Apr 2026 03:20:06 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>0xjacobzhao</title>
            <url>https://storage.googleapis.com/papyrus_images/155bc51b3e77a4eaaf35f69f8e950679</url>
            <link>https://paragraph.com/@0xjacobzhao</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[Turning Probability into Assets: A Look Ahead at Prediction Market Agents]]></title>
            <link>https://paragraph.com/@0xjacobzhao/turning-probability-into-assets-a-look-ahead-at-prediction-market-agents</link>
            <guid>ZrvT1AvXlFrc3DCe7z7r</guid>
            <pubDate>Wed, 04 Mar 2026 16:03:46 GMT</pubDate>
            <description><![CDATA[Prediction markets are evolving from betting tools into a global truth layer, aggregating information into tradable probability signals. Prediction Market Agents enable executable probabilistic portfolio management—using data, ML, and automation to capture mispricing via deterministic arbitrage and structured signals. With infrastructure, strategy ecosystems, and vault models emerging, the space is still early—a breakout moment may be approaching.]]></description>
<content:encoded><![CDATA[<p>In our previous Crypto AI research, we established that while <strong>stablecoins</strong> and <strong>DeFi</strong> offer immediate utility, Agents represent the critical user interface for the AI industry. Consequently, we define two primary value paths for Crypto-AI integration: a short-term focus on <strong>AgentFi</strong>, which automates yield strategies on mature DeFi protocols, and a medium-to-long-term evolution toward <strong>Agent Payment</strong>, enabling autonomous stablecoin settlement via emerging standards like ACP, x402, and ERC-8004.</p><p><strong>Prediction markets</strong> became an undeniable industry trend in 2025, with total annual trading volume surging from approximately $9 billion in 2024 to over $40 billion in 2025, a more than fourfold increase. This growth is driven by multiple factors: hedging demand created by macro-political uncertainty, the maturation of infrastructure and trading models, and a thaw in the regulatory environment (Kalshi's lawsuit victory and Polymarket's return to the US). Early prototypes of <strong>Prediction Market Agents</strong> are appearing in early 2026, and they are poised to become a new product category in the agent field over the coming year.</p><h3 id="h-i-prediction-markets-from-betting-tools-to-a-global-truth-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Prediction Markets: From Betting Tools to a "Global Truth Layer"</strong></h3><p>A prediction market is a financial mechanism for trading on the outcomes of <strong>future events</strong>. Contract prices essentially reflect the market's collective judgment of the probability that an event occurs. 
Its effectiveness stems from the combination of <strong>crowd wisdom</strong> and <strong>economic incentives</strong>: in an environment of anonymous, real-money betting, dispersed information is rapidly integrated into price signals weighted by financial willingness, thereby significantly reducing noise and false judgments.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/383166890863137c40183bdd8cd4e2ec2bfd31b95c7426c7beb0d34763f3c9a5.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAC50lEQVR4nKVT30tTYRh+b/ofootu6iYi6LquhG5CRBLzJiKpvJGKoCASDJOgsiQTQyNSotAaKOqmtk3PTgfX2Rwe58llTpcy5+R4nGduZ5/fdnbOF+c7NZd5EfO5+n693/M+7/s+4Pf77XYHz/vsdjvLsuPjE2NjYwzDjIyMMozH5XKL4jf9v5HXtMKaEIIQgkgkIghCkEIUxUAgIMsyxlhVMwjtIIQwxqRUYIxBK+IsJi8B2Uwmk0oVn5gKSv6uGAb9RF4UIx4H3eatb00Fuq5rFOQA0Gm4r/tlaKC7sEUIRaM/gRCyTWFelAqDECk8ew9AtLVbChRFCQaDg4MMHDB3g5YitSm/qzpRC8A63NZhMBjkOE4cnwBBEKz0948npqxIJBKLxSytGONCVc3ktZxZnM6GmwCnAToGgoSQ+R9hluX8/qlhxgeiKMqyvO8saXmzVwghnudFUcQYS5IUi8UkSYpSYIxzdIgfX6sDExfb+mcIIaqKeN7HcdyEc9TqQWrfHmQQHvwSVlFuM7GlKEld16k5TOxQo2RzOUKM5bUEwFWAcoDafsEkIIQsLJgihp3sX2NKC2JoeSOr5QkhTv/y0evvKbFhXVkL489jLW8uoaoT4DIcq4HqC864SWAQQ1XVeHx9dVsyCf61wu/Iig44fKOphz9Z3zf9fY0QQ5JVVc0mtpAkpbcURAipf+GCU5VwuwIenYem8qEFgYab+e36QNf1VDq9nVY3EsnwysaDnsnnvb6G1xwcqoPjlwCuANTAkVtwrhXKmqD8KZxphLN3oawRqluhpg5aKqG5Eu5XwMOq0aVZi4COLiVQlGQ8Hue4yakZcW1TmZ5ffdbjvtPS19bLsaFVbzT0dtj56tPYR0Z44wh8mPQ+6bN1fXa4I3POxVCXa6R9qHdo2ju1vuJZmutx272BwA4yle0qaG5uttlsLMt6GOard1KSJMspe4qWTiWX5ufoldUCywSG5YMC9hQcY/wLFmHrauMOlwgAAAAASUVORK5CYII=" nextheight="429" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><em>(Note: "Prediction Market Nominal Trading Volume Trend Chart" from </em><a target="_blank" rel="noopener noreferrer nofollow ugc" 
class="dont-break-out" href="https://dune.com/queries/5753743/9335712"><em><u>Dune Analytics</u></em></a><em>.)</em></p><p>By the end of 2025, prediction markets had largely consolidated into a duopoly dominated by <strong>Polymarket</strong> and <strong>Kalshi</strong>. According to <em>Forbes</em>, total trading volume in 2025 reached approximately $44 billion, with Polymarket contributing about $21.5 billion and Kalshi about $17.1 billion. February 2026 weekly data shows Kalshi's trading volume ($25.9B) has surpassed Polymarket's ($18.3B), making it the larger of the two venues by volume. Kalshi, leveraging its legal victory in the earlier election-contracts case, its first-mover compliance advantage in the US sports prediction market, and relatively clear regulatory expectations, achieved rapid expansion. Currently, their development paths have clearly diverged:</p><ul><li><p><strong>Polymarket</strong> adopts a hybrid CLOB (Central Limit Order Book) architecture with "off-chain matching, on-chain settlement" and a decentralized settlement mechanism. It has built a globalized, non-custodial, high-liquidity market, forming an "onshore + offshore" dual-track operational structure after its compliant return to the US.</p></li><li><p><strong>Kalshi</strong> integrates into the traditional financial system, accessing mainstream retail brokers via API and attracting Wall Street market makers for deep participation in macro and data-based contract trading. 
Its products are constrained by traditional regulatory processes, leading to a lag in addressing long-tail demands and sudden events.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/abaa6581c2cc4e395ff917120d2b27051ec450ddfc8c93d5fef786f7394c0614.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGZklEQVR4nC2S/VPSCR7HuT9gb27mur3Z27vb8sqypbXVTFwLj7y10jSfEAX0KworoEggghI+ICj5QCIiaZSa2rpyeZzmWoqQEg6k+QAmKsrD8CVBuC+aed023bU31H7mPZ+Zz0+veX3mDXO4vaDHD3r8vsD+3NLKSfhXSclpOHxeWkYGIja2jMNuvHkzC4f75vw55F/jmOXleYT8by9eJJHJyemZOAIpIxufnI5JTsdcRedcSUXjCUVboNftfw369l5C+w63F7bpdC+a1xbMFrPFaphfzsYDkdExCZeTI84gQ49HZqALUjPzUzOBEyejv4QjcHhqbj7twt9SI6Pjz8cgUi8hr1HykdHhxw7/Hok4jYqNuIyKmdWOmZ7plozaZePMunUTtmC2jKufiKXyiChETOx5dDZePa2f0M7GxLFORdFORdF+dywXFpxDH/aZsMjiI3DSp2GFnOzEXm6kvCS8m40YrItPOgULhcEa8EdbgC9EmM8bsX9sxP5pTvcoaHC3b6CtQ67o6eu9PyiWyrT62U0XWMO/czY68YsjERfis7A4WvjpuOMnEGejL+UB5TLZgzaZStXfLRcUE7FX2NTs3HQUPg2FTjpXyyKtzI4tz6iWn6hMulHLmgUGevxyxV02lyvr7JLeusUX1uvn5g/evevuHUWjgfj4JDg8MioqFg6PDA2FHz0aFh5+9vLltN5+lcW80CkVolBxSGTMibDQixfjkchvMjOvJiUmEAuBOYN+G7QFXwR6/L33B1kVFXLF3f2f3v7nv/+D9g/cPqhdpiSSmClpOazyOtMKaFoBX1hemlfdphXQYNyY0Czq1ON9txpqqjhFRPzyvP792903r3Ze+cFdrzPgdbi2Vq0vFrbsdpjD7R1SqWSdinvf/2AD3dsQtA1BO3t7MtkgANCupGTn5BABQkkRuYzF4XO49XQ6T1gv1cyYddoJRXs9h82gkAAuhyFuElZVltXzr/OrK8RNAvPzWdfW6rrVGgT0/6AUS2UNTc1Lq6vQwcHO3q43EGiV3EtMzDoR9jUCgQr5y0k4/MxvD31++MjxkJCwTz75tKd/VD0+UsulfnUqLOL0ydBjh1kMqlwmHrrf06vouN0hMT+fdVhXfjEYGFL+ODX15v37nb29j9mGoA75YF1dK48nYjJr6uvb2RwBQChpbOx4YXEbjJYJzeK0eny4XyaorRQJq5pFfOlN0cT4sPL77r5u2ciDAY/TumVZDgJcHh+7gqvouff255/dPr8HgryBgNvnr6mVEUnM5Ks5mWggK5uYdAWDxhRgcCQ6o4rO4ClV2h+H798UMPH4rGx0iqRZaNCpDTq1fnpyWj36TK91WFeslsUgYNPpbmmTKof/MW8yuzxebyDggSC3D2Kym+mM6iwMIQ+g5gKUhEtplGK2UNROZ/CKyGWDD6ZGlf1NtTRaCbmUSpSIRXrtI5328UfG44fDL5aMvxjY3dvyrtuU4hIWm7O6YYX2DzwQ5H/1WijsotOrMFhiEbkMINDQWfms8lqDccNoXFsw
OXSG9ccjf1dIajhsBo1aKBELiymFlKICfjWHw6YV5mPH/qn0ONfXbbaggaRdRi0tFTW3cK7zXB6v2+d3+/ytknslJdxMNEChluPwRXn5xQmX0jPQAJHEmFQ/G5+cm3o0equ1lkIppFIILAYFyMUQCTjydwD5OwCLSauqLLNaFoMAh9ur6Onh8qqFN25MaJ7s//TWu7vr23t9p2eYw22gM7i0Ui6VVsFgVrHK62ilwXNGZ9ZMmwz6aXmr4ALqXGpKAjojKSkxHo/NpBQVjKmG/uVxgDbLhxY5YJtOl+rh2N2+gc473ZqnTwP/fuMNBLYhqLd/hM0RcCqF1GI293oDl3eDwayilXKJJMaU9rlGu2zQT/d0ibGY9LzcrGJKQXLytyJh9UPVkHZyZM08v2ae37QsBQHrNmfnne5qPn9gSHnw7p13d3dnb88DQc3igdjYBBjsV3D42aio8yEhYb/+zWeHDv3hQnyKwbgxPvn8qXqsVXAN/uXxI3/+LCb6a1RcDJfDIBOBSs61UiqxhFwA2iwfW+TSGYxTM/qnhjkbCNpA94e81Ew/6+4drODx22S3G1ukAmFTS6u8oVmiGn28ZNowzJlNJpNGPdYmFXd2yZqbGltaGu12K+iyOxxbdrvVYd+027e2nM7/AxU1n0ptjqZqAAAAAElFTkSuQmCC" nextheight="491" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Beyond Polymarket and Kalshi, other competitive participants in the prediction market field are developing along two main paths:</p><ol><li><p><strong>Compliant Distribution Path:</strong> Embedding event contracts into the existing account and clearing systems of brokers or large platforms, relying on channel coverage, compliance qualifications, and institutional trust to build advantages (e.g., Interactive Brokers × ForecastEx’s ForecastTrader, FanDuel × CME Group’s FanDuel Predicts). While compliance and resource advantages are significant, product and user scale are still in the early stages.</p></li><li><p><strong>Crypto-Native On-Chain Path:</strong> Represented by Opinion.trade, Limitless, and Myriad, these leverage points mining, short-cycle contracts, and media distribution to achieve rapid volume growth. 
They emphasize performance and capital efficiency, but their long-term sustainability and risk control robustness remain to be verified.</p></li></ol><p>These two paths—traditional financial compliance entry and crypto-native performance advantages—together constitute the diversified competitive landscape of the prediction market ecosystem.</p><p>While prediction markets superficially resemble gambling and are essentially zero-sum games, the core difference lies in whether they possess <strong>positive externalities</strong>: aggregating dispersed information through real-money trading to publicly price real-world events, forming a valuable signal layer. The trend is shifting from gaming to a "<strong>Global Truth Layer</strong>"—as institutions like CME and Bloomberg connect, event probabilities have become decision-making metadata directly callable by financial and corporate systems, providing a more timely, quantifiable, market-based truth.</p><p>From a global regulatory perspective, compliance paths for prediction markets are highly divergent. The US is the only major economy explicitly including prediction markets in its financial derivatives regulatory framework. Markets in Europe, the UK, Australia, and Singapore generally view them as gambling and tend to tighten regulations, while China and India completely ban them. Future global expansion of prediction markets still depends on national regulatory frameworks.</p><h3 id="h-ii-architecture-design-of-prediction-market-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Architecture Design of Prediction Market Agents</strong></h3><p><strong>Prediction Market Agents</strong> are currently entering an early practice stage. Their value lies not in "AI predicting more accurately," but in amplifying information processing and execution efficiency within prediction markets. 
Prediction markets are essentially information aggregation mechanisms where price reflects the collective judgment of event probability; real-world market inefficiencies stem from information asymmetry, liquidity, and attention constraints. The reasonable positioning for a Prediction Market Agent is <strong>Executable Probabilistic Portfolio Management</strong>: converting news, rule texts, and on-chain data into verifiable pricing deviations, executing strategies in a faster, more disciplined, and lower-cost manner, and capturing structural opportunities through cross-platform arbitrage and portfolio risk control.</p><p>An ideal Prediction Market Agent can be abstracted into a four-layer architecture:</p><ul><li><p><strong>Information Layer:</strong> Aggregates news, social media, on-chain, and official data.</p></li><li><p><strong>Analysis Layer:</strong> Uses LLMs and ML to identify mispricing and calculate Edge.</p></li><li><p><strong>Strategy Layer:</strong> Converts Edge into positions using the Kelly Criterion, staggered entry, and risk control.</p></li><li><p><strong>Execution Layer:</strong> Completes multi-market order placement, slippage and Gas optimization, and arbitrage execution, forming an efficient automated closed loop.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f054950caea7bae6d3eee5acf701842bc818f42511be9f6c78a20ea344d1d217.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGRElEQVR4nJ2Ua0xb5xnHn09VpmmXTppY0naNVKWV+mFTPjSpuqTT1nZJ0zZJE5Y0bVNQaBPahpSkTbFZCBeTAK4NONhcbPAFg48xmIvxBffYB+MrtRPAxg43E2YgIeAbxvhgjjmcyd6mfd9ff7169Lyv9JOeywskSQYCAZPJZDQYLBaz1WI2Ggyu0dFhzKjVqEcdDgzDLBaL0+lEUVSv17vdbofDYchofHzcZrcbMw/cbjeGYTabbWRkxGKxDA6q5ufnU6kUUBQ1OTlZz20SSDo16LCiTy3q7NKgmFSuZNZzVeohGSITiUXt7ZK6unoWu+7+g/symaypuYnD4XQrlf0qlVAk4vP5SqWyqbkZQRCxWKzs7a2qrna5XBRFwXZya5ei/H1Mc+ErNtohG+2wg/665eZrDvrr9qKDM+03Vn3Di87BJZc6PGXcCNifjGmWXNpFlybkM0RnhlfHddGZ4ZDPEPL+uD43EvOb1udMkYdobMm9Q+4mk8k0ILW7O3jtIOcQtLwLjW8B9yhw34DGo1B3CHov/ryvsbSt/EtBaX5P7bdjPWxMWNZy+zKvOM8hvbtoaCn+/KyksqC1PF9a9Q1Sc51x9YKKV9xeWRCwKbaSacEWniApys7+pPUoyD4A6QmQvgsd76Vj4V8AK/nzmFboULCtyA/mDpZRUuNU1ptlLGtX3TTaumTtMCOsscFGn47v1fEnVI1qAeN+P8+uYK949EmCwPEEpCkEgcej0ZWFeHA5HlqOhx5nvJyIPIlHVhPRUDIeScTCJJHwPHC4bGaSwBOxcCIa+u9VKB5ZjUfW4tHgViKKr4dioSfr0WAyuY3jOGxu4hRFff8RMwve2A8nnodjz8OxfXBsP5zIgjdzj9zSIVZpw0BXs17RrJc2DCCN6k7uINKolfHUknt9Xc26boF+SObQdqatFBgGxFhfm+kn1EOkiDQAx7coivrr3lyArH1wOAv++Ds4+Et4ZR+8BvDcq5DdUCmifXmn6Aqj5Gp1SQGzhtZMu8wousL4x1dV9SXCtprub3KKuWWSghw6t0Jy8/Oy9lpVXbHQoBhNEv/pQRpQfZX/Kpx+8xe5R5757MgzF4/s+fTons8OQvbNUyxUYett1StaNP0iFOEOyBtVvYIhJV+r5GtNSqe1f9zYNWpAHGqxEUVsOqlpROlSi00PMC9J7qQBBEFsJ7dX1tfGVqbdq3PuNb8nOO9Z9bvX5rzBR4uxlQge+7fjJG522s1OO75LhDOZEL4ejEeD8ejaRjiajIcyyaex4JNoMBSLbOHpOYKt7SRFUS/U5sJxgAu/hfO/gXO/grN74EIWvA9QfpyrF92WsyqUdVxrh9DTV2toK++uY/Q3VPZzy7s5d/u5TJ2gabSrVM6u0fGrNS21hrZK1T3lpDFFpAV4Mt1kKDkGhwFOPwtnnoWTe9Kwv2fBnwCu/6FUXJNXcz2fffNGa/ltOYsmuZNXdS2f/d0XzOslCOuOineZ+S29o+rkV+duCMquNtAreusLmorl7qEUkUZAAsd3d3bvYiKozX656dJ+Xs5+Xs4BXu4BXu6v7318TcM2PDQPeH5UedBBN9qq7xQZEbUX63Fqun/SDM1a0Ud2lQfV+kyo34r6rWofhs5Z1dOm0YWJ7eR2ukSbm/hOKuX1r4p1vk7jNILNItisPHMixlnL5PLM0trUUtgXWJt5HNFaH2jNruXolv/pxvza5vRieGopvBDGZ1fW555uzC5H/E83ZpfC3kBwcTVKEJk9iG8m0iW6WAM/Ow4vfwS/z4YXz8LeU3DgfDrzfvn3HPF7+fSPacxCrqJe7y6VonkVvK+Zkmucjrt9DoZiJK+imS7UFDbIaa3qSxW8kvahQq5CavakUqk0IJEZU8hmALwDL52Dv
R/Cc2cg60N46TzA2/C322X8npP5xZ8U/VDI6aobmrwl1l2ksy+Vca9UCor4A3eUjpwSLk0w+DVTTBdqr1QJb7WpCzhy8Yg3RWRKlCQIiqI6TBNvM6SnmD2nmd0nmT1nWMpTzJ4PqhWsAatt6p8m7wLm8RvGpk2+hWHvPOZ5ZPTMGT1+48Ss+eGife6xeSpgnVke9i6MTC2YfAF0wj+xsExRmd+UJMlUKkWSO9T/r93/ReROxsROKi2SJP8FHKORIpJVXt4AAAAASUVORK5CYII=" nextheight="497" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-iii-strategy-framework-for-prediction-market-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Strategy Framework for Prediction Market Agents</strong></h3><p>Unlike traditional trading environments, prediction markets have significant differences in <strong>settlement mechanisms</strong>, <strong>liquidity</strong>, and <strong>information distribution</strong>. Not all markets and strategies are suitable for automated execution. The core of a Prediction Market Agent lies in whether it is deployed in scenarios with clear rules, codifiability, and structural advantages. 
The following analysis covers target selection, position management, and strategy structure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a4ec0aaab5e2892c7532bfaa9f0f18678fe117d069a16f2e217c5cec87e3f7f4.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFGElEQVR4nG1UfUwTZxi/ZIv/LUuWuGX730031LHpiIgfY6IsG4sEHVui0m4JodqqM5lCsmbIp3N8qUWmZDhGQLR+zIpmpq3yYVt2UITSL9pCy5WetPaglpfe23u5vsvdKdmHT55cnnsu9/7yPL/f7yVYNm6xmGbDdBKxLJtIwgRCMAlZ8ZmIxxgIhb6UCEH0LDgIIcYphDiMU2LyGPMQsvGFeQDisdg8AAsAxAmv15mZmXH/gYG8f7OjvrT1oqbudEX96aqaqh9O1ahHyN7ZmUksRiQS/qqwYOOG9A/S1+fn523J2rRm9art27fIZEUbN6RnZW3Kzt6q093CGCOEMMYQQgAWCADmHQ67b2ryckNZaf77tT+dUqmUJSXFbW2tPXduB/yeifEhCYCmQ2nvvUP8I3Jzc956843l1xUvEWtWr9q7t2DXzh3r1601GO5hjIlUiuMQ8rodAY8NY6xSHUpPX//FnvyycrVaXdZ+6ULAY5cAIGQNBv0vLZquzs5mzdm2tlaLxWQ06ltaNFrtldbWFq326tkzTTU11c0aTXm5miQtGKcEgGCQ8npdZP+f5IC+89L5i811H6587e2XiU935ai/P6aQHyj97rDT9ijJLSIEOcTxqSVx+xBCFkKBFZEeTuJDYgghBECcZRMEn+JksqKxsRGM8RwTSSYAj/E6gkgjiPrmc4dVyqtabcu5pqH+Powxv7Qk/SYezUp0S8NJhIt4AhJCnKQaAaCw8EurlVwmJ/yYtg39FYk+yd7x8easzQ9NptjcXNPJE9qOFr3+7pTPuQjmIo+nQ0Gf2dyn69H19T4wGPQWi4mm6Ugk7PdPSedwKCkAYJwqkh0gyUGMcVJQHo7NMT2XO4x3dT9XlTfUVjbUVupv396+ZuXWtNcVSmVX5+9Oh31sbPTRyNCxY6p9RftlsgMlimKVSmmxmGKxeY93gmUTGGNJ2QTGfGHhXrP5IcYpqcUhLjDpG7cOeexjk25X0OcdNj+0WQenfO6PMjLy8j5XqQ7t279PrS69e+fmTChYWna8uPibo0eVh48oNZomr9fDMFHRBPMJdpFACBbsKRAZxwjxkmCeywZKw8afxkasZrO579VXVuR9trOhvqaurrq6Um003Iky0W3btmRlZWVnf5KZuUkul//W3i76TiCGZQGBEJebu+s/APySULAs4JeWpM5McMphH/W6HR63nfL7fF7nlM9ttZL9AwPnz59pbKy+cKGxsbG2p+e6yWQMhSanp90MMwshK0yg1V51usb5lGB0DiWluwEhGH8akzbGoSSfeoYNISvK9HmNsc1mOnFcdVDxrVz29cnyskh4OhwOjo0O2MfNvGi0JISQogKiyATxSQAQsrHY3LIil/tiE0hsAbCAEG+zmatrqhSKkorKcgCAtJ9IOOSykwghQaYALFCUH/8vEuyixMELiVmWtcdjVamUu/N3K1XKzs6ua9eu37ip6+j41eUk+RQvOJlhmBcCSPbB/w7Rrsu1AGAft5SVHlEo5AcVct2tKyTZ3/vg3o3r7RMuq7AilgUazTmStFBUIBikpJwW60mfh6ICFBXw+6eCQYqiAm6
3a9xus9ttfv+k9GkmFJzwOAYHe8dGLQ77kNM54nAOj45ahof7XC47QpwAgDHu7u7q7r4MwAJNh2bDNE3P0HToeR2aCQajzBOnazwnJzszMyNt7bs63R8AxGk6FImEGSY6H5tnmCgjRpRhEixrNBorKn5ECP4NE4D7BGYCUTkAAAAASUVORK5CYII=" nextheight="491" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-1-prediction-market-target-selection" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>1. Prediction Market Target Selection</strong></h4><p>Not all prediction markets have tradable value. Participation value depends on: <strong>Settlement Clarity (are rules clear, is the data source unique)</strong>, <strong>Liquidity Quality (market depth, spread, and volume)</strong>, <strong>Insider Risk (degree of information asymmetry)</strong>, Time Structure (expiration time and event pacing), and the trader's own Information Advantage and Professional Background. A prediction market only has a basis for participation when most dimensions meet basic requirements. Participants should match based on their own strengths and market characteristics:</p><ul><li><p><strong>Human Core Advantage:</strong> Markets relying on domain expertise, judgment, and integration of ambiguous information, with relatively loose time windows (days/weeks). Typical examples: Political elections, macro trends, and corporate milestones.</p></li><li><p><strong>AI Agent Core Advantage:</strong> Markets relying on data processing, pattern recognition, and rapid execution, with extremely short decision windows (seconds/minutes). 
Typical examples: High-frequency crypto prices, cross-market arbitrage, and automated market making.</p></li><li><p><strong>Unsuitable Areas:</strong> Markets dominated by insider information or purely random/highly manipulated markets, which offer no advantage to any participant.</p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Suitable For</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Logic</strong></p></td><td colspan="1" rowspan="1"><p><strong>Best Applicable Market Scenarios</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Human Strength</strong></p></td><td colspan="1" rowspan="1"><p>Relies on <strong>"Judgment"</strong> (Only when possessing mechanism/data/regional knowledge advantages)</p></td><td colspan="1" rowspan="1"><p>• <strong>Political Prediction:</strong> Election trends, policy directions, personnel appointments</p><p>• <strong>Long-cycle Macro:</strong> Annual GDP, inflation rates, economic judgments</p><p>• <strong>Corporate/Tech:</strong> Product launches, M&amp;A cases, IPO processes</p><p>• <strong>Entertainment/Culture:</strong> Oscars, reality show results, celebrity updates</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Agent Strength</strong></p></td><td colspan="1" rowspan="1"><p>Relies on <strong>"Speed"</strong> &amp; <strong>"Scale"</strong> (High Frequency &amp; Data Driven)</p></td><td colspan="1" rowspan="1"><p>• <strong>High-frequency Crypto Prices:</strong> 1h / 15min / 1min price fluctuations</p><p>• <strong>Arbitrage Strategies:</strong> Cross-platform spreads, portfolio arbitrage</p><p>• <strong>Market Making:</strong> Providing buy/sell liquidity</p><p>• <strong>Statistical Prediction:</strong> Win rate calculation based on massive historical data</p></td></tr><tr><td colspan="1" rowspan="1"><p><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span><strong> Danger Zones</strong></p></td><td 
colspan="1" rowspan="1"><p>Uncontrollable / Information Black Box</p></td><td colspan="1" rowspan="1"><p>• <strong>Insider Info Dominated:</strong> Sudden appointments, undisclosed regulatory decisions</p><p>• <strong>Extremely Poor Liquidity:</strong> Long-tail markets, unpopular bets on new platforms</p><p>• <strong>Purely Random Events:</strong> Viral social media spread, illogical hype</p><p>• <strong>High Manipulation Risk:</strong> Events with disputed settlement standards</p></td></tr></tbody></table><h4 id="h-2-position-management-in-prediction-markets" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>2. Position Management in Prediction Markets</strong></h4><p>The <strong>Kelly Criterion</strong> is the most representative capital management theory for repeated games. Its goal is not to maximize the return of a single trade, but to maximize the long-term compound growth rate of capital. It calculates the theoretically optimal position fraction from estimates of win rate and odds, improving capital growth efficiency under the premise of positive expectancy. It is widely used in quantitative investment, professional gambling, poker, and asset management.</p><ul><li><p><strong>Classic Formula:</strong> f* = (bp − q) / b</p><ul><li><p>Where f* is the optimal betting fraction, b is the net odds, p is the win rate, and q = 1 − p.</p></li></ul></li><li><p><strong>Simplified for PM:</strong> f* = (p − market_price) / (1 − market_price)</p><ul><li><p>Where p is the subjective true probability and market_price is the market-implied probability.</p></li></ul></li></ul><p>The theoretical effectiveness of the Kelly formula is highly dependent on accurate estimates of the true probability and odds. In reality, traders find it difficult to consistently and accurately estimate the true probability. 
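</p><p>Taken at face value, the formulas above are straightforward to implement. A minimal sketch (not a production sizing module; the example probabilities are illustrative):</p>

```python
def kelly_classic(p: float, b: float) -> float:
    """Classic Kelly: f* = (b*p - q) / b, where q = 1 - p.
    p is the estimated win probability, b the net odds on a win."""
    q = 1.0 - p
    return (b * p - q) / b

def kelly_prediction_market(p: float, market_price: float) -> float:
    """Simplified form for a binary market share priced in (0, 1):
    f* = (p - market_price) / (1 - market_price).
    A YES share costing market_price pays 1 on success, i.e. net odds
    b = (1 - market_price) / market_price; substituting that into the
    classic formula reduces it to this expression."""
    return (p - market_price) / (1.0 - market_price)

# Believing the true probability is 0.60 against a market price of 0.50
# gives f* = 0.20, i.e. 20% of bankroll at full Kelly.
fraction = kelly_prediction_market(0.60, 0.50)
```

<p>Even here, full Kelly is aggressive: practitioners commonly scale the output down (half or quarter Kelly) precisely because the input probability is only an estimate.</p><p>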
In practice, professional gamblers and prediction market participants tend to adopt rule-based strategies that are more executable and less dependent on probability estimation:</p><ul><li><p><strong>Unit System:</strong> Splits capital into fixed units (e.g., 1%) and invests different numbers of units based on confidence levels. This automatically constrains single-bet risk through a unit cap and is the most common practical method.</p></li><li><p><strong>Flat Betting:</strong> Uses a fixed percentage of capital for each bet. Emphasizes discipline and stability, suitable for risk-averse or low-conviction environments.</p></li><li><p><strong>Confidence Tiers:</strong> Presets discrete position tiers and sets absolute caps to reduce decision complexity and avoid the false precision problem of the Kelly model.</p></li><li><p><strong>Inverted Risk Approach:</strong> Calculates position size backwards starting from the maximum tolerable loss. It defines boundaries from risk constraints rather than profit expectations.</p></li></ul><p>For Prediction Market Agents, strategy design should prioritize <strong>executability and stability</strong> over theoretical optimality. The key lies in clear rules, simple parameters, and tolerance for judgment errors. Under these constraints, the <strong>Confidence Tiers method combined with fixed position caps</strong> is the most suitable general position management scheme for PM Agents. 
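</p><p>One way to make the Confidence Tiers approach concrete is a simple lookup from signal strength to a capped bankroll fraction. A minimal sketch; the tier cutoffs and caps below are illustrative assumptions, not recommendations:</p>

```python
# Illustrative tiers: (minimum signal strength, bankroll fraction).
# Both the cutoffs and the fractions are assumptions for this sketch.
TIERS = [
    (0.8, 0.05),  # high conviction: up to 5% of bankroll
    (0.5, 0.02),  # medium conviction: up to 2%
    (0.2, 0.01),  # low conviction: up to 1%
]
ABSOLUTE_CAP = 0.05  # hard per-market ceiling, even at maximum conviction

def position_size(bankroll: float, signal_strength: float) -> float:
    """Map a signal in [0, 1] to a capped position size in currency units."""
    for cutoff, fraction in TIERS:
        if signal_strength >= cutoff:
            return bankroll * min(fraction, ABSOLUTE_CAP)
    return 0.0  # below the lowest tier: do not trade
```

<p>Discrete tiers trade theoretical optimality for robustness: a misjudged signal moves the position by at most one tier, instead of scaling linearly with an overconfident probability estimate.</p><p>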
This method does not rely on precise probability estimates but divides opportunities into limited tiers based on signal strength, setting clear caps to control risk even in high-conviction scenarios.</p><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c1b6be14d26a7314a08231a0a4210c72413f1fc531b4fd026b9082678874b71b.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFYElEQVR4nLVUaVBTVxQ+an+0taMdp9ZpnbogKC1T2tJFEQG1ajtVB1AiCMgeMFCsLBoWCYuAGLZa0EECDRhs8EEIxvRhkpeQlYSwhTQsstWZaIJDpQ0BdIDkdV7STqf/2zPf3HPu3HPvufeb813A/2eDqakpgUCg0w2I5NpL12obOZ0MNo/Zym/i8FkctIWHIvfRNh7K5aP3+SgP7UQFApFIgGEPJRKhTCJQSEUquUijxDRKrE8l1kge9EofGAwGmUwmkUhmZmbAZDKJMKy/r1fWM1RS29KOqRC0q61TxsXkPJGcL5Y/lCgFXUqxXCGVK6VKlVypVGu6NZruXq1qsK9b16f6ZUA9rNOM6DRj+h59b5e+Rzw+Pq5QKDAMIwrYbHa74y02m9P/F2az/e1t4AjsDhDzf6fZHO6veNlmd2KVALG2YrfbiJ3EFe2ORGJ0LDlhx3Gobxe/Tcr1iqe7hBeyeF0x1274pxf4XqSdL6vlK9Bvb6VTmTRKTZpQJTRgdwY6Kgc7KnSiZp24/cFVkqg8ilcYrBNzWi4H1p11vXVmF5J9Svojo+oTz5t++0s93NVsFpyj1cOGQHAjw+YzCUX1cCIKvE/D3kAISrrOrtqW6b2PfvJdqncN5+Z0e94kQplgnx9oy5M10BASCMlrERJgdZmFnlAAkA9Qtg/qI0IzAPIA0gHavqMAuYgF7ufWheTAR/G+UdlAIq+JToWoFAggk4svuJM/2/ql6/YYr9SK9FEkj5EV0Jwf3M3KQorJJccg9vPXKk/AXdpZiht8BXAMIMkdCg/7hAMEA5wG4FFTIfFaM3hEQgAVvMg+kTk7opMgPPnV2ItuMWkR1ITtpA/cTnluOf1+Gj1N00StyTjenB8sa8psKyEzSWtT/Nazw9exaKF5nhAKEAZQ/AXk+XnHAUQBhDgLEBS9EQiuBEWJ/1AUBMHJpexy1+wD/lVB26j7q1trprn5U5yUSYTS35ovbchtDQEx5ZV7JBDVZl/1hBIgWKI7KLoEQANIdVIkUA2GUr9PLKqPodVK1LoiBivpejWltLqiCRkYHSy5U1bJrilj39BP6IcUPAWPoUaZw1rhxJCK+0O6kJH/8HbOpF4tYVxlZZ5hXzkrbSoelmCN0RGsxLg78ZHjSjmsrKyoVQp+B0c/NIjjeL9Wq5RLUT6/T6vFcXxkeITdzH409gjH8eGR0bb2+winw2h8YrPZUIEY4fJEXTIcx1++fIkKJcIuhbOzhwyGZk67zmAgvgocxy3z8w2se0Sj21ZxHMek0p7+fqcUFhYWEC6ysrxiWyWEotP1jw7rndpYXFoS8jnLK8urDoX2quR9SgmO46uOg7gNjKUXLwgdMFrFEJQDAVmbQwqYHFFUafWu8CSXyJSE8jq+gh9flRxRGJtwI0WowfrRRlFjbicjS402DomQnzK+brhwGMk6PiBq42UHZPhsyDnwOp8WKGXUFe7eHfnW
lsLtLtq7LAjNvg3gDmt8ALyiabVwMAg27IRNu+CbGDpSBad2bDr0HoS7MtDbvPLzF0mfFsV782pSlcwrcW4QsgdSPEDOyIx7Bw4CgbTdUBtGOgKwH+AoQEsSGS5XtcDWk/DhOXAh5d5EtgaGg4cv7NnrHkJh/Mz0SvLbF+v/cbI/S8AytNOZBWGN+WG93EpVS0VZ0JtFQTuvHFuvYtPzDm30AfAFoB3d2JGZcQRgr6NNsZoq+MNiHZs2jj82jU0bLfPzT2efT5ufTT01mWafW+YtxmdPzHMzRrPRumD9fe63RcusdW6GCKzWOdOvllnjnPnxwoJ1zmw0TY+Zp0efm41Wi8U0MWaenJg1GpcWF/8EeYEH/75STL8AAAAASUVORK5CYII=" nextheight="475" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><br></p><h4 id="h-3-strategy-selection-for-prediction-markets" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>3. Strategy Selection for Prediction Markets</strong></h4><p>Structurally, strategies fall into two main categories: <strong>Deterministic Arbitrage</strong> strategies (characterized by clear rules and codifiability) and <strong>Speculative Directional</strong> strategies (relying on information interpretation and direction judgment). 
Additionally, there are Market Making and Hedging strategies, mainly for professional institutions with high capital and infrastructure requirements.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/85f1358a42c2866fcdfaef59f31f867fd66ef0730cbdbf6fe1d9316eaf5c7105.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEnUlEQVR4nIVTbUxTVxg+cwYX0Swzy3Q4GdtctpnFuIXVbcaPucxkZs4ffvwgkGmdiSAMGwpDlCrQS/i40BaBC61toaVgYXUV+QyGusu4uKuSNeGqjalrtEoLbbmFSw/sMs8Cd7t2sGXPj5Nzbp57nud9zvsCnucRQtGryWSOjV29YmXs8mUxG16LBwCsWvViwutvxMauBgCsXbsOALA85oUv9uwBAGzfsUMqlQLwHACgydIsXiKA53lgsVhwHI9EIqMB/2P/Y39w/P6D+1c6rtTq6qrratsut5maTTrjxZZWq91ut1gs9VptR0dnV3c3RVENer21ydzb0WE2GGrU6l9pmh0fe+LxjHm9Y95Ho48ezs7MAIPBgON4KDzx8XbJpwd3bU35fPPBbZ8d33c455v9WUl7Mw/vlR3an5V0olTGRbhFtSbt+2o9ADvjXt30PPjklZffBeC9ZWBTDHgbgM0rYxLXrHHfuwsQQgzDsCzrun/voc/726gnDKeqLLXrdr6ja9CXaMrwapWqRlOtrWlobHQ4HBBCfgEIods3b9IDZMj3BE5Ps+Nj4XF/yPdkwu8PPPZOBgJcKMTzv88LOJ1Ojpt3hxCCECKEGn40r/gwrt1mv9TUYjaZDAaD0WhUq9UEQQiEhSLmhL0mP3/rhnid0Viv1er0+kKl0myxVKhUqqoqm80GIIQ0TbMsy/M8hJDjuGluepafRQjRzK0K/YWffxpov9refrVdMMFxHFwAx3GhYBAhRGDF+xI/Mjc1VWk0Op0Ox3GCICorKiorKg0GPejv77fZ2sSnj464vb/zSNbxRr2xUqWqJS56PB7BRHSToP8DwLAiDFOKZ47jhIjFoBFCc3yk16bcuPHNLVs+IElSLpcXl5RiGKZQKCCM8DwfZlnxx0U3AAxTlpQUC3aiLxWoEEYgnA2zoV/INrk8KyMjY4S5W60uPpf9NTkwZLVeiub/uwCOlxcWFgg8j8dDUYNOp/PG0JDX6xVJEM4KhDme/wOhG9dUZcrUC9X1OF6mVqswDDtz5kxfX59o6x8ChYWFeXmnBRcMcyc+fkNCfNzu3bsoalC0BiGcnZlZ0HjK83MP7l2/1tfZ3Gxtbm6xWlt7enoIgqBpOroPn01yIBCgqEGxTYeH6ZcS3pfl5m87kJqWlpaTnY3j5QqFQi6XZ2Sk5+WdTk5OJklSzGTpgwuD8kxAmINAIMDz/OTUvMyxnLL1iXtPnZJJpdKjR45IpVK5PEsmkx07JmUYxuVy+XyjQhTRLSsOoBCRePxrkqemJv+28tTjHa0z/jBw/VpvV1dvT09vT/d1R7/dbrfZbKLrpR0RXUF0u89HdGdkxGrvemvbl6cKynPLa74rKC/RGovrzcX1pnyN9lyVznylVyguOor/imjRcACCqNOoVU2tlw8dPZl5tiinqPTE94pvs/Jk+crMs0XpuefTc88XVFTdun2bYRjnApgFuFwuYV0E8SNN0w6HYz6i4eHhk2mpwYA/Ms1NBANTYXYqzE4Eg2F2QtiP+3
0+n48kSYlEkpKSkpqaSlGU1+t1u92eJXC73T6fj6IoiURCkuSfwrTbBp310ooAAAAASUVORK5CYII=" nextheight="491" nextwidth="900" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Deterministic Arbitrage Strategies (Arbitrage)</strong></p><ul><li><p><strong>Resolution Arbitrage:</strong> Occurs when an event outcome is basically determined but the market hasn't fully priced it in yet. Returns come from information synchronization and execution speed. Rules are clear, risk is low, and it is fully codifiable—the core strategy most suitable for Agent execution.</p></li><li><p><strong>Dutch Book Arbitrage (Probability Conservation):</strong> Exploits structural imbalances where the sum of prices for a mutually exclusive and exhaustive set of events deviates from the probability conservation constraint ($\sum P \neq 1$). By building a portfolio, it locks in risk-free returns. It relies only on rules and price relationships, has low risk, and can be highly regularized. It is a typical deterministic arbitrage form suitable for automated Agent execution.</p></li><li><p><strong>Cross-Platform Arbitrage:</strong> Profits by capturing pricing deviations for the same event across different markets. Low risk but high requirements for latency and parallel monitoring. Suitable for Agents with infrastructure advantages, but competition is intensifying, leading to declining marginal returns.</p></li><li><p><strong>Bundle Arbitrage:</strong> Exploits pricing inconsistencies between related contracts. Logic is clear but opportunities are limited. Can be executed by Agents but requires some engineering for rule parsing and portfolio constraints. 
Agent suitability is medium.</p></li></ul><p><strong>Speculative Directional Strategies (Speculative)</strong></p><ul><li><p><strong>Structured Information Driven (Information Trading):</strong> Centers around clear events or structured information, such as official data releases, announcements, or ruling windows. As long as the information source is clear and trigger conditions are definable, Agents can leverage speed and discipline in monitoring and execution. However, when information turns into semantic judgment or scenario interpretation, human intervention is still needed.</p></li><li><p><strong>Signal Following:</strong> Profits by following accounts or capital behaviors with historically superior performance. Rules are relatively simple and automatable. The core risk lies in signal decay and being front-run/counter-traded, requiring filtering mechanisms and strict position management. Suitable as an auxiliary strategy for Agents.</p></li><li><p><strong>Unstructured / Noise-driven:</strong> Highly dependent on sentiment, randomness, or participation behavior. Lacks a stable, reproducible edge, and long-term expected value is unstable. Difficult to model and extremely high risk; not suitable for systematic Agent execution and not recommended as a long-term strategy.</p></li></ul><p><strong>High-Frequency Price &amp; Liquidity Strategies (Market Microstructure):</strong> Relies on extremely short decision windows, continuous quoting, or high-frequency trading. Requirements for latency, models, and capital are extremely high. While theoretically suitable for Agents, they are often limited by liquidity and competition intensity in prediction markets, suitable only for a few participants with significant infrastructure advantages.</p><p><strong>Risk Control &amp; Hedging:</strong> Does not directly seek profit but is used to reduce overall risk exposure. 
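</p><p>As a minimal sketch of what such an underlying risk module might look like (all tiers, caps, and thresholds below are hypothetical illustrations, not taken from any live system), confidence tiers with fixed position caps and a drawdown circuit breaker reduce to a few lines of rules:</p>

```python
from dataclasses import dataclass

# Hypothetical confidence tiers: higher model confidence allows a larger
# (but always capped) fraction of the bankroll per position.
TIER_CAPS = {"high": 0.05, "medium": 0.02, "low": 0.0}  # fraction of bankroll

@dataclass
class RiskLimits:
    max_total_exposure: float = 0.30   # cap on summed open exposure
    stop_loss_drawdown: float = 0.10   # halt new positions past this drawdown

def position_size(bankroll: float, confidence: str, open_exposure: float,
                  drawdown: float, limits: RiskLimits = RiskLimits()) -> float:
    """Return the maximum stake allowed for a new position, or 0.0 if blocked."""
    if drawdown >= limits.stop_loss_drawdown:
        return 0.0  # circuit breaker: stop opening positions
    cap = TIER_CAPS.get(confidence, 0.0) * bankroll
    headroom = limits.max_total_exposure * bankroll - open_exposure
    return max(0.0, min(cap, headroom))

# A high-confidence signal is still throttled by remaining exposure headroom.
print(position_size(10_000, "high", open_exposure=2_800, drawdown=0.03))
```

<p>Because every input and output here is a number checked against a fixed rule, the module can run unattended alongside any strategy layer.</p><p>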
Clear rules and objectives; runs long-term as an underlying risk control module.</p><p><strong>Summary:</strong> Strategies suitable for Agent execution in prediction markets are concentrated in scenarios with <strong>clear rules, codifiability, and weak subjective judgment</strong>. Deterministic arbitrage should be the core revenue source, with structured information and signal following strategies as supplements. High-noise and emotional trading should be systematically excluded. An Agent's long-term advantage lies in disciplined, high-speed execution and risk control capabilities.</p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strategy Type</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strategy Name</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Expected Return</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Risk</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tech Difficulty</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Agent Suitability</strong></p></td></tr><tr><td colspan="1" rowspan="4"><p style="text-align: center"><strong>Arbitrage</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Resolution Arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span 
data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Dutch Book Arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low–Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Cross-Platform Arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Bundle Arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="3"><p 
style="text-align: center"><strong>Speculative</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Information Driven</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Signal Following</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Unstructured Speculation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Negative</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Market Making</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Active/Passive Market Making</p></td><td colspan="1" rowspan="1"><p 
style="text-align: center">Low–Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hedging</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Risk Management Hedging</p></td><td colspan="1" rowspan="1"><p style="text-align: center">N/A</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reduces</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr></tbody></table><p><br></p><h3 id="h-iv-business-models-and-product-forms-of-prediction-market-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. Business Models and Product Forms of Prediction Market Agents</strong></h3><p>Ideal business model designs for Prediction Market Agents have exploration space at different levels:</p><ul><li><p><strong>Infrastructure Layer:</strong> Provides multi-source real-time data aggregation, Smart Money address libraries, unified prediction market execution engines, and backtesting tools. Charges B2B fees to obtain stable revenue unrelated to prediction accuracy.</p></li><li><p><strong>Strategy Layer:</strong> Introduces community and third-party strategies to build a reusable, evaluable strategy ecosystem. 
Captures value through invocation fees, strategy weightings, or execution profit-sharing, reducing dependence on a single source of Alpha.</p></li><li><p><strong>Agent / Vault Layer:</strong> Agents directly participate in live trading via entrusted management, relying on transparent on-chain records and strict risk control systems to earn management and performance fees based on demonstrated capability.</p></li></ul><p>Corresponding product forms can be divided into:</p><ul><li><p><strong>Entertainment / Gamification Mode:</strong> Lowers participation barriers through Tinder-like intuitive interaction. It has the strongest user growth and market education capability, making it an ideal entry point for breaking out of the niche, but it needs to funnel users toward subscription or execution products for monetization.</p></li><li><p><strong>Strategy Subscription / Signal Mode:</strong> Does not involve capital custody, is regulatory-friendly with clear rights and responsibilities, and has a relatively stable SaaS revenue structure. It is currently the most feasible commercialization path. Its limitations are that strategies are easily copied and execution suffers from slippage. The long-term revenue ceiling is modest, but experience and retention can be significantly improved through a "Signal + One-Click Execution" semi-automated form.</p></li><li><p><strong>Vault Custody Mode:</strong> Offers scale effects and execution-efficiency advantages, resembling asset management products. However, it faces multiple structural constraints such as asset management licenses, trust thresholds, and centralized technical risks. The business model is highly dependent on the market environment and sustained profitability. 
Unless possessing a long-term track record and institutional-grade endorsement, it should not be the main path.</p></li></ul><p>Overall, a diversified revenue structure of <strong>"Infrastructure Monetization + Strategy Ecosystem Expansion + Performance Participation"</strong> helps reduce reliance on the single assumption that "AI consistently beats the market." Even if Alpha converges as the market matures, underlying capabilities like execution, risk control, and settlement retain long-term value, thus building a more sustainable business closed loop.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Level</strong></p></td><td colspan="1" rowspan="1"><p><strong>Product Form</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Capability</strong></p></td><td colspan="1" rowspan="1"><p><strong>Target User</strong></p></td><td colspan="1" rowspan="1"><p><strong>Monetization</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Entry Layer</strong></p></td><td colspan="1" rowspan="1"><p>Entertainment Market</p></td><td colspan="1" rowspan="1"><p><strong>Info Aggregation:</strong> Cross-platform hot topic scraping</p><p><strong>Visualization:</strong> Basic win rate/odds display</p><p><strong>Light Interaction:</strong> Paper trading/Voting experience</p></td><td colspan="1" rowspan="1"><p>Entertainment Users</p></td><td colspan="1" rowspan="1"><p>Free, trading traffic for data</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Tool Layer</strong></p></td><td colspan="1" rowspan="1"><p>Decision Copilot</p></td><td colspan="1" rowspan="1"><p><strong>Deep Analysis:</strong> EV calculation, Evidence chain</p><p><strong>Risk Control Assist:</strong> Position advice, Stop-loss alerts</p><p><strong>One-Click Copy:</strong> Execution after human confirmation</p></td><td colspan="1" rowspan="1"><p>Pro Retail, Heavy Players</p></td><td colspan="1" 
rowspan="1"><p>Subscription Fee</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Asset Mgmt Layer</strong></p></td><td colspan="1" rowspan="1"><p>Managed Execution Vaults</p></td><td colspan="1" rowspan="1"><p><strong>Fully Auto Strategy:</strong> 7x24h monitoring &amp; execution</p><p><strong>Strategy Packs:</strong> Macro/Sports/Reg/Crypto</p><p><strong>Transparency:</strong> On-chain auditable performance</p></td><td colspan="1" rowspan="1"><p>High Net Worth</p></td><td colspan="1" rowspan="1"><p>Mgmt Fee + Carry (2/20)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Infrastructure Layer</strong></p></td><td colspan="1" rowspan="1"><p>B2B Data/Execution API</p></td><td colspan="1" rowspan="1"><p><strong>Advanced Data:</strong> Implied prob curves, Risk index</p><p><strong>Arbitrage Radar:</strong> Cross-market spread monitoring</p><p><strong>Execution Engine:</strong> Low-latency order interface</p></td><td colspan="1" rowspan="1"><p>Quant Teams, Exchanges, Info Platforms</p></td><td colspan="1" rowspan="1"><p>Enterprise SaaS</p></td></tr></tbody></table><p><br></p><h3 id="h-v-project-cases-of-prediction-market-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Project Cases of Prediction Market Agents</strong></h3><p>Currently, Prediction Market Agents are still in the early exploration stage. 
Although the market has seen diverse attempts from underlying frameworks to upper-layer tools, a standardized product that is mature in strategy generation, execution efficiency, risk control systems, and business closed loops has not yet formed.</p><p>We classify the current ecosystem landscape into three levels: <strong>Infrastructure</strong>, <strong>Autonomous Agents</strong>, and <strong>Prediction Market Tools</strong>.</p><h4 id="h-infrastructure-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Infrastructure Layer</strong></h4><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/Polymarket/agents"><strong><u>Polymarket Agents Framework</u></strong></a></p><p>This official developer framework standardizes "connection and interaction," handling data retrieval, order construction, and basic LLM interfaces. However, it functions primarily as an access standard rather than a turnkey solution; it solves "how to code an order" but leaves core trading capabilities—such as strategy generation, probability calibration, and risk management—entirely to the developer.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/gnosis/prediction-market-agent-tooling"><strong><u>Gnosis Prediction Market Tools</u></strong></a></p><p>Offering complete read/write support for the Gnosis ecosystem (Omen/Manifold), this toolset provides only read access for Polymarket, creating clear ecosystem barriers. It serves as a strong foundation for Gnosis-native agents but has limited utility for cross-platform development.</p><p>Polymarket and Gnosis are currently the only prediction market ecosystems that have clearly productized "Agent Development" into official frameworks. 
Other prediction markets, such as Kalshi, largely stop at the API and Python SDK level, leaving developers to build key system capabilities such as strategy, risk control, operations, and monitoring themselves.</p><h4 id="h-autonomous-agents" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Autonomous Agents</strong></h4><p>Current "Prediction Market AI Agents" are mostly still at an early stage. Although labeled "Agent," their actual capabilities remain far from delegatable, automated closed-loop trading. They generally lack independent, systematic risk control layers and have not incorporated position management, stop-loss, hedging, and expected value constraints into the decision process. Overall productization is low, and mature systems for long-term operation have not yet formed.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://olas.network/agent-economies/predict"><strong><u>Olas Predict</u></strong></a></p><p>Olas Predict is currently the most productized prediction market agent ecosystem. Its core product <strong>“Omenstrat”</strong> is built on Omen within the Gnosis system, utilizing FPMM and decentralized arbitration mechanisms. It supports small-scale high-frequency interactions but is constrained by Omen's limited single-market liquidity. Its "AI prediction" primarily relies on generic LLMs, lacking real-time data and systematic risk control, with historical win rates varying significantly across categories.</p><p>In February 2026, Olas launched “Polystrat”, extending Agent capabilities to Polymarket: users can define strategies in natural language, and the Agent automatically identifies probability deviations in markets settling within 4 days and executes trades. 
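</p><p>The core of such a deviation scan can be illustrated with a short sketch (the market records, probabilities, and thresholds below are invented for illustration and are not Olas code):</p>

```python
from datetime import datetime, timedelta, timezone

# Toy market records: market price, model-estimated probability, settlement time.
# All values are invented for illustration.
now = datetime(2026, 3, 1, tzinfo=timezone.utc)
markets = [
    {"id": "cpi-above-3pct", "price": 0.62, "model_p": 0.71,
     "settles": now + timedelta(days=2)},
    {"id": "team-a-wins",    "price": 0.55, "model_p": 0.57,
     "settles": now + timedelta(days=1)},
    {"id": "rate-cut-march", "price": 0.30, "model_p": 0.45,
     "settles": now + timedelta(days=10)},
]

def scan(markets, now, horizon_days=4, min_edge=0.05):
    """Flag markets settling within the horizon whose model probability
    deviates from the market price by at least min_edge."""
    horizon = now + timedelta(days=horizon_days)
    signals = []
    for m in markets:
        edge = m["model_p"] - m["price"]
        if m["settles"] <= horizon and abs(edge) >= min_edge:
            side = "BUY YES" if edge > 0 else "BUY NO"
            signals.append((m["id"], side, round(edge, 2)))
    return signals

print(scan(markets, now))  # only the first market clears both filters
```

<p>The second market is excluded because its edge is too small, and the third because it settles outside the horizon; widening <em>horizon_days</em> would surface it again.</p><p>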
The system controls risk through Pearl local execution, self-custodied Safe accounts, and hardcoded limits, making it the first consumer-grade autonomous trading Agent for Polymarket.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.unifai.network/strategies/topic/polymarket-banner"><strong><u>UnifAI Network Polymarket Strategy</u></strong></a></p><p>Provides an automated trading Agent for Polymarket, built around a core <strong>tail risk strategy</strong>: scanning contracts near settlement with &gt;95% implied probability and buying in, targeting 3–5% spread capture. On-chain data shows a win rate close to 95%, but returns diverge significantly across categories. The strategy is highly dependent on execution frequency and category selection.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://noya.ai"><strong><u>NOYA.ai</u></strong></a></p><p>Attempts a comprehensive "Research-Judgment-Execution" closed loop. Its architecture features an Intelligence Layer for signal aggregation and an Abstraction Layer using Intents to manage cross-chain complexity. Currently, its Omnichain Vaults have been delivered; the Prediction Market Agent remains under development, and a complete mainnet closed loop has not yet formed. Overall, it remains at the vision-validation stage.</p><h4 id="h-prediction-market-tools" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Prediction Market Tools</strong></h4><p>Current prediction market analysis tools are insufficient to constitute complete "Prediction Market Agents." Their value is mainly concentrated in the Information and Analysis layers of the agent architecture; trade execution, position management, and risk control must still be borne by the trader. 
Product forms align more with "Strategy Subscription / Signal Assistance / Research Enhancement" and can be viewed as early prototypes of Prediction Market Agents.</p><p>Based on a systematic review of<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/aarora4/Awesome-Prediction-Market-Tools"> <u>Awesome-Prediction-Market-Tools</u></a>, we selected representative projects with preliminary product forms:</p><p><strong>Market Analysis Tools</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyseer.xyz/"><strong><u>Polyseer</u></strong></a> <strong>:</strong> Research-oriented tool using a multi-Agent architecture (Planner/Researcher/Critic/Analyst/Reporter) for evidence collection and Bayesian aggregation to output structured reports. Transparent methodology, open-source.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.oddpool.com/"><strong><u>Oddpool</u></strong></a><strong>:</strong> "Bloomberg Terminal for Prediction Markets," aggregating Polymarket, Kalshi, CME, etc., with arbitrage scanning.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://polymarketanalytics.com/"><strong><u>Polymarket Analytics</u></strong></a><strong>:</strong> Global data analysis platform for Polymarket, showing trader, market, position, and volume data.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.hashdive.com"><strong><u>Hashdive</u></strong></a><strong>:</strong> Trader-oriented data tool using Smart Score to identify "Smart Money."</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyfactual.com/"><strong><u>Polyfactual</u></strong></a> <strong>:</strong> Focuses on AI market intelligence and 
sentiment/risk analysis via Chrome extension.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://predly.ai/"><strong><u>Predly</u></strong></a>: AI mispricing detection platform comparing market prices with AI-calculated probabilities on Polymarket and Kalshi. Claims 89% alert accuracy.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://app.polysights.xyz/"><strong><u>Polysights</u></strong></a>: Covers 30+ markets and on-chain metrics with Insider Finder tracking new wallets and large unidirectional bets.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyradar.io/"><strong><u>PolyRadar</u></strong></a>: Multi-model parallel analysis with real-time interpretation, timeline evolution, and confidence scoring.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.alphascope.app/"><strong><u>Alphascope</u></strong></a>: AI-driven intelligence engine for real-time signals and research summaries (early stage).</p></li></ul><p><strong>Alerts / Whale Tracking</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.stand.trade/"><strong><u>Stand</u></strong></a><strong>:</strong> Focuses on whale copy-trading and high-conviction alerts.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://whale-tracker-livid.vercel.app/"><strong><u>Whale Tracker Livid</u></strong></a> <strong>:</strong> Productizes whale position changes.</p></li></ul><p><strong>Arbitrage Discovery Tools</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://getarbitragebets.com"><strong><u>ArbBets</u></strong></a><strong>:</strong> AI-driven tool identifying cross-platform arbitrage 
(Polymarket, Kalshi, Sportsbooks).</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://polyscalping.org/"><strong><u>PolyScalping</u></strong></a><strong>:</strong> Real-time arbitrage and scalping analysis for Polymarket (1-minute scans).</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.eventarb.com/"><strong><u>Eventarb</u></strong></a> <strong>:</strong> Lightweight cross-platform arbitrage calculator (Polymarket, Kalshi, Robinhood).</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://predictionhunt.com/"><strong><u>Prediction Hunt</u></strong></a><strong>:</strong> Cross-exchange aggregator comparing prices for arbitrage (Polymarket, Kalshi, PredictIt).</p></li></ul><p><strong>Trading Terminals / Aggregated Execution</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.verso.trading/"><strong><u>Verso</u></strong></a>: Institutional-grade terminal (YC Fall 2024) with Bloomberg-style interface, covering 15,000+ contracts across Polymarket and Kalshi with AI news intelligence.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.matchr.xyz/"><strong><u>Matchr</u></strong></a>: Cross-platform aggregator covering 1,500+ markets with smart routing for optimal price matching and planned automated yield strategies.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://thetradefox.com/"><strong><u>TradeFox</u></strong></a>: Professional aggregation and Prime Brokerage platform backed by Alliance DAO and CMT Digital. Offers advanced order execution (limit, stop-loss, TWAP), self-custody, and multi-platform smart routing. 
Expanding to Kalshi, Limitless, and SxBet.</p></li></ul><h3 id="h-vi-summary-and-outlook" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Summary and Outlook</strong></h3><p>Currently, Prediction Market Agents are in the early exploration stage of development.</p><ul><li><p><strong>Market Essence:</strong> Backed by the Polymarket and Kalshi duopoly, prediction markets differ from gambling by acting as a "<strong>Global Truth Layer</strong>" that aggregates information via real-money trading.</p></li><li><p><strong>Core Positioning:</strong> Agents function as <strong>Executable Probabilistic Portfolio Management</strong> tools. They convert data into verifiable pricing deviations, prioritizing discipline and execution speed.</p></li><li><p><strong>Strategy &amp; Risk:</strong> <strong>Deterministic Arbitrage</strong> is the optimal strategy for automation, with speculation serving only as a supplement. Risk management should prioritize executability using <strong>Confidence Tiers with Fixed Caps</strong>.</p></li><li><p><strong>Business Model:</strong> The most sustainable path combines <strong>Infrastructure</strong> (B2B data/execution fees), <strong>Strategy Ecosystems</strong> (third-party licensing), and <strong>Vaults</strong> (performance-based asset management).</p></li></ul><p>Despite the emergence of diverse tools and frameworks in the ecosystem, a mature, standardized product capable of closing the loop on strategy generation, execution efficiency, and risk control has yet to appear. We look forward to the continued iteration and evolution of Prediction Market Agents.</p><p><strong><em>Disclaimer:</em></strong><em> This article was created with the assistance of AI tools including ChatGPT-5.2, Gemini 3, and Claude Opus 4.5. While the author has strived for accuracy, errors may exist. Please note that crypto asset fundamentals often diverge from secondary market prices. 
This content is for information and research purposes only and does not constitute investment advice or a recommendation to buy or sell any tokens.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>predictionmarket</category>
            <category>agent</category>
            <category>polymarket</category>
            <category>kalshi</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/beb1fe306acce9ba418c77cd22a30195e0b9d893a74361e73dc1d29ac86a8fa9.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Turning Probability into Assets: A Look Ahead at Prediction Market Agents]]></title>
            <link>https://paragraph.com/@0xjacobzhao/让概率成为资产：预测市场智能体前瞻</link>
            <guid>YzK7NvjmP4Y3yfRW8JNb</guid>
            <pubDate>Wed, 04 Mar 2026 08:00:59 GMT</pubDate>
            <description><![CDATA[Through real-money trading, prediction markets compress dispersed information into tradable probability price signals, evolving from betting-like tools into a "global truth layer" that finance and enterprises can call upon. The key to Prediction Market Agents is not "more accurate AI forecasts" but an architecture of data → mispricing identification → position risk control → automated execution that turns probability deviations into executable strategies; the core is deterministic arbitrage, with rule-based capital management combining tiered confidence and position caps. Commercially, the more sustainable path is infrastructure B2B + a strategy ecosystem + Agent/Vault performance sharing. The ecosystem is still early, with players falling into three categories: infrastructure, trading Agents, and tool terminals. We may be on the eve of a breakout for Prediction Market Agents.]]></description>
            <content:encoded><![CDATA[<p>In our past Crypto AI research series, we have consistently argued that the crypto scenarios with the most practical application value today are concentrated in <strong>stablecoin payments</strong> and <strong>DeFi</strong>, while Agents are the AI industry's key user-facing interface. Accordingly, the two most valuable paths in the convergence of Crypto and AI are: in the short term, <strong>AgentFi</strong> built on existing mature <strong>DeFi protocols</strong> (basic strategies such as lending and liquidity mining, plus advanced strategies such as Swap, Pendle PT, and funding-rate arbitrage); and in the medium to long term, <strong>Agent Payment</strong>, centered on stablecoin settlement and built on protocols such as ACP/AP2/x402/ERC-8004.</p><p><strong>Prediction markets</strong> became an industry trend that could no longer be ignored in 2025, with total annual trading volume surging from roughly $9 billion in 2024 to over $40 billion in 2025, year-on-year growth of more than 400%. This growth was driven by several factors: demand for hedging uncertainty around macro-political events, the maturation of infrastructure and trading models, and a thaw in the regulatory environment (Kalshi's court victory and Polymarket's return to the US). <strong>Prediction Market Agents</strong> showed early prototypes at the start of 2026 and are poised to become an emerging product form in the agent space over the coming year.</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">I. <strong>Prediction Markets: From Betting Tools to a “Global Truth Layer”</strong></h2><p>A prediction market is a financial mechanism for trading on the outcomes of <strong>future events</strong>; contract prices essentially reflect the market's collective judgment of the probability that an event occurs. Its effectiveness stems from the combination of <strong>collective intelligence</strong> and <strong>economic incentives</strong>: in an anonymous environment where real money is at stake, dispersed information is rapidly aggregated into price signals weighted by capital conviction, significantly reducing noise and false judgments.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fd7dd5aa98e85d09bffb81d4cb80d1190604cb29164c9b892ae0b08d91ae0e02.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAC70lEQVR4nKVTS08TURQ+iYn/wrhy4cbEGP+BkYVGjY1sfC18AEuIiS40mvhMI5pofEfBFzFRotKqQG0hQ+mUgRkZ2zKFBiyFaX3gjJbp7e3M3DlmOoCNjw18mUzm3jv3nO+c7zsgSdK7d93x+FBPT288PsRxXCgUikYHQ6FQLDbEcQOy/BER2bJACIFcLpdKpRKJhKKkFUWRJEnXdUopIYQuApcLSilYlsUY897ex7LDVUqEzpdqdwgpA2MMVwynGuRrcmQ23r+0XKjAY70S4ojIqtc/tF0fD3ZUE9he9LyaA0QsFouGYSxbSduyHJe+eApgrPOul0DTtFQq9eoV52qw8v7YiO071h0E4MNRRLQtK5lMipKUet8Hsix79P9bPmPZbFZVVa9WSmmtKRxmO4ijjy41A2wEaH87hohTU1lBGJZlORARQFEUXdf/KTVjjtdKQRASiQSlVNO0got8oVDI5XJL4l1vPAou9j4IphBx3iCiKImiFOP6FzQoFot/a1CmVld00ihbc981Tf/BGDOMUpnSkjsipmEYFdNEdDIzOqxuBNgNqxsDH5MeuYmJTCzGv+zm/rSpg2hatmm5NugVptc2dCyV4lSfmh/Rqe6B7y7AEVi/Dw76wmrCOyOEaJqWL2pugr/7Y1Ujgu8OrGk5+YDf3PxiJP3FQdR0UiKm9rP8ba6k6QQRm671waY9cHw3XNwOZ3f0ZN0Elu3yQ8SyN2iMsYppVmxmEHPmc/HEvcHW59KxmxysaoANhwAOA+yHNS2w5Spsuww7b8BWP9SdgbrzcPQ+HGiC1l1woR5O++BcfXja1eB3AkpB13VVVaODPD88OjaZH0mrt55xV9rePA6IwviMOJvu7It0hgeC8dGnIaFrePj268CT8Pvop3R/RnkYDj8Od0WSklyYjk8pL2OR0WSi1vfuJPv9/kAgGIvxPM9LoqjrGqLjMHNp3G33glMy5jNpZcGXi0fuBFQsZrlLp2reMiG1DaeU/gLtN/RGEy18WQAAAABJRU5ErkJggg==" nextheight="515" nextwidth="1080" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><strong>预测市场名义交易量趋势图</strong> <em>数据来源：</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dune.com/queries/5753743/9335712"><em><u>Dune Analytics (Query ID: 5753743)</u></em></a></p><p>截至2025年底，预测市场已基本形成<strong> Polymarket</strong>与<strong>Kalshi </strong>&nbsp;双寡头主导的格局。据《福布斯》统计，2025年总交易量约达<strong>440亿美元</strong>，其中Polymarket贡献约<strong>215亿美元</strong>，Kalshi约为<strong>171亿美元</strong>。2026年2月周数据显示Kalshi交易量（$25.9B）已超过Polymarket（$18.3B），接近50%市场份额，Kalshi凭借此前选举合约案的法律胜诉、在美国体育预测市场的合规先发优势，以及相对明确的监管预期，实现了快速扩张。目前，二者的发展路径已呈现清晰分化：</p><ul><li><p><strong>Polymarket 
</strong> adopts a hybrid CLOB architecture ("off-chain matching, on-chain settlement") with decentralized resolution, building a globalized, non-custodial, highly liquid market; after its compliant return to the US it now runs a dual "onshore + offshore" structure;</p></li><li><p><strong>Kalshi</strong> is embedded in the traditional financial system, reaching mainstream retail brokerages via API and attracting deep participation from Wall Street market makers in macro and data-driven contracts; its products are constrained by traditional regulatory processes, so long-tail demand and breaking events are served with a lag.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ee65ce13d1b84dc0fadf85195e6e65444978a06279f3369a73f492546b5beda8.png" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Beyond Polymarket and Kalshi, the other competitive players in prediction markets are developing along two main paths:</p><ul><li><p>The <strong>compliant distribution path</strong> embeds event contracts into the existing account and clearing systems of brokerages or large platforms, building an edge on channel reach, licenses, and institutional trust (e.g., ForecastTrader from Interactive Brokers × ForecastEx, and FanDuel Predicts from FanDuel × CME Group); the compliance and resource advantages are clear, but product and user scale remain early.</p></li><li><p>The <strong>crypto-native on-chain path</strong>, represented by Opinion.trade, Limitless, and Myriad, scales quickly through points mining, short-cycle contracts, and media distribution, emphasizing performance and capital efficiency, though its long-term sustainability and risk-control robustness remain to be proven.</p></li></ul><p>Together, the compliant entry points of traditional finance and the performance advantages of crypto-native venues form a diverse competitive landscape for the prediction market ecosystem.</p><p>Prediction markets superficially resemble gambling and are zero-sum at their core, but the essential difference lies in positive externalities: by aggregating dispersed information through real-money trading, they publicly price real-world events and form a valuable signal layer. The trend is a shift from betting toward a "global truth layer": as institutions such as CME and Bloomberg plug in, event probabilities have become decision metadata that financial and enterprise systems can consume directly, offering a more timely, quantifiable, market-driven version of truth.</p><p>Globally, regulatory treatment of prediction markets is highly fragmented. The US is the only major economy that explicitly brings prediction markets under its financial-derivatives framework; Europe, the UK, Australia, and Singapore generally treat them as gambling and are tightening oversight, while China, India, and others ban them outright. Future global expansion therefore still hinges on each jurisdiction's regulatory framework.</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">II. <strong>Architecture of Prediction Market Agents</strong></h2><p><strong>Prediction Market Agents</strong> are now entering an early practical stage. Their value lies not in "AI predicting more accurately" but in amplifying the <strong>information-processing and execution efficiency</strong> of prediction markets. A prediction market is essentially an information-aggregation mechanism whose prices reflect the collective judgment of event probabilities; real-world market inefficiency stems from information asymmetry and from liquidity and attention constraints. The sensible positioning of a prediction market agent is <strong>Executable Probabilistic Portfolio 
Management</strong>: converting news, rule text, and on-chain data into verifiable pricing deviations; executing strategies faster, with more discipline and at lower cost; and capturing structural opportunities through cross-platform arbitrage and portfolio risk control.</p><p>An ideal <strong>prediction market agent</strong> can be abstracted into a <strong>four-layer architecture</strong>:</p><ul><li><p>The <strong>information layer</strong> aggregates news, social, on-chain, and official data;</p></li><li><p>the <strong>analysis layer</strong> uses LLMs and ML to identify mispricing and compute the edge;</p></li><li><p>the <strong>strategy layer</strong> turns edge into positions via the Kelly criterion, staged entries, and risk controls;</p></li><li><p>the <strong>execution layer</strong> handles multi-market order placement, slippage and gas optimization, and arbitrage execution, closing an efficient automated loop.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/990cb776b14549548af7ea6b34fd094311f88e6e077303e6cd534bf060013414.png" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">III. <strong>A Strategy Framework for Prediction Market Agents</strong></h2><p>Unlike traditional trading venues, prediction markets differ markedly in <strong>settlement mechanics</strong>, <strong>liquidity</strong>, and <strong>information distribution</strong>, and not every market or strategy suits automated execution. What matters for a prediction market agent is whether it is deployed in scenarios whose rules are clear, codable, and aligned with its structural strengths. The analysis below proceeds on three levels: market selection, position management, and strategy structure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ff4a0f24a213aae99e8f90f52235004c6acd90117cfb008f33e00550a4beb74e.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEjklEQVR4nI1UbUgbZxy/T2PIYF+6gd+GFqQ4ulI69lLYVltWrVRqV2QI3QedolCcb9HUSn2LrVvs1NYaWOqsq2wlWZWtvkTErIjWaHyLNTHm9ZqLOZOYSy577u6Jp3nG5Yyzzo39+HP8eB7u+f3fMQg5i8WyGdiEkGU5hufhNh+JQMDzkOchxzEQcnFjEYoiAeIX8TGIHKFojEfjJtxCyGI47rx27dr8gt4wOay+V18tKb3TUv904HFrS2PvA8XgQB+96REfgJDDXsX1muqLFy+I/MR7qQ8fdmMYlpDw2vHjqRiGKRSdCCGM5Ri73e4iiN+6Gm5f/aSouDgnJ6dV3jo0NDSs0UxOPCNsxriPqK+vr6OjXa1WyWRNjY0Ni4tzU1OTSuUPcrn8iVpttVk7O+/19PSMjY0pFF0mk/AjhtDO9vaO07720mpACBUWFiYlJX2emdksl6vVKv3Mcx9hP5AWdAh2b/dDTCDG8zyOu3DcYdBpX+gnuru+b5PLUt9IOIJhTfV1yva2uqqKNpnMYTZt8REAQjQdBCAMQAiAEMMAhgEAhCkqQNNBCNnYVVg8BCC8K3DjRs3a2ipCKEgFIMsihM68/uYJDKu+efO7lpYn/f2PHz0yTD+PVwL+M4h4eQ+LAMedEkml1WbZizS4ubkyO8twbGJiYvLRoyzHglBIKa//XdUzPzezQbpAmCJwm9O+qtdPD40MmwQYNZqRlZUVv9/vdhOiB7sCJSUlWVlZOO4UvYsixDJgrF+lHX7aeF3aKf92anx0anz80ofJFz5ILq2o+lWtsloEbyKQbWqozS8oKC8vk0gkaWlpnZ1C29jtNoYBfwvk5eWdO3uOIFyigJgB3wb50mGlvCTlJd1Op91sXDHoyfX1U6feP5N2Njv7cllZmV4/+4d2lCQ3MjMzU1JS0tPPS6VSkvRYbVaaDvE8H5sqHlOrVbm5uQ6n9dA2QDGwDDAs6pYWZ7/M+aKoML+yvOSzT09Lq0onJ8ZDNJ2RkZGSknLsWKpM1tyt7JnVz76SIgg5iUSyahaKfAAQwj2+7nZZVl9QlDfg9wT8GyHK63E5lpeXpnXTCsXdltv1He0toyMDRuP8jO6Z3+fy+1wABKMIYVt8RKMZsdms4qzvhxijiP1zsBPdiXPBgzWz/tat2p96H+TlXT158l2tdoQBlMW8ZDTqdgcNQi5Wg8MFtiKR2FYR1tEWH2EYEONQPIki5HIai4uLPvr4dEND414rskzIuDy9K8AwzLrH/R81+PeWFwjhMl25cvnIW2+fT09XKn/s6/s5ihDHMTabsBqEGuh0OrGL/rcA2i/gsBna7jTX1lRWSb6Z10+ZzUteLzGq6u2R1/ERHiNJj1zeqtVqSdJDkuukAM8B8/m8drvNZDIShMvtJtxuAsed4jlFbTrs5jn9pMk4bzEbrNbl1dX5hYWpuvLiiryv6GBQiAAhNDg4KJM1GJYNNB3y+/0UFdgzn8/L87xC0ZWU9I5UWlVRWfZ1QX529iW1WgUh5/N5aZpmGPZPwNJ0mKbDDMN6SPLu/fu/qFQIRf8CJAAg7qU9ph4AAAAASUVORK5CYII=" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 
first:!mb-0"><strong>Market Selection in Prediction Markets</strong></h3><p>Not every prediction market is worth trading. Its value as a venue depends on: settlement clarity (are the rules unambiguous and the data source unique), liquidity quality (depth, spread, and volume), insider risk (degree of information asymmetry), time structure (expiry and event cadence), and the trader's own informational edge and domain expertise. Only when most of these dimensions meet a baseline does a market merit participation, and participants should match markets to their own strengths:</p><ul><li><p><strong>Core human strengths:</strong> markets relying on domain knowledge, judgment, and the integration of fuzzy information, with relatively loose time windows (days or weeks). Typical examples: political elections, macro trends, and corporate milestones.</p></li><li><p><strong>Core AI agent strengths</strong>: markets relying on data processing, pattern recognition, and fast execution, with very short decision windows (seconds or minutes). Typical examples: high-frequency crypto prices, cross-market arbitrage, and automated market making.</p></li><li><p><strong>Poor fits</strong>: markets dominated by insider information, or purely random and highly manipulable ones, where no participant has an edge.</p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Best suited to</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core logic</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Best-fit market scenarios</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Human strengths</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relies on "judgment"</strong></p><p><br></p><p style="text-align: center"><em>(only with an edge in mechanisms, data, or regional knowledge)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Political forecasting</strong>: election trends, policy direction, appointments</p><p>• <strong>Long-cycle macro</strong>: annual GDP, inflation, economic determinations</p><p>• <strong>Corporate/tech</strong>: product launches, M&amp;A, IPO timelines</p><p>• <strong>Entertainment/culture</strong>: the Oscars, reality-show outcomes, celebrity news</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Agent strengths</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relies on "speed" and "scale"</strong></p><p><br></p><p style="text-align: center"><em>(high-frequency &amp; data-driven)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>High-frequency crypto prices</strong>: 1h / 15min / 1min price moves</p><p>• <strong>Arbitrage strategies</strong>: cross-platform spreads, bundle arbitrage</p><p>• <strong>Market making</strong>: providing two-sided liquidity</p><p>• <strong>Statistical forecasting</strong>: win-rate computation over large historical datasets</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span><strong> </strong><em>Avoid</em></p></td><td colspan="1" rowspan="1"><p 
style="text-align: center"><strong>Uncontrollable / information black box</strong></p></td><td colspan="1" rowspan="1"><p>• <strong>Insider-dominated</strong>: surprise appointments, unannounced regulatory decisions</p><p>• <strong>Very poor liquidity</strong>: long-tail markets, cold order books on new platforms</p><p>• <strong>Pure randomness</strong>: social-media virality, hype with no logic</p><p>• <strong>High manipulation risk</strong>: events with disputed settlement criteria</p></td></tr></tbody></table><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Position Management in Prediction Markets</strong></h3><p>The <strong>Kelly criterion</strong> is the canonical bankroll-management theory for repeated betting. Its goal is not to maximize single-bet profit but to maximize the long-run compound growth rate of capital. Based on estimates of win probability and odds, it computes a theoretically optimal position size, improving capital-growth efficiency whenever expectation is positive; it is widely used in quantitative investing, professional betting, poker, and asset management.</p><ul><li><p>Classic form: f* = (bp - q) / b</p></li></ul><p>where f* is the optimal fraction to stake, b the net odds, p the win probability, and q = 1 - p.</p><ul><li><p>For prediction markets it simplifies to: f* = (p - market_price) / (1 - market_price)</p></li></ul><p>where p is the subjective true probability and market_price the market-implied probability. For example, with p = 0.60 and market_price = 0.50, f* = (0.60 - 0.50) / (1 - 0.50) = 0.20, i.e., 20% of the bankroll.</p><p>The Kelly criterion's theoretical validity depends heavily on accurate estimates of true probabilities and odds, which traders can rarely sustain in practice. Professional bettors and prediction market participants therefore prefer rule-based approaches that are easier to execute and less sensitive to probability estimates:</p><ul><li><p><strong>Unit System</strong>: split the bankroll into fixed units (e.g., 1%) and stake more or fewer units by confidence level; the per-unit cap automatically bounds single-bet risk. The most common method in practice.</p></li><li><p><strong>Flat Betting</strong>: stake a fixed fraction of capital each time, emphasizing discipline and stability; suits risk-averse traders or low-conviction environments.</p></li><li><p><strong>Confidence Tiers</strong>: predefine discrete position tiers with absolute caps, reducing decision complexity and avoiding the false precision of the Kelly model.</p></li><li><p><strong>Inverted Risk Approach</strong>: work backward from the maximum tolerable loss to the position size, starting from risk constraints rather than return expectations to form a stable risk boundary.</p></li></ul><p>For a <strong>prediction market agent</strong>, strategy design should prioritize executability and stability over theoretical optimality: rules should be clear, parameters few, and the system tolerant of judgment error. Under these constraints, <strong>confidence tiers combined with fixed position caps</strong> are the most suitable general position-management scheme for a PM agent. Rather than relying on precise probability estimates, the method buckets opportunities into a few tiers by signal strength, each mapped to a fixed position size, with an explicit cap even in high-conviction scenarios.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5d316621e6e54dfeef993c1943215271a1afa3fd6683be70b035500da09e728c.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF0UlEQVR4nI2Ue0xTZxjGX02Mm5psc9HIEqYTr1O2qdT7XRE1CkVShhdEQXGo836DCUI7vIEgUOQmKwNliGEIIo4xvA1WAdGKSEt72gK9gNCe09PTcw7taXuWQ12y7K+9efIl3/vH++T75f0eYP9HuVm3y+VyMi6Gk9Nud9rtjiGapukhirZTtN1G0jaSIgjKSpAW3EaSFG23oxhuNA4Cy7KtbbKU69nlFZVv1bo3qm5E16fqMar0fYjOI6Oy26DqGW72GJFhabR6Ta9O1WPQ9Bo1vTpE24toezVavVKtQdSagqLSS6mZz5paUIzgDDo65Zk5BTUP6zoQnUyhUXQb5N16uUbfpTUotPpOtaFTbejS6hVqnUKtU2r1Ko0O0fQqtTqlVocMXz0dRKPtUqrVmu7C4rKLqZmNz9veG3jK5XK7XO7/sHENk3kPx8EwDmaIdtA0TXngULTNRlkJmrCSHuE4ieFWK0HaSLvFQvb1m2F4qJtlmeHT7facw24Mw9jtziGOuJMeZs1NJEkbSdlIjjtBUhaCthDkP7KhGGHBrSiGm1ECRYmBATP3gnMZ5bBoH6z9fhL/XFJJBfDDIXgPBO5KLK3gCy/yTpyZfejExbLyB7X1WXmFWTmF9x7U3Sj4OSg0KiwiZs/h2LqSG3lh83K3L8wPm19bkrtu15El/N1zA0KFaXk2K8kZrIlJAfADWAleWzcnpMKMJTBjOXzhF5icNjryAPC/hc388NR0ieT2eWFKvCglX3LrVKxoqu9y34XreQGhRcLjwtkg+gZEc6BQdHKk30bw4cFnvuFH40kbxRlExOfB/DBYGfXJpmMBZ5IBPgAYBTBum+jqqFVrYeIk+Hzq7uQrGRnZW4IFq9b4J4kuHzkVByM+GjH64wk+vsXJp/3HwooPYdunkHPh+MhpC2D0RIBx4YfPUJSDMxCczQbvzTBLAIsig+JTwMsHvGbApOmCi9fGh4aB3zKYvzjiUkpevmT/oeOR0TEZ4txTscIJ3jMnT/edtdi/OPn0vskQMRWiJkN+0skxX68Gr5kw1nv34Vib5wWrvrsCwANYwSGKT4Vpi2D6Upjit0V4Ffbsg6BQCAgMu5z6b0QnY0VT5iyds3DdAn+BJPGocDYIvwLhbChMOgHzNsBUHnjN3XU47r1B1aPWmB9vHrssSRDfqW1+EZ2Ruz89Lyo9p0b6QlxRmSgpSSgout/YJJW2VlY/qLxf2yht/r3hyZU08bXM3Oybt17++Ud12vnq64nVaedbnjZcEksSrmSdFV27W1VnRglgGIZlWQfDoBjGsqznarVYXNyaOlmWRVGU4T4A42bdGG4xY5jL5XZwgWF/NzjIsixB0lbG2W1CrYzTQlIsyyLaXi4tKIfBExWNrxQFFfVFlQ0lD6RSOZL3W31+bV1WTV2TXFktfV7e8Ph2w6O/5HIlom5+IWt9+VqhRFraXpdX1tyreXi/7olC9rKlqqSpuqyl6pZC9qqsqr64vOqn0l8fNbaaTDhnwD+dDQt2eAcdAV5kiEgM/nwIPwBrggQpWT4Hj62NSxwTFROVKS79pTwj56a4oKi49O4PSZeXrQ8Mj4xZHRhedjUube14yaEN19ZPKEuN914TIog8OH3Fxr3HE7g1ZZx03fOOZcHRvABBori8vu1NdEYuT7BjQejO+vbOC7kFi4NCVm/fWdvS+uRpU2T0gYi9UdU1tQ2PG/nBoZs2bzl2Kk7ZJj273X8rb+a57RteNz7aGBLmM2vOl/N5BSV3CNIBlH1oAMXkam1HlwrpMQ6gGKLve4to27vUxkGzWm+Qybs6VIjRhOr0xk6FSoloeg39On2/rF0ua+9UIj19xn6k823HizZN51t9t17Wrmhpe/28ua358TN1FwJcLlGkn
WGG7HYbSRKkjcsyiqYoCsdJgqC40KdoDMMtuI0gKAtuM2NWM4pjOIVipBklBk1Wk4kwmYh3A9b+fnTQZMFwSqvS3E1Pl9Y3/A04F0vzvJ9EAQAAAABJRU5ErkJggg==" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Strategy Selection in Prediction Markets</strong></h3><p>Structurally, prediction market strategies fall into two broad classes: <strong>deterministic arbitrage strategies (Arbitrage)</strong>, characterized by clear, codable rules, and <strong>speculative directional strategies (Speculative)</strong>, which rely on information interpretation and directional judgment. In addition, there are market-making and hedging strategies, dominated by professional institutions and demanding in capital and infrastructure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bf817b9f365e13e172e42d8c37090a4aedf032a4fd339038e8eb550887b1428d.png" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Deterministic arbitrage strategies (Arbitrage)</strong></p><ul><li><p><strong>Resolution Arbitrage:</strong> occurs when an event's outcome is essentially decided but the market has not yet fully repriced; returns come mainly from information synchronization and execution speed. With clear rules, low risk, and full codability, it <strong>is the core strategy best suited to agent execution in prediction markets</strong>.</p></li><li><p><strong>Dutch Book Arbitrage</strong>: exploits structural imbalances where the summed prices of a mutually exclusive, exhaustive set of outcomes violate probability conservation (∑P ≠ 1), locking in direction-free profit through a basket of positions. It relies only on rules and price relationships, carries low risk, and is highly rule-based, making it <strong>a textbook deterministic arbitrage form for automated agent execution</strong>.</p></li><li><p><strong>Cross-platform arbitrage:</strong> profits from pricing gaps for the same event across venues; risk is low but the demands on latency and parallel monitoring are high. It suits agents with infrastructure advantages, though intensifying competition keeps eroding margins.</p></li><li><p><strong>Bundle arbitrage:</strong> trades pricing inconsistencies between related contracts; the logic is clear but opportunities are limited. It can be executed by agents, but rule parsing and combination constraints impose some engineering demands, so <strong>agent fit is moderate</strong>.</p></li></ul><p><strong>Speculative directional strategies (Speculative)</strong></p><ul><li><p><strong>Structured information-driven strategies (Information Trading)</strong>: built around well-defined events or structured information such as official data releases, announcements, or ruling windows. As long as sources are clear and triggers definable, agents can exploit their speed and discipline in monitoring and execution; when the information turns into semantic judgment or scenario interpretation, humans must step back in.</p></li><li><p><strong>Signal Following</strong>: earns returns by following accounts or capital flows with strong historical performance; the rules are relatively simple and automatable. The core risks are signal decay and being traded against, so filtering mechanisms and strict position management are required. Best used as an agent's <strong>auxiliary strategy</strong>.</p></li><li><p><strong>Unstructured / noise-driven strategies</strong>: depend heavily on sentiment, randomness, or participation behavior, lack a stable, reproducible edge, and have unstable long-run expectation. Hard to model and extremely risky, they are <strong>unsuitable for systematic agent execution</strong> and not recommended as long-term strategies.</p></li></ul><p><strong>High-frequency price and liquidity strategies (Market Microstructure):</strong> these rely on extremely short decision windows, continuous quoting, or high-frequency trading, with very high demands on latency, models, and capital. Although theoretically agent-friendly, in prediction markets they are often constrained by liquidity and competitive intensity, suiting only the few participants with significant infrastructure advantages.</p><p><strong>Risk management and hedging strategies (Risk Control &amp; 
Hedging)</strong>: these strategies do not directly pursue returns but reduce overall risk exposure. With explicit rules and clear objectives, they run long term as the <strong>underlying risk-control module</strong>.</p><p>Overall, the strategies suited to agent execution in prediction markets concentrate in scenarios with clear, codable rules and little subjective judgment: <strong>deterministic arbitrage</strong> should be the core source of returns, with <strong>structured information</strong> and <strong>signal-following strategies</strong> as complements, while high-noise, sentiment-driven trading should be systematically excluded. An agent's durable advantages lie in <strong>discipline</strong>, <strong>high-speed execution</strong>, and <strong>risk control</strong>.<br></p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strategy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Expected return</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Risk</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical difficulty</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Agent fit</strong></p></td></tr><tr><td colspan="1" rowspan="4"><p style="text-align: center"><strong>Arbitrage</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Resolution arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Dutch Book arbitrage (probability-conservation arbitrage)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low–Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: 
低</p>">
center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Cross-platform arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Bundle arbitrage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center"><strong>Speculative</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Information-driven</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td 
colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Signal following</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Unstructured speculation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Negative</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Market making</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Active and passive market making</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low–Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
<strong>对冲策略">
center"><strong>Hedging</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Risk-management hedging</p></td><td colspan="1" rowspan="1"><p style="text-align: center">N/A</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reduces risk</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Medium</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr></tbody></table><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">IV. <strong>Business Models and Product Forms of Prediction Market Agents</strong></h2><p>The <strong>ideal business model design</strong> for prediction market agents leaves room for exploration at several layers:</p><ul><li><p>The <strong>infrastructure layer (Infrastructure)</strong> provides multi-source real-time data aggregation, smart-money address libraries, a unified prediction-market execution engine, and backtesting tools, charging B2B fees for stable revenue that does not depend on prediction accuracy;</p></li><li><p>the <strong>strategy layer (Strategy)</strong> brings in community and third-party strategies to build a reusable, evaluable strategy ecosystem, capturing value through calls, weights, or execution revenue shares and reducing dependence on any single source of alpha;</p></li><li><p>the <strong>Agent / Vault layer</strong> has agents participate directly in live execution under delegated management, charging management and performance fees on the strength of transparent on-chain records and a strict risk-control regime.</p></li></ul><p>The product forms corresponding to these business models can likewise be divided into:</p><ul><li><p><strong>Entertainment / gamified mode:</strong> Tinder-like intuitive interaction lowers the barrier to participation; it has the strongest user-growth and market-education potential and is the ideal entry point for breaking into the mainstream, but must hand off to subscription or execution products to monetize.</p></li><li><p><strong>Strategy subscription / signal mode:</strong> involves no custody of funds, is regulator-friendly with clear accountability, and has a relatively stable SaaS revenue structure, making it the most viable commercialization path at this stage. Its limits are that strategies are easily copied and execution incurs slippage, capping long-run revenue; a semi-automated "signal + one-click execution" form can markedly improve experience and retention.</p></li><li><p><strong>Vault custody mode:</strong> offers scale effects and execution efficiency and resembles an asset-management product, but faces multiple structural constraints: asset-management licensing, trust barriers, and centralized technical risk; its business model depends heavily on market conditions and sustained profitability. Unless backed by a long track record and institutional endorsement, it should not be the primary path.</p></li></ul><p>Overall, a diversified revenue structure of <strong>infrastructure monetization + strategy-ecosystem expansion + performance participation</strong> reduces dependence on the single assumption that "AI keeps beating the market." Even as alpha converges with market maturity, underlying capabilities in execution, risk control, and settlement retain long-term value, supporting a more sustainable business loop.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Product form</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core capabilities</strong></p></td><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>Target users</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Monetization</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Entry layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Entertainment markets</p></td><td colspan="1" rowspan="1"><p><strong>Information aggregation</strong>: cross-platform trend capture</p><p><strong>Visualization</strong>: basic win-rate/odds display</p><p><strong>Light interaction</strong>: paper trading / voting</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Casual users</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Free; traffic traded for data</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tooling layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decision-support</p><p style="text-align: center">Copilot</p></td><td colspan="1" rowspan="1"><p><strong>Deep analysis</strong>: EV computation, evidence chains</p><p><strong>Risk support</strong>: position-sizing suggestions, stop-loss alerts</p><p><strong>One-click copy-trading</strong>: execution after human confirmation</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Professional retail</strong></p><p style="text-align: center">Heavy users</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Subscription fees</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Asset-management layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Custodial execution vaults</p></td><td colspan="1" rowspan="1"><p><strong>Fully automated strategies</strong>: 7x24h monitoring and execution</p><p><strong>Strategy packs</strong>: macro/sports/regulatory/crypto</p><p><strong>Transparency</strong>: on-chain auditable performance</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>High-net-worth</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Management fee + performance share (2/20)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Infrastructure layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">B2B data / execution APIs</p></td><td colspan="1" 
rowspan="1"><p><strong>Advanced data</strong>: implied probability curves, risk indices</p><p><strong>Arbitrage radar</strong>: cross-market spread monitoring</p><p><strong>Execution engine</strong>: low-latency order interfaces</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Quant teams</strong></p><p style="text-align: center">Exchanges</p><p style="text-align: center">Media platforms</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Enterprise SaaS</p></td></tr></tbody></table><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">V. <strong>Project Case Studies of Prediction Market Agents</strong></h2><p>Prediction Market Agents remain at an early, exploratory stage. Although the market has produced diverse attempts ranging from low-level frameworks to user-facing tools, no standardized product has yet matured across strategy generation, execution efficiency, risk control, and a closed commercial loop.</p><p>We divide the current ecosystem map into three layers: <strong>Infrastructure</strong>, <strong>Autonomous Agents</strong>, and <strong>Prediction Market Tools</strong>.</p><h3 id="h-infrastructure" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Infrastructure</strong></h3><p><strong>Polymarket Agents framework:</strong></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/Polymarket/agents"><strong><u>Polymarket Agents</u></strong></a> is Polymarket's official developer framework, aimed at standardizing the engineering of connection and interaction. It wraps market-data retrieval, order construction, and basic LLM call interfaces. It solves the problem of placing orders in code, but leaves the core trading capabilities largely blank: strategy generation, probability calibration, dynamic position management, and backtesting. It is better seen as an officially sanctioned integration spec than a finished product with alpha; a commercial-grade agent still has to build its full research and risk-control core on top of it.</p><p><strong>Gnosis prediction market tooling:</strong></p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/gnosis/prediction-market-agent-tooling"><strong><u>Gnosis Prediction Market Agent Tooling (PMAT)</u></strong></a> provides full read-write support for Omen/AIOmen and Manifold but only read access to Polymarket, making the ecosystem walls obvious. It works well as a foundation for agents within the Gnosis stack, but has limited utility for developers whose main venue is Polymarket.</p><p><strong>Polymarket and Gnosis are currently the only prediction market ecosystems to have productized "agent development" into official frameworks.</strong> Kalshi and other venues still stop at APIs and Python SDKs, leaving developers to fill in strategy, risk control, operations, and monitoring themselves.</p><h3 id="h-autonomous-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Autonomous Trading Agents (Autonomous Agent)</strong></h3><p>Most of today's "prediction market AI 
Agents" remain at an early stage: despite the "Agent" label, their actual capabilities fall well short of a delegable, automated closed trading loop. Most lack an independent, systematic risk-control layer that brings position management, stop-losses, hedging, and expected-value constraints into the decision flow; overall productization is low, and no mature, long-running systems have yet emerged.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://olas.network/agent-economies/predict"><strong><u>Olas Predict</u></strong></a>: the most productized prediction-market agent ecosystem today. Its core product <strong>Omenstrat</strong> is built on Omen within the Gnosis stack, using FPMM and decentralized arbitration underneath and supporting small, high-frequency interactions, though it is limited by Omen's thin single-market liquidity. Its "AI prediction" relies mainly on general-purpose LLMs, lacks real-time data and systematic risk control, and historical win rates diverge sharply across categories. In February 2026 Olas launched <strong>Polystrat</strong>, extending agent capabilities to Polymarket: users set strategies in natural language, and the agent automatically detects probability deviations in markets settling within 4 days and executes trades. Risk is controlled through local execution in Pearl, self-custodied Safe accounts, and hard-coded limits, making it <strong>the first consumer-grade autonomous trading agent for Polymarket</strong>.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.unifai.network/strategies/topic/polymarket-banner"><strong><u>UnifAI Network Polymarket Strategy</u></strong></a>: provides an automated Polymarket trading agent centered on a <strong>tail-risk-bearing strategy</strong>: scanning near-settlement contracts with implied probability &gt;95% and buying them to capture a 3–5% spread. On-chain data shows a win rate close to 95%, but returns diverge sharply across categories, and the strategy depends heavily on execution frequency and category selection.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://noya.ai"><strong><u>NOYA.ai</u></strong></a> aims to integrate research, judgment, execution, and monitoring into a closed agent loop, with an architecture spanning intelligence, abstraction, and execution layers. Omnichain Vaults have shipped; the Prediction Market Agent is still in development without a complete mainnet loop, leaving the project in a vision-validation phase.</p><h3 id="h-prediction-market-tools" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Prediction Market Tools</strong></h3><p>Today's prediction-market analysis tools do not yet constitute complete "prediction market agents." Their value concentrates in the <strong>information and analysis layers</strong> of the agent architecture, while trade execution, position management, and risk control remain the trader's responsibility. As products, they fit a "strategy subscription / signal assistance / research augmentation" positioning and can be viewed as early prototypes of prediction market agents.</p><p>Through a systematic review and hands-on screening of the projects listed in <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/aarora4/Awesome-Prediction-Market-Tools"><u>Awesome-Prediction-Market-Tools</u></a>, this report selects representative projects that <strong>already have an initial product form and usage scenarios</strong>, concentrated in four directions: <strong>analysis and signal layers, alert and whale-tracking systems, arbitrage discovery tools, and trading terminals with aggregated execution</strong>.</p><p><strong>Market analysis tools</strong></p><ul><li><p><a target="_blank" 
rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyseer.xyz/"><strong><u>Polyseer</u></strong></a> ：研究型预测市场工具，采用多 Agent 分工架构（Planner / Researcher / Critic / Analyst / Reporter）进行双边证据搜集与贝叶斯概率聚合，输出结构化研报。其优势在于方法论透明、流程工程化、完全开源可审计。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.oddpool.com/"><strong><u>Oddpool</u></strong></a> ：定位为“预测市场的 Bloomberg 终端”，提供 Polymarket、Kalshi、CME 等跨平台聚合、套利扫描与实时数据仪表盘终端。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://polymarketanalytics.com/"><strong><u>Polymarket Analytics</u></strong></a>：全球化的 Polymarket 数据分析平台，系统性展示交易者、市场、仓位与成交数据，定位清晰、数据直观，适合作为基础数据查询与研究参考。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.hashdive.com"><strong><u>Hashdive</u></strong></a>：面向交易者的数据工具，通过 Smart Score 与多维 Screener 量化筛选交易者与市场，在“聪明钱识别”和跟单决策上具备实用性。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyfactual.com/"><strong><u>Polyfactual</u></strong></a> ：聚焦 AI 市场情报与情绪/风险分析，通过 Chrome 扩展将分析结果嵌入交易界面，偏向 B2B 与机构用户场景。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://predly.ai/"><strong><u>Predly</u></strong></a> ：AI 错价检测平台，通过对比市场价格与 AI 计算概率识别 Polymarket 与 Kalshi 的定价偏差，官方声称警报准确率达 89%，定位于信号发现与机会筛选。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://app.polysights.xyz/"><strong><u>Polysights</u></strong></a> : 覆盖 30+ 市场与链上指标，并以 Insider Finder 追踪新钱包、大额单向下注等异常行为，适合日常监控与信号发现。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.polyradar.io/"><strong><u>PolyRadar</u></strong><u> </u></a>：多模型并行分析平台，对单一事件提供实时解读、时间线演化、置信度评分与来源透明度，强调多 AI 交叉验证，定位分析工具。</p></li><li><p><a target="_blank" rel="noopener noreferrer 
nofollow ugc" class="dont-break-out" href="https://www.alphascope.app/"><strong><u>Alphascope </u></strong></a>：AI 驱动的预测市场情报引擎，提供实时信号、研究摘要与概率变化监控，整体仍处早期阶段，偏研究与信号支持。</p></li></ul><p><strong>警报/鲸鱼追踪</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.stand.trade/"><strong><u>Stand</u></strong></a>: 明确定位鲸鱼跟单与高确信动作提醒。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://whale-tracker-livid.vercel.app/"><strong><u>Whale Tracker Livid</u></strong></a> ：将鲸鱼仓位变化产品化</p></li></ul><p><strong>套利发现工具：</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://getarbitragebets.com"><strong><u>ArbBets</u></strong></a>&nbsp; :&nbsp; AI 驱动的套利发现工具，聚焦于 Polymarket、Kalshi 及体育博彩市场，识别跨平台套利与正期望值（+EV）交易机会，定位于高频机会扫描层。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://polyscalping.org/"><strong><u>PolyScalping</u></strong></a><strong> </strong>:&nbsp; 面向 Polymarket 的实时套利与剥头皮分析平台，支持每 60 秒全市场扫描、ROI 计算与 Telegram 推送，并可按流动性、价差与成交量等维度筛选机会，偏向主动交易者。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.eventarb.com/"><strong><u>Eventarb</u></strong></a> :&nbsp; 轻量级跨平台套利计算与提醒工具，覆盖 Polymarket、Kalshi 与 Robinhood，功能聚焦、免费使用，适合作为基础套利辅助。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://predictionhunt.com/"><strong><u>Prediction Hunt</u></strong></a>：&nbsp; 跨交易所预测市场聚合与对比工具，提供 Polymarket、Kalshi 与 PredictIt 的实时价格比较与套利识别（约 5 分钟刷新），定位于信息对称与市场低效发现。</p></li></ul><p><strong>交易终端/聚合执行</strong></p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.verso.trading/"><strong><u>Verso</u></strong></a>：获 YC Fall 2024 支持的机构级预测市场交易终端，提供 Bloomberg 风格界面，覆盖 Polymarket 与 Kalshi 的 15,000+ 合约实时追踪、深度数据分析与 AI 
新闻情报，定位于专业与机构交易者。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.matchr.xyz/"><strong><u>Matchr</u></strong></a>：跨平台预测市场聚合与执行工具，覆盖 1,500+ 市场，通过智能路由实现最优价格撮合，并规划基于高概率事件、跨场套利与事件驱动的自动化收益策略，定位于执行与资金效率层。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://thetradefox.com/"><strong><u>TradeFox</u></strong></a>：由 Alliance DAO 与 CMT Digital 支持的专业预测市场聚合与 Prime Brokerage 平台，提供高级订单执行（限价单、止盈止损、TWAP）、自托管交易与多平台智能路由，定位机构级交易者，计划扩展至 Kalshi、Limitless、SxBet 等平台。</p></li></ul><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>六、总结与展望</strong></h2><p>当前，<strong>预测市场智能体(Prediction Market Agent)</strong>正处于发展的早期探索阶段。</p><ol><li><p><strong>市场基础与本质演进</strong>：P<strong>olymarket与Kalshi已形成双寡头结构</strong>，围绕其构建智能体具备充分的流动性与场景基础。预测市场与赌博的核心区别在于正外部性，通过真实交易聚合分散信息，对现实事件进行公共定价，逐步演化为<strong>“全球真相层”</strong>。</p></li><li><p><strong>核心定位：</strong>预测市场智能体应定位为<strong>可执行的概率资产管理工具</strong>，其核心任务是将新闻、规则文本与链上数据转化为可验证的定价偏差，并以更高纪律性、更低成本和跨市场能力执行策略。<strong>理想架构可抽象为信息、分析、策略与执行四层</strong>，但其实际可交易性高度依赖于结算的清晰度、流动性的质量以及信息的结构化程度。</p></li><li><p><strong>策略选择与风控逻辑</strong>：从策略层面看，<strong>确定性套利</strong>（包括结算套利、概率守恒套利及跨平台价差交易）最适合由智能体自动化执行，而<strong>方向性投机</strong>仅可作为补充。在仓位管理上，应优先考虑可执行性与容错性，<strong>阶梯法结合固定仓位上限</strong>最适合。</p></li><li><p><strong>商业模式与前景</strong>：商业化主要分为三层：<strong>基建层</strong>以数据执行基础设施获取稳定 B2B 收入，<strong>策略层</strong>通过第三方策略调用或分成变现，<strong>Agent/Vault 层</strong>在链上透明风控约束下参与实盘并收取管理费与绩效费。对应形态包括<strong>娱乐化入口</strong>、<strong>策略订阅/信号</strong>（当前最可行）及<strong>高门槛的 Vault 托管</strong>，“基建 + 策略生态 + 业绩参与”为更可持续路径。</p></li></ol><p>尽管预测市场智能体（Prediction Market Agents）生态中已涌现出从底层框架到上层工具的多样化尝试，但在策略生成、执行效率、风险控制与商业闭环等关键维度上，目前尚未出现成熟、可复制的标准化产品，我们期待未来预测市场智能体的迭代与进化。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d252680d9214ae9697260f642e246d9a06ef31d0d21ac87dcfa4de22f38458f4.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGkElEQVR4nCVUbUxb5xV+tf7JJi2aNEX9UXVdRtdM7RjZpo01DVtT8kGyhCRsASUlXzRkg5a0CWpCQgIYChgSjMHXNNigCziJ+UpZgVAR167rubFjGxQcMJ5jfI0/ru17X1/b1y8X21yjye3Ro6Oj8+McnfM85wA/inliESIS9sYiXpYlIhk/53B0y+WC9jYM75cpFPjISDuGGW22SbWqRdzZJeuVKRQtYnE7JumWy7vl8haxWG+1Usl1Tyzii8f8KOZHiERxN8MAP0IMz6N0mk4kSYToRDKxuTmj0+7Ky9uzb29+wYF9hw7u3vNuSekprcVUfevm6zt27P0++cZbb27/9eu7/paXl//eq1nb8ZHhMM8TEHpZNoNYxI/QKsOAIMfhSuWFygrF2JjeahVJsNrGxmZRx21M0tguXPZ4YCoRXU8skMHwRjKWTKJ0ei2dZtNphufpVIrd4Om1tWAs5mXZEMeFeZ5OJOlEMsrzYZ73siwgEdKYjPfGhvULlgBi8a8m7uI4rnyAjwwrHo47gqF4mi94vxZs/z1oKV9wufTzlim1ymSzBRAXQOhbi/G6oPVcVc2X35mMC89Gp6bGpqenNZoevF9tNGYm8LHxGJdSza3+sVkO6mt+c1fQKRYfPHL4QsW/caVyVvdNa68SgK3VrXfBhwf+VFaS9corue/klpWX9Q4O+Nho/+jErtLK/WWVX5stDU2NhwsP/rP4ROm5s0eOFbaJO5e9fgBTiaIjNwH4GcjYj0D+KTJMq4zG/z6bVz15ojE+9bHRrHc/AD/NAVdKv9TrBh8MBzIEImcwSHPcyGNteXVdTVv3nNvlgbQzGFz2eOYdjgXnixWK8jHRzIoEop6yT64JRNgNYdfAwxmtxaR6YtCazCRCQS7hY+OecEQ2rlr2kfF0OpJM+hnGx7I+liUgo3nu0Fnt+qWVeW9ozuGQK5VPbTaUTkd4HiZSBIQZFckH8YYmQd8Q3ipqm9VrW+7caRN31jU320lylWFWGYaAzFo6bQ6EqgeUH2B9tjjrY6IBhLQLNvCTndknPv7FgbPg7eIbTc0AgKP/KJpSqRRjY2qjMTNBiFsz2OyYYvS5xw8TKYbndRbL5zi+5HbH02l2gw8gRNBheyh8uX8S5BSAnfkf943bAhSJ0LyTqGoQ5R0/X1BaKZQrH+l0nRj2jcn4rcU0o9Ho5+YyJNOp1MXqujd+l1t49qNTF6u6Bx58cu3TH2956bfZb3VIJHcwidlu97HxOVfgQtMQ+NVesLOw4jPFU4eHgDDEJQxLtt7hL27Lh6YNFvOyTSAUqozGCM//IOjModEcNzw9W3lDUNMq3vGH3RevNuDK+8UnizswSQ/eL5XLjYuLJEI+lr1apwBb3tnx98uKcRMBIwRkAgi9+d4xsPU1ALbtLzqDyWQAgD35e7Jzsk+fP3f/4fiyxwO8LEutcRGep9a4LzTfzTvdBB22ub0hLgETqXBiw8ewKxQMoPilm51Kpe5sVdtXhnmYSjgp6GPjXQPDfz5RVrz/sLSoyLBs75BIptVq2eDA9fp6TCbLkBzk1mqbmkrPn/9XVVVJ6fuzOh3D80GOI2jGRVErGcBVJkpAhozGI3yKZCM+JhpNbgQQcjNMiOO8cSQamlAbF8x2e+/goBTHxTKZ1bnCbW56wplfFCMgveTxOIMkQdFfmwy3GuqFGKZfeEZvZKpQcS6Y+SpRH8uuQEYb8KtJf8fMpNq6QHOcm4mqzca3T1d1zzxWTk689stXc3KyX35528kzpbN6vSMYBDE+2ayYBnsrpw1L/RYVqDwkNWiXVIP3y4DTPP6fSfPOLQUP5DOhjfVVmtG+WD0p+PyYUAqOnNEv/c8TDjM8P9Tbs7+kolDQYlxcrKmvaxS2iqTY7
e6uzh6pMxgE3ObmqattAGwVDU5dH8dB1s8/Uz9eHMWuA2CdkErljwB4qVkgD/M8leTq25UA7AbbdoGs03ZXiERIv2Tbd+lW/uVPj9Y09iiUBYcKdv81T/FwPMrzfoRcEAIqub7ocl26Vgu5pJ0kb9xpz6yCom5dueyC0ElRwnYJARk/QnRqXWNarG8ev1J7r29I42PZKM8PPVLvK7l4rPzD8buNd3FF/oH83L/kHj1+vOLSR51dnSqNGrggTUBIRiMrFEVAGPi+7QoF3fH4D4E3FiEgdEHaSVEBDnnD0RcemkSsm2FWKBjk1lSzTxXYhLJvxvqCsJPkc7d73uHQmE0D94ZGR4f/D+KUpbsSHJUnAAAAAElFTkSuQmCC" nextheight="602" nextwidth="1080" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong><em>Disclaimer:</em></strong><em> This article was written with the assistance of AI tools including ChatGPT-5.2, Gemini 3, and Claude Opus 4.5. The author has made every effort to proofread and ensure the information is accurate, but omissions may remain. Please note in particular that crypto asset markets commonly exhibit divergence between project fundamentals and secondary-market price performance. This content is intended solely for information aggregation and academic/research exchange; it does not constitute investment advice, nor should it be read as a recommendation to buy or sell any token.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>Prediction Market</category>
            <category>Agent</category>
            <category>polymarket</category>
            <category>kalshi</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/beb1fe306acce9ba418c77cd22a30195e0b9d893a74361e73dc1d29ac86a8fa9.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Ethereum Repricing: From Rollup-Centric to Security Settlement Layer]]></title>
            <link>https://paragraph.com/@0xjacobzhao/ethereum-repricing-from-rollup-centric-to-security-settlement-layer</link>
            <guid>egIi580Gui7baoihdV5a</guid>
            <pubDate>Mon, 16 Feb 2026 14:50:43 GMT</pubDate>
            <description><![CDATA[Ethereum is pivoting from "Rollup-Centric" to a "Security Settlement Layer." This is its "Constitutional Moment"—unifying a loose L2 confederation into a digital federation. Viewing ETH as a "Tech Company" is a category error; it is neutral infrastructure prioritizing security over short-term revenue.

Valuation Framework: We propose a dynamic SOTP model anchored in "Security (45%) + Money (35%)," replacing static P/E. A "Regime Adaptation" mechanism adjusts these weights based on macro liquidit]]></description>
            <content:encoded><![CDATA[<br><p>On February 3, 2026, Vitalik published a significant reflection on the Ethereum scaling roadmap on X. As the practical difficulties of Layer 2 evolving into a fully decentralized form are being re-evaluated, and with the mainnet's own throughput expected to increase significantly in the coming years, the original assumption of relying solely on L2 for throughput scaling is being corrected. A new "Settlement-Service" collaborative paradigm is forming between L1 and L2: <strong>L1 focuses on providing the highest level of security, censorship resistance, and settlement sovereignty, while L2 evolves into "differentiated service providers" (such as privacy, AI, high-frequency trading).</strong> Ethereum's strategic focus is returning to the mainnet itself, reinforcing its positioning as the world's most trusted settlement layer. Scaling is no longer the sole objective; security, neutrality, and predictability are once again becoming Ethereum's core assets.</p><p><strong>Core Changes:</strong></p><ul><li><p><strong>Ethereum is entering an "L1-First Paradigm":</strong> With direct mainnet scaling and continuously decreasing fees, the original assumption relying on L2 to shoulder the core role of scaling no longer holds.</p></li><li><p><strong>L2 is no longer "Branded Sharding," but a Trust Spectrum:</strong> The progress of L2 decentralization is much slower than expected, making it difficult to uniformly inherit Ethereum's security. 
Their role is being redefined as a spectrum of networks with different trust levels.</p></li><li><p><strong>Ethereum's core value is shifting from "Traffic" to "Settlement Sovereignty":</strong> The value of ETH is no longer limited to Gas or Blob revenue, but lies in its institutional premium as the world's most secure EVM settlement layer and native monetary asset.</p></li><li><p><strong>Scaling strategy is adjusting towards protocol internalization:</strong> Based on continuous direct L1 scaling, the exploration of protocol-layer native verification and security mechanisms may reshape the security boundary and value capture structure between L1 and L2.</p></li><li><p><strong>The valuation framework is undergoing a structural migration:</strong> The weight of security and institutional credibility has risen significantly, while the weight of fees and platform effects has decreased. ETH's pricing is shifting from a cash flow model to an asset premium model.</p></li></ul><p>This article will analyze the paradigm shift in Ethereum's pricing model and valuation reconstruction according to a layered approach: <strong>Facts</strong> (technological and institutional changes that have occurred), <strong>Mechanisms</strong> (impact on value capture and pricing logic), and <strong>Deductions</strong> (implications for allocation and risk-return).</p><h2 id="h-i-back-to-origins-ethereum-values" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I.
Back to Origins: Ethereum Values</strong></h2><p>To understand the long-term value of Ethereum, the key lies not in short-term price fluctuations, but in its consistent design philosophy and value orientation.</p><ul><li><p><strong>Credible Neutrality:</strong> Ethereum's core goal is not the maximization of efficiency or profit, but to become a set of credibly neutral infrastructure—with open rules, predictability, no favoritism towards any participant, no control by a single entity, and where anyone can participate without permission. The security of ETH and its on-chain assets ultimately depends on the protocol itself, not on any institutional credit.</p></li><li><p><strong>Ecosystem First, Not Revenue First:</strong> Multiple key upgrades of Ethereum reflect a consistent decision-making logic—actively foregoing short-term protocol revenue in exchange for lower usage costs, larger ecosystem scale, and stronger system resilience. Its goal is not to "collect tolls," but to become the irreplaceable neutral settlement and trust foundation in the digital economy.</p></li><li><p><strong>Decentralization as a Means:</strong> The mainnet focuses on the highest level of security and finality, while Layer 2 networks sit on a spectrum of varying degrees of connection to the mainnet: some inherit mainnet security and pursue efficiency, while others position themselves with differentiated functions. This enables the system to serve both global settlement and high-performance applications simultaneously, rather than L2s being "Branded Shards."</p></li><li><p><strong>Long-Termist Technical Route:</strong> Ethereum adheres to a slow but certain evolutionary path, prioritizing system security and credibility.
From the PoS transition to subsequent scaling and confirmation mechanism optimizations, its roadmap pursues sustainable, verifiable, and irreversible correctness.</p></li></ul><p><strong>Security Settlement Layer:</strong> Refers to the Ethereum mainnet providing irreversible <strong>Finality</strong> services for Layer 2 and on-chain assets through decentralized validator nodes and consensus mechanisms.</p><p>This positioning as a Security Settlement Layer marks the establishment of "Settlement Sovereignty." It is a transition for Ethereum from a "Confederation" to a "Federation," representing the <strong>"Constitutional Moment"</strong> of the establishment of the Ethereum digital nation, and a significant upgrade to Ethereum's architecture and core.</p><p>After the American Revolutionary War, under the Articles of Confederation, the 13 states were like a loose alliance. Each state printed its own currency and levied tariffs on others. Every state was free-riding: enjoying common defense but refusing to pay; enjoying the alliance's brand but acting independently. This structural problem led to reduced national credit and an inability to unify foreign trade, severely hindering the economy.</p><p>1787 was America's "Constitutional Moment." The new Constitution granted the federal government three key powers: the power to tax directly, the power to regulate interstate commerce, and the power to unify currency. But what truly brought the federal government "to life" was Hamilton's economic plan of 1790: the federal assumption of state debts, repayment at face value to rebuild national credit, and the establishment of a National Bank as a financial hub. A unified market released economies of scale, national credit attracted more capital, and infrastructure construction gained financing capability. 
The US moved from 13 mutually guarded small states to become the world's largest economy.</p><p><strong>Today's structural dilemma in the Ethereum ecosystem is exactly the same.</strong></p><p>Each L2 is like a "Sovereign State," with its own user base, liquidity pool, and governance token. Liquidity is fragmented, cross-L2 interaction friction is high, and L2s enjoy Ethereum's security layer and brand without being able to return value to L1. Locking liquidity on their own chain is short-term rational for each L2, but if all L2s do this, the core competitive advantage of the entire Ethereum ecosystem is lost.</p><p><strong>The roadmap Ethereum is currently advancing is essentially its constitution-making and the establishment of a central economic system, that is, the establishment of "Settlement Sovereignty":</strong></p><ul><li><p><strong>Native Rollup Precompile = Federal Constitution.</strong> L2s can freely build differentiated functions outside the EVM, while the EVM part can obtain Ethereum-level security verification through native precompiles. Not connecting is an option, but the cost is losing trustless interoperability with the Ethereum ecosystem.</p></li><li><p><strong>Synchronous Composability = Unified Market.</strong> Through mechanisms like Native Rollup Precompiles, trustless interoperability and synchronous composability between L2s and between L2 and L1 are becoming possible. This directly eliminates "interstate trade barriers," and liquidity is no longer trapped in respective silos.</p></li><li><p><strong>L1 Value Capture Reconstruction = Federal Taxing Power.</strong> When all critical cross-L2 interactions return to L1 for settlement, ETH re-becomes the settlement hub and trust anchor for the entire ecosystem. Whoever controls the settlement layer captures the value.</p></li></ul><p>Ethereum is using a unified settlement and verification system to turn a fragmented L2 ecosystem into an irreplaceable "Digital Nation." 
This is a historical inevitability. Of course, the transition process may be slow, but history tells us that once this transition is complete, the released network effects will far exceed the linear growth of the fragmentation era. The US used a unified economic system to turn 13 small states into the world's largest economy. Ethereum will also transform a loose L2 ecosystem into the largest Security Settlement Layer, and even a global financial carrier.</p><p><strong>Ethereum Core Upgrade Roadmap &amp; Valuation Impact (2025-2026)</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Upgrade Code</strong></p></td><td colspan="1" rowspan="1"><p><strong>Status</strong></p></td><td colspan="1" rowspan="1"><p><strong>Key Features</strong></p></td><td colspan="1" rowspan="1"><p><strong>Valuation &amp; Strategic Impact</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pectra</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p><br></p><p>2025-05-07</p></td><td colspan="1" rowspan="1"><p>• <strong>EIP-7702:</strong> Account Abstraction (Programmable EOA)</p><p><br></p><p>• <strong>EIP-7251 (MaxEB):</strong> Validator cap raised to 2048 ETH</p><p><br></p><p>• <strong>Blob Params:</strong> Target 6 / Max 9</p></td><td colspan="1" rowspan="1"><p><strong>UX &amp; Capital Efficiency Improvement</strong></p><p><br></p><p>Reduces operational complexity for large institutions (node consolidation), optimizes wallet entry experience, and clears obstacles for large-scale capital entry post-ETF.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Fusaka</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p><br></p><p>2025-12-03</p></td><td colspan="1" rowspan="1"><p>• <strong>PeerDAS:</strong> Introduces Data 
Availability Sampling</p><p><br></p><p>• <strong>DoS Hardening:</strong> Single tx Gas Cap ~16.7M</p><p><br></p><p>• <strong>Execution Layer Gas Limit:</strong> Increased</p></td><td colspan="1" rowspan="1"><p><strong>L1 Controlled Scaling</strong></p><p><br></p><p>Engineering throughput constraints are significantly alleviated, moving the settlement layer's "physical capacity" upwards; DoS protection enhances network security resilience under scaled conditions.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>BPO 1 &amp; 2</strong></p><p><br></p><p>(Blob Only)</p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p><br></p><p>2025-12-09</p><p><br></p><p>2026-01-07</p></td><td colspan="1" rowspan="1"><p>• <strong>BPO 1:</strong> Blob Target 10 / Max 15</p><p><br></p><p>• <strong>BPO 2:</strong> Blob Target 14 / Max 21</p><p><br></p><p>(Lightweight forks adjusting only Blob params)</p></td><td colspan="1" rowspan="1"><p><strong>Institutionalized Expansion of DA Supply</strong></p><p><br></p><p>DA supply increased compared to Pre-Fusaka, causing a structural downward shift in L2 cost curves, solidifying Ethereum's monopoly status as a modular foundation.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Glamsterdam</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Planned</p><p><br></p><p>2026 (TBD)</p></td><td colspan="1" rowspan="1"><p>• <strong>Headliners:</strong> ePBS (Protocol Enshrined PBS) + BALs</p><p><br></p><p>• <strong>Non-headliners:</strong> Other features still under discussion</p></td><td colspan="1" rowspan="1"><p><strong>Neutrality Premium Reinforcement</strong></p><p><br></p><p>Further eliminates centralized relay risks through ePBS, strengthening censorship resistance; other incremental value depends on the final combination of features included.</p></td></tr><tr><td colspan="1" 
rowspan="1"><p><strong>Hegota</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="blue_circle" class="emoji" data-type="emoji">🔵</span> Candidate</p><p><br></p><p>2026 (TBD)</p></td><td colspan="1" rowspan="1"><p>• <strong>Status:</strong> Headliner not yet finalized</p><p><br></p><p>• <strong>Candidates:</strong> Verkle Trees, State Expiry, <strong>FOCIL</strong> (Censorship Resistance Mechanism), etc.</p></td><td colspan="1" rowspan="1"><p><strong>Decentralization Resilience Narrative</strong></p><p><br></p><p>Aims to solve state bloat issues and reduce node burden; <em>Note: Implementation time of research items is not guaranteed to sync with the 2026 Fork.</em></p></td></tr></tbody></table><p><br></p><h2 id="h-ii-valuation-misconceptions-why-ethereum-should-not-be-viewed-as-a-tech-company" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Valuation Misconceptions: Why Ethereum Should Not Be Viewed as a "Tech Company"</strong></h2><p>Applying traditional corporate valuation models (P/E, DCF, EV/EBITDA) to Ethereum is essentially a category error. Ethereum is not a company aiming for profit maximization, but an open digital economic infrastructure. Corporations pursue shareholder value maximization, while Ethereum pursues the maximization of ecosystem scale, security, and censorship resistance. To achieve this goal, Ethereum has repeatedly actively suppressed protocol revenue (e.g., via EIP-4844 introducing Blob DA to structurally lower L2 data publishing costs and suppress L1 revenue from rollup data)—which approximates "revenue self-destruction" from a corporate perspective, but from an infrastructure perspective, is sacrificing short-term fees for long-term neutrality premium and network effects.</p><p>A more reasonable framework is to view Ethereum as a globally neutral settlement and consensus layer: providing security, finality, and trusted coordination for the digital economy. 
ETH's value is reflected across multiple structural demands—rigid demand for final settlement, the scale of on-chain finance and stablecoins, the impact of staking and burning mechanisms on supply, and long-term, sticky capital brought by institutional adoption such as ETFs, corporate treasuries, and RWAs.</p><br><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Metaphor</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Similarities</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Differences</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Internet Protocol (TCP/IP)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Open, ownerless, available to anyone</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has a native asset (ETH)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Global Settlement Network (SWIFT)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Final settlement layer for financial transactions</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum is decentralized, operates 24/7</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Cloud Computing Platform (AWS)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provides compute and storage infrastructure</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has no single owner, censorship-resistant</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Sovereign Currency Issuer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ETH as the "base money" of the on-chain economy</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has 
no government backing, globally universal</p></td></tr></tbody></table><p><br></p><h2 id="h-iii-paradigm-restructuring-finding-the-pricing-anchor-beyond-cash-flow" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Paradigm Restructuring: Finding the Pricing Anchor Beyond Cash Flow</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://ethval.com"><strong><u>ethval.com</u></strong></a>, launched by the Hashed team at the end of 2025, provides a detailed set of reproducible quantitative models for Ethereum, but traditional static models struggle to capture the dramatic pivot in Ethereum's narrative in 2026. Therefore, we reused their systematic, transparent, and reproducible underlying models (covering yield, money, network effects, and supply structure), but reshaped the valuation architecture and weighting logic:</p><ol><li><p><strong>Structural Restructuring:</strong> Mapping models to four value quadrants: "Security, Money, Platform, Revenue," aggregated for pricing.</p></li><li><p><strong>Weight Rebalancing:</strong> Significantly increasing the weight of security and settlement premium, weakening the marginal contribution of protocol revenue and L2 expansion.</p></li><li><p><strong>Risk Control Overlay:</strong> Introducing a circuit breaker mechanism sensing macro and on-chain risks, making the valuation framework adaptable across cycles.</p></li><li><p><strong>Removing "Circular Reasoning":</strong> Models containing current price inputs (like Staking Scarcity, Liquidity Premium) are no longer used as fair value anchors, but retained only as indicators for position and risk appetite adjustment.</p></li></ol><p><em>Note: The following models are not for precise point prediction, but to depict the relative pricing direction of different value sources in different cycles.</em></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td
colspan="1" rowspan="1"><p><strong>Benchmark Weight</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Definition</strong></p></td><td colspan="1" rowspan="1"><p><strong>Cycle</strong></p></td><td colspan="1" rowspan="1"><p><strong>Pricing Model</strong></p></td><td colspan="1" rowspan="1"><p><strong>Key Observation Indicators</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Security Settlement Layer</strong></p><p><br></p><p>45%</p></td><td colspan="1" rowspan="1"><p><strong>Institutional Premium</strong></p><p><br></p><p>ETH's credit pricing as the "Global Most Secure EVM Final Settlement Layer."</p></td><td colspan="1" rowspan="1"><p>Neutral / Institutional Allocation Period</p><p><br></p><p>(Core Anchor)</p></td><td colspan="1" rowspan="1"><p><strong>Validator Econ + Staking DCF</strong></p><p><br></p><p>(Discount based on real yield)</p><p><br></p><p><em>Note: Uses Real Yield (Nominal Return - Inflation)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Real Yield:</strong> On-chain benchmark against US Treasury real rates</p><p><br></p><p>• <strong>L1 Scaling Pace:</strong> Physical capacity after Fusaka/BPO</p><p><br></p><p>• <strong>Censorship Resistance:</strong> Implementation progress of neutral components like ePBS</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Monetary Attribute</strong></p><p><br></p><p>35%</p></td><td colspan="1" rowspan="1"><p><strong>Native Collateral</strong></p><p><br></p><p>ETH as the "Base Collateral + Settlement Fuel" for the on-chain finance and stablecoin system.</p></td><td colspan="1" rowspan="1"><p>Neutral / Utility Expansion Period</p><p><br></p><p>(Utility Anchor)</p></td><td colspan="1" rowspan="1"><p><strong>MV = PQ + Collateral Premium</strong></p><p><br></p><p>(Quantity Theory of Money variant)</p><p><br></p><p><em>Note: Includes ETH transfers and full ecosystem settlement demand</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Collateral Penetration:</strong> % of ETH 
locked in lending/derivatives</p><p><br></p><p>• <strong>Settlement Scale:</strong> Annual settlement volume of stablecoins and RWAs</p><p><br></p><p>• <strong>Restaking Structure:</strong> Balance between liquidity and security in LRTs</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Platform / Network Effect</strong></p><p><br></p><p>10%</p></td><td colspan="1" rowspan="1"><p><strong>Growth Option</strong></p><p><br></p><p>Non-linear premium brought by ecosystem prosperity (similar to tech stock growth part).</p></td><td colspan="1" rowspan="1"><p>Bull Market / Bubble Period</p><p><br></p><p>(Sentiment Amplifier)</p></td><td colspan="1" rowspan="1"><p><strong>Metcalfe + L2 Ecosystem Correction Model</strong></p><p><br></p><p>(Metcalfe + TrustIndex)</p><p><br></p><p><em>Note: L2 TVL needs to be discounted by "Trust Spectrum"</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Activity:</strong> Active addresses on L1+L2, interaction frequency</p><p><br></p><p>• <strong>L2 Trust Spectrum:</strong> Dependency of different Stage L2s on L1</p><p><br></p><p>• <strong>Innovation Emergence:</strong> Explosion of AI Agents / Consumer apps</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Revenue Asset</strong></p><p><br></p><p>10%</p></td><td colspan="1" rowspan="1"><p><strong>Cash Flow Floor</strong></p><p><br></p><p>"Safety margin" provided by Gas/Blob fees, not a growth engine.</p></td><td colspan="1" rowspan="1"><p>Bear Market / Bottom Range</p><p><br></p><p>(Valuation Iron Bottom)</p></td><td colspan="1" rowspan="1"><p><strong>Min (P/S Ratio, Dividend Yield Model)</strong></p><p><br></p><p>(Minimum Value Principle)</p><p><br></p><p><em>Note: Only as a bear market bottom valuation reference</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Burn Rate:</strong> Deflation/Inflation boundary brought by EIP-1559</p><p><br></p><p>• <strong>DA Supply:</strong> Blob supply/demand balance after BPO upgrade</p><p><br></p><p>• <strong>L1 Revenue:</strong> 
"Minimum maintenance fee" to maintain neutrality</p></td></tr></tbody></table><p><br></p><h3 id="h-1-security-settlement-layer-core-value-anchor-45percent-increased-in-risk-off" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>1. Security Settlement Layer: Core Value Anchor (45%, Increased in Risk-Off)</strong></h3><p>We view the security settlement layer as Ethereum's most core source of value and assign it a 45% benchmark weight; this weight is further increased during periods of rising macro uncertainty or declining risk appetite. This judgment stems from Vitalik's latest definition of "truly scaling Ethereum": the essence of scaling is not increasing TPS, but creating block space fully backed by Ethereum itself. Any high-performance execution environment relying on external trust assumptions does not constitute an extension of the Ethereum entity.</p><p>Under this framework, ETH's value is mainly reflected as the credit premium of a global sovereign-less settlement layer, rather than protocol revenue. This premium is jointly supported by structural factors such as validator scale and degree of decentralization, long-term security record, institutional adoption, clarity of compliance paths, and protocol-endogenous Rollup verification mechanisms.</p><p>In specific pricing, we mainly use two complementary methods: <strong>Validator Economics</strong> (Yield Equilibrium Mapping) and <strong>Staking DCF</strong> (Perpetual Staking Discount), to jointly depict the institutional premium of ETH as the "Global Secure Settlement Layer."</p><ul><li><p><strong>Validator Economics (Yield Equilibrium Pricing):</strong> Based on the ratio of annualized staking cash flow per ETH to the target real yield, deriving a theoretical fair price. 
Fair Price = (Annual Staking Cash Flow per ETH) / Target Real Yield. This expression depicts the equilibrium relationship between yield and price, serving as a directional relative-valuation tool rather than an independent pricing model.</p></li><li><p><strong>Staking DCF (Perpetual Staking Discount):</strong> Views ETH as a long-term asset capable of generating sustainable real staking yields, discounting that cash flow in perpetuity: M_staking = Total Real Staking Cash Flow / (Discount Rate − Long-term Growth Rate); ETH Price (Staking) = M_staking / Circulating Supply. Essentially, this value layer does not benchmark the revenue capability of platform companies; it is closer to the settlement credit of a global clearing network.</p></li></ul><h3 id="h-2-monetary-attribute-settlement-and-collateral-35percent-dominant-in-utility-expansion" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>2. Monetary Attribute: Settlement and Collateral (35%, Dominant in Utility Expansion)</strong></h3><p>We view the monetary attribute as Ethereum's second core source of value and assign it a 35% benchmark weight; it becomes the main utility anchor in neutral markets or during on-chain economic expansion. This judgment is not based on the narrative that "ETH equals USD," but on ETH's structural role as the native settlement fuel and ultimate collateral asset of the on-chain financial system.
The security of stablecoin circulation, DeFi liquidation, and RWA settlement all rely on the settlement layer supported by ETH.</p><p>For pricing, we use an extended form of the Quantity Theory of Money (MV = PQ), but model ETH's usage scenarios in layers to address the order-of-magnitude differences in circulation velocity across different scenarios:</p><ol><li><p><strong>High-Frequency Settlement Layer (Gas Payment, Stablecoin Transfers)</strong></p><ul><li><p>M_transaction = Annual Transaction Settlement Volume / V_high</p></li><li><p>V_high ≈ 15-25 (based on historical on-chain data)</p></li></ul></li><li><p><strong>Medium-Frequency Financial Layer (DeFi Interaction, Lending Liquidation)</strong></p><ul><li><p>M_defi = Annual DeFi Settlement Volume / V_medium</p></li><li><p>V_medium ≈ 3-8 (based on the capital turnover of mainstream DeFi protocols)</p></li></ul></li><li><p><strong>Low-Frequency Collateral Layer (Staking, Restaking, Long-term Locking)</strong></p><ul><li><p>M_collateral = Total ETH Collateral Value × (1 + Liquidity Premium)</p></li><li><p>Liquidity Premium = 10-30% (Reflecting compensation for liquidity sacrifice)</p></li></ul></li></ol><h3 id="h-3-platform-network-effect-growth-option-10percent-bull-market-amplifier" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3. Platform / Network Effect: Growth Option (10%, Bull Market Amplifier)</strong></h3><p>Platform and network effects are viewed as growth options in Ethereum's valuation, assigned only a 10% weight and used to explain the non-linear premium brought by ecosystem expansion during bull market phases. We use a trust-corrected Metcalfe model to avoid weighting L2 assets of different security levels equally in the valuation:</p><ul><li><p><strong>Metcalfe Model:</strong> M_network = a × (Active Users)^b + m × Σ (L2 TVL_i × TrustScore_i)</p></li><li><p><strong>Platform/Network-Effect Valuation Price:</strong> ETH Price (Network) = M_network / Circulating Supply</p></li></ul><h3 id="h-4-revenue-asset-cash-flow-floor-10percent-bear-market-bottom" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>4.
Revenue Asset: Cash Flow Floor (10%, Bear Market Bottom)</strong></h3><p>We view protocol revenue as the cash flow floor in the Ethereum valuation system, rather than a growth engine, likewise assigning it a 10% weight. This layer mainly functions during bear markets or extreme risk phases to depict the lower limit of valuation.</p><p>Gas and Blob fees provide the minimum operating cost for the network and affect the supply structure through EIP-1559. For valuation, we use Price-to-Sales (P/S) and Fee Yield models and take the more conservative of the two, which serves only as a floor reference. As the mainnet continues to scale, the relative importance of protocol revenue declines, with its core role reflected as a safety margin during downturns.</p><ul><li><p><strong>Price-to-Sales Model (P/S Floor):</strong> ETH Price (PS) = M_PS / Circulating Supply</p></li><li><p><strong>Fee Yield Model:</strong> ETH Price (Yield) = M_Yield / Circulating Supply</p></li><li><p><strong>Cash Flow Floor Pricing (Minimum Value Principle):</strong> P_Revenue_Floor = min(P_PS, P_Yield)</p></li></ul><h2 id="h-iv-dynamic-calibration-macro-constraints-and-cycle-adaptation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. Dynamic Calibration: Macro Constraints and Cycle Adaptation</strong></h2><p>If the preceding sections established Ethereum's "intrinsic value pivot," this chapter introduces an "external environment adaptation system" independent of fundamentals. Valuation cannot operate in a vacuum and must be constrained by three major external factors: Macro Environment (Cost of Capital), Market Structure (Relative Strength), and On-Chain Sentiment (Crowdedness). On this basis, we construct a <strong>Regime Adaptation</strong> mechanism to dynamically adjust valuation weights across cycles—releasing option premiums during loose periods and retreating to the revenue floor during risk-off periods, thereby achieving a leap from static models to dynamic strategies.
<em>(Note: Due to space limitations, this article only presents the core logical framework of this mechanism.)</em></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Monitoring Dimension</strong></p></td><td colspan="1" rowspan="1"><p><strong>Key Indicator</strong></p></td><td colspan="1" rowspan="1"><p><strong>Interpretation Logic</strong></p></td><td colspan="1" rowspan="1"><p><strong>Dynamic Impact on Valuation Weights</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p><strong>A. Macro Environment</strong></p><p><br></p><p>(Determines Cost of Capital)</p></td><td colspan="1" rowspan="1"><p><strong>1. Dollar Liquidity</strong></p><p><br></p><p>(Net Liquidity)</p></td><td colspan="1" rowspan="1"><p>YoY Expansion (Ample Funds)</p></td><td colspan="1" rowspan="1"><p><strong>Release Option Value:</strong> When macro capital costs decrease, the market allows for higher valuation premiums. Valuation weight can be released towards "Platform/Network Effect" to capture non-linear growth.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>2. Real Yield</strong></p><p><br></p><p>(10Y Real Yield)</p></td><td colspan="1" rowspan="1"><p>Low or Declining (Holding Cost Drops)</p></td><td colspan="1" rowspan="1"><p><strong>Release Option Value:</strong> When macro capital costs decrease, the market allows for higher valuation premiums. Valuation weight can be released towards "Platform/Network Effect" to capture non-linear growth.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>3. Credit Spreads</strong></p><p><br></p><p>(HY OAS)</p></td><td colspan="1" rowspan="1"><p>Low and Stable (No Systemic Credit Stress)</p></td><td colspan="1" rowspan="1"><p><strong>Release Option Value:</strong> When macro capital costs decrease, the market allows for higher valuation premiums. 
Valuation weight can be released towards "Platform/Network Effect" to capture non-linear growth.</p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>B. Market Structure</strong></p><p><br></p><p>(Determines Relative Strength)</p><p><br></p><p><em>Trend Confirmation</em></p><br></td><td colspan="1" rowspan="1"><p><strong>4. ETH/BTC Exchange Rate</strong></p></td><td colspan="1" rowspan="1"><p>Trending Up (ETH Strengthening)</p></td><td colspan="1" rowspan="1"><p><strong>Confirm Asset Attribute:</strong> When relative strength indicators are positive and incremental funds (stablecoins) enter, confirming the return of the ETH narrative, the "Monetary Attribute" weight should be increased.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>5. Stablecoin Growth</strong></p></td><td colspan="1" rowspan="1"><p>Positive Growth (New Funds Entering)</p></td><td colspan="1" rowspan="1"><p><strong>Confirm Asset Attribute:</strong> When relative strength indicators are positive and incremental funds (stablecoins) enter, confirming the return of the ETH narrative, the "Monetary Attribute" weight should be increased.</p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>C. On-Chain Sentiment</strong></p><p><br></p><p>(Determines Crowdedness)</p><p><br></p><p><em>Sentiment Check</em></p></td><td colspan="1" rowspan="1"><p><strong>6. Funding Rate</strong></p></td><td colspan="1" rowspan="1"><p>Mildly Positive (No One-Sided Crowding)</p></td><td colspan="1" rowspan="1"><p><strong>Two-Way Risk Gate:</strong> When sentiment is too hot (extremely high rates) or too cold (panic deleveraging), it is a risk signal. Valuation logic should be forced to switch to "Revenue Floor / Defense Mode."</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>7. 
On-Chain Liquidation</strong></p></td><td colspan="1" rowspan="1"><p>Low and Stable (No Forced Deleveraging Risk)</p></td><td colspan="1" rowspan="1"><p><strong>Two-Way Risk Gate:</strong> When sentiment is too hot (extremely high rates) or too cold (panic deleveraging), it is a risk signal. Valuation logic should be forced to switch to "Revenue Floor / Defense Mode."</p></td></tr></tbody></table><p><br></p><h2 id="h-v-the-conditional-path-for-the-institutional-second-curve" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. The Conditional Path for the Institutional Second Curve</strong></h2><p>The analysis above is based on crypto-internal technical, valuation, and cycle logic. This chapter discusses a question at a different level: when ETH is no longer priced solely by crypto-native funds but is gradually integrated into the traditional financial system, how will its pricing power, asset attributes, and risk structure change? The "Institutional Second Curve" is not an extension of existing logic, but a redefinition of Ethereum by exogenous forces:</p><ul><li><p><strong>Change in Asset Attribute (Beta → Carry):</strong> Spot ETH ETFs solve compliance and custody issues but remain, in essence, price exposure; the future advancement of Staking ETFs will, for the first time, introduce on-chain yields into the institutional system through compliant vehicles. ETH thus shifts from a "non-interest-bearing high-volatility asset" to an "allocation asset with predictable yield," expanding potential buyers from trading funds to pension, insurance, and long-term accounts sensitive to yield and duration.</p></li><li><p><strong>Change in Usage (Holding → Using):</strong> Institutions may no longer view ETH merely as a tradable ticker, but start using it as settlement and collateral infrastructure.
Whether it's JPMorgan's tokenized funds or the deployment of compliant stablecoins and RWAs on Ethereum, both indicate that demand for ETH is shifting from "Holding Demand" to "Running Demand"—institutions not only hold ETH but use it for settlement, clearing, and risk management.</p></li><li><p><strong>Change in Tail Risk (Uncertainty → Pricing):</strong> As stablecoin regulatory frameworks (like the GENIUS Act) are gradually established, and as transparency in Ethereum's roadmap and governance increases, the regulatory and technical uncertainties to which institutions are most sensitive are being systematically compressed. This means uncertainty is starting to be priced, rather than avoided.</p></li></ul><p>The so-called "Institutional Second Curve" is a change in the <strong>nature of demand</strong>: it provides a real demand source for the "Security Settlement Layer + Monetary Attribute" valuation logic, driving ETH's transition from a sentiment-driven speculative asset to a foundational asset carrying both allocation and functional needs.</p><h2 id="h-vi-conclusion-value-anchoring-in-the-darkest-hour" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Conclusion: Value Anchoring in the Darkest Hour</strong></h2><p>Over the past week, the industry has undergone a severe deleveraging washout, with market sentiment dropping to the freezing point—undoubtedly a "darkest hour" for the crypto world. Pessimism is spreading among practitioners, and Ethereum, as the asset most representative of the crypto spirit, sits in the eye of the storm of controversy.</p><p>However, as rational observers, we need to pierce through the fog of panic: what Ethereum is currently experiencing is not a "collapse of value," but a profound "migration of its pricing anchor."
With L1 scaling advancing directly, L2s being redefined as a spectrum of networks at different trust levels, and protocol revenue actively giving way to system security and neutrality, ETH's pricing logic has structurally shifted to "Security Settlement Layer + Native Monetary Attribute."</p><p>Against a backdrop of high macro real interest rates, liquidity that has not yet loosened, and on-chain growth options not yet being priced by the market, ETH's price naturally converges to a structural value range supported by settlement certainty, verifiable yield, and institutional consensus. This range is not a sentiment bottom, but a value pivot after stripping away platform growth premiums.</p><p>As long-term builders of the Ethereum ecosystem, we refuse to be "mindless bulls" for ETH. Instead, we use a rigorous logical framework to carefully argue our view: only when macro liquidity, risk appetite, and network effects simultaneously satisfy the trigger conditions of a given market state will higher valuations be priced back in by the market.</p><p>Therefore, for long-term investors, the critical question now is not to anxiously ask "Can Ethereum still go up?" but to recognize clearly: in the current environment, which layer of core value are we buying at a "floor price"?</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Market State</strong></p></td><td colspan="1" rowspan="1"><p><strong>Trigger Conditions&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p><strong>Dominant Weight</strong></p></td><td colspan="1" rowspan="1"><p><strong>Valuation Range</strong></p></td><td colspan="1" rowspan="1"><p><strong>Investment Logic Explanation</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pressure/Defense</strong></p><p><br></p><p>(Risk-Off)</p></td><td colspan="1" rowspan="1"><p>• <strong>Liquidity:</strong> USD contraction / Credit spreads widening</p><p>• <strong>Real
Rate:</strong> &gt; 2.5% (Capital expensive)</p><p>• <strong>Sentiment:</strong> Funding rate negative</p></td><td colspan="1" rowspan="1"><p><strong>Security + Revenue Floor</strong></p><p><br></p><p>50% / 25% / 0% / 25%</p></td><td colspan="1" rowspan="1"><p><strong>$1,700 – $2,400</strong></p></td><td colspan="1" rowspan="1"><p><strong>Survival Pricing:</strong> Market rejects all "growth narratives," pricing ETH only as the "Safest Settlement Asset" and its cash flow floor (P/S). Suitable for heavy left-side positioning.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Neutral/Allocation</strong></p><p><br></p><p>(Neutral Base Case)</p></td><td colspan="1" rowspan="1"><p>• <strong>Liquidity:</strong> Stopped falling &amp; stabilized</p><p>• <strong>Real Rate:</strong> Stable or mildly declining</p><p>• <strong>Structure:</strong> ETH/BTC rate stabilizing</p></td><td colspan="1" rowspan="1"><p><strong>Security + Monetary Attribute</strong></p><p><br></p><p>50% / 40% / 0% / 10%</p></td><td colspan="1" rowspan="1"><p><strong>$2,200 – $2,800</strong></p></td><td colspan="1" rowspan="1"><p><strong>Institutional Pricing:</strong> ETH returns to the value pivot of "Secure Settlement Layer + Native Collateral." Institutional funds complete positioning in this range, awaiting trend confirmation.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Aggressive/Expansion</strong></p><p><br></p><p>(Risk-On)</p></td><td colspan="1" rowspan="1"><p>• <strong>Liquidity:</strong> Significant YoY expansion</p><p>• <strong>Real Rate:</strong> &lt; 2.0% (Capital cheap)</p><p>• <strong>On-Chain:</strong> Significant stablecoin increase</p></td><td colspan="1" rowspan="1"><p><strong>Security + Platform Option</strong></p><p><br></p><p>35% / 30% / 35% / 5%</p></td><td colspan="1" rowspan="1"><p><strong>$3,200 – $4,500</strong></p></td><td colspan="1" rowspan="1"><p><strong>Option Release:</strong> Capital costs drop, "Network Effect" weight increases non-linearly (10%→35%).
Market starts paying high premiums for L2 prosperity and app explosion.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Overheating/Bubble</strong></p><p><br></p><p>(Euphoria)</p></td><td colspan="1" rowspan="1"><p>• <strong>Sentiment:</strong> Extreme funding rates</p><p><br></p><p>• <strong>Liquidation:</strong> Daily liquidation volume surges</p><p><br></p><p>• <strong>Narrative:</strong> Price completely detached from fundamentals</p></td><td colspan="1" rowspan="1"><p><strong>Platform/Network Effect Dominant</strong></p><p><br></p><p>(Fundamental factors fail)</p><p><br></p><p>20% / 15% / 65% / 0%</p></td><td colspan="1" rowspan="1"><p><strong>&gt; $4,500</strong></p><p><br></p><p>(Unstable)</p></td><td colspan="1" rowspan="1"><p><strong>Irrational Exuberance:</strong> Price no longer reflects intrinsic value but liquidity spillover. The <strong>Kill Switch</strong> should be engaged, taking profit in batches or hedging.</p></td></tr></tbody></table><p><br><strong>Disclaimer:</strong> This article was written with the assistance of AI tools such as ChatGPT-5.2, Gemini 3, and Claude Opus 4.5. The author has made every effort to proofread the text and ensure its accuracy, but some omissions are inevitable. Note in particular that crypto asset markets routinely exhibit deviations between project fundamentals and secondary-market price performance. The content of this article is intended solely for information consolidation and academic/research exchange; it does not constitute investment advice and should not be read as a recommendation of any token.</p>]]></content:encoded>
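The scenario table above maps each market state to a set of layer weights. As a minimal illustrative sketch (not part of the original article), the regime-weighted four-quadrant blend can be expressed in Python; the weights are copied from the article's scenario table, while the per-layer fair prices are hypothetical placeholders:

```python
# Illustrative sketch of the regime-weighted four-quadrant valuation blend.
# Weights are reproduced from the article's scenario table; the per-layer
# prices passed in below are hypothetical placeholder inputs, not estimates.

REGIME_WEIGHTS = {
    # (security, monetary, platform, revenue_floor)
    "risk_off": (0.50, 0.25, 0.00, 0.25),
    "neutral":  (0.50, 0.40, 0.00, 0.10),
    "risk_on":  (0.35, 0.30, 0.35, 0.05),   # as published in the table
    "euphoria": (0.20, 0.15, 0.65, 0.00),
}

def blended_price(layer_prices, regime):
    """Weighted sum of per-layer ETH fair prices for a given market regime."""
    weights = REGIME_WEIGHTS[regime]
    return sum(p * w for p, w in zip(layer_prices, weights))

# Hypothetical per-layer prices in USD: security, monetary, platform, revenue floor
layers = (3000.0, 2500.0, 4000.0, 1800.0)
print(round(blended_price(layers, "neutral"), 2))  # prints 2680.0
```

Switching the regime argument shifts the blend toward the platform option in risk-on states and toward the revenue floor in risk-off states, which is the mechanism the dynamic-calibration chapter describes.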
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>eth</category>
            <category>valuation</category>
            <category>security</category>
            <category>settlement</category>
            <category>rollup</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/2b39ac0752f363ec9c67ce311421cfb6f982d8317328235e91f4b0223578ff44.jpg" length="0" type="image/jpeg"/>
        </item>
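The layered MV = PQ money-demand model described in the article above can be sketched as follows; the velocities and liquidity premium sit inside the ranges the article cites, while the volume and supply inputs are illustrative assumptions, not measured data:

```python
# A minimal sketch of the article's layered MV = PQ money-demand model.
# Velocity defaults (v_high, v_medium) and the liquidity premium fall within
# the article's stated ranges; all volume/supply inputs are hypothetical.

def monetary_layer_price(tx_volume, defi_volume, collateral_value,
                         circulating_supply, v_high=20.0, v_medium=5.0,
                         liquidity_premium=0.2):
    """Sum layered ETH money demand and divide by supply to get a price."""
    m_transaction = tx_volume / v_high            # high-frequency settlement layer
    m_defi = defi_volume / v_medium               # medium-frequency financial layer
    m_collateral = collateral_value * (1.0 + liquidity_premium)  # collateral layer
    return (m_transaction + m_defi + m_collateral) / circulating_supply

# Hypothetical annual settlement volumes and collateral value in USD, supply in ETH
price = monetary_layer_price(
    tx_volume=8e12, defi_volume=2e12, collateral_value=1.2e11,
    circulating_supply=1.2e8)
print(round(price, 2))  # prints 7866.67
```

Splitting demand by velocity matters because high-velocity settlement flows bind far less ETH per dollar of volume than slow-moving collateral, which is exactly why the article models the three layers separately.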
        <item>
            <title><![CDATA[Ethereum Repriced: From Rollup-Centric to the "Security Settlement Layer"]]></title>
            <link>https://paragraph.com/@0xjacobzhao/以太坊再定价：从-rollup-centric-到安全性结算层</link>
            <guid>d6DRJxXYvEIPTGLPlzzz</guid>
            <pubDate>Tue, 10 Feb 2026 04:22:20 GMT</pubDate>
            <description><![CDATA[Ethereum is entering an "L1-first + L2 trust spectrum" paradigm, in which a secure and neutral settlement layer becomes Ethereum's core value. We design four value quadrants for Ethereum (security, money, platform, revenue) and introduce a dynamically calibrated valuation framework with macro constraints and cycle adaptation. ETH may also be gradually absorbed into the traditional financial system, a redefinition of Ethereum by exogenous forces. As rational observers, we must pierce the fog of today's market panic: what Ethereum is experiencing is not a "collapse of value" but a "migration of its pricing anchor."]]></description>
            <content:encoded><![CDATA[<p>On February 3, 2026, Vitalik published an important reflection on X about Ethereum's scaling roadmap. As the practical difficulty of Layer 2s evolving into fully decentralized forms is re-recognized, and as the mainnet's own throughput is expected to rise sharply over the coming years, <strong>the original assumption of relying purely on L2s for throughput scaling is being revised</strong>. L1 and L2 are forming a new "settlement-service" collaborative paradigm: L1 focuses on providing the highest grade of security, censorship resistance, and settlement sovereignty, while L2s evolve into "differentiated service providers" (e.g., privacy, AI, high-frequency trading). Ethereum's strategic center of gravity is returning to the mainnet itself, reinforcing its position as the <strong>world's most trustworthy settlement layer</strong>. Scaling is no longer the sole goal; <strong>security, neutrality, and predictability</strong> have once again become Ethereum's core assets.</p><p>&nbsp;<strong>Key changes:</strong></p><ul><li><p><strong>Ethereum is entering an "L1-first paradigm":</strong> With the mainnet scaling directly and fees continuing to fall, the original assumption that L2s would carry the core burden of scale no longer holds.</p></li><li><p><strong>L2s are no longer "branded shards" but a trust spectrum:</strong> L2 decentralization has progressed far more slowly than expected, and L2s cannot uniformly inherit Ethereum's security; their role is being redefined as a spectrum of networks at different trust levels.</p></li><li><p><strong>Ethereum's core value is shifting from "traffic" to "settlement sovereignty":</strong> ETH's value is no longer limited to Gas or Blob revenue; it lies in its institutional premium as the world's most secure EVM settlement layer and a native monetary asset.</p></li><li><p><strong>The scaling strategy is turning protocol-endogenous:</strong> On top of continued direct L1 scaling, the exploration of protocol-native verification and security mechanisms may reshape the L1-L2 security boundary and value-capture structure.</p></li><li><p><strong>The valuation framework is migrating structurally:</strong> The weights of security and institutional credibility are rising markedly while those of fees and platform effects fall; ETH's pricing is moving from cash-flow models to asset-premium models.</p></li></ul><p>This article analyzes the <strong>paradigm shift and valuation reconstruction</strong> of Ethereum's pricing model in three layers: <strong>facts</strong> (technical and institutional changes that have already occurred), <strong>mechanisms</strong> (their impact on value capture and pricing logic), and <strong>projections</strong> (their implications for allocation and risk-reward).</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Return to the Origin: Ethereum's Values</strong></h2><p>The key to understanding Ethereum's long-term value lies not in short-term price swings but in its <strong>consistent design philosophy and value orientation</strong>.</p><ul><li><p><strong>Credible neutrality</strong>: Ethereum's core goal is not efficiency or profit maximization, but to be a credibly neutral infrastructure: rules that are public and predictable, favoring no participant, controlled by no single entity, open to permissionless participation by anyone. The security of ETH and its on-chain assets ultimately rests on the protocol itself, not on any institution's credit.</p></li><li><p><strong>Ecosystem first, not revenue first</strong>: Ethereum's key upgrades reflect a consistent decision logic: deliberately giving up short-term protocol revenue in exchange for lower usage costs, a larger ecosystem, and stronger system resilience. The goal is not to "collect tolls" but to become the irreplaceable neutral settlement and trust foundation of the digital economy.</p></li><li><p><strong>Decentralization as a means</strong>: The mainnet focuses on the highest grade of security and finality, while Layer 2 networks sit on a spectrum of varying connection to the mainnet: some inherit mainnet security and pursue efficiency, others position their value around differentiated functionality. This lets the system serve both global settlement and high-performance applications, rather than L2 "branded shards."</p></li><li><p><strong>A long-termist technical roadmap</strong>: Ethereum adheres to a slow but certain evolutionary path, prioritizing system security and credibility. From the PoS transition to subsequent scaling and confirmation-mechanism optimizations, its roadmap pursues <strong>sustainable, verifiable, and irreversible correctness</strong>.</p></li></ul><p><strong>Security Settlement Layer:</strong> the Ethereum mainnet's provision of irreversible finality for Layer 2s and on-chain assets through decentralized validators and consensus mechanisms.</p><p>This <strong>security settlement layer</strong> positioning marks <strong>the establishment of "settlement sovereignty": Ethereum's shift from a "confederation" to a "federation," the "constitutional moment" in the founding of Ethereum as a digital nation, and a major upgrade of Ethereum's architecture and core.</strong></p><p>After the American Revolutionary War, under the Articles of Confederation, the 13 states were a loose alliance: each printed its own currency and levied tariffs on the others, and every state free-rode, enjoying common defense while refusing to pay for it, enjoying the union's brand while governing separately. This structural problem eroded national credit and prevented unified foreign trade, severely hampering the economy.</p><p>1787 was America's "constitutional moment": the new Constitution granted the federal government three key powers: direct taxation, regulation of interstate commerce, and a unified currency. But what truly brought the federal government to life was Hamilton's 1790 economic program: federal assumption of state debts, redemption at face value to rebuild national credit, and <strong>the establishment of a national bank as the financial hub</strong>. The unified market unleashed economies of scale, national credit attracted more capital, and infrastructure gained financing capacity. America went from 13 mutually fortified small states to the world's largest economy.</p><p>Today's Ethereum ecosystem faces <strong>exactly the same structural dilemma</strong>.</p><p>Each L2 is like a "sovereign state" with its own user base, liquidity pools, and governance token. Liquidity is cut into fragments, cross-L2 interaction carries heavy friction, and L2s enjoy Ethereum's security layer and brand without returning value to L1. Locking liquidity on its own chain is short-term rational for each L2, but when every L2 does so, the entire Ethereum ecosystem loses its most central competitive advantage.</p><p><strong>The roadmap Ethereum is now advancing is, in essence, its constitutional convention and the building of a central economic system, that is, the establishment of "settlement sovereignty":</strong></p><ul><li><p><strong>Native Rollup Precompile = the federal constitution.</strong> L2s can freely build differentiated functionality outside the EVM, while the EVM portion can obtain Ethereum-grade security verification through the native precompile. Opting out remains possible, but at the cost of losing trustless interoperability with the Ethereum ecosystem.</p></li><li><p><strong>Synchronous Composability = the unified market.</strong> Through mechanisms such as the native rollup precompile, trustless interoperability and synchronous composability between L2s, and between L2 and L1, are becoming possible. This directly removes the "interstate trade barriers"; liquidity is no longer trapped on isolated islands.</p></li><li><p><strong>Rebuilt L1 value capture = the federal power to tax.</strong> When all critical cross-L2 interactions return to L1 for settlement, ETH once again becomes the settlement hub and trust anchor of the entire ecosystem. Whoever controls the settlement layer captures the value.</p></li></ul><p><strong>Ethereum is using a unified settlement and verification system to turn a fragmented L2 ecosystem into an irreplaceable "digital nation." This is a historical inevitability. The transition may, of course, be slow,</strong> but history tells us that once such a transition completes, the network effects released far exceed the linear growth of the fragmented era. America used a unified economic system to turn 13 small states into the world's largest economy. <strong>Ethereum will likewise turn its loose L2 ecosystem into the largest security settlement layer, and ultimately a carrier of global finance.</strong></p><p><strong>Ethereum's Core Upgrade Roadmap and Valuation Impact (2025-2026)</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Upgrade</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Status</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Features&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Valuation and Strategic Impact&nbsp;</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pectra</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p>2025-05-07</p></td><td colspan="1" rowspan="1"><p>• <strong>EIP-7702</strong>: Account abstraction (programmable EOAs)</p><p>• <strong>EIP-7251 (MaxEB)</strong>: Validator balance cap raised to 2048 ETH</p><p>• <strong>Blob parameters</strong>: Target 6 / Max 9</p></td><td colspan="1" rowspan="1"><p><strong>UX and capital-efficiency gains</strong></p><p>Lowers operational complexity for large institutions (validator consolidation), improves the wallet onboarding experience, and clears obstacles for large-scale capital inflows after the ETFs.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Fusaka</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p>2025-12-03</p></td><td colspan="1" rowspan="1"><p>• <strong>PeerDAS</strong>: Introduces data availability sampling</p><p>• <strong>DoS hardening</strong>: Per-transaction gas cap of about 16.7M</p><p>• <strong>Execution layer</strong>: Gas limit raised</p></td><td colspan="1" rowspan="1"><p><strong>Engineering of controlled L1 scaling</strong></p><p>Throughput constraints are greatly relaxed and the settlement layer's "physical capacity" moves up; DoS protection strengthens the network's security resilience in its scaled state.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>BPO 1 &amp; 2</strong></p><p><br></p><p><em>(Blob Only)</em></p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Completed</p><p>2025-12-09</p><p>2026-01-07</p></td><td colspan="1" rowspan="1"><p>• <strong>BPO 1</strong>: Blob Target 10 / Max 15</p><p>• <strong>BPO 2</strong>: Blob Target 14 / Max 21</p><p><em>(lightweight forks adjusting only blob parameters)</em></p></td><td colspan="1" rowspan="1"><p><strong>Institutionalized DA supply expansion</strong></p><p>DA supply rises versus pre-Fusaka levels, structurally shifting the L2 cost curve downward and consolidating Ethereum's dominant position as the modular base layer.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Glamsterdam</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Planned</p><p>2026 (TBD)</p></td><td colspan="1" rowspan="1"><p>• <strong>Headliners</strong>: ePBS (enshrined PBS) + BALs</p><p>• <strong>Non-headliners</strong>: Remaining features still under discussion</p><br></td><td colspan="1" rowspan="1"><p><strong>Reinforced neutrality premium</strong></p><p>ePBS further removes centralized relay risk and strengthens censorship resistance; the remaining incremental value depends on the feature set finally included.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Hegota</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="blue_circle" class="emoji" data-type="emoji">🔵</span> Candidate</p><p>2026 (TBD)</p></td><td colspan="1" rowspan="1"><p>• <strong>Status</strong>: Headliner not yet finalized</p><p>• <strong>Candidate directions</strong>: Verkle Trees, State Expiry, the FOCIL anti-censorship mechanism, etc.</p></td><td colspan="1" rowspan="1"><p><strong>Decentralization-resilience narrative</strong></p><p>Aims to address state bloat and reduce node burden; <em>note: delivery of these research items is not guaranteed to coincide with the 2026 fork.</em></p></td></tr></tbody></table><p><br></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. A Valuation Pitfall: Why Ethereum Should Not Be Treated as a "Tech Company"</strong></h2><p>Applying traditional corporate valuation models (P/E, DCF, EV/EBITDA) to Ethereum is, at root, a <strong>category error</strong>. Ethereum is not a profit-maximizing company but an open infrastructure for the digital economy. Companies pursue the maximization of shareholder value; Ethereum pursues the maximization of ecosystem scale, security, and censorship resistance. To that end, Ethereum has repeatedly and deliberately suppressed protocol revenue (for example, EIP-4844's introduction of blob DA structurally lowered L2 data-publishing costs and reduced L1 fee revenue from rollup data). From a corporate perspective this approximates "revenue self-destruction"; from an infrastructure perspective, it trades short-term fees for a long-term neutrality premium and network effects.</p><p>A sounder framework treats Ethereum as a <strong>globally neutral settlement and consensus layer</strong> that provides security, finality, and credible coordination for the digital economy. ETH's value rests on multiple structural demands: the rigid demand for final settlement, the scale of on-chain finance and stablecoins, the supply effects of staking and burning, and the long-term, sticky capital brought by institutional adoption such as ETFs, corporate treasuries, and RWAs.</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Similarity</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Difference</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Internet protocols (TCP/IP)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Open, unowned, usable by anyone</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has a native asset, ETH</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Global settlement network (SWIFT)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Final settlement layer for financial transactions</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum is decentralized and runs 24/7</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Cloud computing platform (AWS)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provides compute and storage infrastructure</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has no single owner and resists censorship</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Sovereign currency issuer</p></td><td colspan="1" rowspan="1"><p style="text-align: center">ETH as the "base money" of the on-chain economy</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum has no government backing and is usable worldwide</p></td></tr></tbody></table><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Paradigm Reconstruction: Finding a Pricing Anchor Beyond Cash Flow</strong></h2><p>In late 2025, the Hashed team launched ethval.com, a detailed collection of reproducible quantitative models for Ethereum, but traditional static models struggle to capture the sharp turn in Ethereum's 2026 narrative. We therefore reuse its systematic, transparent, and reproducible underlying models (covering revenue, money, network effects, and supply structure) while reshaping the <strong>valuation architecture</strong> and <strong>weighting logic</strong>:</p><ol><li><p><strong>Structural reconstruction:</strong> Map the models onto the four value quadrants of <strong>"security, money, platform, revenue"</strong> and price them as a categorized sum.</p></li><li><p><strong>Weight rebalancing:</strong> Significantly raise the weights of security and settlement premium, and de-emphasize the marginal contribution of protocol revenue and L2 expansion.</p></li><li><p><strong>A risk-control overlay:</strong> Introduce circuit breakers aware of macro and on-chain risk, giving the valuation framework cross-cycle adaptability.</p></li><li><p><strong>Removal of "circular reasoning":</strong> Models with current-price inputs (such as Staking Scarcity and Liquidity Premium) are no longer used as fair-value anchors; they are retained only as indicators for position sizing and risk appetite.</p></li></ol><p>Note: the models below are not meant for precise price targets; they characterize the relative pricing direction of different value sources across cycles.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Benchmark Weight</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Definition</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Cycle</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pricing Model&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Observation Indicators</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Security Settlement Layer</strong></p><p><br></p><p><strong>45%</strong></p></td><td colspan="1" rowspan="1"><p><strong>Institutional premium</strong></p><p>ETH's credit pricing as the "world's most secure EVM final settlement layer."</p></td><td colspan="1" rowspan="1"><p><strong>Neutral / institutional allocation period</strong></p><p><em>(core anchor)</em></p></td><td colspan="1" rowspan="1"><p><strong>Validator Econ + Staking DCF</strong></p><p><em>(discounting based on real yield)</em></p><p><em>Note: uses Real Yield (nominal return minus inflation)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Real yield</strong>: on-chain benchmark against US Treasury real rates</p><p>• <strong>L1 scaling pace</strong>: physical capacity after Fusaka/BPO</p><p>• <strong>Censorship-resistance expectations</strong>: rollout progress of neutral components such as ePBS</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Monetary Attribute</strong></p><p><br></p><p><strong>35%</strong></p></td><td colspan="1" rowspan="1"><p><strong>Native collateral</strong></p><p>ETH as the "base collateral + settlement fuel" of on-chain finance and the stablecoin system.</p></td><td colspan="1" rowspan="1"><p><strong>Neutral / utility expansion period</strong></p><p><em>(utility anchor)</em></p></td><td colspan="1" rowspan="1"><p><strong>MV = PQ + collateral premium</strong></p><p><em>(a Quantity Theory of Money variant)</em></p><p><em>Note: includes ETH transfers and ecosystem-wide settlement demand</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Collateral penetration</strong>: share of ETH locked in lending/derivatives</p><p>• <strong>Settlement scale</strong>: annual settlement volume of stablecoins and RWAs</p><p>• <strong>Restaking structure</strong>: the balance of liquidity and security in LRTs</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Platform / Network Effect</strong></p><p><br></p><p><strong>10%</strong></p></td><td colspan="1" rowspan="1"><p><strong>Growth option</strong></p><p>Non-linear premium from ecosystem prosperity (similar to the growth component of a tech stock).</p></td><td colspan="1" rowspan="1"><p><strong>Bull market / bubble period</strong></p><p><br></p><p><em>(sentiment amplifier)</em></p></td><td colspan="1" rowspan="1"><p><strong>Metcalfe + L2 ecosystem correction model</strong></p><p><em>(Metcalfe + TrustIndex)</em></p><p><em>Note: L2 TVL must be discounted through the "trust spectrum"</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Activity</strong>: active addresses on L1+L2, interaction frequency</p><p>• <strong>L2 trust spectrum</strong>: dependence of L2s at different Stages on L1</p><p>• <strong>Innovation emergence</strong>: the explosion of AI Agents / consumer applications</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Revenue Asset</strong></p><p><br></p><p><strong>10%</strong></p></td><td colspan="1" rowspan="1"><p><strong>Cash flow floor</strong></p><p>The "safety margin" provided by Gas/Blob fees, not a growth engine.</p></td><td colspan="1" rowspan="1"><p><strong>Bear market / bottom range</strong></p><p><em>(valuation hard floor)</em></p></td><td colspan="1" rowspan="1"><p><strong>Min (P/S ratio, dividend-yield model)</strong><em>(minimum-value principle)</em></p><p><em>Note: only a bear-market floor valuation reference</em></p></td><td colspan="1" rowspan="1"><p>• <strong>Burn rate</strong>: the deflation/inflation boundary created by EIP-1559</p><p>• <strong>DA supply</strong>: blob supply/demand balance after the BPO upgrades</p><p>• <strong>L1 revenue</strong>: the "minimum maintenance fee" for preserving neutrality</p></td></tr></tbody></table><p><br></p><h3 id="h-1-45percent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>1. Security Settlement Layer: Core Value Anchor (45%, Raised in Risk-Off Periods)</strong></h3><p>We regard the <strong>security settlement layer</strong> as Ethereum's most central source of value and assign it a 45% benchmark weight; the weight is raised further when macro uncertainty rises or risk appetite falls. This judgment stems from Vitalik's latest definition of "truly scaling Ethereum": the essence of scaling is not raising TPS, but creating <strong>block space fully backed by Ethereum itself</strong>. Any high-performance execution environment that relies on external trust assumptions does not constitute an extension of Ethereum itself.</p><p>Within this framework, ETH's value is expressed mainly as the <strong>credit premium of a global sovereignless settlement layer</strong>, rather than as protocol revenue. That premium is jointly supported by structural factors such as validator scale and degree of decentralization, the long-term security record, institutional adoption, clarity of compliance paths, and protocol-endogenous rollup verification mechanisms.</p><p>For concrete pricing we mainly use two complementary methods, <strong>Validator Economics (yield-equilibrium mapping)</strong> and <strong>Staking DCF (perpetual staking discount)</strong>, to jointly depict ETH's institutional premium as the "global secure settlement layer."</p><ul><li><p><strong>Validator Economics (yield-equilibrium pricing)</strong>: derive a theoretical fair price from the ratio of annualized staking cash flow per ETH to a target real yield:</p></li></ul><p>Fair Price = (Annual Staking Cash Flow per ETH) / Target Real Yield</p><p>This expression depicts the equilibrium relationship between yield and price and serves as a directional relative-valuation tool, not an independent pricing model.</p><ul><li><p><strong>&nbsp;Staking DCF (perpetual staking discount)</strong>: treat ETH as a long-term asset that sustainably generates real staking yield and discount its cash flow in perpetuity:</p></li></ul><p>M_staking = Total Real Staking Cash Flow / (Discount Rate − Long-term Growth Rate)</p><p>ETH Price (staking) = M_staking / Circulating Supply</p><p>In essence, this value layer does not benchmark the revenue capability of platform companies; it resembles <strong>the settlement credit of a global clearing network</strong>.</p><h3 id="h-2-35percent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><br><strong>2. Monetary Attribute: Settlement and Collateral (35%, Dominant in Utility Expansion)</strong></h3><p>We regard the <strong>monetary attribute</strong> as Ethereum's second core source of value and assign it a 35% benchmark weight; it becomes the main utility anchor in neutral markets or phases of on-chain economic expansion. This judgment is not based on an "ETH equals the dollar" narrative, but on ETH's structural role as the <strong>native settlement fuel and ultimate collateral asset of the on-chain financial system</strong>. The security of stablecoin circulation, DeFi liquidation, and RWA settlement all rely on the settlement layer that ETH supports.</p><p>For pricing we adopt an extended form of the Quantity Theory of Money (MV = PQ), but model ETH's usage scenarios <strong>in layers</strong> to handle the order-of-magnitude differences in velocity across scenarios. The layered money-demand model:</p><ol><li><p><strong>High-frequency settlement layer</strong> (gas payments, stablecoin transfers)</p><ul><li><p>M_transaction = Annual Transaction Settlement Volume / V_high</p></li><li><p>V_high ≈ 15-25 (based on historical on-chain data)</p></li></ul></li><li><p><strong>Medium-frequency financial layer</strong> (DeFi interaction, lending liquidation)</p><ul><li><p>M_defi = Annual DeFi Settlement Volume / V_medium</p></li><li><p>V_medium ≈ 3-8 (based on the capital turnover of mainstream DeFi protocols)</p></li></ul></li><li><p><strong>Low-frequency collateral layer</strong> (staking, restaking, long-term locking)</p><ul><li><p>M_collateral = Total ETH Collateral Value × (1 + Liquidity Premium)</p></li><li><p>Liquidity Premium = 10-30% (compensation for sacrificed liquidity)</p></li></ul></li></ol><h3 id="h-3-10percent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3. Platform / Network Effect: Growth Option (10%, Bull-Market Amplifier)</strong></h3><p>Platform and network effects are treated as a <strong>growth option</strong> in Ethereum's valuation and given only a 10% weight, used to explain the non-linear premium from ecosystem expansion in bull phases. We use a trust-corrected Metcalfe model to avoid counting L2 assets of different security levels into the valuation at equal weight:</p><ul><li><p><strong>Metcalfe model</strong>: M_network = a × (Active Users)^b&nbsp; +&nbsp; m × Σ (L2 TVL_i × TrustScore_i)</p></li><li><p><strong>Platform/network-effect valuation price</strong>: ETH Price (network) = M_network / Circulating Supply</p></li></ul><h3 id="h-4-10percent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>4.
收入资产：现金流地板（10%，熊市托底）</strong></h3><p>我们将协议收入视为以太坊估值体系中的<strong>现金流地板</strong>，而非增长引擎，同样赋予 10% 权重。该层主要在熊市或极端风险阶段发挥作用，用于刻画估值下限。</p><p>Gas 与 Blob 费用为网络提供最低运作成本，并通过 EIP-1559 影响供给结构。估值上，我们采用市销率与费用收益率模型，并取其中的保守值，仅作为底部参考。随着主网持续扩容，协议收入的重要性相对下降，其核心作用体现在下行阶段的安全边际。</p><ul><li><p><strong>市销率模型（P/S Floor）</strong>：M_PS = Annual Protocol Revenue × P/S_multiple</p></li><li><p><strong>市销率估值价格</strong>：ETH Price (PS) = M_PS / Circulating Supply</p></li><li><p><strong>费用收益率模型</strong>：M_Yield = Annual Protocol Revenue / Target Fee Yield</p></li><li><p><strong>费用收益估值价格</strong>：ETH Price(Yield) = M_Yield / Circulating Supply</p></li><li><p><strong>现金流地板定价（取两者极小值）</strong>：P_Revenue_Floor = min(P_PS , P_Yield)</p></li></ul><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>四、动态校准：宏观约束与周期适配</strong></h2><p>如果说前文确立了以太坊的“内在价值中枢”<strong>，本章则引入一套独立于基本面的</strong>“外在环境适配系统”<strong>。估值无法真空运行，必须受制于宏观环境</strong>（资金成本）、<strong>市场结构</strong>（相对强弱）与<strong>链上情绪</strong>（拥挤度）三大外部约束。基于此，我们构建了<strong>状态适配（Regime Adaptation）机制</strong>，在不同周期动态调整估值权重——宽松期释放期权溢价，避险期退守收入地板，从而实现从静态模型到动态策略的跨越。（注：限于篇幅，本文仅展示该机制的核心逻辑框架。）</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>监测维度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>关键指标</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>解读逻辑</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>对估值权重的动态影响</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p><strong>A. 宏观环境</strong></p><p><em>(决定资金成本)</em></p><p><em>The First Constraint</em></p></td><td colspan="1" rowspan="1"><p><strong>1. 
美元流动性</strong></p><p><em>(Net Liquidity)</em></p></td><td colspan="1" rowspan="1"><p><strong>同比扩张</strong></p><p><em>(资金面宽裕)</em></p></td><td colspan="1" rowspan="3"><p><strong>释放期权价值：</strong>当宏观资金成本降低时，市场<strong>允许</strong>更高的估值溢价。此时估值权重可向「平台/网络效应」释放，捕捉非线性增长。</p></td></tr><tr><td colspan="1" rowspan="1"><p>2. 实际利率</p><p><em>(10Y Real Yield)</em></p></td><td colspan="1" rowspan="1"><p>低位或下行</p><p><em>(持有成本下降)</em></p></td></tr><tr><td colspan="1" rowspan="1"><p>3. 信用利差</p><p><em>(HY OAS)</em></p></td><td colspan="1" rowspan="1"><p>低位且稳定</p><p><em>(无系统性信用压力)</em></p></td></tr><tr><td colspan="1" rowspan="2"><p>B. 市场结构</p><p><em>(决定相对强弱)</em></p><p><em>Trend Confirmation</em></p></td><td colspan="1" rowspan="1"><p>4. ETH/BTC 汇率</p><br></td><td colspan="1" rowspan="1"><p>趋势向上</p><p><em>(以太坊相对走强)</em></p></td><td colspan="1" rowspan="2"><p>确认资产属性：当相对强弱指标向好且有增量资金（稳定币）入场时，确认 ETH 自身叙事回归，应提升「货币属性」权重。</p></td></tr><tr><td colspan="1" rowspan="1"><p>5. 稳定币增速</p><br></td><td colspan="1" rowspan="1"><p>正增长</p><p><em>(新增资金持续入场)</em></p></td></tr><tr><td colspan="1" rowspan="2"><p>C. 链上情绪</p><p><em>(决定拥挤度)</em></p><p><em>Sentiment Check</em></p></td><td colspan="1" rowspan="1"><p>6. 资金费率</p><p><em>(Funding Rate)</em></p></td><td colspan="1" rowspan="1"><p>温和正向</p><p><em>(未出现单边拥挤)</em></p></td><td colspan="1" rowspan="2"><p>双向风控门禁：当情绪过热（费率极高）或过冷（恐慌去杠杆）时，均视为风险信号。此时估值逻辑应强制转向「收入地板 / 防御模式」。</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>7. 
链上清算量</strong></p><p><em>(Liquidation)</em></p></td><td colspan="1" rowspan="1"><p><strong>低位平稳</strong></p><p><em>(无强制去杠杆风险)</em></p></td></tr></tbody></table><p><br></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>五、机构化第二曲线的条件路径</strong></h2><p>前文分析均基于加密体系内部的技术、估值与周期逻辑，而本章讨论的是一个不同层级的问题：<strong>当 ETH 不再仅由加密原生资金定价，而被逐步纳入传统金融体系，其定价权、资产属性与风险结构将如何变化</strong>。机构化第二曲线并非对既有逻辑的延伸，而是外生力量对以太坊的再定义：</p><ul><li><p><strong>资产属性的变化（Beta → Carry）：</strong>现货 ETH ETF 解决的是合规与托管问题，本质仍是价格暴露；而未来Staking ETF 的推进，首次将<strong>链上收益通过合规载体引入机构体系</strong>。ETH 由此从“无息高波动资产”转向“具备可预期收益的配置型资产”，潜在买家从交易型资金扩展至对收益与久期敏感的养老金、保险及长期账户。</p></li><li><p><strong>使用方式的变化（Holding → Using）：</strong>如果机构不再仅将 ETH 视为可交易标的，而是开始将其作为结算与抵押基础设施使用。无论是 JPMorgan 的代币化基金，还是合规稳定币与 RWA 在以太坊上的部署，都表明 ETH 的需求正从“持有需求”转向“运行需求”——机构不仅持有 ETH，更在其上完成结算、清算与风险管理。</p></li><li><p><strong>尾部风险的变化（Uncertainty → Pricing）：</strong> 随着稳定币监管框架（如 GENIUS Act）未来逐步确立，以及以太坊路线图与治理透明度提升，机构最为敏感的监管与技术不确定性正在被系统性压缩，意味着不确定性开始被定价，而非被回避。</p></li></ul><p>所谓“机构化第二曲线”是 <strong>需求性质的改变</strong>，为“安全性结算层 + 货币属性”的估值逻辑提供了真实需求来源，推动 ETH 从以情绪驱动的投机资产过渡为同时承载<strong>配置性与功能性需求</strong>的基础资产。</p><p><br></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>六、结语：至暗时刻的价值锚定</strong></h2><p>过去一周，行业经历了剧烈的去杠杆化洗礼，市场情绪降至冰点，这无疑是加密世界的“至暗时刻”。悲观情绪在从业者中蔓延，而作为最能代表加密精神的资产标的，以太坊亦处于争议的风暴眼中。</p><p>然而，作为理性的观察者，我们需要穿透恐慌的迷雾：<strong>以太坊当前所经历的，并非“价值的坍塌”，而是一次深刻的“定价锚迁移”。</strong>随着 L1 扩容直接推进、L2 被重新界定为不同信任等级的网络光谱，以及协议收入主动让位于系统安全与中立性，ETH 的定价逻辑已结构性转向“<strong>安全性结算层 + 原生货币属性</strong>”。</p><p>在宏观真实利率高位、流动性尚未宽松、链上增长期权暂未被市场允许定价的背景下，ETH 的价格自然收敛至由结算确定性、可验证收益与机构共识支撑的<strong>结构性价值区间</strong>。这一区间并非情绪底，而是在剥离平台型增长溢价后的价值中枢。</p><p>作为以太坊生态的长期建设者，<strong>我们拒绝做 ETH 的“无脑多头”</strong>。我们希望通过严谨的逻辑框架，审慎地论证我们的预判：只有当宏观流动性、风险偏好与网络效应同时满足市场状态的触发条件时，更高的估值才会被市场重新计入。</p><p>因此，对于长线投资者而言，当下的关键问题不再是焦虑地追问“以太坊还能不能涨”，而是要清醒地认识到——<strong>在当前环境下，我们正在以“地板价”买入哪一层核心价值？</strong></p><table style="min-width: 
125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>市场状态&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p><strong>触发条件&nbsp;</strong></p><p><strong>(必须同时满足)</strong></p></td><td colspan="1" rowspan="1"><p><strong>主导权重</strong></p></td><td colspan="1" rowspan="1"><p><strong>估值区间</strong></p></td><td colspan="1" rowspan="1"><p><strong>投资逻辑说明</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>压力/防御</strong></p><p><em>(Risk-Off)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>流动性：</strong> 美元收缩 / 信用利差走阔</p><p>• <strong>实际利率：</strong>&nbsp; 2.5 (资金昂贵)</p><p>• <strong>情绪：</strong> 资金费率转负</p></td><td colspan="1" rowspan="1"><p><strong>安全性 + 收入地板</strong></p><p><br></p><p>50% / 25% / 0% / 25%</p></td><td colspan="1" rowspan="1"><p><strong>$1,700 – $2,400</strong></p></td><td colspan="1" rowspan="1"><p><strong>生存定价：</strong>市场拒绝一切“增长叙事”，仅为ETH作为“最安全的结算资产”及其现金流地板（P/S）定价。适合<strong>左侧重仓</strong>。</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>中性/配置</strong></p><p><em>(Neutral Base Case)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>流动性：</strong> 止跌企稳</p><p>• <strong>实际利率：</strong> 稳定或温和回落</p><p>• <strong>结构：</strong> ETH/BTC 汇率企稳</p></td><td colspan="1" rowspan="1"><p><strong>安全性 + 货币属性</strong></p><p><br></p><p>50% / 40% / 0% / 10%</p></td><td colspan="1" rowspan="1"><p><strong>$2,200 – $2,800</strong></p></td><td colspan="1" rowspan="1"><p><strong>制度定价：</strong>ETH 回归“安全结算层 + 原生抵押品”的价值中枢。机构资金在此区间完成建仓，等待趋势确认。</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>进攻/扩张</strong></p><p><em>(Risk-On)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>流动性：</strong> 同比显著扩张</p><p>• <strong>实际利率：</strong> &lt; 2.0% (资金廉价)</p><p>• <strong>链上：</strong> 稳定币大幅增量</p></td><td colspan="1" rowspan="1"><p><strong>安全性 + 平台期权</strong></p><p><br></p><p>35% / 30% / <strong>35%</strong> / 5%</p></td><td colspan="1" rowspan="1"><p><strong>$3,200 – $4,500</strong></p></td><td colspan="1" 
rowspan="1"><p><strong>期权释放：</strong>资金成本降低，“网络效应”权重非线性提升（10%→30%）。市场开始为 L2 繁荣与应用爆发支付高溢价。</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>过热/泡沫</strong></p><p><em>(Euphoria)</em></p></td><td colspan="1" rowspan="1"><p>• <strong>情绪：</strong> 极端资金费率</p><p>• <strong>清算：</strong> 单日清算量剧增</p><p>• <strong>叙事：</strong> 价格完全脱离基本面</p></td><td colspan="1" rowspan="1"><p><strong>平台/网络效应主导</strong><em>(基本面因子失效)</em></p><p><br></p><p><em>20% / 15% / 65% / 0%</em></p></td><td colspan="1" rowspan="1"><p><strong><em>&gt; $4,500</em></strong></p><p><em>(不稳定)</em></p></td><td colspan="1" rowspan="1"><p><strong>非理性繁荣：</strong>价格不再反映内在价值，而是反映流动性溢出。此时应强制启用 <strong>Kill Switch</strong>，分批止盈或对冲。</p></td></tr></tbody></table><p><br></p><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5.2, Gemini 3和Claude Opus 4.5等 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>Ethereum</category>
            <category>Valuation</category>
            <category>Decentralization</category>
            <category>Credible Neutrality</category>
            <category>Security Settlement Layer</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/2b39ac0752f363ec9c67ce311421cfb6f982d8317328235e91f4b0223578ff44.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Noya.ai: Agents in Prediction Markets]]></title>
            <link>https://paragraph.com/@0xjacobzhao/noyaai-agents-in-prediction-markets</link>
            <guid>9nyOU9LJPichIeUUK4P4</guid>
            <pubDate>Mon, 05 Jan 2026 06:25:52 GMT</pubDate>
            <description><![CDATA[Prediction markets have emerged as a significant and increasingly influential industry trend in 2025. This report focuses on the emerging category of Prediction Market Agents, systematically examining their market landscape, product structures, and business models. Using Noya.ai as a representative case study, it analyzes how AI agents can integrate the end-to-end workflow from research and decision-making to execution, and evaluates the long-term value proposition and potential risks at the int]]></description>
<content:encoded><![CDATA[<p>In our previous Crypto AI series of research reports, we have consistently argued that the most practical applications in crypto today are concentrated in <strong>stablecoin payments</strong> and <strong>DeFi</strong>, while <strong>Agents</strong> are the AI industry's key user-facing interface. In the convergence of Crypto and AI, the two most valuable paths are therefore: in the short term, <strong>AgentFi</strong>, built on mature DeFi protocols (basic strategies such as lending and liquidity mining, plus advanced strategies such as Swap, Pendle PT, and funding-rate arbitrage); and in the medium to long term, <strong>Agent Payment</strong>, centered on stablecoin settlement and relying on protocols such as ACP/AP2/x402/ERC-8004.</p><p><strong>Prediction markets</strong> have become an undeniable new industry trend in 2025, with total annual trading volume surging from approximately $9 billion in 2024 to over $40 billion in 2025, more than a fourfold increase year over year. This growth is driven by several factors: demand for hedging uncertainty around macro-political events (such as the 2024 US election), the maturing of infrastructure and trading models, and a thawing regulatory environment (Kalshi's lawsuit victory and Polymarket's return to the US). <strong>Prediction Market Agents</strong> showed their first prototypes in early 2026 and are poised to become a steadily emerging product form in the agent field over the coming year.</p><h2 id="h-i-prediction-markets-from-betting-to-truth-layer" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Prediction Markets: From Betting to Truth Layer</strong></h2><p>A prediction market is a financial mechanism for trading on <strong>the outcomes of future events</strong>. 
Contract prices essentially reflect the market's collective judgment of the probability that an event occurs. Their effectiveness stems from combining <strong>crowd wisdom</strong> with <strong>economic incentives</strong>: in an anonymous, real-money betting environment, scattered information is quickly integrated into price signals weighted by willingness to pay, significantly reducing noise and misjudgment.</p><p>By the end of 2025, prediction markets had essentially consolidated into a duopoly of <strong>Polymarket</strong> and <strong>Kalshi</strong>. According to <em>Forbes</em>, total trading volume in 2025 reached approximately $44 billion, with Polymarket contributing about $21.5 billion and Kalshi about $17.1 billion. Leaning on its legal victory in the earlier election-contract case, its first-mover compliance advantage in the US sports prediction market, and relatively clear regulatory expectations, Kalshi has expanded rapidly. The two platforms' development paths have clearly diverged:</p><ul><li><p><strong>Polymarket</strong> uses a hybrid CLOB architecture with off-chain matching, on-chain settlement, and a decentralized settlement mechanism, building a globalized, non-custodial, high-liquidity market. After its compliant return to the US, it operates a dual "onshore + offshore" structure.</p></li><li><p><strong>Kalshi</strong> integrates into the traditional financial system, reaching mainstream retail brokerages via API and drawing Wall Street market makers deep into macro and data-driven contract trading. Its products are constrained by traditional regulatory processes, so coverage of long-tail demand and breaking events lags.</p></li></ul><p>Beyond Polymarket and Kalshi, other competitors in the prediction-market field are developing along two main paths:</p><ul><li><p><strong>First, the compliance distribution path</strong>: embedding event contracts into the existing account systems of brokerages or large platforms, building an advantage on channel reach, clearing capability, and institutional trust (e.g., ForecastTrader by Interactive Brokers and ForecastEx, and FanDuel Predicts by FanDuel and CME).</p></li><li><p><strong>Second, the on-chain performance and capital-efficiency path</strong>: for example, Drift, the Solana-ecosystem perpetuals DEX, added a prediction-market module, B.E.T, on top of its original product line.</p></li></ul><p>Together, the traditional-finance compliance entry and crypto-native performance advantages make up the diversified competitive landscape of the prediction-market ecosystem.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/adcb869ae28e18a5badc134b5f8ed2705916aa6d349378e5d105e7559e74febf.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGVUlEQVR4nD2Uf1CSCRrH3+umnbmZm6smT23d1szLtDLL0364bompiIBIECgIvYoQKK+Bxs9EQmkVNTXvVbfVpF1a9UjTUClWJU9NBH0ZwF+YKA4pq7vjzv6xt/17Q+3cM995/nw+83y/zzzAotvzUWveLfOUJfFCUj6zgAfdLihio7OyJPK7msZmBgjmkEjpaLRYKmOAIIlCFZSX54OFfKGITM3PB9msW8VMFpdAzuMLRc4Vz1vvltu7/da7tej2AItuj83ustkXVtd9886lq6hrBw8HR0adPBp+4s/7D15JQaPSsSnpuM+ORgUFf4bNpmZkEi4np4Z8GhF+9GhCfCwOgw4LDd4HADFRkcc/P3Iq+sSEedyO2OasM8i81bm0Alhs9meDw/fUNZEnomPOnENnZQ+9Ghs2mc9e4EWfZ5+MY/01nAwEav+HHhN5lnXweO7hf9CgnDT49ulGTiwM/bO97FJ6FBAJAKrc40pysAx34C7hUAUhaNzYBziW3B3aJ/UP/6VpbG5uba9QVRlemiyIg8O9HxH5RVDomdi4jOQvSeERCYcOR4WHJxKIJUrV43IJ3FGvrhflMbKT77BwNExidkrcl+cjRKwcp7l35kWHWQ+/7m+dnZ0BVjybX9XVsXk8TWOjsrqaC0Ejo2N7v/0Gt/VeS8Ofj78YFnbsRNTpsLCIAwdC9n9y6GR0XEpKZgv8xGGz1KrECYkJsbGnQkKCTp+JiY09lZp6hUgkYLFol2N+dWUBcboCgPbOLh5U+qAF9u/98st/f9/076x531VVt+cQmRloYh6NMzQybTRZjSbrsHFmaORNr97UrR97OfCs4Z6QywGzcRmGfp1/073htq84bYjltcM26bBN2m2TAcCi26PVPVXXato7te4N78b2tm93d9O/U9egpVLZmRgSlcbmFosFwsqKyjr1/RapvKa5pevl6Nyo0dDWVCURC7gsRkONCm7SKBXihtqq5np1c73ahViWnTbE6QwA4EcdSrVaLJNPzc769/Y2trc3/TuqajjuXFJIaHh0TDwAAMEhnwPAX4A//S0gAIC/7jb09ZYV5/896GDksU+PhBzGYzPuSoRwk6ZGrVTIyudmzMtOq8MV2GBdq3vaPzT0cfRHeXzvGpq6JBJ1qaDiJsiXyms4t0TUPJZYUm2esA8YJvqfm03DL7rg+3KJUCm/IxMLNF8pB/S6Ht0jnba9R9ex4Xa6kKk/MigTSZrgtl/fv1/z+Ty+dwGGf6esvJbOKEZjSGgMmc7gXUsjZOfQiSRm5b0HEtn9bv0PA73aaimXcoNIIeGrKqWG/qf9PU9GXvT26Dqe679ddztciMXhWgAW3G+lCmWH9jv74vLSmsfje+fx+da8W1y+igFCF5PSMrPImVnkc/Ff5NHYIAjh8LkZaOLD1u4+XaeiDLx+HU8h4ZUKiXV6fGJsxDI1Nj3+0mwyWCZHXcibgEUut6eltY0H3S4V3nFveLd++tnr92/6d6SyByV8OZ1xq4AloNLYRBJTIKy0IWuIc33W5n41ivR//+TrBwqIzym8SZWWQyJhCcigigTF5QIeG6QP6nVrS3bE7ggAKlXVVBpdLJVzIeijS6ten6oaptO5ly6nkigFKBSOkEO/cAl1FYUjkph9z8cGDVODz7pbamSUGzkUEh7isVg382i51+m5JC6LicVc43FBh206YNHS6nrro29EUrmmvkE/aPj/FdU1dEGlCgiSQZCcVSQoFVRAkIzNKePcEr0wTAwOT4+bjHVV4sSEuOSkBDw2/fzZmCw0Co9Jh5s0iGXCOjW6aP+QwaLb097Zpa6plSoqe/R9P+7tBTLw+eC27gJWaR6tiEIpIFHAIo6QTL6Zz+Di8LkDfwBe1atlKVcvZ+MycrIxa
anJpXzOgP7p+CvDimtu0W5xIZbAFc07l9o7uyruqbr1+l/f/76586Nv9yff7m5NrTYpCb1v34GExCsoFDbmdELokYhjETGoVPwPY9aB4TevjYNN1WUXL8QfCw+9lBiHw6RKy0voFGIJr7CYwyzls72rrsA3XXC/HRg26np6ng8bnSvupTXPB238u98kU6jTMrDMQh6Lzb9+gwYWcvGEGw/hx9MzTtOoZXryP/rux4WFII/LYrNZrKLCublZZN6KzFstbybnrDPLywHA/wAkjlM/coFpAQAAAABJRU5ErkJggg==" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Prediction markets appear similar to gambling on the surface and are essentially zero-sum games. However, the core difference lies not in the form, but in whether they possess <strong>positive externalities</strong>: aggregating scattered information through real-money trading to publicly price real-world events, forming a valuable signal layer. Despite limitations such as entertainment-focused participation, the trend is shifting from gaming to a "Global Truth Layer"—with the access of institutions like CME and Bloomberg, event probabilities have become decision-making metadata that can be directly called by financial and enterprise systems, providing a more timely and quantifiable market-based truth.</p><h2 id="h-ii-prediction-agents-architecture-business-strategy" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Prediction Agents: Architecture, Business, Strategy</strong></h2><p>Currently, <strong>Prediction Market Agents</strong> are entering an early practice stage. Their value lies not in "AI predicting more accurately," but in amplifying information processing and execution efficiency in prediction markets. The essence of a prediction market is an information aggregation mechanism, where price reflects the collective judgment of event probability; market inefficiencies in reality stem from information asymmetry, liquidity, and attention constraints. 
The reasonable positioning of a Prediction Market Agent is <strong>Executable Probabilistic Portfolio Management</strong>: converting news, rule texts, and on-chain data into verifiable pricing deviations, executing strategies in a faster, more disciplined, and lower-cost manner, and capturing structural opportunities through cross-platform arbitrage and portfolio risk control.</p><p>An ideal Prediction Market Agent can be abstracted into a four-layer architecture:</p><ul><li><p><strong>Information Layer:</strong> Aggregates news, social media, on-chain, and official data.</p></li><li><p><strong>Analysis Layer:</strong> Uses LLMs and ML to identify mispricing and calculate Edge.</p></li><li><p><strong>Strategy Layer:</strong> Converts Edge into positions through the Kelly criterion, staggered entry, and risk control.</p></li><li><p><strong>Execution Layer:</strong> Completes multi-market order placement, slippage and Gas optimization, and arbitrage execution, forming an efficient automated closed loop.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fa3a29c6f6f1c5eb48366c8104e767e4c27502b36921e49f73b3f415f663b096.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGMUlEQVR4nK1U/U9b5xl9fqi0qNLWalJLF6nZNFVTu3VSl22t2oyunTRlJGmataNRlLK0IhCSJR1JWGjiEjCh4Jivgs2HfQu2sY3BwRAbMBj7Xn9cLhgSvhzbpDbYCthgsK+NIZhr35o7Xdi0f2BHR4/O++qVjvQ8z3mBYZhAIGCzWiwWDMdtBEGYMWxycmIUtw0PDY0RhMViGd8DhmEmEzr18KHdbmdfY9jc3ByO4xiGEQQxMzOD4zhBEDiO2+12vV7v8XgYhgGGYVwu1zeCJkSmHDCg3RqdVKnuHzaq+/prGpt7tTqVSiWXy8VisVDY1CgQ4jje2alCEEQgEHZ339NoNAqlUiKVKpRKBPlWLlfK5XJdf39NTS2O46xBkqJ2Gcanq8Wv/Wqc8/Y45/f2krdHv/zdOOetseLfeORFkXnryvTgyrSedBo3/aPrc8PB6aHg9GDUjcbmMfKRMepGSaeRdBs3vJb4om3Da426RzaXHbsMk0qlIEnt0Lu7g4WHBW+COAtEf4bmTGh5l2X9m6A998P7rdy20guS8kt93/zLoamzSO4gpQVNt3InVLwlk+hW7il55Rdt3IuS8n9o6ooqr5wZbOGoeIVPCPUORbEGO4ntNMOM8c+0Z0LnB6A4DvK/gDyLFW1/BDMn0zEiJVR8u7qO6Kodaa942NeAq2pGVdVeY9sKobTfa3DqWx8bEPcQ4tKLtK1l07qmyXt1qw5DMkUnEglIpVI0TSe24pGgP74WYBkOstwXkdWtaJjajCU2yO93njqm7A8I/HsqsR2LbMciT6Pkdjz2NBqOR9bikdBmeI3aYo/RtZVYlEzRNEVRsJPYYRjm5qf1GfCHV+DkITh2CLJ+ClmvwMkMeC/3fa5JMyFr1HWJDD2IUSns7xIZulsG92tHY28PYuhtM450243dkyPdk/elFp3cqpNapyzuVCq1b0AxDPP+wXMAGQfhSAYcfgl++zy8dhDeAfjJLyFbfFdxI6+i+MKd21f4JZd51TfFnEtVxRfu3MgrF5bJJNW9hedKhGWyotzbwjJZcX6FpFrTcFti7Zui9mdAUUmGYSovtb4KpzIP5Bx55uw7z5w9cuBs5oGcX8PHxR/Voj1jWgna1z7S34F1NevUrQMDHZhOimolxtH7M/aBR5h6ElNPDkhRTG03qghCN2voJKYsrtQegKbpNJ2OJDYca15nePFReNEZ9rlJvyvic5O+xUhgNRYOxclQnFxPxFHChhI2MhFfiUVCcTJArgbI1WB0bWl9NRQnl8m1YCy8FA76Q8vhjWiaTu9tUZJtUUbtZ5AFcOYFOP1jOP0jyH6W1ScAKo6LTIqKPkGVVog80CjmhwSYlKcVVmmFfL2Ip22uHkLqTBLkgebrPkEjKq3RixtRKW+wRee2Jqnk/wyg5Cjb8+wMyH4RPnoOjgNkvwTvAlw7XC7j5/OvXmn48sa35TdllaVd1fn8qxdriwpqr5d2VfMMovyqwlsK3l8vn+F0VF1rLuFq6q9LuBqniaKSNE3DDsUa8G3yF4Vn30DyfiHKfR3JfR3JewPJO9R0rsjQgHpGB5wmvQsbcpslRpXUojbM2wYcRt2sybhIWJbspgVixDuK+sbM/nHUN4b6CL3HNhVwJ6kku0UUldxNp2YXQpJhl9L8XZfVqzJ7uveqyuzB3Su+9S0fue0JxRfXt9BJx8j47HKcWghvLYS3Hgej3wWjPvKpNxT3hjY9K7GF8JZnJeZaJgPhGL2fg+29HMCnVQB/gpc/hhdOwsEP4fnjcCibvTnKKfi6+a2/5Z/6ouxiraKyb6K4XZ9TJizgS/L5HaVdtq8U2N+5LVdbei/Vd15t6f38jqi4XV9Qr5JZHTRNszP4j8EnlfDsMXgtB352Gn7+CbycD
a/mwA+OwdHSMrE6+5/c81zhdYG6fmCmRDr02a3681zh5bsSDtJfqR49X956U3y/sE5e0jF8mS/7SjJ4pUGlwF1Uaj8HqRTDMHJs9r0y+Ul+zwme+gRP/SG/5wOeJqtSfVc7PuENWl1LZqcfnfVaXE8sriesdvjMTj/m8Fnnl8cXQvjjAP44gDn9mNNvdS0ZZxfn/KsM+5vSkP4vmP8bdhlmN02zSKfT/wa2DJbhgJ+OpgAAAABJRU5ErkJggg==" nextheight="565" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The ideal business model design for Prediction Market Agents has different exploration spaces at different levels:</p><ul><li><p><strong>Bottom Infrastructure Layer:</strong> Provides multi-source real-time data aggregation, Smart Money address libraries, unified prediction market execution engines, and backtesting tools. Charges B2B/B2D fees to obtain stable revenue unrelated to prediction accuracy.</p></li><li><p><strong>Middle Strategy Layer:</strong> Precipitates modular strategy components and community-contributed strategies in an open-source or Token-Gated manner, forming a composable strategy ecosystem and achieving value capture.</p></li><li><p><strong>Top Agent Layer:</strong> Directly runs live trading through trusted managed Vaults, realizing capabilities with transparent on-chain records and a 20–30% performance fee (plus a small management fee).</p></li></ul><p>The ideal Prediction Market Agent is closer to an "AI-driven probabilistic asset management product," gaining returns through long-term disciplined execution and cross-market mispricing gaming, rather than relying on single-time prediction accuracy. 
The core logic of this diversified "Infrastructure Monetization + Ecosystem Expansion + Performance Participation" revenue structure is that even if Alpha converges as the market matures, bottom-layer capabilities such as execution, risk control, and settlement retain long-term value, reducing dependence on the single assumption that "AI consistently beats the market."</p><h4 id="h-prediction-market-agent-strategy-analysis" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Prediction Market Agent Strategy Analysis:</strong></h4><p>In theory, Agents hold advantages in high-speed, 24/7, emotion-free execution. In prediction markets, however, this is hard to convert into sustainable Alpha. Their effective applications are mostly limited to specific structures such as automated market making, cross-platform mispricing capture, and information integration for long-tail events; these opportunities are scarce and constrained by liquidity and capital.</p><ol><li><p><strong>Market Selection:</strong> Not every prediction market is worth trading. Participation value depends on five dimensions: settlement clarity, liquidity quality, information advantage, time structure, and manipulation risk. Prioritize the early stage of new markets, long-tail events with few professional players, and fleeting pricing windows created by time-zone differences; avoid high-attention political events, subjectively settled markets, and extremely illiquid contracts.</p></li><li><p><strong>Order Strategy:</strong> Use strict, systematic position management. The prerequisite for entry is that one's own probability estimate is significantly higher than the market-implied probability. 
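To make the entry condition concrete, a toy sketch (all prices and probabilities below are hypothetical): for a binary contract settling at $1, the price itself is the market-implied probability, and the expected value per share follows directly from the gap between the model's probability and that price.

```python
# Toy check of the entry condition: model probability vs. market-implied
# probability for a binary YES share settling at $1. Numbers are hypothetical.

def implied_probability(yes_price: float) -> float:
    """A YES share that pays $1 at resolution trades near P(event)."""
    return yes_price

def expected_value(p_model: float, yes_price: float) -> float:
    """EV per YES share: win (1 - price) with probability p, else lose price."""
    return p_model * (1.0 - yes_price) - (1.0 - p_model) * yes_price

# Market prices YES at $0.40 while the model estimates 55%:
edge = 0.55 - implied_probability(0.40)
ev = expected_value(0.55, 0.40)
```

For a $1-settled binary the per-share EV equals the raw edge; fees, slippage, and settlement risk all reduce it in practice.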
Positions are sized by the fractional Kelly criterion (usually 1/10–1/4 Kelly), and single-event risk exposure stays below 15%, so that risk is controllable, drawdowns bearable, and the edge compoundable over the long run.</p></li><li><p><strong>Arbitrage Strategy:</strong> Arbitrage in prediction markets takes four main forms: cross-platform spreads (beware of settlement differences), Dutch Book arbitrage (high certainty but strict liquidity requirements), settlement arbitrage (dependent on execution speed), and correlated-asset hedging (limited by structural mismatch). In practice the key is not discovering spreads but strictly aligning contract definitions and settlement standards, to avoid pseudo-arbitrage created by subtle rule differences.</p></li><li><p><strong>Smart Money Copy-Trading:</strong> On-chain "Smart Money" signals are unsuitable as a primary strategy because of lag, baiting risk, and small samples. A more reasonable use is as a confidence-adjustment factor supporting core judgments based on information and pricing deviations.</p></li></ol><h2 id="h-iii-noyaai-intelligence-to-action" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong>: Intelligence to Action</strong></h2><p>As an early exploration of Prediction Market Agents, <strong>NOYA</strong>'s core philosophy is "<strong>Intelligence That Acts</strong>." In on-chain markets, analysis and insight alone do not create value: dashboards, data analytics, and research tools can help users understand what might happen, but between insight and execution there remains a great deal of manual operation, cross-chain friction, and execution risk. 
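The sizing and arbitrage rules described above can be sketched as a few lines of code; the fractions mirror the text (quarter Kelly by default, a 15% single-event cap), while the example prices are hypothetical.

```python
# Fractional Kelly sizing and a Dutch-book check for $1-settled binary contracts.

def kelly_fraction(p: float, price: float) -> float:
    """Full-Kelly bankroll fraction for a YES share bought at `price`:
    stake `price` to win (1 - price), so f* = (b*p - q) / b with b = (1-price)/price."""
    b = (1.0 - price) / price
    return max((b * p - (1.0 - p)) / b, 0.0)

def position_size(bankroll: float, p: float, price: float,
                  kelly_scale: float = 0.25, max_exposure: float = 0.15) -> float:
    """Fractional Kelly (1/4 by default), capped at 15% of bankroll per event."""
    f = kelly_fraction(p, price) * kelly_scale
    return bankroll * min(f, max_exposure)

def dutch_book_profit(yes_price_a: float, no_price_b: float) -> float:
    """Locked-in profit per $1 payout when YES on venue A plus NO on venue B
    costs under $1 (ignores fees and settlement-rule mismatches)."""
    return max(1.0 - (yes_price_a + no_price_b), 0.0)

# 55% model probability vs. a $0.40 market: full Kelly = 0.25, quarter Kelly = 6.25%.
size = position_size(10_000, p=0.55, price=0.40)
arb = dutch_book_profit(0.46, 0.51)   # a few cents of riskless margin per $1 payout
```

The cap matters as much as the Kelly scale: it is what keeps a single badly modeled event from dominating the portfolio, which is the "pseudo-arbitrage" failure mode the text warns about.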
NOYA is built based on this pain point: compressing the complete link of "Research → Form Judgment → Execution → Continuous Monitoring" in the professional investment process into a unified system, enabling intelligence to be directly translated into on-chain action.</p><p>NOYA achieves this goal by integrating three core levels:</p><ul><li><p><strong>Intelligence Layer:</strong> Aggregates market data, token analysis, and prediction market signals.</p></li><li><p><strong>Abstraction Layer:</strong> Hides complex cross-chain routing; users only need to express Intent.</p></li><li><p><strong>Execution Layer:</strong> AI Agents execute operations across chains and protocols based on user authorization.</p></li></ul><p>In terms of product form, NOYA supports different participation methods for passive income users, active traders, and prediction market participants. Through designs like Omnichain Execution, AI Agents &amp; Intents, and Vault Abstraction, it modularizes and automates multi-chain liquidity management, complex strategy execution, and risk control.</p><p>The overall system forms a continuous closed loop: Intelligence → Intent → Execution → Monitoring, achieving efficient, verifiable, and low-friction conversion from insight to execution while ensuring users always maintain control over their assets.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Level</strong></p></td><td colspan="1" rowspan="1"><p><strong>Product Module</strong></p></td><td colspan="1" rowspan="1"><p><strong>Functional Description</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Value</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Intelligence</strong></p></td><td colspan="1" rowspan="1"><p>NOYA Intelligence</p></td><td colspan="1" rowspan="1"><p>Institutional-grade research system based on fundamentals, on-chain data, narratives, and risk factors</p></td><td colspan="1" 
rowspan="1"><p>Compresses complex research into actionable Alpha leads, providing structured input for funding decisions</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Intelligence</strong></p></td><td colspan="1" rowspan="1"><p>Prediction Market Intelligence Copilot</p></td><td colspan="1" rowspan="1"><p>Probability analysis, EV calculation, Smart Wallet behavior, and fund flow tracking for prediction markets</p></td><td colspan="1" rowspan="1"><p>Identifies odds mismatches and structural opportunities, providing information advantages for prediction market trading</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Abstraction</strong></p></td><td colspan="1" rowspan="1"><p>NOYA AI Agent (Voice + Text)</p></td><td colspan="1" rowspan="1"><p>Receives Intent in voice/text form and orchestrates cross-chain, cross-protocol on-chain execution</p></td><td colspan="1" rowspan="1"><p>Directly converts "human intent" into on-chain actions; acts as the unified entrance and coordinator for the execution layer</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Execution</strong></p></td><td colspan="1" rowspan="1"><p>Omnichain Vaults</p></td><td colspan="1" rowspan="1"><p>Risk-adjusted vaults covering multiple chains and protocols, scheduled and managed by Agents</p></td><td colspan="1" rowspan="1"><p>Provides scalable funding pools for Agents to achieve continuous systematic returns</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Execution</strong></p></td><td colspan="1" rowspan="1"><p>Prediction Market Execution</p></td><td colspan="1" rowspan="1"><p>Order placement, rebalancing, and strategy execution in prediction markets like Polymarket</p></td><td colspan="1" rowspan="1"><p>Converts probability judgments into real positions, completing the closed loop from analysis to results</p></td></tr></tbody></table><p><br></p><h2 id="h-iv-noyaais-product-system-evolution" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong>'s Product System Evolution&nbsp;</strong></h2><h3 id="h-core-cornerstone-noya-omnichain-vaults" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Core Cornerstone: Noya Omnichain Vaults</strong></h3><p>Omnivaults is NOYA's capital deployment layer, providing cross-chain, risk-controlled automated yield strategies. Through simple deposit and withdrawal operations, users hand assets to a system that runs them continuously across multiple chains and protocols, with no manual rebalancing or monitoring required. The core goal is stable risk-adjusted returns rather than short-term speculation.</p><p>Omnivaults cover strategies such as standard yield and Loop, clearly segmented by asset and risk level, and support optional bonding incentive mechanisms. At the execution level, the system automatically handles cross-chain routing and optimization, and can introduce ZKML to provide verifiable proof for strategy decisions, enhancing the transparency and credibility of automated asset management. 
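The deposit and withdrawal mechanics described above follow standard vault share accounting (in the spirit of ERC-4626 tokenized vaults). The sketch below is a toy model for illustration only, not NOYA's actual contract logic; all names are hypothetical:

```python
class Vault:
    """Toy share-accounting model of a yield vault (illustrative only)."""

    def __init__(self):
        self.total_assets = 0.0   # NAV in underlying token units
        self.total_shares = 0.0

    def deposit(self, amount: float) -> float:
        # First depositor gets shares 1:1; later depositors get shares
        # proportional to their slice of the current NAV.
        if self.total_shares == 0:
            shares = amount
        else:
            shares = amount * self.total_shares / self.total_assets
        self.total_assets += amount
        self.total_shares += shares
        return shares

    def accrue_yield(self, profit: float) -> None:
        # Strategy profit raises NAV; share count is unchanged,
        # so every share is now worth more.
        self.total_assets += profit

    def withdraw(self, shares: float) -> float:
        amount = shares * self.total_assets / self.total_shares
        self.total_assets -= amount
        self.total_shares -= shares
        return amount


v = Vault()
s = v.deposit(100.0)   # 100 shares minted
v.accrue_yield(10.0)   # NAV grows to 110 with no new shares
out = v.withdraw(s)
print(out)             # 110.0
```

The point of the share model is that yield accrues to the NAV rather than to individual balances, which is what lets an automated strategy run continuously without per-user bookkeeping.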
The overall design focuses on modularity and composability, supporting future access to more asset types and strategy forms.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0c3329e2f1e72f713514b8e3f52e542e09230b968553d797a417873773546979.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAdCAIAAABE/PnQAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGbklEQVR4nLWWe1QUVRzHb6n4yF4eUwGNfCQQZhYZmoDiY1V84AMhPAjKwwPyOJIKKBAuomToprFitsoSmitYPpBIW1222AGdYbeZaZw7sHOHZQdcF1fK6Jz+8J8OzDnjxqHQP5rzO/fcmXPv73Pv/c7v/n6AgoiCyMLyJOwz6ZWCiKAgJ4inzlav2bxtZ55ya9quxIxdO/OUG+KTo5LSopLStuzYuWrzthMV5zhBJChopjmCgpJZWCsF0S9MXwskX81mWm4lw1poq/1eVn4xAKMDQxUeXm/4BYUELV75ove0CdP9X387MGD+QgBGJWflik6XmeZoDskmwXCKfQKQFz7AkOjoedTb/duj1o6uC/UN14x3bt4mL9Q3UG1Cz6Penke9otNVU3dLpdFVXdGfvvhD1RW9SnO+vgFjkf02eZeE/BAAC8vfoVhOEA+XV0WmK32DI/xDIiLTlcqyCk4Qm8wMzaGiMm1wZIpP4OIpgUumvr9kVUL2MW0NEh1YCz00gOpnINGh0pxPylMFLo16/d2w+OzDB9RaJDqI/pNRllXMUcSAEZ7DJs4EL05ZlZCt0uieAUBBRHNCk5nJKjyyPDYtY3/pvk/LbmIEi+wWlmeRvabupqqi+uT5azsKSqvrjQfVldIR4RT7tAAKItHp0uguL9ywJXlXQc01PRId7viu7ocnKnXZB46IThfNCTQnSG7Nv7Y9FYCgOdv9BwXFnwWFLgtfH12iUksAyREF0eYtCQAAb59p8penBVj6BYCoA6dYje5ySq4yPCbxmOZrM80ZMMKAEY04aabhUbVmvM+MKt1FA0bI3qTOEABOELUXrhYf+7JN6PxcczY1vyQqLS8p5wAJ+8CcINIcMmDE3lL1rMXrurof/oSTjTgpzf0vDVhklyJZe/Gqf2j41sxca7tYcPQU6H/GePsHRyYoohM/St29NWvvvFVbABjuF7Lmm6t6mkO3MHwIAIvs5777nrG29/T+GR6bXG8wtXc6cIo9XF4FxngDADxnh67ZlnHdiJkIps5gSsk9FLY2Ni6zoFZvIiFvwAgZYKa5JwAS8haWJyhou/9gwfIIAEDIinXp+QcpiK4bsb5YO1Gx//ipgqPlXoGhAIybvThCXVlNULCr+6HodDG8TXI1+BGRkC/4rDxoRdTa+Ey9ybxg2RoAQL7yoOh0UbDvlJHo2J5T2IiTKbnKcq1ufULm+oS0N4MVMenZeYePX9ObRKerL/T+DUBBFJ2aCwDw8PLT6K7Omr8IALArJ19wdMuAuSs2YS10abk2Jj37eoMpbGO8trp2fULax0Wl6fkH4zJzNLpLnCC6H9E/AIamX4w4pTeZLSwfn7EnK7fA2NQiD807pALg5RK1JqdYBUZ6fXX+0pT3Fi1QbFy4IQ6M8EzP2b8kKnH0W8GXbzTcJplBNJCCnuYEFtlZZI/L2KM33ZZuG6p/6Pbd+WDY+KWRceGxyR6eMxTRiYpN8WDYxI9Sd4/19h092Tclr2RzRmFx2WmyPxkMIrL7v1+i1twwNrPILoWrsanFwlpZZG/t6EKig+F
t0jpaO7poDrV2dFnt9wgKNpmZARoMDpAY7q9YC30LwzGCxAiyQvfdDSOGEeQtDG/ESZy6W6c3nqrS4RTbbKYacRJroYcOtEGuIwreNOHontNn3jJ1ZbW0uZ9wkra2n6mpm7pgJbR1Ypa7jQQlT3kGAIvsdcYmRUzqRN+g0VMCwiKTvjc0i05XoeokeH6ih5cfAK8+N2E6GDP5k6Mn5d0/LcDC9lUCdQbsnWVRwz19Pbz8wyKTvv3BKDpdxWWnwQveL/sEAPCKh+eMkZP9C1VfugP+VYMBy//r8eMS9ZmUfUWuP3rLtNVzV27q6n4oTSSYNsHRvX1vUa2xGdo67/Qn+mfYAc0Jp3W1B459NS5gftjGeF3tj7EZ+8b6z9d+Wy9fMH89fjx76QZ1Zc39nt8HyDZ0PmCR/ZD6zEtTA6bN+XBaYMj2nKLo1Jy3Q1fv/+IMCXmCaXsnLGL8jEAwaebznn6zQlZfvvGzpP8zaMDwNqv9nmQ0JzC8jeFtJORpTrCw/PS5CjB2MhjlBUZMes33A3fAk4Qj1VtyZSddq5K2JOQJt5JNnimNN9McCXmGt0FbJ7R1MtZ2yYnkx0z3lXvgVxZZmFacYmWTijLZqXt/gEnjm8203MrfSchLvxNwT9P/h/0NCAKprJcGf8YAAAAASUVORK5CYII=" nextheight="919" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>NOYA Vault Technical Architecture:</strong> Each vault is uniformly registered and managed through the Registry; the AccountingManager is responsible for user shares (ERC-20) and NAV pricing; the bottom layer connects to protocols like Aave and Uniswap through modular Connectors and calculates cross-protocol TVL, relying on Value Oracle (Chainlink + Uniswap v3 TWAP) for price routing and valuation; trading and cross-chain operations are executed by Swap Handler (LiFi); finally, strategy execution is triggered by Keeper Multi-sig, forming a composable and auditable execution closed loop.</p><h3 id="h-future-alpha-prediction-market-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Future Alpha: Prediction Market Agent</strong></h3><p>NOYA's most imaginative module: the Intelligence layer continuously tracks on-chain fund behavior and off-chain narrative changes, identifying news shocks, emotional fluctuations, and odds mismatches. When probability deviations are found in prediction markets like Polymarket, the Execution layer AI Agent can mobilize vault funds for arbitrage and rebalancing under user authorization. 
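The odds-mismatch checks described above reduce to simple probability arithmetic. A minimal sketch with made-up numbers and hypothetical function names (NOYA's actual signal pipeline is not public):

```python
def expected_value(p_est: float, price: float) -> float:
    """EV per $1 staked on a YES share that pays $1 if the event occurs."""
    return p_est * 1.0 - price

def arbitrage_gap(yes_price: float, no_price: float) -> float:
    """If YES + NO together sell for under $1, buying both locks in
    the difference at settlement regardless of the outcome."""
    return 1.0 - (yes_price + no_price)

# Hypothetical numbers: the model estimates a 62% probability,
# while the market prices YES at $0.55 and NO at $0.42.
ev = expected_value(0.62, 0.55)   # positive -> model sees an edge
gap = arbitrage_gap(0.55, 0.42)   # positive -> structural mispricing
print(round(ev, 2), round(gap, 2))  # 0.07 0.03
```

A positive `arbitrage_gap` (before fees and slippage) is the deterministic case; a positive `expected_value` is only as good as the model's probability estimate, which is why the signal layer matters.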
At the same time, Token Intelligence and Prediction Market Copilot provide users with structured token and prediction market analysis, directly converting external information into actionable trading decisions.</p><h4 id="h-prediction-market-intelligence-copilot" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Prediction Market Intelligence Copilot</strong></h4><p>NOYA is committed to upgrading prediction markets from single-event betting to systematically manageable probabilistic assets. Its core module integrates diverse data such as market implied probability, liquidity structure, historical settlements, and on-chain smart money behavior. It uses Expected Value (EV) and scenario analysis to identify pricing deviations and focuses on tracking position signals of high-win-rate wallets to distinguish informed trading from market noise. Based on this, Copilot supports cross-market and cross-event correlation analysis and transmits real-time signals to AI Agents to drive automated execution such as opening and rebalancing positions, achieving portfolio management and dynamic optimization of prediction markets.</p><p>Core Strategy Mechanisms include:</p><ul><li><p><strong>Multi-source Edge Sourcing:</strong> Fuses Polymarket real-time odds, polling data, private and external information flows to cross-verify event implied probabilities, systematically mining information advantages that have not been fully priced in.</p></li><li><p><strong>Prediction Market Arbitrage:</strong> Builds probabilistic and structural arbitrage strategies based on pricing differences across different markets, different contract structures, or similar events, capturing odds convergence returns while controlling directional risk.</p></li><li><p><strong>Auto-adjust Positions (Odds-Driven):</strong> When odds shift significantly due to changes in information, capital, or sentiment, the AI Agent automatically adjusts position size and direction, achieving continuous 
optimization in the prediction market rather than a one-time bet.</p></li></ul><h4 id="h-noya-intelligence-token-reports" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NOYA Intelligence Token Reports</strong></h4><p>NOYA's institutional-grade research and decision hub aims to automate the professional crypto investment research process and directly output decision-level signals usable for real asset allocation. This module presents clear investment stances, comprehensive scores, core logic, key catalysts, and risk warnings in a standardized report structure, continuously updated with real-time market and on-chain data. Unlike traditional research tools, NOYA's intelligence does not stop at static analysis but can be queried, compared, and followed up by AI Agents in natural language. It is directly fed to the execution layer to drive subsequent cross-chain trading, fund allocation, and portfolio management, thereby forming a "Research—Decision—Execution" integrated closed loop, making Intelligence an active signal source in the automated capital operation system.</p><h4 id="h-noya-ai-agent-voice-and-natural-language-driven" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NOYA AI Agent (Voice &amp; Natural Language Driven)</strong></h4><p>The NOYA AI Agent is the platform's execution layer, whose core role is to directly translate user intent and market intelligence into authorized on-chain actions. Users can express goals via text or voice, and the Agent is responsible for planning and executing cross-chain, cross-protocol operations, compressing research and execution into a continuous process. It is a key product form for NOYA to lower the threshold for DeFi and prediction market operations.</p><p>Users do not need to understand the underlying links, protocols, or transaction paths. 
They only need to express their goals through natural language or voice, triggering the AI Agent to automatically plan and execute multi-step on-chain operations, achieving "Intent as Execution." With user signatures required throughout and assets remaining non-custodial, the Agent operates in a closed loop of "Intent Understanding → Action Planning → User Confirmation → On-chain Execution → Result Monitoring." It does not replace decision-making; it handles implementation, sharply reducing the friction and barrier to entry of complex financial operations.</p><h4 id="h-trust-moat-zkml-verifiable-execution" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Trust Moat: ZKML Verifiable Execution</strong></h4><p>Verifiable Execution aims to make the entire pipeline of strategy, decision-making, and execution auditable end to end. NOYA introduces ZKML as a key mechanism for reducing trust assumptions: strategies are computed off-chain and verifiable proofs are generated; the corresponding fund operations can be triggered only after on-chain verification passes. This mechanism can lend credibility to strategy output without revealing model details and supports derivative capabilities such as verifiable backtesting. 
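The gating pattern here, compute off-chain and verify before funds move, can be illustrated with a plain hash commitment. This is only a stand-in: a real ZKML proof additionally proves the model executed correctly without revealing it. All names below are illustrative:

```python
import hashlib
import json

def commit(strategy_output: dict) -> str:
    """Off-chain: hash the strategy decision into a commitment."""
    blob = json.dumps(strategy_output, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_then_execute(strategy_output: dict, commitment: str) -> bool:
    """On-chain analogue: funds may move only if the revealed
    decision matches the previously published commitment."""
    return commit(strategy_output) == commitment

decision = {"market": "POLY-123", "side": "YES", "size": 250}
c = commit(decision)  # published before execution

ok = verify_then_execute(decision, c)
tampered = verify_then_execute({**decision, "size": 9999}, c)
print(ok, tampered)   # True False
```

The commitment binds the executed action to the strategy's declared output; the ZK layer's extra contribution is proving the output came from the claimed model without disclosing it.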
Currently, relevant modules are still marked as "under development" in public documents, and engineering details remain to be disclosed and verified.</p><h4 id="h-future-6-month-product-roadmap" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Future 6-Month Product Roadmap</strong></h4><ul><li><p><strong>Prediction Market Advanced Order Capabilities:</strong> Improve strategy expression and execution precision to support Agent-based trading.</p></li><li><p><strong>Expansion to Multiple Prediction Markets:</strong> Integrate more platforms beyond Polymarket to expand event coverage and liquidity.</p></li><li><p><strong>Multi-source Edge Information Collection:</strong> Cross-verify against market odds to systematically capture underpriced probability deviations.</p></li><li><p><strong>Clearer Token Signals &amp; Advanced Reports:</strong> Output trading signals and in-depth on-chain analysis that can directly drive execution.</p></li><li><p><strong>Advanced On-chain DeFi Strategy Combinations:</strong> Launch complex strategy structures to improve capital efficiency, returns, and scalability.</p></li></ul><h2 id="h-v-noyaais-ecosystem-growth" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong>'s Ecosystem Growth</strong></h2><p>Currently, Omnichain Vaults are in the early stage of ecosystem development, and their cross-chain execution and multi-strategy framework have been verified.</p><ul><li><p><strong>Strategy &amp; Coverage:</strong> The platform has integrated mainstream DeFi protocols such as Aave and Morpho, supports cross-chain allocation of stablecoins, ETH, and their derivative assets, and has preliminarily built a layered risk strategy (e.g., Basic Yield vs. Loop Strategy).</p></li><li><p><strong>Development Stage:</strong> Current TVL is limited. 
The core goal is functional verification (MVP) and refinement of the risk control framework. The architecture is highly composable, reserving interfaces for the later introduction of complex assets and advanced Agent scheduling.</p></li></ul><h4 id="h-incentive-system-kaito-linkage-and-space-race-dual-drive" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Incentive System: Kaito Linkage &amp; Space Race Dual Drive</strong></h4><p>NOYA has built a growth flywheel anchored in "Real Contribution," tightly coupling content narrative with liquidity.</p><ol><li><p><strong>Ecosystem Partnership (Kaito Yaps):</strong> NOYA landed on Kaito Leaderboards with a composite narrative of "AI × DeFi × Agent," allocating an unlocked incentive pool of 5% of the total supply and reserving an additional 1% for the Kaito ecosystem. The mechanism tightly couples content creation (Yaps) with Vault deposits and Bond locking. Weekly user contributions are converted into Stars that determine rank and multipliers, reinforcing both narrative consensus and long-term capital stickiness through the same incentive layer.</p></li><li><p><strong>Growth Engine (Space Race):</strong> Space Race constitutes NOYA's core growth flywheel, replacing the traditional "capital scale first" airdrop model by using Stars as long-term equity credentials. 
This mechanism integrates Bond locking bonuses, two-way 10% referral incentives, and content dissemination into a weekly Points system, selecting for long-term users with high participation and strong consensus, and continuously optimizing community structure and token distribution.</p></li><li><p><strong>Community Building (Ambassador):</strong> NOYA runs an invitation-only ambassador program, granting qualified participants access to the community round and performance rebates based on actual contributions (up to 10%).</p></li></ol><p>Currently, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai">Noya.ai</a> has accumulated over 3,000 on-chain users, and its X followers exceed 41,000, ranking in the top five of the Kaito Mindshare list. This indicates that NOYA has secured a favorable attention niche in the prediction market and Agent track.</p><p>In addition, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai">Noya.ai</a>'s core contracts have passed dual audits by Code4rena and Hacken and are integrated with Hacken Extractor.</p><h2 id="h-vi-tokenomics-design-and-governance" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Tokenomics Design and Governance</strong></h2><p>NOYA adopts a <strong>Single-token</strong> ecosystem model, with <strong>$NOYA</strong> as the sole value carrier and governance vehicle.</p><p>NOYA employs a <strong>Buyback &amp; Burn</strong> value capture mechanism. Value generated at the protocol layer by products such as AI Agents, Omnivaults, and prediction markets is captured through staking, governance, access permissions, and buyback &amp; burn, forming a Use → Fee → Buyback value loop that converts platform usage into long-term token value.</p><p>The project takes <strong>Fair Launch</strong> as its core principle. 
It introduced no angel round or VC investment, completing distribution through a public community round (Launch-Raise) at a low valuation ($10M FDV), Space Race, and airdrops. Upside is deliberately left asymmetric in favor of the community, skewing the token holder structure toward active users and long-term participants; team incentives come mainly from long-term locked token allocations.</p><p><strong>Token Distribution:</strong></p><ul><li><p><strong>Total Supply:</strong> 1 Billion (1,000,000,000) NOYA</p></li><li><p><strong>Initial Float (Low Float):</strong> ~12%</p></li><li><p><strong>Valuation &amp; Financing (The Raise):</strong> Financing Amount: $1 Million; Valuation (FDV): $10 Million</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ec3ae4a35440c480e33d057c6bd50aa29dab7afeadc6c94a2d1850207fd984fd.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGJklEQVR4nE2U2W/b2BXGL9q+FGiLAkVnqR8GbacL0mnR6fSxD52n+S+KvhSdGcQo3Ok4mcJZHDd2vClSJMaKLQ8da6FFi6JEybQkWjJlkyItUosVyU4iSnREiVpMMtI4dRYjQEHnpcDBxe98uPc8fPe7FzS63Zaudwb9tmE0db3T73f7g06/rw4G3cFA0Xrq4KkpDgYdw2gbhgn9py1DN7nf7xhGo92uK6260pJVVW6pR02Tj9T2kao22ipotNt2CJqzzONEeG5+fmHxnuMuZHXcwQjC6Vp0wcvwCjwxeXPp62XHXchis9ns9qmZaZ9/bWpm2mK1Wqy3FVXtGkbnWOto56VrHV03FU1rtFRQVZqbqV2U2CQS2zi5tR6NoUQcJ6kYzeDkFkGlcSoVpbbDZAINb0QpmognI4nUNsvHt3fCsS2Sohkhz4jFNJ9PMlmaE1KcmOIERiyy4v4juQHcRAyAd8D3L4C3fgfe+T1460Pw7ocAvAfAT8B3fgWG/gjALwB473vv/+lnH33y8z98MvTBx+BbvwTg/aELf373Nx+D714AYAiAHwPw028PfQR+8Gvwo9+CH34AwNsAvA2jUcDulz/9anrkmmV0wvrpFzeG/z09OmH9/KuZ4bG54bG5z/91Y3hsdnhs/svxO3P2xYkZ+/ic8x9jc38fvXX9lv3K1J3PLt8auTL/15Hxv41MXLz0n5GxmYuXJi9envzL8NXPRm8xuX2g6tqzVy9Ozl72n5+K+Xyzo56+Pjt59fzZy1PjdJArFju6dnr26tmrF1JDLlTK/dP/nrw8ff76TDqSpYb84vXZyctTSa6VK5ViuVytScXSfqFUfFB5YHxjXgOoVKWlpZWbU7NT03NWBzQ1O39zctZihXwItupFLTZo8Wu3D8HcvsCS674buge7/e7z1rkILyzBbl/A7Q1MTE7PWu5
cuTZ+5eq169fHb0zcnJ6x2J1L5UcSkBRlh+W3kulYPBkmyA0yQSXpZHqXEwpMNheNbtJMhhNybDaXTDGxeJLN5jJCLiOIZDxJbiY5MZ8RRJYTGE4gIptuN4IgKJ3OsLzAZHNV+QlQej1V07TBN3JL3U7viIWifvLMjPzxsdLrCPvFRreraseqplXlJ8XKoarrzV6v2esJhQKXFVRNU7VjpdtTu8dtvc+wfLXe6Br9Rrfb7PWeqG1wWJPvuxGLDZqdtzqXYMjpmrZYF1wraJBA0KD97iLsQQKh6BoWhlcRyOnyosEATvgxwgV7XLBnDQuvYWEvGvT4MZsdskMLNju0bJ7BvWjwwaMqkBrNeIoORTcDOIGFI6ENMoATBLm5y2VphsdCRDyVZvi9HWYvTtGhCLnN8Lscv8tlI3EqEqd2uSzDZbcZPrnD4EQUC0dwIhqntmmWTzFsVZZBo9sVCoUUwzB7PC/mmD0+Iwg0y0qNo45hCIVCram0dV3Vn0pyI1cqNTWt2eu1df2wKh3UpE7/6blLpm8ZUeBEMSOKj2u1jmGomqacW1TzokHYg8AeZNWHun1+2O2DPQgWigQJEnYjCBYORWNBgvQHcDeCroc3QtFYKBJb9a2vetEgEQtFY+vhjfMhax4/tuJDYQ+K4kQgFD2sSmZMrY57Fhtks5v2Qc5li9VusUGuFa/Hj1kdTteKFwmE3H7MteKBnK5V37oXDa5hhHMJdi7CKB5BUNyPEcuw5/Ztx4LTZbE67NCSLxDyreOlg8dmiiqStMOwvJjjBJHl9zhRzJcqR2q71mqJpZKkNGTV/CkfSrV8qSyr6hO1ky+VU/QOy/H5crneUlle4MU8x2d5IccLeU7IJVJ0vlRudbug3lKZbDYap+LJFEGSZGKLpCiKpnOlirC/Tya2+Hw+Xy7nyhWWFxJJWiyV8+VDlhfjVJJMbNEsny8fpNK7iVQ6mjCHkFSSpLZCGyTNZmqKYlq06lv3oMH7SGDs6vi8HVoLEveRgBfFl1d9X166fNu+8MYiq8P5zy9GXSveN6H0vFn9mM8fNN+2H/OiuCmiQS8acqPBRGqnJjdATVEOpHqlKh1W6w+l2mG1fmiy9KgqmyUdneuyWdX6Q/no4JzPt5llitX/b00+kEx+LDdqSut/B4zLz+5M6bgAAAAASUVORK5CYII=" nextheight="565" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><br><h2 id="h-vii-prediction-agent-competitive-analysis" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. Prediction Agent Competitive Analysis</strong></h2><p>Currently, the Prediction Market Agent track is still in its early stages with a limited number of projects. 
Representative ones include <strong>Olas (Pearl Prediction Agents)</strong>, <strong>Warden (BetFlix)</strong>, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a>.</p><p>In product form and user participation, they represent three distinct paths within the current prediction market agent track:</p><ol><li><p><strong>Olas (Pearl Prediction Agents): Agent Productization &amp; Runnable Delivery.</strong> Users participate by running an automated prediction Agent: prediction market trading is packaged into a runnable Agent into which users inject capital, and the system automatically handles information acquisition, probability judgment, betting, and settlement. Because participation requires installing and operating software, it remains relatively unfriendly to ordinary users.</p></li><li><p><strong>Warden (BetFlix): Interactive Distribution &amp; Consumer-grade Betting Platform.</strong> Attracts participation through a low-threshold, highly entertaining interactive experience. It takes an interaction- and distribution-oriented path, lowering participation costs with gamified, content-driven frontends and emphasizing the consumption and entertainment attributes of prediction markets. Its competitive advantage comes mainly from user growth and distribution efficiency rather than depth in the strategy or execution layer.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://NOYA.ai"><strong>NOYA.ai</strong></a><strong>:</strong> Centered on "<strong>Fund Custody + Delegated Strategy Execution</strong>," it abstracts prediction markets and DeFi execution into asset management products through Vaults, offering participation with low operational and mental overhead. 
If the Prediction Market Intelligence and Agent execution modules are superimposed later, it is expected to form a "Research—Execution—Monitoring" integrated workflow</p></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f6ea97d3ff429d5ea287011a3e794ded6803dbaec6b41fe9ce1cf98cf6955f0b.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGQUlEQVR4nDWSX1BadwKFf9OZnd2HfdjXnU53Zmc23UkmLztpOzu72UxjUyPVoKXgDUFBERRQCGAEAQUBUSMoYC+oXPmjqBdQvGiQPyKICGKuxSBB1KDV1lbbfcvrzrTptJk98z18cx7O0wFv3vz85rdcXV26nFNm06hapcQwv9UCK3u6kQkrMmFdCWAuJ2Ic1euHn1otY4jNsovvDA5ofV50wYvq+jWDA/0edM7tdjrsE2aT/vXr1283f/rpf+Dk/HJn/wQvnm3tltK7B6F4xouFt56/jGfyWDgV3thZjWfjmVxsCw9GU9jqxrPYNhZObWT3V6Jbb2VtMxuOp6MbO5vZ/NpWLp7Z2y282j86yx18fX5xBX53lw/+8hn4Owm8XwferQa/vwvAp+AP98CtRnCjHvyjEdykAnD3V975BLxbB/7Z9Gtzox581AT+3QL+XA3eqQDg/9ykgg/p4E/3AfgPALc+eKQCfLWjoqmfIhgl8w0E1tAtiuIaQXStSnCXJv0Ykt6mdFXQZH+rEt6sld4gdl0nCCobFZ/Q5JVNynuNijuQ5IPazuvVoverhH/9tOO9e+23IdnHkPQ6QXCNIPrj7Va5fhpk8KLZ6tGbZoZGneN2LBh9zmvvauMI2RwxldbawOAyWY9b2B1y5SDqjTQzeSRyI5cvZbU+bmYJmSwBldbicmNy5SCDyaU3cT4nPWI0cVmtQnbrYyqNHYunQDSR6xuwdStHO0QancE+79+U95mriY8e0nlkiF1Pa6v9gl5Lbu4bGHeh6xQqt5JAbm7tJEHMLyjNtWQGCWqx2Jfb+L01RNpDOq+yGiIQH5EgZn0Dp5JAcc76AZ47WEvup/HjDF5eT71YDm2h/pjBhJhgpwWZGxmzj4zZjbADC20uriQGDVYj7DDBTiPsGoUdxrEpgwlB/RG3Nzg9t+L2rk7YPeMI6nJjXn90atq/mcHB2TeXGbzwLBT3LqzsF0/Xk9meXo28V2MwWYdGTIPDRrV26IlEPmQwh2KbSrWOJxBrBwzyXq1WNyyRKbtlqlgijdjdlkmnZdKJrUQtEw67C7XZZ4xj46l0Gpxf/JBM58YRl2lsHM8dlMrfZfCCGZ6U96qlij6hWMIXCBHnbC5/WHp14Q+EYQvSJVV0SRXdMqV+9Mt5L1Y6PjfBE3zhE7FE9kQi75YphwxGlVrXxuVvZjKgcFBOpPPbeCmbO0pmXqSy+7t7rzYzBefs0rwvaHP6pqYX8Hx5J3eczOSTmXwGL8KTbottxorMxlO5518dpbKF9E4xlX2Zyr5MpvNvPYMXNzOF4/I3IBrfnXCE7DNrU66wwx390ubji7UypUnSM6LSjsuUJmnPCE+k7uweGjZNE4g0Kr0dauBANC7vsRKicYkkev9TBEYCEI1XVQORIXYdhdnCkVTV1LO5Pdv4IYgl9gxmlMESM1slE47gtDcu7dE/IDVQIFblZ2QSmUEgPqyqqVfprLDNV0tmkKEWiMZhMEUUKru
O0lRDpJksaIeo7351PZHUSKa2MViiqhoyoZryOaXFH4iBUAyHJ5dhG2aCfZPOoNMTQ9yrjUxRA1NYT2tjt8uo9PbWdvmcP60ZtN2peFBLbmawxHWU5oYmIQliV9wnKdRmKr39X3cIDU2COgrzAYlOgphEUmMlAbLaXGB3rxSO7yXShY10IZbciyTwZ5FseH0nksitp15EEngk8TySwGPJPc9SdMLu8S3F3N7VWV/I7VlFfeF5X9i3FF0Obc2gKw6334UuI64F39LavC/knF3eyx+A07Pv3Cg27faNWRDP4rOTr3+Ex+0qtU4zoJfIeg0mWKsblslUbnRxb/9YKJYqVGp5j1qrGzbC47IetehJ90o4voAFVWrdb/fVaPqfynr6lOoBrU5/XD4B5dPvA8GN7t4+qUIdjKTzxTPE7mYwWxnMNr5Q0int5fAEfKHEH4iuhJJjlqnBYTOzhaNQ9XN4Ak3/8OCwGfUtw5POBjpT3qvlCTrb+WKRuLuZzeV2iErHZXB+cVUqf1s4PC0enh6dXpSOz4uHp1ggNIcuBoLRQDCK+paCkcRR+dtS+fzk7PKr/cM51BeMxD2LWDqbOzm7PDn7Pp7aRr3+YCQeCEZXIwnPIuZZXE4kt69+/O8vleqMKWfD6BoAAAAASUVORK5CYII=" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Compared with AgentFi projects that have achieved clear product delivery such as <strong>Giza</strong> and <strong>Almanak</strong>, NOYA's DeFi Agent is currently still in a relatively early stage. However, NOYA's differentiation lies in its positioning and entry level: it enters the same execution and asset management narrative track with a fair launch valuation of about $10M FDV, possessing significant valuation discount and growth potential at the current stage.</p><ul><li><p><strong>NOYA:</strong> An AgentFi project encapsulating asset management centered on Omnichain Vault. Current delivery focus is on infrastructure layers like cross-chain execution and risk control. Upper-layer Agent execution, prediction market capabilities, and ZKML-related mechanisms are still in the development and verification stage.</p></li><li><p><strong>Giza:</strong> Can directly run asset management strategies (ARMA, Pulse). Currently has the highest AgentFi product completion.</p></li><li><p><strong>Almanak:</strong> Positioned as AI Quant for DeFi, outputting strategy and risk signals through models and quantitative frameworks. 
Mainly targets professional fund and strategy management needs, emphasizing methodological rigor and reproducibility of results.</p></li><li><p><strong>Theoriq:</strong> Centered on a multi-agent collaboration (Agent Swarms) strategy and execution framework, emphasizing scalable Agent collaboration systems and a medium-to-long-term infrastructure narrative, leaning toward foundational capability building.</p></li><li><p><strong>Infinit:</strong> An Agentic DeFi terminal oriented toward the execution layer. Through orchestration of "Intent → Multi-step on-chain operation" flows, it significantly lowers the execution threshold of complex DeFi operations, making product value directly perceptible to users.</p></li></ul><h2 id="h-viii-summary-business-engineering-and-risks" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. Summary: Business, Engineering and Risks</strong></h2><p><strong>Business Logic:</strong></p><p>NOYA is one of the few projects in the current market that stacks the narratives of AI Agent × Prediction Market × ZKML, and further combines them with the product direction of Intent-Driven Execution. On pricing, it launches at an FDV of approximately $10M, significantly below the $75M–$100M valuation range common among comparable AI / DeFAI / prediction projects, creating a structural valuation gap.</p><p>Design-wise, NOYA attempts to unify <strong>Strategy Execution (Vault / Agent)</strong> and <strong>Information Advantage (Prediction Market Intelligence)</strong> into the same execution framework, and establishes a value capture loop through protocol revenue return (fees → buyback &amp; burn). 
Although the project is still in its early stages, the combination of stacked narratives and a low starting valuation gives it a risk-return profile closer to a high-odds, asymmetric bet.</p><p><strong>Engineering Implementation:</strong></p><p>In terms of verifiable delivery, NOYA's core function currently live is Omnichain Vaults, providing cross-chain asset scheduling, yield strategy execution, and delayed settlement mechanisms. The engineering is relatively foundational. The Prediction Market Intelligence (Copilot), NOYA AI Agent, and ZKML-driven verifiable execution emphasized in its vision are still in development and have not yet formed a complete closed loop on mainnet. It is not a mature DeFAI platform at this stage.</p><p><strong>Potential Risks &amp; Key Focus Points:</strong></p><ol><li><p><strong>Delivery Uncertainty:</strong> The technological leap from "Basic Vault" to "All-round Agent" is huge. Watch for roadmap delays or a ZKML implementation that falls short of expectations.</p></li><li><p><strong>Potential System Risks:</strong> Including contract security, cross-chain bridge failures, and oracle disputes specific to prediction markets (such as ambiguous rules that make adjudication impossible). 
Any single point of failure could cause fund loss.</p></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/20566cd044aa34413d547a42fad4662d53f8efe8cf9860679a46955c8b3244ca.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGZUlEQVR4nEVVbVAT6R1/Op3pl+vYdhw7Y5WZDuWLqIfX1qu29Dg61Tstju31jlFbCIIVsPISBATuGpOQJUsCCTHhLREC8ublRDqMg1oO8JAMXEpyEGBzBPKezW42u5vdBJaIenaS3Fyf+X/4f3me3/N7ef4PwKkwToVRggrRYY7b4WIxLrZLRaLrLjfJsGx0CyVCIZreeb7LxWIoESIZhmRYLxrY4naoSJSObiULD5Esx0U5jovFWI5LHuvBcJDscCp8OCPjjT17MrOyzubk/Onc+ew/nko/cjT34qXDGRkZb/3yp/v3px85eur0mdS0tLyCgp8dTMnMygLge+lHjh5IOXgprwAAsHffvl+kpf1k7973zp7xYhjJRDwYEQcgmQgd4SZnZj8Riptkcp2+H4LlHdo7lTdqBUKhCIIEQuHbJ07k5V+WwrJyfpVAKIJlsqLi4kt5vJqbdbzCIoFQ2NyqULd36fT9mq7uu0PDOBUmwpFvGSD2DdNXyw6vLxhmXGjAhQaWrKuzRqPJbJk3LZqXlhD7ht3l2XC5XGjA4ce8aAAPEjTDejHsK+uKdQ0xmS2rtq89KObDQx4M33D5lqxIQiICUJHosOH+jZpahULR0qJoENyqra9Xt3c2y1skTc1iSKrT98PyVljeOms07rx65SPp6VXk6Soyt745b1kuvX69uVXZLG+9JRKLIammq5tfXVNWUSkQipx+1E9QIESHMYr9ArHdn1+YXUO42K7d5RHcusUrLLxaUjIwNIzYHR6MkDTBVbW1pdfLPhsbC0W2fy+BP19ZW193tKk1uh5938AALJOpNBq5Urnp8iJ255Ppp3SE8+EhgIfogcmpAqX6Wqe2Qtc7NPUFw7Kqzu6eHt09g0Gj083OzdFstOW2uk2t7u7ufDI9vbv78mST3OJ002w0GRAvFkjmkGTY5TVbaXlli0q1+803Tj8GfHioqFF5LK+iVKqqVWrz6qEHc0bFg4cVmoGPe4cfmRbp6Nbdiekfns89/s8bHYMjbSqV7Wt7mqjJ4vSsO9w6ff+wwWBdQ5pg+GpJsbanp29w5GTmO3BLC7Mdi3tAMuycBWlU67Nzy75/+AyvQvBowVQl6/pzLVQMqQbHHy0jtnYlfOBw6vHfHqq/ya+qq1uYM6YIReMWy4vYrqxVdlt92+H1ihrFJddKO7RaxO5QtCmfzc+TzJYbDYIASZIMG+W4jr77+deFa5seNEA0SmVDQwN3eu7o9P0uNFBwuQQkVlnZjY6uLrPJZFgwgTzeuzUNx955T9yq0bS3iyBIrlRKZc036+uaYFjU2GgyWzCKjsfUh5M+nCKZCMmwXGx30+EuLrmSX5B/5R9FlXz+07nZ3tHHAOwBPzqquTPYru2amJj4uLMXFPM/rBMAAH535oPBoUF+dbUEgkSQRAJJRI3impu1UzMzOBWOAwQIGiUoH07iVNiMbFTLui5W1PHqRUVCqFSurNLpWyYnxBPj3mAw9vwFGdl++fLVD3KvfP7f5RexXePi8qrNbl5dxgjcvuG0Wa0ms8W8tBSk4sIkUpQYRAGCTpKogjsnnhoNhnuNzbKicn4Jv1oqk86bTFZkE8VJlAiZEVt0awu8/7cvV9Z
JJmJds41PPBwbf3K1sgYc+jEovWZdWTUvr/oJ4v+jInl9H06Go5xY1Xd39LFE0anqv9f7eKp3YnL0P8/aeu8ff/fCrwpKU3IuvnX+UmoO79BHxZGteMw/HX2gH9K//5vizBOF8LAGZGen/6thpK9Pq9c7vD4fTn4r0XflRgl13wMA9uf8vUSrH7la3nDsdH7mR+U/z76Q9BkceDO3Blp3+ekIt7C4OD07u2JDAACnsv8K1/FTebngL+eCPv8SspaYHAkAlKCSKvkJGqdYkmE/HR0TS+UfXLj85tt/aL2tphmyTNL8xtlzIP3kRR7f6fSGo5yfoNxocM1mQ3xk7kFwYR8olw6AlF8fk0pev36NkjROheMxTTzFOEASA6dYxO6QQFDDJ/WwDNbqOianpmxOt9GyLJIqquoEiN2FbLoTu1iH1z88MgK1KJ491BdmpQKQCU5XTc3MGOe/DIYZZjtGMhHgJ0J+gvLHbQj54jbGGw+GezHc6fe7UZxkIjgVphM/D4oHSYb9jrQPD3kxHA/RWDiKb++gBPX8eczh9XowzGJdGTR8Zhj79/8A/LSJ7Sgs0qAAAAAASUVORK5CYII=" nextheight="813" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Disclaimer:</strong> This article was created with the assistance of AI tools such as ChatGPT-5.2, Gemini 3, and Claude Opus 4.5. The author has tried their best to proofread and ensure the information is true and accurate, but omissions are inevitable. Please understand. It should be specially noted that the crypto asset market generally has a divergence between project fundamentals and secondary market price performance. The content of this article is only for information integration and academic/research exchange, does not constitute any investment advice, and should not be considered as a recommendation to buy or sell any tokens.</p><p><br><br><br><br><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>ai</category>
            <category>predictionmarket</category>
            <category>agent</category>
            <category>polymarket</category>
            <category>noya.ai</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/f3c647dfedaf65d2956bae24eb7d61fb26363bf9caef8c5280d66dad75e88920.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Noya.ai Research Report: A Forward Look at Prediction Market Agents]]></title>
            <link>https://paragraph.com/@0xjacobzhao/noyaai-研报：预测市场智能体的前瞻</link>
            <guid>2B9Saj1tm6eu8dzKVqax</guid>
            <pubDate>Mon, 05 Jan 2026 04:35:39 GMT</pubDate>
            <description><![CDATA[By 2025, prediction markets had grown into an industry trend that can no longer be ignored. This report focuses on the emerging direction of Prediction Market Agents, systematically mapping the market landscape, product forms, and business models. Taking Noya.ai as a representative case, it examines how an AI Agent can integrate the full research → decision → execution pipeline, and assesses the long-term value and potential risks at the intersection of AgentFi and prediction markets.]]></description>
            <content:encoded><![CDATA[<p>在过往Crypto AI系列研报中我们持续强调的观点：当前加密领域最具实际应用价值的场景，主要集中在<strong>稳定币支付</strong>与<strong>DeFi</strong>，而Agent是AI产业面向用户的关键界面。因此，在Crypto与AI融合的趋势中，最具价值的两条路径分别是：短期内基于现有成熟<strong>DeFi协议</strong>（借贷、流动性挖矿等基础策略，以及Swap、Pendle PT、资金费率套利等高级策略）的<strong>AgentFi</strong>，以及中长期围绕稳定币结算、并依托ACP/AP2/x402/ERC-8004等协议的<strong>Agent Payment</strong>。</p><p><strong>预测市场</strong>在2025年已成为不容忽视的行业新趋势，其年度总交易量从2024年的约90亿美元激增至2025年的超过400亿美元，实现超过400%的年同比增长。这一显著增长由多重因素共同推动：宏观政治事件（如2024年美国大选）带来不确定性需求，基础设施与交易模式的成熟，以及监管环境出现破冰（Kalshi胜诉与Polymarket回归美国）。<strong>预测市场智能体(Prediction Market Agent)</strong>在2026年初呈现早期雏形，有望在未来一年成为智能体领域的新兴产品形态。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">一、<strong>预测市场：从下注工具到“全球真相层”</strong></h2><p>预测市场是一种围绕<strong>未来事件结果</strong>进行交易的金融机制，合约价格本质上反映了市场对事件发生概率的集体判断。其有效性源于<strong>群体智慧</strong>与<strong>经济激励</strong>的结合：在匿名、真金白银下注的环境中，分散信息被快速整合为按资金意愿加权的价格信号，从而显著降低噪音与虚假判断。</p><p>截至2025年底，预测市场已基本形成<strong> Polymarket</strong>与<strong>Kalshi </strong>&nbsp;双寡头主导的格局。据《福布斯》统计，2025年总交易量约达<strong>440亿美元</strong>，其中Polymarket贡献约<strong>215亿美元</strong>，Kalshi约为<strong>171亿美元</strong>。Kalshi凭借此前选举合约案的法律胜诉、在美国体育预测市场的合规先发优势，以及相对明确的监管预期，实现了快速扩张。目前，二者的发展路径已呈现清晰分化：</p><ul><li><p><strong>Polymarket </strong>采用“链下撮合、链上结算”的混合CLOB架构与去中心化结算机制，构建起全球化、非托管的高流动性市场，合规重返美国后形成“在岸+离岸”双轨运营结构；</p></li><li><p><strong>Kalshi </strong>融入传统金融体系，通过API接入主流零售券商，吸引华尔街做市商深度参与宏观与数据型合约交易，产品受制于传统监管流程，长尾需求与突发事件相对滞后。</p></li></ul><p>除Polymarket与Kalshi之外，预测市场领域具备竞争力的其他参与者主要沿着两条路径发展：</p><ul><li><p>一是<strong>合规分发路径</strong>，将事件合约嵌入券商或大型平台的现有账户体系，依靠渠道覆盖、清算能力与机构信任建立优势（例如Interactive Brokers与ForecastEx合作的ForecastTrader，以及FanDuel与CME合作的FanDuel Predicts）；</p></li><li><p>二是<strong>链上性能与资金效率路径</strong>，以Solana生态的永续合约DEX Drift为例，其在原有产品线基础上新增了预测市场模块B.E.T（prediction markets）。</p></li></ul><p>传统金融合规入口与加密原生性能优势这两类路径共同构成预测市场生态的多元竞争格局。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/e2cf036bf132130c05f32f99b53915577b18a1537bb528571cd12eb1de57a7d2.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGWklEQVR4nEWTW1CaBxqG/0k726vedNKZznbTdhprYtuZTG3TrNRYDwsIikEQBOUohSCKgkQxJqhR0CC1RkOiDR5iWUWJEYNEh/wekIRomhhPCJaf03I+SZrDdHa7F7vjpjM78158d8987zwvsP/it9jzl/5wPPH8VTT5vEUq5fB40g5ZW0c7v5bffKmlRiiQXJJwuGfPietrhIKU1JSc3G/hKOSUwXjuQmseClNUQirAExub27PhqMfWX17++z/Rl/9MvPo98epfocSvgC8c396DrHtO655zy+a4MTiEwZEzTmPQRbQcOIFA4pbTaugV57LheGwJi14hOpr6VVYuPiu74Hp3q7Sphk5A13BIlQzceQGrDIcAZ9TOrQe2J8u2p8t7m5ZANAZA3oBqdGxMO33iy6+pDLbl8Rqn+kfgPQ6QKgCOCYAjXAA4AgDHAeBDAEAAqYJDx4RACv/oJxU9eKCXCAwwABUbuFEBpANA9rtAN/Ygl9FAZz6gKAJ8PjfgDUW2du2aKR2FxhRflGzuWH8YHEsrIH7wccbRY6fffift2GdwADj8xqEPT2WSDv/5y7feOvrmn1LolDqlGH9TdrZDQOCXZbOLT1USvmlgwJVNFLmQsDmrtBkHrMYbwXAECMaSDrdnxbLK49d0fd/jDQQNS2amuPl0BuaLz3PykaVNTQoEogSLp1+UdPMFrZgz5VgcEwQf6VSdRk2P+oq4Q0S90dM6rJRphrrBmZFptTKwa4lDa377w1A0BoQSSeuewx0Iy+QKjXYqnNyf1i1JGkfO5AgxsLprfbNjE6ahYePt6bV+1ez4xMqDh67lFce8YXVGpTCpJYZu6t0eegIyvwhu/OpfT7ofJd2PIlZTxGrybi/F9pNAIBp3uD3jWu2Z4mIyhepwuwd/0pUyz+fBsUgUkUTmHU87lY8uFzXI0RhavbgLXLDpZtfvzJgLCkldbfUN1VQuA1+KyW6sLmeVFRQjYRaDOmQzeTdB17oxFEv8AZhfXGRzuXX1DW6/X3P7bnVtCxJBPJ2FOVNcQWXUXZD0Xrk62dyqvKkGwQXbvHHXaPy5CJUxoeoEpwb06p6pYYVWJTcbbno2Qd/2UnDXFLWZvdtL4UQC8Efjdqf77r2FEmIplcGyQ0793PL5pst5CCICQeLzW0nkqpwcHIUmaJepXgNmDVt6/UNkEU7aVAVOq5zrRptFv2XWebeXEtBa/JfVkG0lsLPoeQ0IRON2l0uru0OhMc/yqu1Ot0arqz93HoEoTUs7xauS4AlsPIHTLhucM+7MGXfuzD6Z0W8cVERiNtZxrnY0rC9OGMaVhvE+i2HUpB9+dG/MaplJQBbnBhhNxA8sev2BtPOysn8A8gdu37ot5lVwKpvLKfzvu9Vt7QPXBqYtax5wYXfOuAMu2MAFO2hcw+JKeCwynYCQCCsoODifha/lEJlkNJ+FbxF9t76ohdaNBwBfKOILRdqkslw4vIxKfby5OalbLCwTN0p626T9eMJZSauSRK6CwVAwWD4aTYJlok9+/bcf+tSVXHoxMuPjI4dPpH306SfvYxAZpZgcHr1YXE0V82lXOxoc62As+T+L7C6XzjDH4VUpevvcfl//yC00sZLBrMMTOPVihVwxSiBy29oHJifNmgmTZsI0Nr6in7nf3lgp5JIQWel4dBYWmZl58vjglebfXzpfBDcS0GrUZnZugOFE/P+app88yeXXekLBEc0dDI6Zm4fNzCzMR1NEDXIEgijtHH7w0GW6D5lWINN96J7x
CZuEkggZbAqmiokTV1OYZFSLiPX36zKtSh6yrSSg1cCuyR+OHli0C0H6eSOHV9XeKbdDjlszIJffxWDW1Yk6TnzxLYMlotCEfMGlwZF5aeewolvdJhuanAAvcnFKqYhDwzbW0OXNfCwCRirO+2t6KpOMNutHJ1Rdy7MjB4BANO7y+3WGOVRhIYPNDsQj1wd0f0khvXHonbRPYWzOhRphJ6dSQiBWNl3oI/xRHbf/mqaXl4v85vN33wbSP/toqKdZXE3RDnZdlYmmBuWjyku13+HHVfKDJUeSz9z+gB1yPt227u45vKHIU6tDP7c6/JNGNTJmd4YcnpjDE3X79h2uiNu37/LGXb79Pcc/tn5eAkHDpFaj1Y6H/c5niWA05IlHfAGf8/URiwUjyWf/BbZdl31XHryCAAAAAElFTkSuQmCC" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>预测市场表面上与赌博相似，本质上也是一种零和博弈，但二者的核心区别并不在于形式，而在于是否具有正外部性：通过真金白银的交易聚合分散信息，对现实事件进行公共定价，形成有价值的信号层。尽管存在娱乐化参与等局限，但其趋势正从博弈转向“全球真相层”——随着CME、彭博等机构的接入，事件概率已成为可被金融与企业系统直接调用的决策元数据，提供更及时、可量化的市场化真相。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>二、预测智能体：架构设计、商业模式与策略分析</strong></h2><p>当下<strong>预测市场智能体(Prediction Market Agent)</strong>正在进入早期实践阶段，其价值不在于“AI 预测更准”，而在于放大预测市场中的<strong>信息处理与执行效率</strong>。预测市场本质是信息聚合机制，价格反映对事件概率的集体判断；现实中的市场低效源于信息不对称、流动性与注意力约束。预测市场智能体 的合理定位是<strong>可执行的概率资产管理（Executable Probabilistic Portfolio Management）</strong>：将新闻、规则文本与链上数据转化为可验证的定价偏差，以更快、更纪律化、低成本的方式执行策略，并通过跨平台套利与组合风控捕获结构性机会。</p><p>理想的<strong>预测市场智能体</strong> 可抽象为<strong>四层架构</strong>：</p><ul><li><p><strong>信息层</strong>汇集新闻、社交、链上与官方数据；</p></li><li><p><strong>分析层</strong>以 LLM 与 ML 识别错价并计算 Edge；</p></li><li><p><strong>策略层</strong>通过凯利公式、分批建仓与风控将 Edge 转化为仓位；</p></li><li><p><strong>执行层</strong>完成多市场下单、滑点与 Gas 优化与套利执行，形成高效自动化闭环。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/20e95fe6e9066606c17e614ef0cd769c0b5a7ea05f4f8555781c910ce1c444b6.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGFklEQVR4nDWUe0zb1xXHz5+Tpv3RTcrWRYq6Sls1qdNWrZ20JlO6qY2SVFukZtlD2yK1UmhD84BkCyQkwSEUSDIDqR/xExsMfvFwMMbEcnn5gY2NHRtsbIxtjOMHYGzHr9/v55/9406/VDv66iOdq6t7dO659wsIoUg4LB6QcgQD8jGNfEzDFw2Oa7R8kbSH+bVqbEI8IJUODokGpCw2p/9r1oTmqWRwiM19wuXxlSq1eEAqFIrFA9IhmYzL4wtEYoFIolCNPnj4IBAIIEQBQgflCuZemrdpRMGF8bVZldcg3zQ/DSxo7FpxwGGMBjxhv3PdY/O7rX730s5WMLLuolOPLbTmTEbXtjc80XXXps8R9C5vb3gj666tgNvnXSkWixRFAoGX6SYUrVOnQf830J+FmXMwfQb0n8D0n+A54133M7GGe3dK2JlzjeZcav8M3yh7OM65u2NT5FbURln3qp7v1XKfT7I3v5F4dZzYoiyo55RirgOECKICVQw7QMj9+Lz4V6D4GBQnQX4CZMdBeQKkx2D+4qFxfieXcel24z+mBYzsitI18bi39fPWhr/4p9h71qGHLQ1qVptecG+KT6v/9sUZUaeaeb0YttUQwvESkFWihlA+YIwOfhZTX44pv3ylK9vqK1uyC3smbj7u3AuZ00HTfsQWdk5tOrT5mD0dXMhHl0pxB5Z0VpJOLLmCJVcqcXshZsMSKy+j5nImWqvVcKIEBIEjVAuuxoe5FhXPphLYVTy7SmhX8WxKgd08s56IZOLhTDySScZy1nn30rw7tZV9EaYXo8HUVjCdjOS3Q3vx8H48nElEaG6sJnZT+3WKwPEKVMo4QojxL8734ZdvwunD8OER+PB1+OAncPKH8N6pH1wdEz5jND1qu9zV08oyDC+P8gzM27zH7aKv/t3f3y6cllgf3uRohHMS5oSK+0z4QKF+MiPv1a3bowhRlUoRMAxDCN05z/ku/OIwfegHh2gd/xF89Br8+g+vfSl/MtX6RWfbpS5Wh9QgWxnu13Vc72c0M/k9aqPcrRWbHjOkEqbm0W2e+NE4p0umES0KvlK7zUFEz6ACJEnUqVo6tLus9C4rvc5Rn0Ptc46ufcuwPZZLFXKJYj5VLO/iLrN3ZdFTSFeyiWIuUcy+yGcThUIKe7WhnEsUc6lSNlHYCe/nM4V6nSQIDHB6BkgVmv35xJVjuhtHdTd+q2s5qrtxTNfyzmTzLafInlqdiztmY/a5mEOzNqvxzy7EHMaofSHpWky6F5IuU8plzXgtux7TrseS9liS7m+27Rv7sYN6HScwwEj6ik6OXIMzAI1HoOHH0PA6fPodaHwD/grQ+R5Tw23qa2nhMu6MPBK6J5gG4QXG5Yb7TWzr8EjEIHSP3RTd7zUIP7vb2D3J6VD0PnjKvinvsSQ9CKEqSUCpUkIIndd2wacA134GTT+Fq29C0xtw7S1o+B6wzw5Z1Hek3fdG/itb1w8HZrgm+R1RVwuvQ2hTyzefSd2TTK2g3yDpGmVxTXLOonzAMd63MGhNeNABwoky/UzJajVX3vfu+Ff3gr5MyJcJ+b/l/mY0F08XM3tYYQ8r5GoV83PbvMP6kiIyRDGZ341nU4n8zi72cqeS3cNyyVImXdx/8XInlkvmClmSrGJ4BcqVCkKoXWWFE51wrhfO9sEnTJrn+uBUz7v3J9UWP19n4+uW+lUGrtYsnfMJpm0cjUm+FNJ400pbWGL0KJY2FdaNYXNgeM6nsm1K5nzurR2EahiGQalCe9H7txRw6O9wvBneaYTfXIK3v4CjV+HIeTjTe0+q/eeN7gvtrGaW6p5igSGf/+OF//z5cnu3xiG0pzrGLJ/ff3JLrLvOGmmT6C928RkyYzNv0uiPI3RAFyi/KtCqM
MPvbsHpTjjVCSc7aJ7ugt+3H25Tys2+vvE5rt4+urqrfJ4UGN29agP7qWnYFhl2bA1ZQ5L5NZklIF30Sxb90kWfzBJgG9zLkdSrf4BDvV6jqDpFkYiq0UI1hCj0/6jX6yRJ1imKNhY6sFqtRqdUHScInCCqZJWs1aokSdLOQ2AEjuFYuVIkCJyi6iRZ/R+xiHW5VLJSdAAAAABJRU5ErkJggg==" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>预测市场智能体的<strong>理想的商业模式设计</strong>在不同层级有不同方向的探索空间：</p><ul><li><p>底层<strong>Infrastructure 层</strong>，提供多源实时数据聚合、Smart Money 地址库、统一的预测市场执行引擎与回测工具，向 B2B/B2D 收费，获取与预测准确率无关的稳定收入；</p></li><li><p>中间<strong>Strategy 层</strong>，以开源或 Token-Gated 方式沉淀模块化策略组件与社区贡献策略，形成可组合的策略生态并实现价值捕获；</p></li><li><p>顶层<strong>Agent 层</strong>，通过受托管理的 Vault 直接跑实盘，以透明链上记录和 20–30% 的绩效费（叠加少量管理费）兑现能力。</p></li></ul><p>理想的预测市场智能体 Agent 更接近一个“<strong>AI 驱动的概率型资管产品</strong>”，通过长期纪律化执行与跨市场错价博弈，而非依赖单次预测准确率来获取收益。而“基础设施变现 + 生态扩展 + 业绩参与”的多元收入结构设计的核心逻辑在于：即便 Alpha 随市场成熟而收敛，执行、风控与结算等底层能力仍具长期价值，可降低对单一“AI 持续战胜市场”假设的依赖。</p><h4 id="h-" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>预测市场智能体策略分析：</strong></h4><p>理论上，Agent 具备高速、全天候与去情绪化执行优势，但在预测市场中往往难以转化为持续 Alpha，其有效应用主要局限于特定结构，如自动化做市、跨平台错价捕捉及长尾事件的信息整合，这些机会稀缺且受流动性与资本约束。</p><ol><li><p><strong>市场选择：</strong>并非所有预测市场都具备可交易价值，参与价值取决于结算清晰度、流动性质量、信息优势、时间结构与操纵风险五个维度。建议优先关注<strong>新市场的早期阶段</strong>、<strong>专业玩家少的长尾事件</strong>以及<strong>时区差异导致的短暂定价窗口</strong>；避免高热度政治事件、主观结算市场与极低流动性品种。</p></li><li><p><strong>下单策略：</strong>采用严格的系统化仓位管理。入场前提是自身概率判断显著高于市场隐含概率，并依据<strong>分数化凯利公式</strong>（通常为1/10–1/4 Kelly）确定仓位，单事件风险敞口不超过15%，以在长期实现<strong>风险可控、回撤可承受、优势可复利</strong>的稳健增长。</p></li><li><p><strong>套利策略：</strong>预测市场中的套利主要体现为四类：<strong>跨平台价差</strong>（需警惕结算差异）、<strong>Dutch Book套利</strong>（确定性高但流动性要求严）、<strong>结算套利</strong>（依赖执行速度）及<strong>关联资产对冲</strong>（受结构错配限制）。实践关键不在于发现价差，而在于严格对齐合约定义与结算标准，避免因规则细微差异导致的伪套利。</p></li><li><p><strong>聪明钱跟单：</strong>链上“聪明钱”信号因滞后性、诱导风险与样本问题，不宜作为主策略。更合理的用法是作为置信度调节因子，用于辅助基于信息与定价偏差的核心判断。</p></li></ol><h2 id="h-noyaai" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 
first:!mb-0"><strong>三、</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong>：从情报到行动的智能体网络</strong></h2><p>作为预测市场智能体的早期探索，NOYA 的核心理念是 <strong>“Intelligence That Acts（让情报直接行动）”</strong>。在链上市场中，单纯的分析与洞察并不足以创造价值——尽管仪表盘、数据分析和研究工具能够帮助用户理解“可能发生什么”，但从洞察到执行之间仍存在大量人工操作、跨链摩擦与执行风险。NOYA 正是基于这一痛点构建：将专业投资流程中“<strong>研究 → 形成判断 → 执行 → 持续监控</strong>”的完整链路，压缩进一个统一系统，使情报能够直接转化为链上行动。</p><p>NOYA 通过整合三大核心层级实现这一目标：</p><ul><li><p><strong>情报层 (Intelligence)：</strong> 聚合市场数据、代币分析和预测市场信号。</p></li><li><p><strong>抽象层 (Abstraction)：</strong> 隐藏复杂的跨链路由，用户只需表达意图（Intent）。</p></li><li><p><strong>执行层 (Execution)：</strong> AI Agent 根据用户授权，跨链、跨协议执行操作。</p></li></ul><p>在产品形态上，NOYA 支持被动收益型用户、主动交易者以及预测市场参与者等不同参与方式，并通过 <strong>Omnichain Execution、AI Agents &amp; Intents、Vault Abstraction</strong> 等设计，将多链流动性管理、复杂策略执行与风险控制模块化、自动化。</p><p>整体系统形成一个持续闭环：<strong>Intelligence → Intent → Execution → Monitoring</strong>，在确保用户始终掌握资产控制权的前提下，实现从洞察到执行的高效、可验证与低摩擦转化。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>产品模块</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>功能描述</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心价值</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Intelligence</strong></p><p style="text-align: center"><strong>（情报层）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>NOYA Intelligence</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">基于基本面、链上数据、叙事与风险因子的机构级研究系统</p></td><td colspan="1" rowspan="1"><p style="text-align: center">将复杂研究压缩为可执行的 Alpha 线索，为资金决策提供结构化输入</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Intelligence</strong></p><p style="text-align: center"><strong>（情报层）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prediction Market Intelligence Copilot</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">针对预测市场的概率分析、EV 计算、Smart Wallet 行为与资金流追踪</p></td><td colspan="1" rowspan="1"><p style="text-align: center">识别赔率错配与结构性机会，为预测市场交易提供信息优势</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>抽象层 (Abstraction)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>NOYA AI Agent</strong></p><p style="text-align: center"><strong>（Voice + Text）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">接收语音 / 文本形式的 Intent，并编排跨链、跨协议的链上执行</p></td><td colspan="1" rowspan="1"><p style="text-align: center">将“人类意图”直接转化为链上动作，是执行层的统一入口与协调器</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Execution</strong></p><p style="text-align: center"><strong>（执行层）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Omnichain Vaults</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">覆盖多链、多协议的风险调整型金库，由 Agent 调度与管理</p></td><td colspan="1" rowspan="1"><p style="text-align: center">为 Agent 提供可规模化调度的资金池，实现持续系统化收益</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Execution</strong></p><p style="text-align: center"><strong>（执行层）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prediction Market Execution</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">在 Polymarket 等预测市场中进行下单、调仓与策略执行</p></td><td colspan="1" rowspan="1"><p style="text-align: center">将概率判断转化为真实仓位，完成从分析到结果的闭环</p></td></tr></tbody></table><p><br></p><h2 id="h-noyaai" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>四、</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" 
class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong> 的产品体系与演进路径</strong></h2><h3 id="h-noya-omnichain-vaults" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>核心基石：Noya Omnichain Vaults</strong></h3><p>Omnivaults 是 NOYA 的资本部署层，提供<strong>跨链、风险可控的自动化收益策略</strong>。用户通过简单的存取操作，将资产交由系统在多链、多协议中持续运行，无需手动调仓或盯盘，核心目标是实现<strong>稳定的风险调整后收益</strong>而非短期投机。</p><p>Omnivaults 覆盖标准收益与循环（Loop）等策略，按资产与风险等级清晰划分，并支持可选的绑定激励机制。在执行层面，系统自动完成跨链路由与优化，并可引入 <strong>ZKML</strong> 对策略决策进行可验证证明，增强自动化资管的透明度与可信度。整体设计以模块化和可组合为核心，支持未来接入更多资产类型与策略形态。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0c3329e2f1e72f713514b8e3f52e542e09230b968553d797a417873773546979.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAdCAIAAABE/PnQAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGbklEQVR4nLWWe1QUVRzHb6n4yF4eUwGNfCQQZhYZmoDiY1V84AMhPAjKwwPyOJIKKBAuomToprFitsoSmitYPpBIW1222AGdYbeZaZw7sHOHZQdcF1fK6Jz+8J8OzDnjxqHQP5rzO/fcmXPv73Pv/c7v/n6AgoiCyMLyJOwz6ZWCiKAgJ4inzlav2bxtZ55ya9quxIxdO/OUG+KTo5LSopLStuzYuWrzthMV5zhBJChopjmCgpJZWCsF0S9MXwskX81mWm4lw1poq/1eVn4xAKMDQxUeXm/4BYUELV75ove0CdP9X387MGD+QgBGJWflik6XmeZoDskmwXCKfQKQFz7AkOjoedTb/duj1o6uC/UN14x3bt4mL9Q3UG1Cz6Penke9otNVU3dLpdFVXdGfvvhD1RW9SnO+vgFjkf02eZeE/BAAC8vfoVhOEA+XV0WmK32DI/xDIiLTlcqyCk4Qm8wMzaGiMm1wZIpP4OIpgUumvr9kVUL2MW0NEh1YCz00gOpnINGh0pxPylMFLo16/d2w+OzDB9RaJDqI/pNRllXMUcSAEZ7DJs4EL05ZlZCt0uieAUBBRHNCk5nJKjyyPDYtY3/pvk/LbmIEi+wWlmeRvabupqqi+uT5azsKSqvrjQfVldIR4RT7tAAKItHp0uguL9ywJXlXQc01PRId7viu7ocnKnXZB46IThfNCTQnSG7Nv7Y9FYCgOdv9BwXFnwWFLgtfH12iUksAyREF0eYtCQAAb59p8penBVj6BYCoA6dYje5ySq4yPCbxmOZrM80ZMMKAEY04aabhUbVmvM+MKt1FA0bI3qTOEABOELUXrhYf+7JN6PxcczY1vyQqLS8p5wAJ+8CcINIcMmDE3lL1rMXrurof/oSTjTgpzf0vDVhklyJZe/Gqf2j41sxca7tYcPQU6H/GePsHRyYoohM/St29NWvvvFVbABjuF7Lmm6t6mkO3MHwIAIvs5777nrG29/T+GR6bXG8wtXc6cIo9XF4FxngDADxnh67ZlnHdiJkIps5gSsk9FLY2Ni6zoFZvIiFvwAgZYKa5JwAS8haWJyhou/9gwfIIAEDIinXp+QcpiK4bsb5YO1Gx
//ipgqPlXoGhAIybvThCXVlNULCr+6HodDG8TXI1+BGRkC/4rDxoRdTa+Ey9ybxg2RoAQL7yoOh0UbDvlJHo2J5T2IiTKbnKcq1ufULm+oS0N4MVMenZeYePX9ObRKerL/T+DUBBFJ2aCwDw8PLT6K7Omr8IALArJ19wdMuAuSs2YS10abk2Jj37eoMpbGO8trp2fULax0Wl6fkH4zJzNLpLnCC6H9E/AIamX4w4pTeZLSwfn7EnK7fA2NQiD807pALg5RK1JqdYBUZ6fXX+0pT3Fi1QbFy4IQ6M8EzP2b8kKnH0W8GXbzTcJplBNJCCnuYEFtlZZI/L2KM33ZZuG6p/6Pbd+WDY+KWRceGxyR6eMxTRiYpN8WDYxI9Sd4/19h092Tclr2RzRmFx2WmyPxkMIrL7v1+i1twwNrPILoWrsanFwlpZZG/t6EKig+Ft0jpaO7poDrV2dFnt9wgKNpmZARoMDpAY7q9YC30LwzGCxAiyQvfdDSOGEeQtDG/ESZy6W6c3nqrS4RTbbKYacRJroYcOtEGuIwreNOHontNn3jJ1ZbW0uZ9wkra2n6mpm7pgJbR1Ypa7jQQlT3kGAIvsdcYmRUzqRN+g0VMCwiKTvjc0i05XoeokeH6ih5cfAK8+N2E6GDP5k6Mn5d0/LcDC9lUCdQbsnWVRwz19Pbz8wyKTvv3BKDpdxWWnwQveL/sEAPCKh+eMkZP9C1VfugP+VYMBy//r8eMS9ZmUfUWuP3rLtNVzV27q6n4oTSSYNsHRvX1vUa2xGdo67/Qn+mfYAc0Jp3W1B459NS5gftjGeF3tj7EZ+8b6z9d+Wy9fMH89fjx76QZ1Zc39nt8HyDZ0PmCR/ZD6zEtTA6bN+XBaYMj2nKLo1Jy3Q1fv/+IMCXmCaXsnLGL8jEAwaebznn6zQlZfvvGzpP8zaMDwNqv9nmQ0JzC8jeFtJORpTrCw/PS5CjB2MhjlBUZMes33A3fAk4Qj1VtyZSddq5K2JOQJt5JNnimNN9McCXmGt0FbJ7R1MtZ2yYnkx0z3lXvgVxZZmFacYmWTijLZqXt/gEnjm8203MrfSchLvxNwT9P/h/0NCAKprJcGf8YAAAAASUVORK5CYII=" nextheight="919" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>NOYA&nbsp; <strong>Vault（金库）</strong>的技术架构：各金库通过 <strong>Registry</strong> 统一注册与管理，<strong>AccountingManager</strong> 负责用户份额（ERC-20）与净值定价；底层通过模块化 <strong>Connectors</strong> 对接 Aave、Uniswap 等协议并计算跨协议 TVL，依赖 <strong>Value Oracle</strong>（Chainlink + Uniswap v3 TWAP）完成价格路由与估值；交易与跨链由 <strong>Swap Handler（LiFi）</strong> 执行；最终，策略执行由 <strong>Keeper 多签</strong> 触发，形成可组合、可审计的执行闭环。</p><h3 id="h-alpha-prediction-market-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>未来 Alpha：预测市场智能体 (Prediction Market Agent)</strong></h3><p>NOYA 最具想象空间的模块：情报层持续追踪链上资金行为与链下叙事变化，识别新闻冲击、情绪波动与赔率错配；当在 Polymarket 等预测市场发现概率偏差时，执行层 AI Agent 可在用户授权下调动金库资金进行套利与调仓。同时，Token Intelligence 与 Prediction Market Copilot 
为用户提供结构化代币与预测市场分析，将外部信息直接转化为可执行的交易决策。</p><h4 id="h-prediction-market-intelligence-copilot" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>预测市场智能决策助理（Prediction Market Intelligence Copilot)</strong></h4><p>NOYA致力于将预测市场从单一事件下注升级为<strong>可系统管理的概率资产</strong>。其核心模块通过整合市场隐含概率、流动性结构、历史结算与链上聪明钱行为等多元数据，运用期望值（EV）与情景分析识别定价偏差，并重点追踪高胜率钱包的仓位信号以区分信息交易与市场噪音。基于此，Copilot 支持跨市场、跨事件的关联分析，并将实时信号传递至AI Agent，驱动开仓、调仓等自动化执行，实现预测市场的组合管理与动态优化。</p><p><strong>核心策略机制包括：</strong></p><ul><li><p><strong>多源 Edge 信息捕获（Multi-source Edge Sourcing）</strong>：融合 Polymarket 实时赔率、民调数据、私有与外部信息流，对事件隐含概率进行交叉验证，系统性挖掘尚未被充分定价的信息优势。</p></li><li><p><strong>跨市场与跨事件套利（Prediction Market Arbitrage）</strong>：基于不同市场、不同合约结构或相近事件间的定价差异，构建概率与结构性套利策略，在控制方向性风险的前提下捕获赔率收敛收益。</p></li><li><p><strong>赔率驱动的动态仓位管理（Auto-adjust Positions）</strong>：当赔率因信息、资金或情绪变化显著偏移时，由 AI Agent 自动调整仓位规模与方向，实现预测市场中的持续优化，而非一次性下注。</p></li></ul><h4 id="h-noya-noya-intelligence-token-reports" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NOYA 智能代币情报报告：（NOYA Intelligence Token Reports）&nbsp;</strong></h4><p>&nbsp;NOYA 的机构级研究与决策中枢，目标在于将专业加密投研流程自动化，并直接输出可用于真实资产配置的决策级信号。该模块以标准化报告结构呈现明确的投资立场、综合评分、核心逻辑、关键催化剂与风险提示，并结合实时市场与链上数据持续更新。与传统研究工具不同，NOYA 的情报并不止步于静态分析，而是可通过 AI Agent 以自然语言调用、对比与追问，并被直接输送至执行层，驱动后续的跨链交易、资金配置与组合管理，从而形成“研究—决策—执行”一体化闭环，使 Intelligence 成为自动化资本运作体系中的主动信号源。</p><h4 id="h-noya-ai-agent" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NOYA AI Agent (语音与自然语言驱动)</strong></h4><p>NOYA AI Agent 是平台的执行层，核心作用是将<strong>用户意图与市场情报直接转化为经授权的链上行动</strong>。用户可通过文本或语音表达目标，Agent 负责规划并执行跨链、跨协议的操作，将研究与执行压缩为一个连续流程。<strong> </strong>是 NOYA 降低 DeFi 与预测市场操作门槛的关键产品形态</p><p>用户无需理解底层链路、协议或交易路径，仅需通过自然语言或语音表达目标，即可触发 AI Agent 自动规划并执行多步链上操作，实现“意图即执行”。在全程用户签名与非托管前提下，Agent 按“意图理解 → 行动规划 → 用户确认 → 链上执行 → 结果监控”的闭环运行，不替代决策，仅负责高效落地执行，显著降低复杂金融操作的摩擦与门槛。</p><h4 id="h-zkml-verifiable-execution" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>信任护城河：ZKML 可信执行（Verifiable 
Execution）</strong></h4><p><strong>可信执行</strong>旨在构建策略、决策与执行的全流程可验证闭环。NOYA引入ZKML作为降低信任假设的关键机制：策略在链下计算，并生成可验证证明，链上验证通过后方可触发相应资金操作。该机制可在不泄露模型细节的前提下，为策略输出提供可信性，并支持可验证回测等衍生能力。目前相关模块在公开文档中仍标注为“开发中”，工程细节仍有待后续披露与验证。</p><h4 id="h-6" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>未来 6 个月产品路线图</strong></h4><ul><li><p><strong>预测市场高级订单能力</strong>：提升策略表达与执行精度，支撑 Agent 化交易。</p></li><li><p><strong>扩展至多预测市场</strong>：在 Polymarket 之外接入更多平台，扩大事件覆盖与流动性。</p></li><li><p><strong>多源 Edge 信息采集</strong>：与盘口赔率交叉验证，系统性捕获未充分定价的概率偏差。</p></li><li><p><strong>更清晰的代币信号与高阶报告</strong>：输出可直接驱动执行的交易信号与深度链上分析。</p></li><li><p><strong>更高级的链上 DeFi 策略组合</strong>：上线复杂策略结构，提升资金效率、收益与可扩展性。</p></li></ul><h2 id="h-noyaai" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>五、</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a><strong>的生态增长与激励体系</strong></h2><p>目前 Omnichain Vaults 处于生态发展的早期阶段，其跨链执行与多策略框架已通过验证。</p><ul><li><p><strong>策略与覆盖：</strong> 平台已集成 Aave、Morpho 等主流 DeFi 协议，支持稳定币、ETH 及其衍生资产的跨链调配，并初步构建了分层风险策略（如基础收益 vs. 
Loop 策略）。</p></li><li><p><strong>发展阶段：</strong> 当前 TVL 体量有限，核心目标在于<strong>功能验证（MVP）与风控框架打磨</strong>，架构设计有较强的可组合性，为后续引入复杂资产及高级 Agent 调度预留接口。</p></li></ul><h4 id="h-kaito-space-race" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>激励体系：Kaito 联动与 Space Race 双轮驱动</strong></h4><p>NOYA 构建了一套以“真实贡献”为锚点，深度绑定内容叙事与流动性的增长飞轮。</p><ol><li><p><strong>生态合作（Kaito Yaps）：</strong>NOYA 以“AI × DeFi × Agent”的复合叙事登陆 Kaito Leaderboards，配置 <strong>总供应量 5% 的无锁仓激励池</strong>，并额外预留 <strong>1% 用于 Kaito 生态</strong>。其机制将内容创作（Yaps）与 Vault 存入、Bond 锁定深度绑定，用户周度贡献转化为决定等级与倍率的 Stars，从而在激励层面同步强化叙事共识与资金长期黏性。</p></li><li><p><strong>增长引擎（Space Race）：</strong>Space Race 构成 NOYA 的核心增长飞轮，通过以 Stars 作为长期权益凭证，替代传统“资金规模优先”的空投模式。该机制将 <strong>Bond 锁仓加成、双向 10% 推荐激励与内容传播</strong>统一纳入周度 Points 体系，筛选出高参与度、强共识的长期用户，持续优化社区结构与代币分布。</p></li><li><p><strong>社区建设（Ambassador）：</strong>NOYA 采用邀请制大使计划，向合格参与者提供社区轮参与资格及<strong>基于实际贡献的绩效返佣（最高 10%）</strong>。</p></li></ol><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://目前Noya.ai">目前Noya.ai</a>积累超 3,000 名链上用户，X 平台粉丝突破 4.1 万，位列 Kaito Mindshare 榜单前五。这表明 NOYA 在预测市场与 Agent 赛道中已占据了有利的注意力生态位。</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://此外Noya.ai">此外Noya.ai</a>核心合约通过 Code4rena 与 Hacken 双重审计，并接入 Hacken Extractor。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>六、代币经济模型设计及治理</strong></h2><p>NOYA 采用<strong>单代币（Single-token）</strong>生态模型，以<strong> $NOYA</strong> 作为唯一的价值承载与治理载体。</p><p>NOYA 采用<strong>回购销毁（Buyback &amp; Burn） </strong>价值捕获机制，协议层在<strong> AI Agent</strong>、<strong>Omnivaults </strong>与<strong>预测市场</strong>等产品中产生的价值，通过<strong>质押、治理、访问权限</strong>及<strong>回购销毁</strong>等机制实现价值承接<strong>，</strong>形成 <strong>使用 → 收费 → 回购</strong>价值闭环，将平台使用度转化为代币长期价值。</p><p>项目以 <strong>Fair Launch</strong> 为核心原则，未引入天使轮或 VC 投资，而是通过<strong>低估值（$10M FDV）</strong>的<strong>公开社区轮（Launch-Raise）</strong>、Space Race 
与空投完成分发，刻意为社区保留非对称上行空间，使筹码结构更偏向活跃用户与长期参与者；团队激励主要来自长期锁定的代币份额。</p><p><strong>代币分配 (Distribution)</strong></p><ul><li><p><strong>总供应量： 10 亿 (1,000,000,000) NOYA&nbsp;</strong></p></li><li><p><strong>初始流通量 (Low Float)： 约 12%&nbsp;</strong></p></li><li><p><strong>估值与融资 (The Raise)：融资额：100万美金；估值 (FDV)： 1000万美金&nbsp;</strong></p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/53182bf48f8f02fe46732ee22a19cf47c157a115c2d34246d9e3c1943bce4195.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGBUlEQVR4nD1UW28a6Rmediv1Zu8qpVLSbKu9aFVpe9xKyapXvWh703/RSr2o1GqdtFlbUdbehNjrxImNARuDOWQcMAMzMMMw9mAONgZzNsM5EE4GBjzDzHgmQHxQt6rGlld6Lp7n/d73k77ne/QCPUFoM2ybYXo83xmwfUGgOa7DDnocT3MczfEddkBzAs0OuixHs0KXFbrMJVhOBsPTrHANWV5Cbru6Aehx3Mrqqkqt0qjVE/cmwA3w6ezTiXsTRrPpyZPH0zPT0zPTvCSdDIffQrjEyXD8rbw+Gl8XL8loxPA80OozvkjMuxtFt4MQto37Qu6dEEYG/OEYRgYxMugig7FMMZouhRPZSCoXSedjmWIsU4ykcuFU7iCVj1MyD8Wz4VQ+lKBCiWxYltQBVWz2jgGICALALeDmnR/96s/A7c+AW3eAG78FgNsA8LMPProL3L4ryw8/+d5Hd2798k/f//j3H9z8FPjwE+C7P/3Bz/9w6xd/BG78GgB+8p3bdz/+3V+Am3fl8R/+BrjxqUyAH4MICRTrrdkVcFG3qTZDf//84azKsGyAFKpXSt3mssE2/bXm5ZpFZXYsGxwvtBuKRb3SACkN0KLBNq82LayAajOsWDTcfzj713/852//nJyYUkxMKabnNf968NWkQlmoNoCBJI7OT0+/+S/DcS4UK5RKZ//75v3F+ej89GQ0LFUqnHgyujh7d3baY5las/nu7FQ6fT+6OD/q9zu949HF+cno3TE3aNHd/UgYhmEIsrM832MZfihyogRUj44MGzaNzqiYW/h84sHExIMljW51/ZXVjoKbyKJq1QBaN2Hstd21ZgQ1WoPF5oQcmNWOrq6Da+vgazsKOT1Kte7rF8vPXqqezi3MPJ57+OirL6YegVakXG8BXXYQT1HeQMgf3DOYQDe+Rfp3A6GDw2I5mSv4g+FYOneYf0MVytF4ejd0kM4Xs4UyVSgHQgeBcJS65NEU5QvsoThBkDvkTtAb2A1Fokmq0OozACOK3FB6d3ZRb3UgOxIIHUjvz3lpPBBOGOGkUK72eZGVJE4cHvWOy7UGJ40YXmTFYbZUofJFVpQY4YSTRsVqDfV4EhkqW6q8rbf44ZgRxT7PA2/bXYsNNoGbX04rJqe+vHd/UmcEQasDcuIQ7NbojKDVAWME5MTNFseKTrYOdsrSBG6aNmyQ0wMjuA1x681WldawrNEqZp89e7FsgZwQ7H5TawLtPrMfSxpNG88XFlUarV5vWlpS4YQ3kcmH42l827cXicdTVDR5GAhHt3cC4Vg6lszEk1euhqMpKpbMRBOHXn8IxQgUJ6w2GHFi4Vg6HE836T
7ASmKuXN0iScgOewgv7iFcKHoQjzO8wA/H2VKpy7LCcMxL43afKVarvDTmREkYjertdr1DC6PxQBKF0bhca8RTmWSG8gYCocjBpfNjhhOAeoeGYDeM4BsWx/MF5fMFJYzgVpvT5fGixI4FQmBsCycDGOGDUcJqd7k8XnzbjxE+q91pgZz4lh8n/S6cdOHka6vdYNrQ6o0WCMEIn8vjrbbawJtGW63RK9W6+efKx0/m5ucXF5dW5p8tgVaHDXGv6IygxeFAiU3EbbZAa+tmq90FOXEE9+qMoPGVFca2IASHYDm4Wr35pVIz8+Tp2vor2ElYIOfbegtgRbFJ99/UmqR3R6fXr2q1xUqt0aaPOXlZ5krVDjPoc0KfExqdXultneFFhhMS6cN0LhcKR4rVGiuKjXY3nkr7ArvbXh+VK/r8u8VqTV7MrLxN+Wjy0HX5Pw4YQZwYihO+wH6hWs+Wq1dJL1TrhUotSeX2IvF8pV6s1qPJw3A85Q3uJTLZUq1JFcr7sSTp3yNI/34sQfqD8Uw2X6kd9Rig0mpb7ahNfqZ75vGc3miB0W3ZBwQHLdAXU49Uq3oHStgQt2pVf//fk2YLBDnlXFphDHLimzAGwe6rUMpeXVbsKLEBOX2B/aPeMdBl2UabbrS79VanKROZN9rdVptuduhWp9fsyKR5LesdutHpXbXV2nS93Wt1utdTdK0lV65kp8d0Gfb/hZ/vNzNxvWkAAAAASUVORK5CYII=" nextheight="565" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>七、预测智能体市场竞争分析</strong></h2><p>目前，<strong>预测市场智能体（Prediction Market Agent）赛道仍处于早期</strong>，项目数量有限，较具代表性的包括 <strong>Olas（Pearl&nbsp; Prediction Agents）</strong>、<strong>Warden（BetFlix）</strong> 与 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Noya.ai"><strong>Noya.ai</strong></a>。</p><p>从产品形态与用户参与方式看，各代表了目前<strong>预测市场智能体赛道的三类路径</strong>：</p><p>1）<strong>Olas（Pearl Prediction Agents）：Agent 产品化与可运行交付,</strong> 以“运行一个自动化预测 Agent”为参与方式，将预测市场交易封装为可运行的 Agent：用户注资并运行，系统自动完成信息获取、概率判断、下注与结算。需要额外安装的参与方式对普通用户的友好度相对有限。</p><p>2）<strong>Warden（BetFlix）：交互分发与消费级投注平台 ,</strong> 通过低门槛、强娱乐性的交互体验吸引用户参与，采用交互与分发导向路径，以游戏化、内容化前端降低参与成本，强调预测市场的消费与娱乐属性。其竞争优势主要来自用户增长与分发效率，而非策略或执行层深度。</p><p>3）<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://NOYA.ai"><strong>NOYA.ai</strong></a><strong>：</strong>以<strong>“资金托管 + 策略代执行”</strong>为核心，通过 Vault 将预测市场与 DeFi 执行抽象为资管产品，提供低操作、低心智负担的参与方式。若后续叠加 Prediction Market Intelligence 与 Agent 
执行模块，有望形成“研究—执行—监控”的一体化工作流。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ed348169845d897596f6ebe9d7b6915ecc7360617e0b484942a2481977e98a37.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGRElEQVR4nDWSW1RS+R7H/2tmrbPmZV7n6Tyc89DM1LRmytJJ89hFi4uIkoGiqGiDpokCjqLkJSRFRghFVEKBlDEvZU4MOYd23NnuHbgRaLNRvEBeKm0u67z01jpnWev81ufht37f7+/79AX//f8cHBzcnzJrtRqFotdqtYyODvf/JDcZxiYnjL9aHj+YmxnSDIyP3R3VDs3NTDvs0Ojw4TI//0ClVKiU/dPT9x/MzZlNRoNe9+7du4+Z79+/B1s7++HYDoYn0VB8ObJhc6C/LLrg5ZgLxRftqM21/MwTdC6F7L6gzfXcCsFWCLG5Au5D1W+xwT5/1OHDIE8AcgWWlqPOpbALeRGOvcTju6HY9u7+HwDkNANwFmRUge/KAbgAvsgHn1LB3xngRBn4JxOkcsGXReATEviUDD6ngiPF4BQXfF0K/lEIzvwA0qrAF3TwNwr4jHpoACRwjANSK8HnFAAuA5BOqlEDfvfk8auyqraxYpEuhdWTUSo7ze4lXVMxGzQUnqq4QUO/rkpjy9NL5elsGaVGXSoaLuQPUXj9RQIto34wq7wvnX37e7YsjS0/ze7NrVUz6gezr6nSy/uP5HcpxqwAC29qdTPNrXJhk9RofvzvZ5hA1NEklglE0itMrrx/rILLL6uoFbf1mCYWrjC5tLzi8xdypbeHGExuGZev6NeP6H7uuKXMyaHxagSc8tpWSV8ujdV1SyUQSYPhOHDBsUaxsoB5jV5Ydbvf+NAaYJbUZmRRqus76IVcKp1TxLnx1bHUqhrxvMVPK6hIz7x8MvV8RhYl81wuraCsXthtmno6qHt4/MTZY99m5F+tIueWHD1+5utv0lgcwQr+EgSCcYcvGiJeh4g3Di8OOVc8CLEIBSBnEHKuOGHc5lz5zY65EcKN4G6EsLtDkHPFZsdcMOFGCIcXd8ERF4wf3j/w8eUDodX4DkjuHsBo2GSemTDP4kQSRkPV1/n8xh8VSo121FjCqbwzMHLXYDaYpkzmmYpKXktbV2+f2jw9L2mX8htFtXUNOr1JOzpeUckTt3UO6wx8QbPh3rR21CAQteJ4DOy8+t3u9otbu2Q9yjC+FQhGG4VisaSrqeVm563bYkkXv/HHSyTKoFb3gtjSDN2VyvoEIjGnrJLJKqEzmAsWmx+LWp5AV5lF4tYOcWtnU7NE0i7lVFwz3JvZe/UWRFeTHjS6giexSMKH4ggWwyIJLJJAsbWPwIEY5MaWlmMreAKLJL1I1OEN2t1BHxr1oNEgvo1FthBs1eENodiaCw77g+twIAb7CX9wM7n9BkDOkGHKYZ7zTMy6JmZdhilI3HmnrWtI2qNXqM0jRuug7tGQfmFg5KG4Q5WeSSLnFnfLx2WK8ZLyBg5XQM4tForlIrHiXE4+jcFVDPx8hVXN5bUUsqovkli/QUsAxTYra9oyMsnscr5cbZ5+BGddzDvy1cmTKZlfHk0hU5lHvznF5TX19Bul8rvHvv2+tKLh9Jns705lXqaysklXjp840y3XZZMKM89RybSiI0dT2GV1WRdpaRk5mefz/FgMQO6QSvtIob6v1j4cNT6ZtSBa/UKzRNnaqeHVtfPqJKUVjVL52OwCMjnrGpu0iTvUNAb3fE5BSuqFtk7NuPmpec6l1VuGdL/UNnSe
/VfuJUrxZQqzRaIcHrciAeKwpk6YgJc34ED8mTcMuUM+lPCHEgi2iWCbWGQ7iO8h2IbDG7F7Q3Bg3XNYzYgbxh3esBPGvegq5Ma86KoXjXtQwuYMOmEccoc+qEQs/hLsvP598anPumhfsNgCQQILEgqlprr6Rk+fyjw9X8cX8vlCqUyhG5sM4+tNzW23pPLHVmjcdF/SLlX8NKDT33t98B/NsF6h1NTUNSz8auuU9irvaBXKQUm7dDOxDfbe/Dmiuydp7xYKWy2LTiy8dv2GsK5eSKbSKdR8Jqv03IVLVGpBU8vNML7R1Hyzji+q+qH2jkZfVFKefYlSVV2/trH3xOYRNkvoDCbjKotEySNT6Xn0Qs2wcW//LUju7q9vvUps7ye299c2dtY2dhB/aCWy+jwQ8cCBcGTteSDiQ4J+LLK+tUusJrBQDA2EXxCbWIg4NODxtY3tzeRrLBjFglG397kHWXZ4UA8ciMVfvv3jr/8BVfCqPBuY3W4AAAAASUVORK5CYII=" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>与 Giza、Almanak 等已实现明确产品交付的 AgentFi 项目相比，NOYA 的 DeFi Agent 目前仍处于相对早期阶段。但 NOYA 的差异化在于其定位与切入层级：其以约 $10M FDV 的公平启动估值进入同一执行与资管叙事赛道，在现阶段具备显著的估值折价与增长潜力。</p><ul><li><p><strong>NOYA</strong>：以 Omnichain Vault 为核心的资管封装型 AgentFi 项目，当前交付重点集中在跨链执行与风险控制等基础设施层，上层的 Agent 执行、预测市场能力及 ZKML 相关机制仍处于开发与验证阶段。</p></li><li><p><strong>Giza</strong>：可直接运行资管策略（ARMA、Pulse），目前 AgentFi 产品完成度最高。</p></li><li><p><strong>Almanak</strong>：定位于 AI Quant for DeFi，通过模型与量化框架输出策略与风险信号，主要面向专业资金与策略管理需求，强调方法论的系统性与结果的可复现性。</p></li><li><p><strong>Theoriq</strong>：以多智能体协作（Agent Swarms）为核心的策略与执行框架，强调可扩展的 Agent 协作体系与中长期基础设施叙事，更偏向底层能力建设。</p></li><li><p><strong>Infinit</strong>：偏执行层的 Agentic DeFi 终端，通过“意图 → 多步链上操作”的流程编排，显著降低复杂 DeFi 操作的执行门槛，用户对产品价值的感知相对直接。</p></li></ul><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>八、总结：商业逻辑、工程实现及潜在风险</strong></h2><p><strong>商业逻辑：<br></strong>NOYA 是当前市场中较为少见的 <strong>AI Agent × Prediction Market × ZKML</strong> 多重叙事叠加标的，并进一步结合了 <strong>Intent 驱动执行</strong> 的产品方向。在资产定价层面，其以约 <strong>$10M FDV</strong> 启动，明显低于同类 AI / DeFAI / Prediction 相关项目常见的 <strong>$75M–$100M</strong> 区间估值，形成一定的结构性价差。</p><p>从设计上看，NOYA 试图将 <strong>策略执行（Vault / Agent）</strong> 与 <strong>信息优势（Prediction Market Intelligence）</strong> 统一到同一执行框架中，并通过协议收入回流（fees → buyback &amp; 
burn）建立价值捕获闭环。尽管项目仍处于早期阶段，但在多叙事叠加与低估值起点的共同作用下，其风险—收益结构更接近一类<strong>高赔率、非对称博弈</strong>标的。</p><p><strong>工程实现： </strong>在可验证的交付层面，NOYA 当前已上线的核心功能为 <strong>Omnichain Vaults</strong>，提供跨链资产调度、收益策略执行与延迟结算机制，工程实现相对偏基础。其愿景中强调的 <strong>Prediction Market Intelligence（Copilot）</strong>、<strong>NOYA AI Agent</strong> 以及 <strong>ZKML 驱动的可验证执行</strong>仍处于开发阶段，尚未在主网形成完整闭环。现阶段并非成熟的 DeFAI 平台。</p><p><strong>潜在风险与关注要点</strong></p><ol><li><p><strong>交付不确定性：</strong> 从“基础 Vault”到“全能 Agent”的技术跨度极大，需警惕 Roadmap 延期或 ZKML 落地不及预期的风险。</p></li><li><p><strong>潜在系统风险 ：</strong> 包含合约安全、跨链桥故障以及预测市场特有的<strong>预言机争议</strong>（如规则模糊导致无法裁决），任何单点故障都可能造成资金损耗。<br></p></li></ol><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5.2, Gemini 3和Claude Opus 4.5等 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>ai</category>
            <category>prediction markets</category>
            <category>agents</category>
            <category>polymarket</category>
            <category>noya.ai</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/8f0a8ab4d2b06b67f05fc7db52970a8c2124b3991cf779ef504829df94b6d51d.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Reinforcement Learning: The Paradigm Shift of Decentralized AI]]></title>
            <link>https://paragraph.com/@0xjacobzhao/reinforcement-learning-the-paradigm-shift-of-decentralized-ai</link>
            <guid>6r6xPASlmX2MiTqlCt6o</guid>
            <pubDate>Tue, 23 Dec 2025 05:20:43 GMT</pubDate>
            <description><![CDATA[Artificial intelligence is shifting from pattern-based statistical learning toward structured reasoning systems, with post-training—especially reinforcement learning—becoming central to capability scaling. Reinforcement learning now demonstrably improves reasoning depth and complex decision-making, evolving from a mere alignment tool into a continuous intelligence-enhancement pathway. In parallel, Web3 is reshaping AI production via decentralized compute and crypto incentives, whose verifiability and coordination align naturally with reinforcement learning’s needs.]]></description>
            <content:encoded><![CDATA[<p style="text-align: center"><em>This independent research report is supported by </em><strong><em>IOSG Ventures</em></strong><em>. The research and writing process was inspired by </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/SPLehman"><strong><em><u>Sam Lehman</u></em></strong></a><strong><em> (Pantera Capital) ’s</em></strong><em> work on </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.symbolic.capital/writing/the-worlds-rl-gym"><strong><em><u>reinforcement learning</u></em></strong></a><em>. Thanks to</em><strong><em> </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/fenbielding"><strong><em><u>Ben Fielding</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Gensyn.ai"><strong><em><u>Gensyn.ai</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/yuangao"><strong><em><u>Gao Yuan</u></em></strong></a><strong><em>(</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/"><strong><em><u>Gradient</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.linkedin.com/in/samuel-b-dare/"><strong><em><u>Samuel Dare</u></em></strong></a><strong><em> &amp; </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/erfan_mhi"><strong><em><u>Erfan Miahi</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.covenant.ai/"><strong><em><u>Covenant 
AI</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xshai"><strong><em><u>Shashank Yadav</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fractionai.xyz/"><strong><em><u>Fraction AI</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/chaowxyz"><strong><em><u>Chao Wang</u></em></strong></a><strong><em> </em></strong><em>for their valuable suggestions on this article. This article strives for objectivity and accuracy, but some viewpoints involve subjective judgment and may contain biases. We appreciate the readers' understanding.</em></p><p>Artificial intelligence is shifting from <strong>pattern-based statistical learning</strong> toward <strong>structured reasoning systems</strong>, with post-training—especially <strong>reinforcement learning</strong>—becoming central to capability scaling. <strong>DeepSeek-R1</strong> signals a paradigm shift: reinforcement learning now demonstrably improves reasoning depth and complex decision-making, evolving from a mere alignment tool into a continuous intelligence-enhancement pathway.&nbsp;</p><p>In parallel, Web3 is reshaping AI production via decentralized compute and crypto incentives, whose verifiability and coordination align naturally with reinforcement learning’s needs. This report examines AI training paradigms and reinforcement learning fundamentals, highlights the structural advantages of “<strong>Reinforcement Learning × Web3</strong>,” and analyzes Prime Intellect, Gensyn, Nous Research, Gradient, Grail and Fraction AI.</p><h1 id="h-i-three-stages-of-ai-training" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. 
Three Stages of AI Training</strong></h1><p>Modern LLM training spans three stages—<strong>pre-training</strong>, <strong>supervised fine-tuning (SFT)</strong>, and <strong>post-training/reinforcement learning</strong>—corresponding to building a world model, injecting task capabilities, and shaping reasoning and values. Their computational and verification characteristics determine how compatible they are with decentralization.</p><ul><li><p><strong>Pre-training:</strong> establishes the core statistical and multimodal foundations via massive self-supervised learning, consuming 80–95% of total cost and requiring tightly synchronized, homogeneous GPU clusters and high-bandwidth data access, making it inherently centralized.</p></li><li><p><strong>Supervised Fine-tuning (SFT):</strong> adds task and instruction capabilities with smaller datasets and lower cost (5–15%), often using PEFT methods such as LoRA or Q-LoRA, but still depends on gradient synchronization, limiting decentralization.</p></li><li><p><strong>Post-training: </strong>Post-training consists of multiple iterative stages that shape a model’s reasoning ability, values, and safety boundaries. It includes both <strong>RL-based approaches (e.g. RLHF, RLAIF, GRPO)</strong>, non-RL preference optimization (e.g. <strong>DPO)</strong>, and process reward models (<strong>PRM)</strong>. With lower data and cost requirements (around 5–10%), computation focuses on rollouts and policy updates. 
Its native support for asynchronous, distributed execution—often without requiring full model weights—makes post-training the phase best suited for Web3-based decentralized training networks when combined with verifiable computation and on-chain incentives.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5b098c1f18fa3e211461007ed914c55c76b87f3525c17c4036005603c2e483ff.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAD2UlEQVR4nG2UP2jjVhzH5SU98OAhYPDRy2ACAUFAYEiP3uCeB1GBSXr4MGTwocOgoZwDCgL1MNggyKDjDYIqZNDh4YU3aBA17SMVPHovvGvVQ4M2DxkEuuLBg0eDhw4q1mvMce1nENLT7/e+v3/vCY7jQAivrq4QQhBCz/N838cFEML5fM5XLt1LbgAhRAgBACYFi8UCQnh9fe26Ll+BEE4mkyAIwjC0bVvAGHe7XUmSFEXZ3d0tlUqiKK7X6zzP4zjOsiwIAgDA0dFRr9eTZVmSpEajIYqiruvcgBAyGAz2CxRFqVQqgiCcnp7meU4IEWzbJoRQSsMwxBiHYej7PiGEMQYhzLLMdV2MMaWU/yWEvPXehgWe52VZNplMptMpIQRjzJ881ziO39hvBIRQuVyu1WqSJPV6PUVRCCFpms7nc0pplmUY43K5LAhCvV5XFOXw8HAwGHQ6nfV6HUVRmqaEkGq1WiqVarXas++ePf7q8cnJSbPZ5KELhBDDMGRZbjabw+Gw3W7rut7r9UzT5P6MMcdxNE0bDofn+vnzzvM0TfOCJEmyLGOMWZbV7/c1TeO+w+FwtVrlec4YE3jWN/gmiiLGWJqms9mMMZYkyTbAXlH90Wis6zrfy/f9PM+5QRzHL3ovFOXb0Wh8rp+//uG1ZVnD4ZCnKEAIeVskSRJFsdVq1et1wzAAAHxr13WffP1kZ2fnoGB/f1+SpNlsxgPUNM22bVEUBUF4WHtYrVZrtVqz2eQR/JsBQsjzvCAIEEK+70MIeYJxHEdRNJ1OLy4uNE2zLItP198F3CCOY0opAGAwGIxG41ffv1JVlVLKa/jLTbgZ006nMxqNTdMEADiO81J92Wq1ptPp3d0dL5Esy5qmqap6dnZ2fHw8Go25Px/TKIr6/X6roN/vG4ah67qqqqZpbpqMMTYM46LANE3bttM0zbJstVolSZKmKaVUVVXlnkaj4XneVoD3oNvt8gGTZbnVaqmq2ul0XNfd9MD3/YODg0ql8uCLB5VK5dGXj0ajMUJoOySEEMdxDMPwPI8fdQjhpyeRMeZ5HgAAQugUTCYTy7KWy+VGgBDieR5CiN8WhBAAAMb4U4G9vT1BEJ5+87Rer1er1Xa7zXvApyiKIm7AZQAAvu8DABaLxUYAY3xSYBjGYDAAAPDwtxVgjC2Xy7uCJEniOP7r48fPBLIsm81mfJHD3ymlQhRF2+/P4D34X4OtAFf9ry832IwphPDDnx/YPVH0x/1LFARBlmUIIcbYb+/e/f7+ffDTzze/hre3t/yyQgjN5/MgCOg9230opXEc/3h5+Q/tiH13f16e+QAAAABJRU5ErkJggg==" nextheight="474" nextwidth="1024" class="image-node 
<figcaption">
embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h1 id="h-ii-reinforcement-learning-technology-landscape" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Reinforcement Learning Technology Landscape</strong></h1><h2 id="h-21-system-architecture-of-reinforcement-learning" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.1 System Architecture of Reinforcement Learning</strong></h2><p>Reinforcement learning enables models to improve decision-making through a feedback loop of environment interaction, reward signals, and policy updates. Structurally, an RL system consists of three core components: <strong>the policy network</strong>, <strong>rollout for experience sampling,</strong> and the <strong>learner for policy optimization</strong>. The policy generates trajectories through interaction with the environment, while the learner updates the policy based on rewards, forming a continuous iterative learning process.</p><ol><li><p><strong>Policy Network (Policy):</strong> Generates actions from environmental states and is the decision-making core of the system. It requires centralized backpropagation to maintain consistency during training; during inference, it can be distributed to different nodes for parallel operation.</p></li><li><p><strong>Experience Sampling (Rollout):</strong> Nodes execute environment interactions based on the policy, generating state-action-reward trajectories. This process is highly parallel, requires very little communication, is insensitive to hardware differences, and is the component best suited to decentralized scaling.</p></li><li><p><strong>Learner:</strong> Aggregates all Rollout trajectories and executes policy gradient updates.
It is the only module with the highest requirements for computing power and bandwidth, so it is usually kept centralized or lightly centralized to ensure convergence stability.</p></li></ol><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6999a75ad9acd62cb26ccfee343e82154c7bdc0da0924f991830277aa98bb2cd.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF0klEQVR4nFVVbUxTVxg++6P7Y/TH4hI3Mm2KxMlKoqihqQNRp6AMtRM6kI9lZTqr5Apa0FQrOkUDipUiQmWgiFVmtTDkw4auRQhdy8e9vZ/c03sptVWwUMVN988s7WXKbt68ybk5533O877veV7A0G6nYyDgH/f7PC6nIxSaevvmFQcphiZez874JiBDu2dfBkPTU6HQ1MuITU0+nZ7y+32egH+cpt007Wbm2fwlyxJg8rlPocj4PCqq4Kcfly9fnpSUuGbNGrl81+atyVu3btkj37NufbxUmiCJk2xK3rR+w/p169ZGR0er1cVSacK+fd+TxChkibmIY4RgLEtABocMzrIk8PJjMpkMALB4yWKxWCQWiwAAsbGrRSLRwo8XSuIkq2O/XLRoEYh8i5csBgB8snQpAEAsFu/fX/D+shSFkRQqeIbG/gMgAMsSqampK2NWSqUJXyfKdu36duNGWXx8fEzMSrl8V1rajtjY2Lg4SWpqqkgkSpBKRWKxVJoQFRUlk8nuGpt9Ex6KRBnaDVnK62V5ngl7jhYAGNoNGNrd2Wmu0V9p/LXutPaEwXDt7ZvZv1+/ehmanJp86vfxM8HATDAwHZwMhaZNd+osHcbp4LO/ZkMzwWc8z1AUJkS32R7fNTZ3dppNJqO1t4ciUZYKMwOQJRBElZgoy8vNysj8TqnMR5BDGk3JQP8fJDFKEqM4NsTQ7n57z/079drinAunDvRbu/rtFopEMcwVTg6JBvzjanXRsmWfxsSIRaIvqi5fnA4+m2MgAKjVxQ8ftjY3Nz4w3bt180Z9fY3dZon0EgZZqt/ebbWYh5z9WRnpO7cno6jTOdB7q7FGp6vEsSHI4L4JT9Xli9nZGRpNyekyzaXLF05qSuzWHp6jAWQptbpYdehAefmZ0tLi8vIzWu2J48ePmUytAnenw26ztEOWGufZhoZ6vf4qZIhxnh3s67Fa2kgCHXY5/D6PVntCKt2QlpaybdtmmUwqk0mtvV08zwAOUhpNyY6d248fP2Yw6A2Ga9drq2/dvCFkBjJ4b7eJoTEMc925ZbhefamupsrYfIPnaKdzsLHu0kzQ/+7dPy9DkzpdpVKZhyAqvb6q3XzfZDLi2FC4iyCD8xzNQcrLjwX8488D/POA1zfhEaJDBu+3d3Mcg44MrpVEfwTAAgDiJdE2myUzK/PQYZUiJw8A0Np6t7ZWl5ubjSCq0tKjJaXFLbcbOUjRQhcxNMbQGEWiJDEqeJIY5XnGbP7taHHh+XOnr+ouoSN/uhx9o64Bl6NvyNnf2dmRqcj45VwZWLAEACBL3Fx1+aIk7qtMxV4EOSSXp58uO/WhTQUT1u/7V/AuR19L4xXIUi7Hk/SUJMmqFZJVK9JTkjDUmZObk5ycdAQ5eP6cloNUq7Gh7GRRo6HKbjF3PGhyDlo/ADQ01BUVFZaoj2g0JUeQwwiiGh4ehAxOkSgHKVvv7xjqREdcDXVXKs5rKy6U1
VZXUBT2wGSsrDhXXn62qcnggXRXe4uhWtvbbepsazEZDR1tt3HMFZYKyOAIotq2bUtubtbevbsLlD/k5mYdPvxzU5OBg1QYxj1seXQfHXFMTQbCf1giGJyEDN7d0drdabbZLDg+zEEKx1yoa8ADaY5jeJ4Rzs4xQBCVXJ6uUHyXvS/zpKb0geleU5Ohu6td2MGyJI65ejpaIeNOlX2Wk75msM/S03HPanlUo796vbb6yZNej4eOaBxJR0QpUtS5zIffAYKoUlK+USrzlcq8vPx9efnZZ8+cMhpvUiQawZhTLru152BBRuHB/MddJopCw7LDjz0dZyFLCQoaltJ5FZ0TO8hSRUWFu/ek6/VVOl2FTlep1Z5obm4Mn4xsEorh5cfazfcLkUKFIuNRR5vfF9Y4wSDzv9Z4fyTMY4wAzBjhGhocQZ1eL+v1sr4Jj9/PcxwdZjrvRsKr9vv5Fy+esixBR4bJ/CHDsmESwvyBLNXd1a7TVbbcafoX6YFmodf4xAIAAAAASUVORK5CYII=" nextheight="768" nextwidth="1376" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h2 id="h-22-reinforcement-learning-stage-framework" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.2 Reinforcement Learning Stage Framework&nbsp;</strong></h2><p>Reinforcement learning can usually be divided into five stages, and the overall process as follows:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4183b576f64126b26ad60947fb5ff6eae5a705587334f03d5ee75ed19df966e5.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAG20lEQVR4nCWRW1SSCQKA/92zO2dOe04Pu7Mz7taZ3a3ZLSsvu6HlrDrRpjmlR0PBG4YKir8KCpiId7xUgKYmNQgpNagISiSQilwUAiURQ5SLgFEKBHgZs1MPe/Zhz+x+D9/53j9Ab9pctvsNNp9p453R4TU6vGZ3SLfq1pqdWrNTt+ruY3NxRHIfmzulN03OG2rvMm+gKjKL8bAiXGUzQ26wWLbfrfuDYMf9+u7+IZFIpJ4b4AuEcvkAny+cmQE67w2mwYsKym4hMdUokFQAktLzMMKZRbXJNWtYVxpt9x/xSvFEGpOjMNoUJlsHawRDbkeAJCSurrFvULXqsvj3XIcfK7rY8akZ0OvXv8+CJ167BissbO7uHlepgB7W2C+PfBn2p9NHvzh29PfHTkXEVpKp44qXKpNTblhTGK3zls3FDb/WstnY1pF6I+smBosuL4devZqVl5eWlQXPL0Ci0RkIxKhIlFlYDLl06cS5c9cQCGp/32Op7PWHD8D8okOiMIlk+mczC6IpvXjWYFjbmjM6lAa7esk2Z7LNmewai0tn3XzIGmAw6HQGHZYJS01LTbyUeOHihcLCQjqN1tHW9oDFbmR0ocCKgrJKRBG6kU7XrNtswSBg2NiyeHet73762cF9W+jA9NZn9YdeebxmX8Aa3Fn3B15YrAKp7IlwYmxSOiKeHBGJBVLpo1G+2vjSFth2BH3bnw7e/fuD5zDg/bTrCHrWthzOoMfzk88d8gA13cyEjKzL8PxMLO4qEhWffiOvmpCQAWtgMqn3esrJNR3M/odCYT21FYVBY3EVOQVINIgtBjFoLHpIODLxYoYnF91hdhObyR09d6iMNv6U0LnntAVtpu3llbcmgC1ToetbfxV27Nx3V373zanfHP+6ffBJXe/9rhE+obEJjkQ2d3cxuNzSqipYbm4RCKLKMNmoAlRZCSwX0Uy/LdIpOGJ+Snpa5D8ioUlQSCwESwQHxVyO6NGSx2h8swQsbnrMvoAlsGf2BdYCIfv+wdre/nootOL1Lm9v6Rx2ezCoWDZwRp6002n9HFZnN72dfofWw6Ddo4vl0jXvhnv/bWtXG7GphthExFPwvZxeqUYqnZe4dzZeh1yAVD/LneRPzk9JdfKpRZXSpB2TP1UY55fdq6oVjWBqfPW12WBbUhpfaMwGjXnRtePicbta8Dm9bThKZTajobQJl19fiXguZkvmpQ9/7O8boOtMivf/Cb0JOb27boDYXgeBxh0N+y3wP458dfTMxaiOB/Tpl+qIuKgYaGz9XQrj8cjJGCgAfI4iUft4g0xmcxsZdbeljNZaxrxb/YBO6KKCQ8wG7gBVLOGQmnDFIBJdXiBVC/0fNwGlxWAObDIF3JrbzQ29nfQh5vqOZ8FlVpj1TB7rB/7gpFamWjW0P2AiK8C2vt5hGZ/BIKNzkwvgl5Hwy/DUb9OSY9KSz+fAEuPPn+jsJGKJmLxCeEb29VZ644bPAnjshjeOxcCW+SBofx+yHwQdb52Gfb/VuaY16CTWV+pd3/qcepLW1dHSSq4mgGAFBkeoLC0vIdTim6gU/vgwj88VPB2Vz4qWl2bMS3KHReN26BwWjUY17t82A5yWMhCW0InLJuQm3SpIaUCndVQiyEXplmnO1FAnqx2nHmbUVRZ/9tmvr19LyYbDEFkZWbCM5CTo98n/Ki/DCEd5PO7gIw57jMvcfiVTCJmDjNrhH1rnJQOHXkPIqQZkD5uyr0QfAYA//OLnB3/8HAgDAHxesnmGze7Aleckj/XWdVLKIZDo7+Iv5CBgRDwIh6UV5MMR8AwEPCM9LYXaQrnf2/WYRWfTSZkpcefPHM9Oi6c1YlfUwwGnEvBbZvfcCyGHdtet23Pr99wLu27dpx3LnnvBZ1GFH
Fq/RSnmDzEYNPItEu3u7eZmSmwMJOkKNDLi7P+Dw2YJxvgyEW97Xbnj0h/6jO+9yx8D5kOf8WPADJDqaFdS4Pk3K3OQICIfm5NfWoyuzs4tyc0tAcFaDLqqupqCLSWUl5Fqa1qqcORcBKqxjlp8E7xVRaGQmlYNVtuK+5Xesv7SbltyOoxO66LNY3YJf5xqp7K001oAV9tzOhIaHZvy5fHw4yeivwg7FR4Rf+KvkK//EnU6/OKZs3ER5779e3Ri5Nk4yPlL4eExfzsZlXAxKfybaEjUPyPPxOLQxIpiQhWmBpsP1uNbqosJpBISHoWLOJ3w55MJPZ0cYF5nFT/Xj47LeXzZ2MSsYEIukWmePlWKRArxM5VINDs5qZRJNDKJevq5RiJWKKd1U8+UYr5sVjKneK6Rjc+I+TLxiGT8iUghUUkFUqlAKh4Rq6WzfO6Ew2T7LxcW036qw0OdAAAAAElFTkSuQmCC" nextheight="565" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><br><ol><li><p><strong>Data Generation Stage (Policy Exploration):</strong> Given a prompt, the policy samples multiple reasoning chains or trajectories, supplying the candidates for preference evaluation and reward modeling and defining the scope of policy exploration.</p></li><li><p><strong>Preference Feedback Stage (RLHF / RLAIF):</strong></p><ul><li><p><strong>RLHF (Reinforcement Learning from Human Feedback):</strong> trains a reward model from human preferences and then uses RL (typically PPO) to optimize the policy based on that reward signal.</p></li><li><p><strong>RLAIF (Reinforcement Learning from AI Feedback):</strong> replaces humans with AI judges or constitutional rules, cutting costs and scaling alignment—now the dominant approach for Anthropic, OpenAI, and DeepSeek.</p></li></ul></li><li><p><strong>Reward Modeling Stage (Reward Modeling):</strong> Learns to map outputs to rewards based on preference pairs. 
RM teaches the model "what is the correct answer," while PRM teaches the model "how to reason correctly."</p><ul><li><p><strong>RM (Reward Model):</strong> Used to evaluate the quality of the final answer, scoring only the output.</p></li><li><p><strong>Process Reward Model (PRM):</strong> scores step-by-step reasoning, effectively training the model’s reasoning process (e.g., in o1 and DeepSeek-R1).</p></li></ul></li><li><p><strong>Reward Verification (RLVR / Reward Verifiability)</strong>: A reward-verification layer constrains reward signals to be derived from reproducible rules, ground-truth facts, or consensus mechanisms. This reduces reward hacking and systemic bias, and improves auditability and robustness in open and distributed training environments.</p></li><li><p><strong>Policy Optimization Stage (Policy Optimization):</strong> Updates policy parameters θ under the guidance of signals given by the reward model to obtain a policy π<sub>θ′</sub> with stronger reasoning capabilities, higher safety, and more stable behavioral patterns.
Mainstream optimization methods include:</p><ul><li><p><strong>PPO (Proximal Policy Optimization):</strong> the standard RLHF optimizer, valued for stability but limited by slow convergence in complex reasoning.&nbsp;</p></li><li><p><strong>GRPO (Group Relative Policy Optimization):</strong> introduced by DeepSeek-R1, optimizes policies using <strong>group-level advantage estimates rather than simple ranking</strong>, preserving value magnitude and enabling more stable reasoning-chain optimization.</p></li><li><p><strong>DPO (Direct Preference Optimization):</strong> bypasses RL by optimizing directly on preference pairs—cheap and stable for alignment, but ineffective at improving reasoning.</p></li></ul></li><li><p><strong>New Policy Deployment Stage (New Policy Deployment):</strong> the updated model shows stronger System-2 reasoning, better preference alignment, fewer hallucinations, and higher safety, and continues to improve through iterative feedback loops.</p></li></ol><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stage</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technology</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pros</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Cons</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">Preference Feedback</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RLHF</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human preference guidance</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Good alignment, mature</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High labor cost</p></td></tr><tr><td colspan="1" 
rowspan="1"><p style="text-align: center">RLAIF</p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI Judge automated preference</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low cost, high scalability</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Relies on AI quality, prone to bias</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">Reward Modeling</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RM</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Final answer scoring</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Simple, mature</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Does not evaluate reasoning process</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">PRM</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Step-by-step reasoning scoring</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Significant reasoning improvement, core to o1/R1</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High training difficulty, high data cost</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Reward Verification</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RLVR</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable reward constraints</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decentralization-friendly</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Task-dependent</p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center">Policy Optimization</p></td><td colspan="1" rowspan="1"><p style="text-align: center">PPO</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional RLHF optimizer</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Stable, mature</p></td><td colspan="1" rowspan="1"><p style="text-align: 
Slow convergence">
center">Slow convergence/unstable for reasoning tasks</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">GRPO</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Relative performance optimization</p></td><td colspan="1" rowspan="1"><p style="text-align: center">More suitable for reasoning chains, strong stability</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High multi-sample demand, large engineering cost</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">DPO</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Direct optimization on preference pairs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Lowest cost, easy to implement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Less efficient at improving reasoning than RL-based methods</p></td></tr></tbody></table><p><br></p><h2 id="h-23-industrial-applications-of-reinforcement-learning" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.3 Industrial Applications of Reinforcement Learning</strong></h2><p>Reinforcement Learning (RL) has evolved from early game intelligence to a core framework for cross-industry autonomous decision-making. Its application scenarios, based on technological maturity and industrial implementation, can be summarized into five major categories:</p><ul><li><p><strong>Game &amp; Strategy:</strong> The earliest direction where RL was verified. In environments with "perfect information + clear rewards" like AlphaGo, AlphaZero, AlphaStar, and OpenAI Five, RL demonstrated decision intelligence comparable to or surpassing human experts, laying the foundation for modern RL algorithms.</p></li><li><p><strong>Robotics &amp; Embodied AI:</strong> Through continuous control, dynamics modeling, and environmental interaction, RL enables robots to learn manipulation, motion control, and cross-modal tasks (e.g., RT-2, RT-X).
It is rapidly moving towards industrialization and is a key technical route for real-world robot deployment.</p></li><li><p><strong>Digital Reasoning / LLM System-2:</strong> RL + PRM drives large models from "language imitation" to "structured reasoning." Representative achievements include DeepSeek-R1, OpenAI o1/o3, Anthropic Claude, and AlphaGeometry. Essentially, it performs reward optimization at the reasoning chain level rather than just evaluating the final answer.</p></li><li><p><strong>Scientific Discovery &amp; Math Optimization:</strong> RL finds optimal structures or strategies in label-free, complex reward, and huge search spaces. It has achieved foundational breakthroughs in AlphaTensor, AlphaDev, and Fusion RL, showing exploration capabilities beyond human intuition.</p></li><li><p><strong>Economic Decision-making &amp; Trading:</strong> RL is used for strategy optimization, high-dimensional risk control, and adaptive trading system generation. Compared to traditional quantitative models, it can learn continuously in uncertain environments and is an important component of intelligent finance.</p></li></ul><h1 id="h-iii-natural-match-between-reinforcement-learning-and-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Natural Match Between Reinforcement Learning and Web3</strong></h1><p>Reinforcement learning and Web3 are naturally aligned as incentive-driven systems: RL optimizes behavior through rewards, while blockchains coordinate participants through economic incentives. RL’s core needs—large-scale heterogeneous rollouts, reward distribution, and verifiable execution—map directly onto Web3’s structural strengths.</p><ol><li><p><strong>Decoupling of Reasoning and Training:</strong> Reinforcement learning separates into rollout and update phases: rollouts are compute-heavy but communication-light and can run in parallel on distributed consumer GPUs, while updates require centralized, high-bandwidth resources. 
This decoupling lets open networks handle rollouts with token incentives, while centralized updates maintain training stability.</p></li><li><p><strong>Verifiability:</strong> ZK (Zero-Knowledge) and Proof-of-Learning provide means to verify whether nodes truly executed reasoning, solving the honesty problem in open networks. In deterministic tasks like code and mathematical reasoning, verifiers only need to check the answer to confirm the workload, significantly improving the credibility of decentralized RL systems.</p></li><li><p><strong>Incentive Layer, Token Economy-Based Feedback Production Mechanism:</strong> Web3 token incentives can directly reward RLHF/RLAIF feedback contributors, enabling transparent, permissionless preference generation, with staking and slashing enforcing quality more efficiently than traditional crowdsourcing.</p></li><li><p><strong>Potential for Multi-Agent Reinforcement Learning (MARL):</strong> Blockchains form open, incentive-driven multi-agent environments with public state, verifiable execution, and programmable incentives, making them a natural testbed for large-scale MARL despite the field still being early.</p></li></ol><h1 id="h-iv-analysis-of-web3-reinforcement-learning-projects" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
Analysis of Web3 + Reinforcement Learning Projects</strong></h1><p>Based on the above theoretical framework, we will briefly analyze the most representative projects in the current ecosystem:</p><h2 id="h-prime-intellect-asynchronous-reinforcement-learning-prime-rl" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Prime Intellect: Asynchronous Reinforcement Learning prime-rl</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.primeintellect.ai/"><u>Prime Intellect</u></a> aims to build an open global compute market and open-source superintelligence stack, spanning <strong>Prime Compute</strong>, the <strong>INTELLECT model</strong> family, open <strong>RL environments</strong>, and large-scale synthetic data engines. Its core <strong>prime-rl framework</strong> is purpose-built for asynchronous distributed RL, complemented by <strong>OpenDiLoCo</strong> for bandwidth-efficient training and <strong>TopLoc</strong> for verification.</p><p><strong>Prime Intellect Core Infrastructure Components Overview</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Component Name</strong></p></td><td colspan="1" rowspan="1"><p><strong>Functional Positioning</strong></p></td><td colspan="1" rowspan="1"><p><strong>Key Technical Innovation</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>prime-rl</strong></p></td><td colspan="1" rowspan="1"><p>RL Training Framework</p></td><td colspan="1" rowspan="1"><p>Actor-Learner separated architecture; Supports FSDP2; vLLM backend acceleration; GRPO+ stability optimization</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>OpenDiLoCo</strong></p></td><td colspan="1" rowspan="1"><p>Distributed Communication Protocol</p></td><td colspan="1" rowspan="1"><p>Time-sparse updates; Int8 gradient quantization; Pseudo-gradient aggregation; High latency
resistance</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Verifiers</strong></p></td><td colspan="1" rowspan="1"><p>Reward &amp; Verification Library</p></td><td colspan="1" rowspan="1"><p>Modular environment definition; Integrated Sandboxes; Supports multiple verification logics (code, math, judge)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Prime Sandboxes</strong></p></td><td colspan="1" rowspan="1"><p>Code Execution Environment</p></td><td colspan="1" rowspan="1"><p>Rust-based high-performance containers; Sub-second startup; Secure isolation; Supports massive concurrency</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>TopLoc</strong></p></td><td colspan="1" rowspan="1"><p>Computational Integrity Verification</p></td><td colspan="1" rowspan="1"><p>Locality-sensitive hashing (LSH); Probabilistic verification; Prevents compute fraud</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Shardcast</strong></p></td><td colspan="1" rowspan="1"><p>Weight Distribution System</p></td><td colspan="1" rowspan="1"><p>Efficient distribution of large model weights to decentralized nodes</p></td></tr></tbody></table><p><br><strong>Technical Cornerstone: prime-rl Asynchronous Reinforcement Learning Framework</strong></p><p>prime-rl is Prime Intellect's core training engine, designed for large-scale asynchronous decentralized environments. It achieves high-throughput inference and stable updates through complete Actor–Learner decoupling. Executors (Rollout Workers) and Learners (Trainers) do not block synchronously. 
Nodes can join or leave at any time, only needing to continuously pull the latest policy and upload generated data:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/547814d5e88f1d8a8d6bd546b172fddf8adaaa507924a8a9b62e9db8d47cefa0.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAAD8klEQVR4nJ1TbUxbVRh+YkiIbgxnwoyKzC3ICP5YFg3OxUhMdJvZxI2MxcQZEp1zGRMkGZjBxnXCRrvyVVYgHU4Uoj+6uGjHZ4FRoAgt5WuldHQUutve3t7ecoEVLCbKMfe26+b+aHxyc3LOe06e932f576AhExNZlLjG9t/fCtVmSqeCf4fni8yJRbYX8y3JZ4dC4dSGnZvydmGoqehiN8gi0dF3IacbcmVr1IUFXpAQqsawUsIliFYgrVyBC4+Th2TbXwqS4MP23CoBUfbcViLw80x2UbsbNx5aOTzbHPxF2PFp8aKc81fZ5iyU/vTATzMQSFYg5XLCFzGUjlWFbivwFLdA24i1pBQMvx6szf/5vyXv87l3qTPtDjfbHY/Sw0hVpN0zlnuXmGnFm0mbpwP+muXfnrFfFDUjWgiTRACQigicWk0mWY1esPZkamRnim57Fv3meXVKW55hBa4wO/nBgMonUd0U0KhTW5hpgdpU+98v907W8M0vjyQTlLgqcJCOXg5gkrwV6RMtSA1TwAINMAvQ6AaKyrYroDIsEMx8GmL77Zb6LP7dTZumhXO6Hw4b0V0U1L+nbLbjNXgNnY59Xe89mrmu129ewHYm5/hKBAF1pXopdLWL+DP1iwyeomoNplPYKkCQglWK8BVRBMKGXL1Z20Low62Z5rpnGImnWx+NwdqUkxQYC27x9MTbsvg3DC74FPRjYk9+yWJMkMGiGsViPaIQI+5GScxyUjdRg0F0iDWbr+4kVCIkjmztD6Lix+c9XbbPDbPQl6HF5QdsU1JBXbZCDt+izF0MX1TnK3Kcy257+DjCVQgP+9bow2C1/5Xdx65GitdUREPYpTuA9d9v0zQzSP090ZaO+k6coONkrnCHczx9yYYy9CcieFZFf3DnvY9BDCrN/Ny0YN1pWgsKQK5vpe0f0KUou2CejP5FkQOh3o7gBfKxnPbOQfjHXF4h+wut1/4SsehUOogf6ZskrMa2GGdS2/jZiqZa6/p3yXAbD3YKnAUliUzw0JVixteLgYFFRbrcbdC9GmH3HC8hR91+junOa2VG3UJp9p8UZIHCYXTChfnsTC2IYd5cVGop5te6nk7IlEEFBX+TUPKRBCSKErmPN3hm2E405yvf4a96+XP9vCiB89pUnJmSwcFs47rb/V2G3yWbzz1u4wfiIy9aeIEPPikkSIh9kfjoXl8snQ+SxcwedY6HasdjsAU/8fpnmWct4j3Rzs/2q899l7rx+90HL+g33fitwxoQ84+Wui/YHeeJlk9m94weaB24P3KrvSroyl1M1upRoknPj4tdmtK/CZx3wail4L/kOG/ghDq2MmTcXEPQ38DofpJxMhsqz4AAAAASUVORK5CYII=" nextheight="641" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Actor 
(Rollout Workers):</strong> Responsible for model inference and data generation. Prime Intellect innovatively integrated the vLLM inference engine at the Actor end: vLLM's PagedAttention technology and Continuous Batching capability allow Actors to generate inference trajectories at extremely high throughput.</p></li><li><p><strong>Learner (Trainer):</strong> Responsible for policy optimization. The Learner asynchronously pulls data from the shared Experience Buffer for gradient updates without waiting for all Actors to complete the current batch.</p></li><li><p><strong>Orchestrator:</strong> Responsible for scheduling model weights and data flow.</p></li></ul><p><strong>Key Innovations of prime-rl:</strong></p><ul><li><p><strong>True Asynchrony:</strong> prime-rl abandons PPO's traditional synchronous paradigm: it neither waits for slow nodes nor requires batch alignment, so GPUs of any number and capability can join at any time, establishing the feasibility of decentralized RL.</p></li><li><p><strong>Deep Integration of FSDP2 and MoE:</strong> Through FSDP2 parameter sharding and MoE sparse activation, prime-rl allows models with tens of billions of parameters to be trained efficiently in distributed environments. Actors run only the active experts, significantly reducing VRAM and inference costs.</p></li><li><p><strong>GRPO+ (Group Relative Policy Optimization):</strong> GRPO eliminates the Critic network, significantly reducing compute and VRAM overhead and making it a natural fit for asynchronous environments. 
prime-rl's GRPO+ ensures reliable convergence under high-latency conditions through stabilization mechanisms.</p></li></ul><p><strong>INTELLECT Model Family: A Symbol of Decentralized RL Technology Maturity</strong></p><ul><li><p><strong>INTELLECT-1 (10B, Oct 2024):</strong> Proved for the first time that OpenDiLoCo can train efficiently in a heterogeneous network spanning three continents (communication share &lt; 2%, compute utilization 98%), overturning assumptions about the physical limits of cross-region training.</p></li><li><p><strong>INTELLECT-2 (32B, Apr 2025):</strong> As the first permissionless RL model, it validates the stable convergence of prime-rl and GRPO+ under multi-step latency and asynchrony, realizing decentralized RL with open, global compute participation.</p></li><li><p><strong>INTELLECT-3 (106B MoE, Nov 2025):</strong> Adopts a sparse architecture activating only 12B parameters, trained on 512×H200 GPUs and achieving flagship reasoning performance (AIME 90.8%, GPQA 74.4%, MMLU-Pro 81.9%, etc.). Overall performance approaches or surpasses centralized closed-source models far larger than itself.</p></li></ul><p>Prime Intellect has built a full decentralized RL stack: OpenDiLoCo cuts cross-region training traffic by orders of magnitude while sustaining ~98% utilization across continents; TopLoc and Verifiers ensure trustworthy inference and reward data via activation fingerprints and sandboxed verification; and the SYNTHETIC data engine generates high-quality reasoning chains while enabling large models to run efficiently on consumer GPUs through pipeline parallelism. 
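The critic-free GRPO update at the heart of prime-rl can be illustrated in a few lines (a schematic sketch, not prime-rl code): each completion's advantage is its reward normalized against the group of completions sampled for the same prompt, so no separate value network is needed:

```python
import math

def grpo_advantages(rewards):
    """Group Relative Policy Optimization: the advantage of each sampled
    completion is its reward normalized against the *group* of completions
    for the same prompt -- no learned critic network is required."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var) or 1.0               # avoid division by zero
    return [(r - mean) / std for r in rewards]

# Four completions sampled for one prompt, scored by a verifier:
rewards = [1.0, 0.0, 0.0, 1.0]
adv = grpo_advantages(rewards)
print([round(a, 2) for a in adv])  # [1.0, -1.0, -1.0, 1.0]
```

Because the statistics are computed per group, the update tolerates stale, asynchronously generated samples far better than a critic that must track a moving policy.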
Together, these components underpin scalable data generation, verification, and inference in decentralized RL, with the INTELLECT series demonstrating that such systems can deliver world-class models in practice.<br></p><h2 id="h-gensyn-rl-core-stack-rl-swarm-and-sapo" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Gensyn: RL Core Stack RL Swarm and SAPO</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gensyn.ai/"><u>Gensyn</u></a> seeks to unify global idle compute into a trustless, scalable AI training network, combining standardized execution, P2P coordination, and on-chain task verification. Through mechanisms like RL Swarm, SAPO, and SkipPipe, it decouples generation, evaluation, and updates across heterogeneous GPUs, delivering not just compute, but verifiable intelligence.</p><p><strong>RL Applications in the Gensyn Stack</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Component</strong></p></td><td colspan="1" rowspan="1"><p><strong>Technical Principle</strong></p></td><td colspan="1" rowspan="1"><p><strong>Specific Role in RL</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p>RL Core Layer</p></td><td colspan="1" rowspan="1"><p><strong>RL Swarm</strong></p></td><td colspan="1" rowspan="1"><p>Decentralized Generate–Evaluate–Update Structure</p></td><td colspan="1" rowspan="1"><p>Executes a decentralized RL loop by sharing rollouts, with rewards evaluated locally by each node.</p></td></tr><tr><td colspan="1" rowspan="1"><p>RL Core Layer</p></td><td colspan="1" rowspan="1"><p><strong>SAPO</strong></p></td><td colspan="1" rowspan="1"><p>Rollout sharing with uninformative-sample filtering</p></td><td colspan="1" rowspan="1"><p>Enables stable RL optimization in asynchronous, heterogeneous networks</p></td></tr><tr><td 
colspan="1" rowspan="1"><p>Communication Layer</p></td><td colspan="1" rowspan="1"><p><strong>SkipPipe</strong></p></td><td colspan="1" rowspan="1"><p>Streaming parallel communication protocol</p></td><td colspan="1" rowspan="1"><p>Realizes low-latency parallel processing.</p></td></tr><tr><td colspan="1" rowspan="1"><p>Trusted Execution Layer</p></td><td colspan="1" rowspan="1"><p><strong>PoL</strong></p></td><td colspan="1" rowspan="1"><p>Probabilistic Proof of Learning</p></td><td colspan="1" rowspan="1"><p>Verifies Rollout is truly generated by the model, preventing fake RL data.</p></td></tr><tr><td colspan="1" rowspan="1"><p>Trusted Execution Layer</p></td><td colspan="1" rowspan="1"><p><strong>Verde</strong></p></td><td colspan="1" rowspan="1"><p>Game-theory based binary arbitration protocol</p></td><td colspan="1" rowspan="1"><p>Locates cheating steps with O(log N) cost, ensuring credible rewards.</p></td></tr><tr><td colspan="1" rowspan="1"><p>Consistency Layer</p></td><td colspan="1" rowspan="1"><p><strong>RepOps</strong></p></td><td colspan="1" rowspan="1"><p>Cross-GPU deterministic operators</p></td><td colspan="1" rowspan="1"><p>Ensures heterogeneous hardware outputs are bit-level consistent for verification and auditing.</p></td></tr></tbody></table><p><br><strong>RL Swarm: Decentralized Collaborative Reinforcement Learning Engine</strong></p><p>RL Swarm demonstrates a brand new collaboration mode. It is no longer simple task distribution, but an infinite loop of a decentralized generate–evaluate–update loop inspired by collaborative learning simulating human social learning:</p><ul><li><p><strong>Solvers (Executors):</strong> Responsible for local model inference and Rollout generation, unimpeded by node heterogeneity. 
Gensyn integrates high-throughput inference engines (like CodeZero) locally to output complete trajectories rather than just answers.</p></li><li><p><strong>Proposers:</strong> Dynamically generate tasks (math problems, code questions, etc.), enabling task diversity and curriculum-like adaptation to adapt training difficulty to model capabilities.</p></li><li><p><strong>Evaluators:</strong> Use frozen "Judge Models" or rules to check output quality, forming local reward signals evaluated independently by each node. The evaluation process can be audited, reducing room for malice.</p></li></ul><p>The three form a P2P RL organizational structure that can complete large-scale collaborative learning without centralized scheduling.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cddde88f646e27fe355b68a113204a9886f28c7dff90b5e9ac79af2b10d6e2a2.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGZklEQVR4nE1Ve1BT6RX/2u60nbHd6W5n6li663QXO7vFnc5at+OuOAKVt1QjAWUJiMRk5ZGEhgoJyhpBDD4QKFFAFqTFFQaX6moaGxZYl9WFJgQCJIE1D5NwuTf3nXvzIhWwE2J3+s2ZM+eP78w55/f7zu8DCEZBCOGCsf83hKBINhAxhiXZAEoxLhgj2QBCeEk2SPlDFBtECApCKZRimMAKzvghlPwu3QmhThhbQkn7EgI2CuAuGIt6xxKC077SMnFiUsruPXuTU9OEJ8r5H5Wkp+/n5BzO/bAgISk5IzPrSB4vY/+BvPyjBzjcxKTkvQl/PFPXgJGME/JACL5hBISSjiUEwAiOkQxCeBHCi1IsQjAYyUxMGT9XPxi4ffee+l8a7cjIl18/+rdeox3Vjn01dFc9dFf9ufqBbmZ+bPyxdmxcrRnWjozNW55gJIuRkfT/dUw4IQ+AUNI4v6AzzBiMJt3MrM4wO2v+FiZoggkyoTDlD9GBUBSWqGdDYTYUhhBs3mK1WB0Wq2PR7jYt2mbNiwajaWLKqDPMfjeH3QVHIBocutvd23dr4LOunt4rLW29ff0UG3S7oTmT2bRoMy1an9gc+uk5aBl7+tRtXLDitG9kbLy1TaVq72zv6Ors7LrS0tbapuru6e3u7evs6oERHMGoKODACWOtqo6qavng0D3Too0JrDx//nxUP9fW2d3U0CCT/UUkKpfXyAb7++2w44MD6cdltYGV/0AIwfpC62vrs3aH3vrt+to64wtebv6r9GS1ou68WvPFgtXxggMIIUSSisNHjhQUFDQoG40L1gJxDQDgd1xunUJRUFIqk5+slIqutjRfuNkHwJs7EvjhtWcoSunNJnHnhXeO5e4uEQhV50f0uhMlooOcQ8UCwcWm5vFHEyhO213LEQ7kNaeOFhUVFfOnDFOJR
+Ugel6PAzkHf731+/lbgDTxVV7sT4bU2sGB+1+OTVBsYHjSsLtKIO9Tya+3Cxob5X3tKdW84dF7NieyYHXgtA9CSQjBoxNE8MJpH0r5nz17ti29AoCXwbYEcESceelUJf/t4j+AQwAUx4Alt530h3Da+8S5/HaJrH/0YcAfXEYgGHEHA+GHD9XN8ni324nRfoTwOiH0BQcoTlfL5PXnlSNfPcZpr+rGfRCXB97aCTJ/v1mcBwr2vvTepjQAWipS7cu41QmxjO/SoIajaEE8GM747/zt8idKqZdlYA95pyl/Yuhy981/fJjPuzXwGU77IhBhJHOySsbl5mTu33/tWkd7v/qYrAWkcEHMz19glf6uUMFX3L6y4HzqwcjpBUdaoTSpuNJgcahUVzMy0lOTky82tdotM401kqZqgVQkzMjMKheJ72u0COEFMEErlY3lIpFUWnnr1qeZJ86Bn+5Ik1TsqzoNOLmvHS+sqef/bE8MSN+pM8zhjN++5Nl1SLwz98+zCw5l46V9ySkpqenymo/Nc3P1kmONksK25iuFRfyS0rLxbyYhlIyQjBAMyQbnbS7h9euv5h0Hr+8F4JU3+eK/fz2aoTwd9wPwox+Dl2P3Jb6R3aMaDK6s9I9MxOZV2RHs6id9tWcbTp2tb2275oSghtSYyTs97b0DJaVlldUyg9G0FC2AM/7V9fWbw2NbBcJfCSQghw9+Ew/i83LkTSD+4EubNm8B22I3JW/74R5eUilCURTry5Y0cjjSeYvtnubR7aEvZszW8mxxm5AXWF1FMBohGAiNLNoLqejqvnFOefEb/fTZ3hu/LT7xWv5HgMMD27NAbArY9H5szHvvv7Hrna1ZAm7V5MQcTNBuGPdgJO+AdHtMBidbmJsniNuSlryLd7G1Q9Xe6YIxOhDaeKaEY6MAVS6SZGZm8niFZWXllbVnjp5Wbj9c8Uqa8Hs7s3+xI+eXcdmpGaLBT/8ZFUQnhLlhHEJJNhzWPphM2lcU/wF3aGDUaDFn/SkrISHxVK1Cox01LdocEPbEsQQQwqtq75DVnFbUnVNrhgk6QLEBzSM9V9KwOT7/3aSi5qv9Lhil/AE3jDuhiEZGuttokAmF7fZls/lpcHVVZ5g7W3fujKKuWiaT19ZOTE2H19ZINhjRIpz2IQSDUn6EYByQx+aEPRhpd8G31Q/1RgtOM04ItTlhB+SJ/CTRDdqIbU4kIsto5LIbxjHaT9ABkg0ygZXHuumPFfX1ygv/BSGUa/IJSzaHAAAAAElFTkSuQmCC" nextheight="813" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>SAPO: Policy Optimization Algorithm Reconstructed for Decentralization</strong></p><p><strong>SAPO (Swarm Sampling Policy Optimization) </strong>centers on sharing rollouts while filtering those without gradient signal, rather than sharing gradients. By enabling large-scale decentralized rollout sampling and treating received rollouts as locally generated, SAPO maintains stable convergence in environments without central coordination and with significant node latency heterogeneity. 
Compared to PPO (which relies on a critic network that dominates computational cost) or GRPO (which relies on group-level advantage estimation rather than simple ranking), SAPO allows consumer-grade GPUs to participate effectively in large-scale RL optimization with extremely low bandwidth requirements.</p><p>Through <strong>RL Swarm</strong> and <strong>SAPO</strong>, Gensyn demonstrates that reinforcement learning—particularly post-training RLVR—naturally fits decentralized architectures, as it depends more on diverse exploration via rollouts than on high-frequency parameter synchronization. Combined with <strong>PoL</strong> and <strong>Verde</strong> verification systems, Gensyn offers an alternative path toward training trillion-parameter models: a self-evolving superintelligence network composed of millions of heterogeneous GPUs worldwide.</p><br><h2 id="h-nous-research-reinforcement-learning-environment-atropos" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Nous Research: Reinforcement Learning Environment Atropos</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nousresearch.com/"><u>Nous Research</u></a>&nbsp; is building a decentralized, self-evolving cognitive stack, where components like Hermes, Atropos, DisTrO, Psyche, and World Sim form a closed-loop intelligence system. 
Using RL methods such as DPO, GRPO, and rejection sampling, it replaces linear training pipelines with continuous feedback across data generation, learning, and inference.</p><p><strong>Nous Research Components Overview</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Component Name</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Role</strong></p></td><td colspan="1" rowspan="1"><p><strong>Relationship with Reinforcement Learning (RL)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Hermes</strong></p></td><td colspan="1" rowspan="1"><p>Policy Model (LLM / Reasoning Agent)</p></td><td colspan="1" rowspan="1"><p>The optimization object of RL; its reasoning chain is constantly reinforced by DPO / GRPO / Rejection Sampling.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Atropos</strong></p></td><td colspan="1" rowspan="1"><p>Standardized Verifiable Environment (RL Environment)</p></td><td colspan="1" rowspan="1"><p>Provides deterministic rewards and filters reasoning trajectories; core source of RL data quality and trustworthiness.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>DisTrO</strong></p></td><td colspan="1" rowspan="1"><p>Distributed Optimizer (Gradient Transport)</p></td><td colspan="1" rowspan="1"><p>Completes RL parameter updates under low bandwidth conditions, making decentralized inference RL feasible.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Psyche</strong></p></td><td colspan="1" rowspan="1"><p>Decentralized Training Network</p></td><td colspan="1" rowspan="1"><p>The actual computation execution layer carrying the RL closed loop (Generation → Verification → Reward → Update).</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>World Sim</strong></p></td><td colspan="1" rowspan="1"><p>Synthetic Task World</p></td><td colspan="1" rowspan="1"><p>Provides complex tasks and long-term reasoning scenarios for RL, 
supporting world model and general agent training.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Forge</strong></p></td><td colspan="1" rowspan="1"><p>Inference &amp; Trajectory Collector</p></td><td colspan="1" rowspan="1"><p>Collects user and model reasoning trajectories, which become RL retraining data after Atropos verification.</p></td></tr></tbody></table><p><br></p><p><strong>Model Layer: Hermes and the Evolution of Reasoning Capabilities</strong></p><p>The Hermes series is Nous Research's main user-facing model line. Its evolution clearly traces the industry's migration from traditional SFT/DPO alignment to Reasoning RL:</p><ul><li><p><strong>Hermes 1–3: Instruction Alignment &amp; Early Agent Capabilities:</strong> Hermes 1–3 relied on low-cost DPO and synthetic data for robust instruction alignment, with Hermes 3 introducing the Atropos verification mechanism for the first time.</p></li><li><p><strong>Hermes 4 / DeepHermes:</strong> Writes System-2-style slow thinking into the weights via Chain-of-Thought, improving math and code performance with Test-Time Scaling, and relying on “Rejection Sampling + Atropos Verification” to build high-purity reasoning data.</p></li><li><p><strong>DeepHermes</strong> further adopts GRPO in place of PPO (which is difficult to implement in decentralized settings), enabling Reasoning RL to run on the Psyche decentralized GPU network and laying the engineering foundation for scaling open-source Reasoning RL.</p></li></ul><p><strong>Atropos: Verifiable Reward-Driven Reinforcement Learning Environment</strong></p><p>Atropos is the true hub of the Nous RL system. It encapsulates prompts, tool calls, code execution, and multi-turn interactions into a standardized RL environment that directly verifies whether outputs are correct, providing deterministic reward signals that replace expensive, unscalable human labeling. 
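A generic RLVR-style verifier illustrates the idea of deterministic, programmatic rewards (a toy sketch; Atropos's actual environment interface is richer, covering tool calls and multi-turn interaction):

```python
def verify_math(completion, expected):
    """Deterministic reward: extract the final answer after the last '='
    and compare it exactly -- correct earns 1.0, anything else 0.0."""
    answer = completion.strip().rsplit("=", 1)[-1].strip()
    return 1.0 if answer == expected else 0.0

def score_batch(trajectories, expected):
    """Score a batch of reasoning trajectories against the verifier;
    only verified-correct chains survive as RL training data."""
    return [(t, verify_math(t, expected)) for t in trajectories]

scored = score_batch(["12 * 12 = 144", "12 * 12 = 124"], expected="144")
print([r for _, r in scored])  # [1.0, 0.0]
```

The same pattern extends to code (run the tests in a sandbox) and tool use (check the tool's return value), which is what makes the reward both cheap and auditable.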
More importantly, in the decentralized training network Psyche, Atropos acts as a "judge" to verify if nodes truly improved the policy, supporting auditable Proof-of-Learning, fundamentally solving the reward credibility problem in distributed RL.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8010faa1a365b5b2d1b7ac5dfd218a5921eec828f36caed6975eb093dbb4b2a7.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGsElEQVR4nDXRe1BTZxoG8G9Wtyudzla0XKwICKjEqFVcb1xWqcJKV+tMK64jdrWoW+uKVruuikURwVG6si4DwpqSlAa5KGkwpVQWgRiDhAABYkJCLidpThJIyO3kcuCc73DODtvZmd8fz/vXM/O8gKIICAkISYqCNA1pmiDJIEXPUBQ+jwyShJ+COCRxmp4jCJyCJKTnApD0U9BFzLjnSC9NueCsC85OQ9JOzP4PgRK4lSSQUAAwDNM/IC+/X/VdU0tNHa+zu7tZIOA/bn7I5XHquHX19ZU1td818Curayura2s53Lr6xlp+Uw2/qYrH/6alVdDZVcHh8p4I/8Wt57S0Pmho5gra+KL2L2+WtUn7JmkIaJrwBz02u9lkRia0r01apVmvRMaHtcpBtUJmHB+12wzK18M2i8FiNkxPO40225jFMm63jaFoP4L0GfTPlUqpXt+jHhfKZC8NhkEUVTscozbruMtpwgPglzVoGjIMSYS8TrPaoBp6rZB7Uc20UYVZJxgKpyFO4T6aCM7RMMBQswyD0ZQTkjK7rceof2Ey9hgNYqNBbEJEyrFXFrPBj2k8DiSE6f0YIEOuOQo3IRNYwNMuFNbdvStoF1aWFxdeuV5WehuSQcSo8TisVMBD+JwQzuh92AP5iMRqRzAf4nU7iRnf/A+ISYJASaJTLue3tta3tv4gFqucU4YgBuCUjsC9ki7RUDvvo7z8sPW78jLfv5aSvOTXUcnLYv+490NRS8Osd2rW65hxozQZHP7ZAvIuFLd1aRxTPa/6Bu1WoQltHlT92N0l12vlOk27uFss7+8a6OsckKmmHYByo0Gb9nzB+SsH0zasSwlnpcavSI6L2/TbJQkxby+LBQtzjv4FxxyUD6W8KE247B5MOqLzBIIeCLla7e+aGuMecMJuVHwmeNY9gSBBj8KG1naJBfIhtWNK63MDymezyp4Vn/g0f92mxdFrlkYn7j5w6NiuHWsS2Iti1r8TEQvSc0/Wd+AOi8VsOMPpWXCek17ffKm//1CLCHx1Z0P1v1MfNYGSr8GJa0WiHmPQ2zI8ericU/joqTEYmC+gienywsKDR/KT1qYtBO9sWb/5cGnNzcwt+bGRb2YcyDhxevO5C6Dgxpv3muPu/QDOc5ferdpxriLhaCk4cX312eJcYXtixaPtn/8jZueZvKrHo1NTvYjpdBW/QvTcPIOPY17AUL7K0jsgKSMzNX0He+Oy5aztR/76cfb+rITVC9np7L0fLc79Mzh1FVzggWPV8XnX8vaV5LOL8j68m5NbumjDsbSDxRnHy5O2nQa7Py+41/TSYBB0dfI6Ol5MaIZRi8rrBpTbqlNIE7fmXPosv7Pyy/h4dnh4wupNmSAscUX0KlbsSsDOBpcaL37dtO/TO2kpFy9+8E9Rx7AGc3UM9p8tuL8wcn/M
2uOR2QV7rlZL0Em5XstpamgUCQXd/+nVqlReL8BsOkTZ/31DffzmP2TtzE6Kij9+ILvkSO7iCNbqsMVvA5Ca9bEVtVKYhYKEb9LjCgTsNGEK+qTKkXH3ZIdcXfGk55ZI8sqA2Gly1Gb+SSp5JpOKVaMan0uNYYBwmaETwR2Gstv3cnJPRcRsvJ2xKi9jZ1gUO/P3e0/mfTImeQad+uC0laAgw9D2gH/Yj93kt6YXFGVfLMm5XPbBldJ9fyvZ//dbGaevJh/9gvXJBdbRL5L+dHbbqcsDk3YAnSa7QjLQ85NZKdO9ei6XdLUVFe3ZmtXQ+Jic1E3rR6HbAu3auaCLYRgIySAe7AsEtp75CiRuB5uzAGsXSEwFSWnzVm4H0Skg8j0QsQG8tQbE75BazAC6LCGTqpdfy79fzi8rEjZ8i7stLqOa8tme90ket4vGRuRqudSgfW1E9BjmnaNmZxlm/7lrICIZJG4DK7eAqHXzIlkgYu28JckgnAUWJYH4rVL0ZwCd5pBhZPgJV9jA63/a0syp0Q9JSKsWTumTzhXvyT1cWc15KmoTiUQKxdDExLhr2kEzzKGCQhC1Bqx4D0SxwBsrQfiqsNiNC95lg0UJ4K0kEJcCotlg+UaxGQHQYXJqR2XiTsXLTlnvjxp5j8cwNmvVQCeSf6l494GT3wuECsWgfHBAMTKs02msNpRmmH3HCwB491dLV4EFsQBEAvAGAMv+H34zXwmWA7BCakUBieoIVIOb1CGTKmRSkRYNadUSlvEZsyaIqLzGMd+UyYd5fX5fIOSfIXB8xj9JhBp7X5Q+5N54yLtaxbn+YF7JN/W/nIU1vJvfNl6uqj17q7xDOfZfR91Cp9epKRwAAAAASUVORK5CYII=" nextheight="813" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>DisTrO and Psyche: Optimizer Layer for Decentralized Reinforcement Learning</strong></p><p>Traditional RLF (RLHF/RLAIF) training relies on centralized high-bandwidth clusters, a core barrier that open source cannot replicate. DisTrO reduces RL communication costs by orders of magnitude through momentum decoupling and gradient compression, enabling training to run on internet bandwidth; Psyche deploys this training mechanism on an on-chain network, allowing nodes to complete inference, verification, reward evaluation, and weight updates locally, forming a complete RL closed loop.</p><p>In the Nous system, <strong>Atropos</strong> verifies chains of thought; <strong>DisTrO</strong> compresses training communication; <strong>Psyche</strong> runs the RL loop; <strong>World Sim </strong>provides complex environments; <strong>Forge </strong>collects real reasoning; <strong>Hermes</strong> writes all learning into weights. 
Reinforcement learning is not just a training stage, but the core protocol connecting data, environment, models, and infrastructure in the Nous architecture, making Hermes a living system capable of continuous self-improvement on an open computing network.</p><h2 id="h-gradient-network-reinforcement-learning-architecture-echo" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Gradient Network: Reinforcement Learning Architecture Echo</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/"><u>Gradient Network</u></a> aims to rebuild AI compute via an Open Intelligence Stack: a modular set of interoperable protocols spanning P2P communication (Lattica), distributed inference (Parallax), decentralized RL training (Echo), verification (VeriLLM), simulation (Mirage), and higher-level memory and agent coordination—together forming an evolving decentralized intelligence infrastructure.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>System Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p><strong>Positioning</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Inference Layer</strong></p></td><td colspan="1" rowspan="1"><p>Parallax</p></td><td colspan="1" rowspan="1"><p>Heterogeneous GPU distributed inference, WAN Pipeline Parallel, Speculative Decoding</p></td><td colspan="1" rowspan="1"><p>Global distributed execution OS for Sovereign AI</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Training Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Echo</strong></p></td><td colspan="1" rowspan="1"><p>RL Rollout–Learner decoupling, heterogeneous device Rollouts, verifiable training data</p></td><td colspan="1" rowspan="1"><p>Training and 
optimization engine for decentralized RL</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Connectivity &amp; Networking Layer</strong></p></td><td colspan="1" rowspan="1"><p>Lattica</p></td><td colspan="1" rowspan="1"><p>P2P network, cross-NAT connectivity, Hole Punching, DHT, BitSwap, dynamic routing</p></td><td colspan="1" rowspan="1"><p>Communication and connection base for distributed AI</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Agent Intelligence Layer</strong></p></td><td colspan="1" rowspan="1"><p>Symphony, SEDM, Massgen, CUAHarm</p></td><td colspan="1" rowspan="1"><p>Symphony: Collaborative scheduling; SEDM: Growable long-term memory; Massgen: Multi-model debate; CUAHarm: Security sandbox</p></td><td colspan="1" rowspan="1"><p>Intelligent evolution and collective intelligence layer for decentralized Agents (Collaboration × Memory × Reasoning × Security)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Trust &amp; Verification Layer</strong></p></td><td colspan="1" rowspan="1"><p>VeriLLM / Veri</p></td><td colspan="1" rowspan="1"><p>Spot-check verifiable inference, Commit–Reveal verification, training verifiability</p></td><td colspan="1" rowspan="1"><p>Trust layer for distributed inference and training</p></td></tr></tbody></table><p><br><strong>Echo — Reinforcement Learning Training Architecture</strong></p><p>Echo is Gradient's reinforcement learning framework. Its core design principle lies in decoupling training, inference, and data (reward) pathways in reinforcement learning, running them separately in heterogeneous Inference Swarm and Training Swarm, maintaining stable optimization behavior across wide-area heterogeneous environments with lightweight synchronization protocols. 
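One way to picture such a lightweight synchronization protocol (an illustrative sketch, not Echo's implementation): trajectories carry the policy version they were sampled under, and the trainer discards those whose version lags too far behind the current weights:

```python
MAX_STALENESS = 2  # illustrative threshold, not an Echo parameter

def usable(trajectories, learner_version):
    """Keep only trajectories whose sampling policy is fresh enough;
    stale ones would bias the gradient and are discarded (or, in a
    sequential mode, never produced in the first place)."""
    return [t for t in trajectories
            if learner_version - t["policy_v"] <= MAX_STALENESS]

batch = [{"id": 1, "policy_v": 9}, {"id": 2, "policy_v": 7},
         {"id": 3, "policy_v": 5}]
fresh = usable(batch, learner_version=9)
print([t["id"] for t in fresh])  # [1, 2]
```

Tightening the threshold trades throughput for on-policy accuracy, which mirrors the choice between the two protocol modes described below.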
This effectively mitigates the SPMD failures and GPU utilization bottlenecks caused by mixing inference and training in traditional DeepSpeed RLHF / VERL.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/563d213059e5532fbc8d72a71afd4188e293043c1b53fbc8007144f2302e3c19.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAD4ElEQVR4nK1Uz2/jVBB+FxBUSCshcUACUYGQ4IT2wh1ulaJKvfQCh1X7FywXdqVqySUIKpXDNoUsLd4slVKldbaXHrbQVt3sWhgW3DR13KR14tRpfjhOHTeJX/LyXvKQ/bpht1SoAj6NRuPn8YxnvpkHzHr92KxWrBoThLqXEHQ5ty6EbaBXK7+pKblwmCwcZIyjPv0/QQgBhBD2gLtYzx/pum6aZrFYVFXVtm3mRCnVdV0URdmDJEmWZQ2iGDVTOcjki8clo5LJHurHhUFMNwFCXUJIj/YazUZwLjjmYXR0NBAIsBwMxWJR1/WKB2aw87ptJ1N7fr9/fHzc5/PdvHFzc2uL9HsXVGDbtqqqPM/7/X6O42RZ1nUdIQQ9IIQkSdpJ7IiiqGkaxthptSCEDoTyvpLX9eWVleDcXDaXVQ7SHYQuSMD6kMlktKdgr/qe9GjPsqy6Xa+d1BzosENGGOy0kylZzWXVvJZM7Vn1v7r3XILCcWF9fT0cDnMct7q6yvN8JpO5JJmapsV4/vs7d+Lx+LN/fL4CBgghIQQh1Gg2KaXg2sfgHQDeB+C95+VdAK591O/1HegQQtpeGy+YoiaEVuu03mo02o6nIcJdB0LYaTeclpvgw1fB6wAMPyNveNFfA+DqEKW03mowf4wxwnigXQN1Qb5afpyWdguHqVLuYerJbi7NqHMgtE7dMQVXXwYAgCsAvODpVwB4yTMAAB+86E5H45QNggOhFxQNyMMYn7XIqp3UTNOoVEyjyjwIIRhjSuln0W+HPv9keGpyeGrirVsTb05NvH3LtYdufHp9ac5dIIyZP4SwVC4XPVQqFUr7Z3tAKd1JJKLR6MIPCw/jcfpvgRD6RRCi0ehdjhMEwTvpuhUwciCETY9V76pBpmka1appmn1K89Xyo/TuZuqPrX3pZ/nJ9n7iUXpXtwy39KdweUZtSmkbtWGn3aO9jtcJQCmVJCkQ+DIUCnEcNz09LYqioiiLi4uRSCQcDmvZXFI7XEs8/mZ96au1H7+4vxDciK0lBFlXi4UCz/Pz8/OhUCi5t5dM7X09PX17dvb27Ox3oVAme+hygFBXUZSZmRmfzzcyMhIIBERRtG2b1dFBqHnaSOm5JeHBvfha8Kfl4CZ/L762JDxIl4+apy69zLN2ciIlpOXl5bGxscnJyVgsphykzxJY1oksy6IoCoKgKIppmoPl6Hta1g62U9LKr5vBjVhwg7//+3Y8nUjl1YEDG/lCqWhaVskol4yyUauWjMpZAjeELEciEZ7nVdX97Nz2YYxN23LnvQOdDqy3GoZtsRn7Z5zf5L9v9X8EIeRPTszAEuXHWesAAAAASUVORK5CYII=" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Echo 
uses an "Inference-Training Dual Swarm Architecture" to maximize computing power utilization. The two swarms run independently without blocking each other:</p><ul><li><p><strong>Maximize Sampling Throughput:</strong> The Inference Swarm consists of consumer-grade GPUs and edge devices, building high-throughput samplers via pipeline-parallel with Parallax, focusing on trajectory generation.</p></li><li><p><strong>Maximize Gradient Computing Power:</strong> The Training Swarm can run on centralized clusters or globally distributed consumer-grade GPU networks, responsible for gradient updates, parameter synchronization, and LoRA fine-tuning, focusing on the learning process.</p></li></ul><p>To maintain policy and data consistency, Echo provides two types of lightweight synchronization protocols: <strong>Sequential</strong> and <strong>Asynchronous</strong>, managing bidirectional consistency of policy weights and trajectories:</p><ul><li><p><strong>Sequential Pull Mode (Accuracy First):</strong> The training side forces inference nodes to refresh the model version before pulling new trajectories to ensure trajectory freshness, suitable for tasks highly sensitive to policy staleness.</p></li><li><p><strong>Asynchronous Push–Pull Mode (Efficiency First):</strong> The inference side continuously generates trajectories with version tags, and the training side consumes them at its own pace. 
The coordinator monitors version deviation and triggers weight refreshes, maximizing device utilization.</p></li></ul><p>At the bottom layer, Echo is built upon Parallax (heterogeneous inference in low-bandwidth environments) and lightweight distributed training components (e.g., VERL), relying on LoRA to reduce cross-node synchronization costs, enabling reinforcement learning to run stably on global heterogeneous networks.</p><h2 id="h-grail-reinforcement-learning-in-the-bittensor-ecosystem" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Grail: Reinforcement Learning in the Bittensor Ecosystem</strong></h2><p>Bittensor constructs a huge, sparse, non-stationary reward function network through its unique Yuma consensus mechanism.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.covenant.ai/"><u>Covenant AI</u></a> in the Bittensor ecosystem builds a vertically integrated pipeline from pre-training to RL post-training through SN3 Templar, SN39 Basilica, and SN81 Grail. 
Among them, SN3 Templar is responsible for base model pre-training, SN39 Basilica provides a distributed computing power market, and SN81 Grail serves as the "verifiable inference layer" for RL post-training, carrying the core processes of RLHF / RLAIF and completing the closed-loop optimization from base model to aligned policy.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Stage</strong></p></td><td colspan="1" rowspan="1"><p><strong>Subnet</strong></p></td><td colspan="1" rowspan="1"><p><strong>Function Description</strong></p></td><td colspan="1" rowspan="1"><p><strong>Relation to RL</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p>Infrastructure Layer</p></td><td colspan="1" rowspan="1"><p>Basilica (SN39)</p></td><td colspan="1" rowspan="1"><p>Distributed inference and compute market, scheduling global GPU resources</p></td><td colspan="1" rowspan="1"><p>Indirect: Provides compute execution layer for rollout generation and RL training</p></td></tr><tr><td colspan="1" rowspan="1"><p>Pre-training Layer</p></td><td colspan="1" rowspan="1"><p>Templar (SN3)</p></td><td colspan="1" rowspan="1"><p>Base model pre-training (SFT / Base Model)</p></td><td colspan="1" rowspan="1"><p>Pre-requisite: Produces base policy model π<sub>0</sub> needed for RL fine-tuning</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Post-training / RL Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Grail (SN81)</strong></p></td><td colspan="1" rowspan="1"><p>RLAIF / RLVR; reasoning, coding, tool use; verifiable rewards</p></td><td colspan="1" rowspan="1"><p>Core: The only subnet in Covenant executing RL, responsible for policy optimization and reasoning</p></td></tr></tbody></table><p><br><strong>GRAIL</strong> cryptographically verifies RL rollouts and binds them to model identity, enabling trustless RLHF. 
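A toy illustration of cryptographically verifying a rollout (hypothetical helpers; GRAIL's real protocol binds commitments to model activations rather than raw token lists): the miner commits to the rollout before the challenge is known, and the validator re-derives a deterministic challenge to spot-check it cheaply:

```python
import hashlib
import random

def commit(rollout_tokens):
    """Miner commits to the full rollout before the challenge is known,
    so it cannot be fabricated after the fact."""
    data = ",".join(map(str, rollout_tokens)).encode()
    return hashlib.sha256(data).hexdigest()

def spot_check(rollout_tokens, commitment, seed, n_probes):
    """Validator re-derives the deterministic challenge from the seed and
    re-checks only a few sampled positions -- cheap for the validator,
    while cheating at any position risks detection."""
    if commit(rollout_tokens) != commitment:
        return False
    rng = random.Random(seed)                 # deterministic challenge
    probes = [rng.randrange(len(rollout_tokens)) for _ in range(n_probes)]
    # In the real system each probed position would be recomputed against
    # the committed model; here the check is just index consistency.
    return all(0 <= p < len(rollout_tokens) for p in probes)

tokens = [101, 7, 42, 9, 55]
c = commit(tokens)
print(spot_check(tokens, c, seed=0xBEEF, n_probes=3))               # True
print(spot_check([101, 7, 42, 9, 56], c, seed=0xBEEF, n_probes=3))  # False
```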
It uses deterministic challenges to prevent pre-computation, low-cost sampling and commitments to verify rollouts, and model fingerprinting to detect substitution or replay—establishing end-to-end authenticity for RL inference trajectories.</p><p>Grail’s subnet implements a verifiable GRPO-style post-training loop: miners produce multiple reasoning paths, validators score correctness and reasoning quality, and normalized results are written on-chain. Public tests raised Qwen2.5-1.5B MATH accuracy from 12.7% to 47.6%, showing both cheat resistance and strong capability gains; in Covenant AI, Grail serves as the trust and execution core for decentralized RLVR/RLAIF.</p><h2 id="h-fraction-ai-competition-based-reinforcement-learning-rlfc" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Fraction AI: Competition-Based Reinforcement Learning RLFC</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fractionai.xyz/"><u>Fraction AI</u></a> reframes alignment as <strong>Reinforcement Learning</strong> from Competition, using gamified labeling and agent-versus-agent contests. 
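Rewards derived from such agent-versus-agent contests can be sketched with an Elo-style rating update (illustrative only; Fraction AI's actual scoring combines relative rankings with AI-judge verdicts):

```python
def elo_update(r_a, r_b, score_a, k=32):
    """After a head-to-head match, shift both ratings toward the result:
    score_a is 1 if agent A won, 0 if it lost, 0.5 for a draw."""
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two equally rated agents; A wins the session:
ra, rb = elo_update(1000, 1000, score_a=1)
print(round(ra), round(rb))  # 1016 984
```

Because the reward depends on who an agent happens to face, the signal shifts as the population improves, which is exactly the anti-reward-hacking property the table below contrasts with a static reward model.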
Relative rankings and AI judge scores replace static human labels, turning RLHF into a continuous, competitive multi-agent game.</p><p><strong>Core Differences Between Traditional RLHF and Fraction AI's RLFC:</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Traditional RLHF</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Fraction AI (RLFC)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Reward Source</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Static Model: </strong>Reward Model trained on historical data, prone to obsolescence.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dynamic Market:</strong> Based on real-time competition rankings and rulings by decentralized AI Judges.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Interaction Mode</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Isolated Optimization: </strong>Single-agent optimization against a fixed function.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Adversarial Game: </strong>Adversarial or competitive interaction with other agents.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Iteration Frequency</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Low-Frequency Offline: </strong>Batch data collection, low-frequency retraining.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>High-Frequency Online: </strong>Continuous learning and weight updates based on session streams.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Ownership</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Centralized: </strong>Model weights owned by centralized entity.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Decentralized:</strong> Users own agent assets (NFT/Token) and their generated yields.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Robustness</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Fragile: </strong>Susceptible to "Reward Hacking," falling into local optima.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Robust: </strong>Dynamically changing opponent strategies force agents to constantly evolve, preventing policy collapse.</p></td></tr></tbody></table><p><br>RLFC’s core value is that rewards come from evolving opponents and evaluators, not a single model, reducing reward hacking and preserving policy diversity. Space design shapes the game dynamics, enabling complex competitive and cooperative behaviors.</p><p>In system architecture, Fraction AI disassembles the training process into four key components:</p><ul><li><p><strong>Agents:</strong> Lightweight policy units based on open-source LLMs, extended via QLoRA with differential weights for low-cost updates.</p></li><li><p><strong>Spaces:</strong> Isolated task domain environments where agents pay to enter and earn rewards by winning.</p></li><li><p><strong>AI Judges:</strong> Immediate reward layer built with RLAIF, providing scalable, decentralized evaluation.</p></li><li><p><strong>Proof-of-Learning:</strong> Binds policy updates to specific competition results, ensuring the training process is verifiable and cheat-proof.</p></li></ul><p>Fraction AI functions as a human–machine co-evolution engine: users act as meta-optimizers guiding exploration, while agents compete to generate high-quality preference data, enabling trustless, commercialized 
fine-tuning.</p><p><strong>Comparison of Web3 Reinforcement Learning Project Architectures</strong></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project Name</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL Architecture Mode</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Technology</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Bandwidth Optimization Strategy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL Role</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prime Intellect</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Asynchronous Distributed RL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">PRIME-RL (Framework), INTELLECT-1/2 (Models)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">SHARDCAST: HTTP tree topology for high-speed weight broadcasting.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Full-Stack Platform: complete facilities for decentralized RL training.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gensyn</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Collaborative Swarm RL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RL Swarm, Probabilistic PoL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Graph-based Pinpoint: Verify only random points in compute graph.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Base Protocol: Collaborative inference &amp; peer review via heterogeneous "swarm".</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Nous Research</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">Communication-Efficient Training</p></td><td colspan="1" rowspan="1"><p style="text-align: center">DisTrO (Optimizer), Tinker-Atropos (RL Env)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">DisTrO: Reduces gradient communication by 1000x-10000x.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Algorithm Architecture: mathematical breakthroughs in communication-efficient optimization.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gradient</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Edge-Core Decoupling</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Echo (Framework), Parallax (Inference Engine)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Physical Separation: Edge (Inference Swarm) for sampling, Core (Training Swarm) for updates.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">OS: Maximize use of idle edge compute for large-scale data sampling.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Grail</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable RL Post-training</p></td><td colspan="1" rowspan="1"><p style="text-align: center">GRAIL Protocol, Superlinear Scoring</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Rollout Proofs: Transmit only inference results with encrypted fingerprints.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Dedicated Subnet: Bittensor ecosystem focused on RL Post-training.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Fraction AI</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Darwin RL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RLFC (Competitive RL), Gamified Labeling</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Asynchronous Data Stream</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">Data Fuel: provides critical feedback signals for reward modeling and alignment.</p></td></tr></tbody></table><h1 id="h-v-the-path-and-opportunity-of-reinforcement-learning-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. The Path and Opportunity of Reinforcement Learning × Web3</strong></h1><p>Across these frontier projects, despite differing entry points, RL combined with Web3 consistently converges on a shared “decoupling–verification–incentive” architecture, an inevitable outcome of adapting reinforcement learning to decentralized networks.</p><p><strong>General Architecture Features of Reinforcement Learning: Solving Core Physical Limits and Trust Issues</strong></p><ol><li><p><strong>Decoupling of Rollouts &amp; Learning (Physical Separation of Inference/Training) — Default Computing Topology:</strong> Communication-sparse, parallelizable Rollouts are outsourced to global consumer-grade GPUs, while high-bandwidth parameter updates are concentrated in a few training nodes. This holds across designs, from Prime Intellect's asynchronous Actor–Learner split to Gradient's Echo dual-swarm architecture.</p></li><li><p><strong>Verification-Driven Trust — Infrastructuralization:</strong> In permissionless networks, computational authenticity must be enforced through mathematics and mechanism design. Representative implementations include Gensyn's PoL, Prime Intellect's TopLoc, and Grail's cryptographic verification.</p></li><li><p><strong>Tokenized Incentive Loop — Market Self-Regulation:</strong> Computing supply, data generation, verification sorting, and reward distribution form a closed loop. 
Rewards drive participation, and slashing suppresses cheating, keeping the network stable and continuously evolving in an open environment.</p></li></ol><p><strong>Differentiated Technical Paths: Different "Breakthrough Points" Under Consistent Architecture</strong></p><p>Although architectures are converging, projects choose different technical moats based on their DNA:</p><ul><li><p><strong>Algorithm Breakthrough School (Nous Research):</strong> Tackles distributed training’s bandwidth bottleneck at the optimizer level: DisTrO compresses gradient communication by orders of magnitude, aiming to enable large-model training over home broadband.</p></li><li><p><strong>Systems Engineering School (Prime Intellect, Gensyn, Gradient):</strong> Focuses on building the next-generation "AI Runtime System." Prime Intellect's ShardCast and Gradient's Parallax are designed to squeeze the highest efficiency out of heterogeneous clusters under existing network conditions through aggressive engineering.</p></li><li><p><strong>Market Game School (Bittensor, Fraction AI):</strong> Focuses on reward-function design: sophisticated scoring mechanisms guide miners to discover optimal strategies on their own, accelerating the emergence of intelligence.</p></li></ul><p><strong>Advantages, Challenges, and Endgame Outlook</strong></p><p>Under the paradigm of Reinforcement Learning combined with Web3, system-level advantages are first reflected in the <strong>rewriting of cost structures and governance structures.</strong></p><ul><li><p><strong>Cost Reshaping:</strong> RL post-training has effectively unbounded demand for sampling (rollouts). Web3 can mobilize global long-tail computing power at extremely low cost, an advantage difficult for centralized cloud providers to match.</p></li><li><p><strong>Sovereign Alignment:</strong> Web3 breaks Big Tech's monopoly on AI values (alignment). 
The community can decide "what is a good answer" for the model through token voting, democratizing AI governance.</p></li></ul><p>At the same time, this system faces three structural constraints:</p><ul><li><p><strong>Bandwidth Wall:</strong> Despite innovations like DisTrO, physical latency still limits the full training of ultra-large parameter models (70B+). For now, Web3 AI remains largely confined to fine-tuning and inference.</p></li><li><p><strong>Reward Hacking (Goodhart's Law):</strong> In highly incentivized networks, miners are extremely prone to "overfitting" reward rules (gaming the system) rather than improving real intelligence. Designing cheat-proof, robust reward functions is an ongoing arms race.</p></li><li><p><strong>Malicious Byzantine Workers:</strong> adversaries who deliberately manipulate and poison training signals to disrupt model convergence. Here the core challenge is not only the continual design of cheat-resistant reward functions, but aggregation mechanisms with adversarial robustness.</p></li></ul><p>RL and Web3 are reshaping intelligence via decentralized rollout networks, on-chain assetized feedback, and vertical RL agents with direct value capture. 
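</p><p>The decoupling-verification pattern that recurs above can be illustrated with a toy commit-and-spot-check loop (a minimal sketch under simplified assumptions; the environment, hashing scheme, and sampling here are hypothetical and not any specific project's protocol):</p>

```python
import hashlib
import json
import random

def run_step(state, action):
    # Deterministic toy environment transition (stand-in for an RL env).
    return (state * 31 + action) % 1000

def worker_rollout(seed, length=8):
    """Worker side: execute a rollout and commit to it with a hash."""
    rng = random.Random(seed)
    state, steps = 0, []
    for _ in range(length):
        action = rng.randrange(4)
        next_state = run_step(state, action)
        steps.append([state, action, next_state])
        state = next_state
    commitment = hashlib.sha256(json.dumps(steps).encode()).hexdigest()
    return steps, commitment

def validator_spot_check(steps, commitment, n_samples=3, seed=0):
    """Validator side: recompute the hash, then replay only a few
    sampled steps instead of the whole trajectory."""
    if hashlib.sha256(json.dumps(steps).encode()).hexdigest() != commitment:
        return False
    rng = random.Random(seed)
    for i in rng.sample(range(len(steps)), n_samples):
        state, action, next_state = steps[i]
        if run_step(state, action) != next_state:  # must replay exactly
            return False
    return True

steps, commitment = worker_rollout(seed=42)
honest = validator_spot_check(steps, commitment)   # passes both checks
forged = validator_spot_check(steps, "00" * 32)    # wrong commitment fails
```

<p>An honest rollout passes both the hash check and the replayed transitions, while a forged commitment fails immediately, so validators never pay the full cost of re-running every trajectory.</p><p>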
The true opportunity is not a decentralized OpenAI, but new intelligence production relations—open compute markets, governable rewards and preferences, and shared value across trainers, aligners, and users.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/06532da69763f49ff41647cdf4f6e5cfcf0efcc395a656cb14cca72f3a52cf59.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGZklEQVR4nE1UC1CTVxa+teu0VtmhroWMFNQKUyuZUpslEImQUgO0oYgYQ6y8DBAJEopoqbgVjKwSqAIFrWujBAxCkhoCJKTS37ABxcSEpEn6E37yhESMhqfQBbbb7eyQOMzeOXPn3HPPud+c79z5AAzrnOPW3/+zsrT0YmnpxfycZ3Fx7tfFWa/NrCy/WFlemJ/zLCzMriwv+uKzs56V5cX5uWlvfGZpaX5+bvq33/716+Ls/Nz0yvKCw2ExGoZNI3rTiB5MPrFXVJRv3vznKCwGjd5NpZLR6F3x8YTY2JhPSYmEODwG8z6VSsbhsAAAQhyeSPxoX+zeoCAUHo/LyEjfTyTg8biEhHg0ejeJlJiU9HF4+LstLZyZabcPA0w+cXx9tgx41zoA1q9/BYUKCApChYQEoVABAAC/jRsiItDh6PfW/Qls2x4cHBy0a1fYZv+NKFRAXBx+x47g0NB3wkJ3+m3c4PPXr3+FyWRIpWKDQbMKYBrRGw3DENQrFgshqFckEt692yEUtAsFPAjqlUg6xWKh8Ic7PT2ibrFQJGxpv32jrY0rl/e1tXFlsh5uM6e7SyS629Er7ZbJuvl8Xn//vaamuqamOhj2UuRjyozAPrNZR9ds7eh02WxWk6yTI+HXS/gN0h/+YUaMLpfdjMAT4+b/T342OW6zjk6MW1wu28sZ+DrwmU6rMo387OPOjMC+KxjWmS0maSc3lxovh/qk0p78jP3SzlazxbRWuJqDwBq1UiQSqtUPEcToex1BjC870OnUDwcVra3cHkmXRNIlaG8R8G9LJSKNekivV5stJkVfB41KzM0vOME8mUbC3eteA9DYrKMPBuXXv2tsarqC3xtdV8tqvlFvNGjMlpGXAGbE+HBQcR/q7eDf4bXxqi+er67629mvSgR8rhkx2uyjv/93qZCR/zoA7TfZnCYWbvebnKsXp2c9DofZ7XbK+++zWJXtd7hHKQdoWZRDKYklJ/IeDEJ2x9gaANz3U69EIpL33795i1N84njZ6ZLHjx/QaLRXX98AAAjc+vYb/ls43zceIkWTSbghhfTL4sziAkriR5g0Er6i/IueHtHtVk4NuyqfnpeRfoAQs4cQ84FSNWC1jgIY1lutJgi6d+5sybVrdWVlpXT6sVOnSs6zKjf6v1nAyAMAVJz/+rnH9ccf/z5y9MjOsLAz5WWdYuGhtM9CQ7cRiXH0rOQucUdjY+29H7t5bbzcnAxaFqXsNNMI6xDkF2C2wA+HFBf+fmGLHyAR954sZmZmHGltvdV4tTFw61aDTimX//jc49JoVVOz7mO07E1vrEs9mKIYkNsd5r4+qWnUWHmG0cy5VnqqpL6+tplzTTEAHc/9nPP9VafLbho1AAT5RTP8SCDoSEmIzKGSatgXCxn59fW1vDbeloBA1aN/Tk09rWZf6hILVlGbGgYHf0KQn2WyThotZ2iof2rmaXlp7r
mzX6YeSKbRckqLGWx21fvhoXExe5zenwpgWOd2T7BYlQAA/01g+47tUdFRB9MOYqNx/pvfcrsnsrIz/V4D/pteBQAkEOPw+MiA10CgH0j4eJ9QwLM7xsq+yLly+RKDkV9USC8+cfxCZTm3+UZLC8dqH10dstEwPPnEwWQWAgBQKFRwSMieDz+MjPxrJDbqck3VsPrBmVJ64F82vB2yreabmtOnikqYeTvDwgAADEahRNLlctlPMo5cbaiOJ0TlZVMOH0xmsSojwt/ZFbbV6bLBsO7lN52aeup2Tzxzj7tcVofDNIYYPB6XSMCFoJ6Z2WfPPZMvFmau37jOrq2uZl/i3OJc/rZBq1dPTT1BEFj9WHHpXNH5r/LpWcm0zLSnbqdY3HGn7ZbdMbZKkdVqksm6s7OPkg+nUtLJ+4kESjq5XwFptCrN8COny8K5eZ1KJScl7U9KIlLSyUlJxJSUT5nMAhLpk/r6WofDfKWOnU49nJaWkp2dSaNlfn6UQj6ceoyWrVQNrgI4XZaWFg4ej4uPJ8TERGOxGAzmAzqdlnoguaqqwumyQVDvZ8mfoNG7Y2NjcLhIHA67D4/D4bBEYjwE9dodY0VFBVFYTJS3kERKxGIxERFoHA6rVA1MjFtWKdJpVRq10rsPwfCq/mi1SuWjAaNXb31CZjQM+2RDq1X6fJ+6jRh1Oq3KW6LySpleq1XCsJ7P5xUVFXzb8M3/AKKo4KI63HzIAAAAAElFTkSuQmCC" nextheight="768" nextwidth="1376" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><em>Disclaimer: This article was completed with the assistance of AI tools ChatGPT-5 and Gemini 3. The author has made every effort to proofread and ensure information authenticity and accuracy, but omissions may still exist. Please understand. It should be specially noted that the crypto asset market often experiences divergences between project fundamentals and secondary market price performance. The content of this article is for information integration and academic/research exchange only and does not constitute any investment advice, nor should it be considered a recommendation to buy or sell any tokens.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>reinforcementlearning</category>
            <category>decentralizedtraining</category>
            <category>ai</category>
            <category>web3</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/84535a21ed2f65cf4711897a3f6e8394d37e1ed03b032b1d434b54b5aa79b9b7.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Reinforcement Learning: A Paradigm Shift for Decentralized AI Networks]]></title>
            <link>https://paragraph.com/@0xjacobzhao/强化学习：去中心化-ai-网络的范式变迁</link>
            <guid>y8CpNgqsESS3WCQoF2N4</guid>
            <pubDate>Tue, 23 Dec 2025 01:30:00 GMT</pubDate>
            <description><![CDATA[Artificial intelligence is moving from statistical learning centered on "pattern fitting" toward a capability system built on "structured reasoning", and the importance of post-training is rising rapidly. Reinforcement learning is no longer merely a value-alignment tool: it has been shown to systematically improve reasoning-chain quality and complex decision-making, and is evolving into a technical path for continuously raising the level of intelligence. Meanwhile, Web3 is restructuring AI's production relations through decentralized compute and crypto incentives, and reinforcement learning's structural demands for rollout sampling, reward signals, and verifiable training align naturally with blockchain's compute collaboration, incentive distribution, and verifiable execution. This report dissects training paradigms and the principles of reinforcement learning, argues for the structural advantages of RL × Web3, and analyzes projects including Prime Intellect, Gensyn, Nous Research, Gradient, Grail, and Fraction AI.]]></description>
            <content:encoded><![CDATA[<p style="text-align: center"><em>This independent research report is supported by </em><strong><em>IOSG Ventures</em></strong><em>. The research and writing were inspired by the </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/SPLehman"><strong><em><u>Sam Lehman</u></em></strong></a><strong><em> (Pantera Capital)</em></strong><em> </em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.symbolic.capital/writing/the-worlds-rl-gym"><strong><em><u>reinforcement learning report</u></em></strong></a><em>. Thanks to</em><strong><em> </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/fenbielding"><strong><em><u>Ben Fielding</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Gensyn.ai"><strong><em><u>Gensyn.ai</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/yuangao"><strong><em><u>Gao Yuan</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/"><strong><em><u>Gradient</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.linkedin.com/in/samuel-b-dare/"><strong><em><u>Samuel Dare</u></em></strong></a><strong><em> &amp; </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/erfan_mhi"><strong><em><u>Erfan Miahi</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.covenant.ai/"><strong><em><u>Covenant AI</u></em></strong></a><strong><em>), </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xshai"><strong><em><u>Shashank Yadav</u></em></strong></a><strong><em> (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fractionai.xyz/"><strong><em><u>Fraction AI</u></em></strong></a><strong><em>), and </em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/chaowxyz"><strong><em><u>Chao Wang</u></em></strong></a><strong><em> </em></strong><em>for their valuable suggestions on this article. The article strives to be objective and accurate; some views involve subjective judgment and inevitable bias, for which we ask the reader's understanding.</em></p><p>Artificial intelligence is moving from statistical learning centered on "<strong>pattern fitting</strong>" toward a capability system built on "<strong>structured reasoning</strong>", and the importance of <strong>post-training</strong> is rising rapidly. The emergence of <strong>DeepSeek-R1</strong> marks a paradigm-level comeback for <strong>reinforcement learning</strong> in the era of large models, and an industry consensus has formed: <strong>pre-training</strong> builds a model's general capability base, while <strong>reinforcement learning</strong> is no longer merely a value-alignment tool; it has been shown to systematically improve reasoning-chain quality and complex decision-making, and is evolving into a technical path for continuously raising the level of intelligence.</p><p>Meanwhile, Web3 is restructuring AI's production relations through decentralized compute networks and crypto incentive systems, and reinforcement learning's structural demands for rollout sampling, reward signals, and verifiable training align naturally with blockchain's compute collaboration, incentive distribution, and verifiable execution. This report systematically dissects AI training paradigms and the principles of reinforcement learning, argues for the structural advantages of reinforcement learning × Web3, and analyzes projects including Prime Intellect, Gensyn, Nous Research, Gradient, Grail, and Fraction AI.</p><h1 id="h-ai" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. The Three Stages of AI Training: Pre-training, Instruction Fine-tuning, and Post-training Alignment</strong></h1><p>The full training lifecycle of a modern large language model (LLM) is usually divided into three core stages: pre-training, supervised fine-tuning (SFT), and post-training (RL). They respectively build the world model, inject task capabilities, and shape reasoning and values; their compute structure, data requirements, and verification difficulty determine how well each stage fits decentralization.</p><ul><li><p><strong>Pre-training</strong> builds the model's statistical language structure and cross-modal world model through large-scale <strong>self-supervised learning</strong> and is the foundation of LLM capability. This stage requires globally synchronized training over trillion-token corpora on homogeneous clusters of thousands to tens of thousands of H100s, accounts for 80-95% of total cost, and is extremely sensitive to bandwidth and data copyright, so it must be completed in a highly centralized environment.</p></li><li><p><strong>Supervised Fine-tuning</strong> injects task capabilities and instruction formats; its data volume is small and its cost share is roughly 5-15%. Fine-tuning can be <strong>full-parameter</strong> or use <strong>parameter-efficient fine-tuning (PEFT)</strong>, of which <strong>LoRA, Q-LoRA, and Adapter</strong> are the industry mainstream; it still requires synchronized gradients, which limits its decentralization potential.</p></li><li><p><strong>Post-training</strong> consists of multiple iterative sub-stages and determines the model's reasoning ability, values, and safety boundaries. Its methods include the <strong>reinforcement learning family (RLHF, RLAIF, GRPO)</strong>, RL-free <strong>preference optimization (DPO)</strong>, and <strong>process reward models (PRM)</strong>. Its data volume and cost are low (5-10%) and concentrated in rollouts and policy updates; it naturally supports asynchronous, distributed execution, nodes need not hold the full weights, and combined with verifiable compute and on-chain incentives it can form an open decentralized training network, making it the training stage best suited to Web3.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5b8ec6d6a0c0655365e5a7105fefc71804827123abc58992633aeab744ce521d.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFHUlEQVR4nD2UUajbVBjH87SXPl8ZszAYg8ugMLwwCoVh1odSuFAIhIDBsNCyaDASDQsrCwsEsnUuM1rMWKQaLAaiq4ssNFtGIbBoMNvCVhftXCFaIZCH6NWLDMqKVG7i3Y/zcDhwzne+7//9PwDfh2EYdh+GYTAMw3GcIAiWZTc3N4F9trZee7mnc5gcgjiDYRi5T3FOEG8BjUbj5MmTx44d07SBYRi6/qWmfT4cDkEQPHr0KIZhrVZra2uLot5xHGcwGDjOHQAASqUSBEEwDHvevdHohq7rNE0fOnQIQZDt7W0QBKvV6jAHQFG02WzW66dY9qzruo5zezwec9x5giA0TWMYhiRJRVEgCOp0OqJ4EcMwURQFQSBJ0jRNBEEGg4EkSa1WS5ZlXdfr9bqmacPhEMMwnucBlj3LMO9z3Pl+/+Nbt741zW9u3PjaNE3f923bRhAERVHbtoMgME3Ttm0vB4ZhkiQ9z3Mcx3Vd0zSLF7kcgiAEQTBzgG73XLvdxnFcUZSHDx/4/neOc8d1XUVRWJatVCq1Wo2maUVRptOpZVmCIIiiWKlUYBgWRVFVVdu25/P5crms53Q6HQiCGIZZrVbT6RRAEASCoG4OQZwhCGI0GgmC0Ol0ZFmWJMmyLF3XeZ43TXM2m+XSETAMUxR1+PBhinp3NpuFYTifzxuNBoIgjUajUqlwHLe7uxsEAUCSJIqiNE2z7Fmapkny7dFoVBQdw7Ber9fv97vdLkVRnudNp9PFYqGqKsuynU6H5/kkSeI4nk6nrus2m81KpYLjeLVardfraZruBeB5/uDBgyzLPn78yPe/D8PQtu0o+uny5Q+K4oAgWK/XRVHc2dlR1U+r1SrHcflXSBiGURSd5qxWKxzHS6USjuMnTpygaXq5XIZhCOA4Xi6XBUHwvHvj8dhxHMuy7t9/sF6vd3d3C0+kabper5MkCYIgSZIsy0ajkaZpWZYlOWEYPn/+PI7jyWQiSZLruovF4v8A3W4XAIBarWbtU7RQkiSr1UoQhG63u1wuF4vFfD73fT/LsjwVlef5nZ2dLMviOPZ9P91nOBxmWVbsfd8HGIY5cOBAqVTSNM1xHNu2DcMovpCmab/f7/V6aZrGcRxFkeu6SZKkaVroVLwSRZHneWmaJkkSRZEsy/P5vLjued6ekwEAKJfLPM+rqtrv9w3D8DzvZSkGg0FRitns6d27d2ezme/7uq5rmua6bqG8aZpBEPi+H4ahJF0NwzCKoul0atv23jw5cuQIjuPb29sIgrRaLcMwCmfNZrNut0vT9Hw+n0wmQRAMBp+RJNlsNmu1GoqiIAiiKBoEQTEOms2mqqqiKBY9hqLonpNzkV/t9S49efIoCH4olmEYg8FA13VBEBAEsSxLUZTCsRRFKYrC87wgCIZhqKqaZZmqqrL8YZEWiqIEQSiKQhCELMsAir6xsbFBUeRkMjHNm+OxVWggSRLHcSAIQhBk50RR1Ov1eJ7Pp8KbBEH0+31ZlvPw125Zt2+at0Tx4ubmJkmSmqYJgrCXAYIg5XIZQZDr168VyzCMyWTC8xdOnz6N4zhFUQzDKIoShiHHce12u9//KLc90el0OI6zLKt77j356oXLl7grVy5ubLwCQRAIgsePH6/VanttimEYiqKtVqvdbrdarclkEse/rtf//h7/aBhfDYdfGIZR2CKOf1u9+OPvP599oigcx0mSlCTJi9Xql5+Df/766dlT79TrNQAAYBguJDRN8z+4yJFEFZyPBAAAAABJRU5ErkJggg==" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" 
class="hide-figcaption"></figcaption></figure><h1 id="h-" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>二. 强化学习技术全景：架构、框架与应用</strong></h1><h2 id="h-21" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.1 强化学习的系统架构与核心环节</strong></h2><p><strong>强化学习（Reinforcement Learning, RL）</strong>通过“<strong>环境交互—奖励反馈—策略更新</strong>”驱动模型自主改进决策能力，其核心结构可视为由状态、动作、奖励与策略构成的反馈闭环。一个完整的 RL 系统通常包含三类组件：<strong>Policy（策略网络）</strong>、<strong>Rollout（经验采样）</strong>与<strong> Learner（策略更新器）</strong>。策略与环境交互生成轨迹，Learner 根据奖励信号更新策略，从而形成持续迭代、不断优化的学习过程：</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0e09f49ebe18c81551c4c23446695617d89a745b12431d1251ba2aeec7f273a0.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFiUlEQVR4nHVVbUxTZxi9fxabxS0aMyIoxUKslUVmtZlfnY0saEXByfcqKNqBgmJqmIXKKhioWjukQgWaWuomgnRWSJFGUD6UIhXopbfcr/aWFrijUCYExw//maW9Ukm23ZzcvHlz73ueJ+c85wUQ2AZaLTPeSZIcR2Dbwvzsu8W3Ex6ni0CXlhYmPITbjS/Mz1JYXJxbmJ/1+cg5H0mS4zPeSQyDENj2f8AwCCDJcT7/EI1GEwjSaTRa9NfRTObm/bzvOJydLBaLx9sfsSmCxWIxGIyYmBgmk0mnh9Pp9PT01OitW1NSkuzQyMrjgvhE4CJQNpsNAEBISMiGDaHr169ft24dg8Gg0+kAAOzduycyKnL16tVA4Pk8sKDRaJ+tWrVx48b8/HMOfCxIAMNgEMFNwIGP8fmHQkJCOBxOTMy22NgDbDY7KiqKwdi0a9e3CQlHQ0PD2OztXC53zZo1TOaWcHrEF198GRYauoXJbDe2TnicVBMuAiXJ8akpF0mOT3icy5SjfgJTR1v1nV91DeqK8tL6upp3i2/fv/97zjc9452c8DhnvJMz3smF+dmlpQVDU327Qbe4+NfC/OycbzpYPnWISlX1sFFXX6cy/NGCwDY7NOIncBFoYeFFHo8rFJ5KSDicnZ0plRZXlJea+3v8X9hBf4EI1NdtatKpCs8mlYgEr/qe9fd12SGrDXxDHeQi0MuFovDwsMhIelTUJqVSMU26P3bgIlCBIC0rK6O2tlpxS1Z9p1Iul12VStqNBgc+BkNWBw73dXd0PtWb+1+cOpGcfDwetFpev+zU1N++fr0ctFqoDmprq8+dFUqlEsUtWX1djUpVZe7vwTC7nyArKyMx8YhEIhaJLhQX/RxYnNfrmwJ/wgP93c9NBgeOEgRWXVNVWvoLgkBuN/Gqp6O3+ykM2yyDZheBXhIVsNnfxMbu53C2s1ibWazNpo42gkD9GpSVlcTHHyq5UqTVqrVatUZz92GjLmiD56bHgVkZ1GnuqlWV6rvKpt81LgIZHBxQ3b72du7PDx/ez/nIivKyzBNpubnCCllZc/MDrVYNWi1+myKwze3CqEGb8DgnPM4p
DxG0gQ1886KzzYHDI0PmiLC1lFm3RIaZOtqOJx8/IzyTkXkSAAC9vuWK5DKHs53Pj8tIT01KPlZZeZPAEb8GlEqj4LAdGrFD1sDbDxeB6vVNlwtFV6XFSqUCgW0jQ2bUbh0bfTM6PNDaakhIOCqVSgCABgBA9Lad9XU1+7j7srIyy8qkObk/3ZRf90sYJKAMt3IsKQyYe+6rFQSBWQZf8vbsiAhdGxG2No63e2TInJaWFh9/+HR2ZpH4kgOHHz/SlZWIfrt3p9ukb9PfGx7swTH7RwKNpk4oPCWVFotE5y8W5OUIs60Bb1Ai93YZbaAFtFqUCll5aXH5NYlSIUNg28NGXZH4klhceLtS4cBh4+OGGrn4aesDQ4u2UVdjaNGC1td+FznwsYsF+QcPfp+RnpqS8oNQmJ15Iq2gIK+xUefEYUqGZ+160Dro83ldBIoj0KzPiyOQyfios7PdbO6jbApaLdahAYLACAIbdzkJAvs0ydnZJ7ncPXx+XGLikby8nPv371VW3nzyRL8cWHYbaDG1Nduh4QM7vko9GN373PisvbnTZJTLZXK5rLu7y4GPYZg9AAhBbchyFn0ctDOnT+7j7k4PqH8s8ahAkCqRiHUN6mVJQKrGvt6ujKS43NMZpna9HRp24DDlOkq/lSG6Iv4CBHl5OTweV6WqunGjXKlUlFwp0jVoqOuBAmWq1lb9hYL8HwXpxuWMW4b13zcBjtmnplxuN+4X2WYbQlGIDATh1NT49LTH7cZXRm4w3N1unCBQGB79z3qD1TjwMaPRIJGIVaqqfwBdWkUO/Bo8KAAAAABJRU5ErkJggg==" nextheight="768" nextwidth="1376" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ol><li><p><strong>策略网络（Policy）</strong>：从环境状态生成动作，是系统的决策核心。训练时需集中式反向传播维持一致性；推理时可分发至不同节点并行运行。</p></li><li><p><strong>经验采样（Rollout）</strong>：节点根据策略执行环境交互，生成状态—动作—奖励等轨迹。该过程高度并行、通信极低，对硬件差异不敏感是最适合在去中心化中扩展的环节。</p></li><li><p><strong>学习器（Learner）</strong>：聚合全部 Rollout 轨迹并执行策略梯度更新，是唯一对算力、带宽要求最高的模块，因此通常保持中心化或轻中心化部署以确保收敛稳定性。</p></li></ol><h2 id="h-22-rlhf-rlaif-prm-grpo" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.2 强化学习阶段框架（RLHF → RLAIF → PRM → GRPO）</strong></h2><p>强化学习通常可分为五个阶段，整体流程如下所述：</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bfbda7598faa50f3e5e684d884a4d7e9add422f893ced93da5ffa39db33ea85f.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGyklEQVR4nCXS+VNaBwLA8ffD/rI7e0xnOukv3fzSZtJ2x25n0xyeXRvPTdwYo+jGIxqjSQxexHqgEIOKBwpGUBBQVFD0IT5FRFFRQF5AEJAnV7gFFNGocZNsJjudnZ1pv/P5E76A0bZnC57+6tX+2ZY9oLG4N62+X3gM7gBXONdOfjEOzastLiVi5S3IyhufF5bX4LtpQ5BUiVhtobA9fESeFEEb8JpBz5jg88QijlA4CkFyoxFgcudTM/Ju55bm3Een3M5m8UUbO/6VTfvallOBuGCbr4PGyC180EkdlG5ZIdUWeUKU8aDycsKN/KoGGrio9YWQg2N9MIQZ4KXeLYhM+DEqMTElMzP/SVlabu740hLAHJsDfun3n5yrqieMQisbOz75tlu+7VGYPQqzW+PY2w6ewDY/KJFIZGuMsTFME76FRKppwmXm5t5CodhcrkgqVRq3Sp8+jU5OLq6ubqVSm/soXSyWxuMG5BoHH1ofnpTMLMHbriOlwb2+5VQanEqDW2lwqhCnCnHBFq/K4uZN8sXi2dExTlZWRmFRfmJiwuUrl3NyUDzeiBCc4guFBWj0PXQFZZRL4/HIHA4TBGG7E0CCh773H/3vPnrPPtgPT42BkPqVV+v2aV2+nYPwq9NTW/hI53DBCKLU61dgeFYmkyjlMGLctCB6h81xvGc98CNBl/NNyP/u0HUSDP/8Zu/9YejDcfi/J56TANDGGf/qWnRsekZEXPzfrieR+YJlh2fRYhUjCEswRaTTOOI5jlg0PM4FIYjG6kfl52Tmoiam+XzhxBgELhjkc1qpZHOVMcnqpHdDMkhhUhiCeiS0rd/f0nk1wMiqMq+69svvr0TdSPsmMuZGwX3RtnXRYpvfNrX2dN9/WNxOpw2J58emQRaPW17/05Oa6kfV5XXNTdML0AgEzqhX2HMTLO5wcVlx1A8xtdjaHkbvjBKaXgNhl0q7qwU0u0HzyZkpfIy8PjGFj1+6fEqnS+l0qT2ezV0f7HKYwqG1bf0qLF9aW6XQaSRqbxe1d3CUrdyE5QY1cuC0nfh4i+CSbnVYMEwaJGHw1cS+1sp69LppzfXaAchM8PjitEA2B8nFS5pVncek8yA6j0nr0CtMCsmG2L5nNnr1cpNSgWxoXTrfW//8LJs30NTWUPK4MK0gMx6PyfduTtlMEmI/sar+8YvB9qMP3nf/C+29eeUPWwAio+fq9ZhfTwX+ADzGlhv2djRuw4iIF5sSE5MQQ6ITGcKZi1fiAeB3WSWYGYVEMN03MoBl0Rqe1Rfh6wqJz0o2JDSbFpRJ2eJlDo74E7EXR+isVxkW/ccIILVuaoJW+vRoY28brq+dNDqg8ZpVTqPcounnM5jTQ+KX4nWrrpvDelj3lEDtY4EcbH0xuii1tCA5OT4iJf6v6alXmT0Y8WgbxMYPs1paac9Lq+5l5Ka1kBodIQPgdeuOQ5b3b7z/ebv788f9wwOLdUfptmtsFtholHnd2qN9i8Eoe0HrIHbiH5UV3c3LqsCgiT0tWFxtB4nQz6L00Ul99B4eyF5enlqTgdYduQYWLUDDEDgY8BsAEbUhLfJiVvx3NXkpD25ELzLwXnjSreC51rl68QBIbbQsDZEJtX/+/PN7+Xn3CwuyMtMbsbUUclcfpbut9Xk7sQWHrWuor+NzXhzaFmfHOvo7qikE9CpEextSH7tkgIzT9q/E789/AkRHnP8UAErv/B33KNO8yHYrx2foz2IvfQlSGwe78XFxUffys+6i0jFVZW0EbN1TdNnDQkzV4x5S2zTI5Y9zxMIhkEUozkn5x4+XbiVdJj9/YlODR45lYN8oOQuoT3dVR3bZkX0tqJ9zweCxSxG2rjhhgVMNeTYhRm9rVlZmdnZmakpidVVZdnZGY
VFuYmL89es/oLJv02hkkRiSzvP9yELQLH3tWv/3nub9oe7tweZZQA1QB3gpqbejYxNu3kIlJd9qwneQKay0f6Iovazm5s6oyFhKD62yvAaVlZd28w66DKNVm3Uqk83gdJq8TpPXZwmYYARRm/1mf8AW2Lf67RqLU2eRCGVDTOEObARaukZj4tMvfH3t08+++M1vz331l2tfXPjuwsVLcXFpcXE3k5LulD2s+zYi8k9//OzcufPfRlzFVDRJBPI2XHdlSW1dBa6ypEYwNCscnqt7gu3CdTegG1f4Kwaptpc4+ghN4jGEwJRIw+ZK+1lzOALtWQu9EU+uqG6m9HI7OphdncwN+Y5gcoXcxSIS+qgkNocJviAxxaCM1sVswXa2YjsaKvFCDjQ7Nt/V1EUj9nc2dcoEK9oltWlFPcOZf7mg+j/8mSP3aSp2pgAAAABJRU5ErkJggg==" nextheight="565" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ol><li><p><strong>数据生成阶段（Policy Exploration）</strong>：在给定输入提示的条件下，策略模型 πθ 生成多条候选推理链或完整轨迹，为后续偏好评估与奖励建模提供样本基础，决定了策略探索的广度。</p></li><li><p><strong>偏好反馈阶段（RLHF / RLAIF）</strong>：</p><ol><li><p><strong>RLHF（Reinforcement Learning from Human Feedback）</strong>通过多候选回答、人工偏好标注、训练奖励模型（RM）并用 PPO 优化策略，使模型输出更符合人类价值观，是 GPT-3.5 → GPT-4 的关键一环</p></li><li><p><strong>RLAIF（Reinforcement Learning from AI Feedback）</strong>以 AI Judge 或宪法式规则替代人工标注，实现偏好获取自动化，显著降低成本并具备规模化特性，已成为 Anthropic、OpenAI、DeepSeek 等的主流对齐范式。</p></li></ol></li><li><p><strong>奖励建模阶段（Reward Modeling）：</strong>偏好对输入奖励模型，学习将输出映射为奖励。RM 教模型“什么是正确答案”，PRM 教模型“如何进行正确推理”。</p><ol><li><p><strong>RM（Reward Model）</strong>用于评估最终答案的好坏，仅对输出打分：</p></li><li><p><strong>过程奖励模型PRM（Process Reward Model）</strong>它不再只评估最终答案，而是为每一步推理、每个 token、每个逻辑段打分，也是 OpenAI o1 与 DeepSeek-R1 的关键技术，本质上是在“教模型如何思考”。</p></li></ol></li><li><p><strong>奖励验证阶段（RLVR / Reward Verifiability）</strong>：在奖励信号生成与使用过程中引入“可验证约束”，使奖励尽可能来自可复现的规则、事实或共识，从而降低 reward hacking 与偏差风险，并提升在开放环境中的可审计性与可扩展性。</p></li><li><p><strong>策略优化阶段（Policy Optimization）</strong>：是在奖励模型给出的信号指导下更新策略参数 θ，以得到更强推理能力、更高安全性与更稳定行为模式的策略 πθ′。主流优化方式包括：</p><ol><li><p><strong>PPO（Proximal Policy Optimization）</strong>： RLHF 的传统优化器，以稳定性见长，但在复杂推理任务中往往面临收敛慢、稳定性不足等局限。</p></li><li><p><strong>GRPO（Group Relative Policy Optimization）</strong>：是 DeepSeek-R1 
的核心创新，通过对候选答案<strong>组内优势分布</strong>进行建模以估计期望价值，而非简单排序。该方法保留了奖励幅度信息，更适合推理链优化，训练过程更稳定，被视为继 PPO 之后面向深度推理场景的重要强化学习优化框架。</p></li><li><p><strong>DPO（Direct Preference Optimization）</strong>：非强化学习的后训练方法：不生成轨迹、不建奖励模型，而是直接在偏好对上做优化，成本低、效果稳定，因而被广泛用于 Llama、Gemma 等开源模型的对齐，但不提升推理能力。</p></li></ol></li><li><p><strong>新策略部署阶段（New Policy Deployment）</strong>：经过优化后的模型表现为：更强的推理链生成能力（System-2 Reasoning）、更符合人类或 AI 偏好的行为、更低的幻觉率、更高的安全性。模型在持续迭代中不断学习偏好、优化过程、提升决策质量，形成闭环。</p></li></ol><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>阶段</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>技术</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心作用</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>优点</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>缺点</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">偏好反馈</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RLHF</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">人类偏好指导策略</p></td><td colspan="1" rowspan="1"><p style="text-align: center">对齐效果好、成熟</p></td><td colspan="1" rowspan="1"><p style="text-align: center">人工成本高</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RLAIF</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI Judge 自动偏好</p></td><td colspan="1" rowspan="1"><p style="text-align: center">低成本、高扩展性</p></td><td colspan="1" rowspan="1"><p style="text-align: center">依赖AI质量、易偏差</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">奖励建模</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">最终答案打分</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">简单、成熟</p></td><td colspan="1" rowspan="1"><p style="text-align: center">不评估推理过程</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PRM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">每步推理打分</p></td><td colspan="1" rowspan="1"><p style="text-align: center">推理提升显著，是 o1/R1 核心</p></td><td colspan="1" rowspan="1"><p style="text-align: center">训练难度大</p><p style="text-align: center">数据成本高</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">奖励验证</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RLVR</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">奖励可验证约束</p></td><td colspan="1" rowspan="1"><p style="text-align: center">去中心化友好</p></td><td colspan="1" rowspan="1"><p style="text-align: center">任务受限</p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center">策略优化</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PPO</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">传统 RLHF 优化器</p></td><td colspan="1" rowspan="1"><p style="text-align: center">稳定、成熟</p></td><td colspan="1" rowspan="1"><p style="text-align: center">推理任务收敛慢、不稳</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GRPO</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">相对表现优化</p></td><td colspan="1" rowspan="1"><p style="text-align: center">更适合推理链</p><p style="text-align: center">稳定性强</p></td><td colspan="1" rowspan="1"><p style="text-align: center">多样本需求高</p><p style="text-align: center">工程成本大</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>DPO</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">偏好对直接优化</p></td><td colspan="1" rowspan="1"><p style="text-align: center">成本最低、易落地</p></td><td colspan="1" rowspan="1"><p style="text-align: center">提升推理能力有限</p></td></tr></tbody></table><p><br></p><h2 id="h-23" 
class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2.3 Five Categories of Industrial Applications of Reinforcement Learning</strong></h2><p><strong>Reinforcement learning (RL)</strong> has evolved from early game-playing intelligence into a cross-industry core framework for autonomous decision-making. By technical maturity and degree of industrial adoption, its applications fall into five categories, each of which has driven key breakthroughs.</p><ul><li><p><strong>Games and strategy systems (Game &amp; Strategy)</strong>: the first domain where RL was validated. In "perfect information + well-defined reward" environments such as AlphaGo, AlphaZero, AlphaStar, and OpenAI Five, RL demonstrated decision-making that matched or surpassed human experts, laying the foundation for modern RL algorithms.</p></li><li><p><strong>Robotics and embodied intelligence (Embodied AI)</strong>: through continuous control, dynamics modeling, and environment interaction, RL lets robots learn manipulation, locomotion control, and cross-modal tasks (e.g., RT-2, RT-X). It is rapidly industrializing and is a key technical route for deploying robots in the real world.</p></li><li><p><strong>Digital reasoning (Digital Reasoning / LLM System-2)</strong>: RL + PRM has pushed large models from "language imitation" toward "structured reasoning." Representative results include DeepSeek-R1, OpenAI o1/o3, Anthropic Claude, and AlphaGeometry. The essence is reward optimization at the level of the reasoning chain, rather than evaluating only the final answer.</p></li><li><p><strong>Automated scientific discovery and mathematical optimization (Scientific Discovery)</strong>: RL searches for optimal structures or strategies in settings with no labels, complex rewards, and enormous search spaces. Foundational breakthroughs such as AlphaTensor, AlphaDev, and fusion-control RL demonstrate exploratory capability beyond human intuition.</p></li><li><p><strong>Economic decision-making and trading systems (Economic Decision-making &amp; Trading)</strong>: RL is used for strategy optimization, high-dimensional risk control, and generating adaptive trading systems. Compared with traditional quantitative models, it can keep learning in uncertain environments, making it an important component of intelligent finance.</p></li></ul><h1 id="h-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. 
Why Reinforcement Learning and Web3 Are Naturally Matched</strong></h1><p>The deep affinity between reinforcement learning (RL) and Web3 stems from the fact that both are, at heart, <strong>incentive-driven systems</strong>. RL optimizes policies through reward signals; blockchains coordinate participant behavior through economic incentives, so the two are naturally aligned at the mechanism level. RL's core needs, large-scale heterogeneous rollouts, reward distribution, and authenticity verification, are exactly where Web3's structural strengths lie.</p><ol><li><p><strong>Decoupling inference from training:</strong> the RL training process splits cleanly into two phases:</p></li></ol><ul><li><p><strong>Rollout (exploratory sampling)</strong>: the model generates large volumes of data under the current policy, a <strong>compute-intensive</strong> but <strong>communication-sparse</strong> task. It requires little inter-node communication and is well suited to parallel generation on globally distributed consumer GPUs.</p></li><li><p><strong>Update (parameter update)</strong>: model weights are updated from the collected data, which requires high-bandwidth centralized nodes.</p></li></ul><p>This inference-training decoupling naturally fits a decentralized, heterogeneous compute structure: rollouts can be outsourced to open networks and settled by contribution through token mechanisms, while model updates stay centralized to ensure stability.</p><ol start="2"><li><p><strong>Verifiability:</strong> ZK and Proof-of-Learning provide means to verify whether nodes actually performed inference, solving the honesty problem in open networks. For deterministic tasks such as code and mathematical reasoning, verifiers only need to check the answer to confirm the work was done, greatly improving the credibility of decentralized RL systems.</p></li><li><p><strong>Incentive layer, feedback production based on token economics:</strong> Web3 token mechanisms can directly reward contributors of RLHF/RLAIF preference feedback, giving preference-data generation a transparent, settleable, permissionless incentive structure; staking and slashing further constrain feedback quality, forming a feedback market that is more efficient and better aligned than traditional crowdsourcing.</p></li><li><p><strong>Potential for multi-agent reinforcement learning (MARL):</strong> a blockchain is by nature an open, transparent, continuously evolving multi-agent environment in which accounts, contracts, and agents constantly adjust strategies under incentives, making it a natural testbed for large-scale MARL. Although still early, its public state, verifiable execution, and programmable incentives offer principled advantages for future MARL development.</p></li></ol><h1 id="h-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
Analysis of Representative Web3 + Reinforcement Learning Projects</strong></h1><p>Building on the framework above, we briefly analyze the most representative projects in today's ecosystem:</p><h2 id="h-prime-intellect-prime-rl" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Prime Intellect: the Asynchronous RL Paradigm prime-rl</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.primeintellect.ai/"><u>Prime Intellect </u></a>is building a global open compute market, lowering the barrier to training, advancing collaborative decentralized training, and developing a complete open-source superintelligence stack. Its system includes Prime Compute (a unified cloud/distributed compute environment), the INTELLECT model family (10B-100B+), an open reinforcement-learning environment hub (Environments Hub), and large-scale synthetic data engines (SYNTHETIC-1/2).</p><p>Among Prime Intellect's core infrastructure components, the <strong>prime-rl framework</strong>, built specifically for asynchronous distributed environments, is the one most closely tied to reinforcement learning; the others include the <strong>OpenDiLoCo communication protocol</strong>, which breaks through the bandwidth bottleneck, and the <strong>TopLoc verification mechanism</strong>, which safeguards computational integrity.</p><p><strong>Prime Intellect Core Infrastructure Components at a Glance</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Component</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key technical innovations</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>prime-rl</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">RL training framework</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Actor-Learner separation; FSDP2 support;</p><p style="text-align: center">vLLM backend acceleration; GRPO+ stability optimizations</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OpenDiLoCo</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Distributed communication protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Time-sparse updates; Int8 gradient quantization;</p><p style="text-align: center">pseudo-gradient aggregation; tolerant of high latency</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Verifiers</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reward and verification library</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Modular environment definitions; integrated Sandboxes;</p><p style="text-align: center">supports multiple verification logics (code, math, judge)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prime Sandboxes</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Code execution environment</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High-performance Rust-based containers; sub-second startup;</p><p style="text-align: center">secure isolation; large-scale concurrency</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>TopLoc</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Computational integrity verification</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Locality-sensitive hashing (LSH); probabilistic verification; prevents compute fraud</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Shardcast</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Weight distribution system</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Efficiently distributes large-model weights to decentralized nodes</p></td></tr></tbody></table><p><br><strong>Technical cornerstone: the prime-rl asynchronous RL framework</strong></p><p><strong>prime-rl </strong>is Prime Intellect's core training engine, designed for large-scale asynchronous decentralized environments; through full <strong>Actor-Learner </strong>decoupling it delivers high-throughput inference and stable updates. <strong>Rollout Workers </strong>and the<strong> Trainer </strong>no longer block on each other: nodes can join or leave at any time, needing only to keep pulling the latest policy and uploading the data they generate:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/900c2430ba4408e8d8ded6a1f6df48665189b02fc9a8ad2946d2873ee838c93b.png" alt="" 
nextheight="641" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Actor (Rollout Workers)</strong>: handles model inference and data generation. Prime Intellect innovatively integrates the <strong>vLLM</strong> inference engine on the Actor side. vLLM's PagedAttention and continuous batching let Actors generate reasoning trajectories at very high throughput.</p></li><li><p><strong>Learner (Trainer)</strong>: handles policy optimization. The Learner asynchronously pulls data from a shared experience buffer for gradient updates, without waiting for all Actors to finish the current batch.</p></li><li><p><strong>Coordinator 
(Orchestrator)</strong>: schedules model weights and data flow.</p></li></ul><p><strong>Key innovations of prime-rl</strong>:</p><ul><li><p><strong>True Asynchrony</strong>: prime-rl abandons the synchronous paradigm of traditional PPO; it does not wait for slow nodes and needs no batch alignment, so any number of GPUs of any capability can join at any time, establishing the feasibility of decentralized RL.</p></li><li><p><strong>Deep integration of FSDP2 and MoE</strong>: with FSDP2 parameter sharding and sparse MoE activation, prime-rl trains tens-of-billions-parameter models efficiently in distributed settings; Actors run only the active experts, sharply cutting memory and inference cost.</p></li><li><p><strong>GRPO+ (Group Relative Policy Optimization)</strong>: GRPO removes the Critic network, significantly reducing compute and memory overhead, and fits asynchronous environments naturally; prime-rl's GRPO+ adds stabilization mechanisms to ensure reliable convergence under high latency.</p></li></ul><p><strong>The INTELLECT model family: a marker of decentralized RL maturity</strong></p><ul><li><p><strong>INTELLECT-1 (10B, October 2024)</strong> first proved that OpenDiLoCo can train efficiently over a heterogeneous network spanning three continents (communication share &lt;2%, compute utilization 98%), overturning assumptions about the physical limits of cross-region training;</p></li><li><p><strong>INTELLECT-2 (32B, April 2025)</strong>, the first permissionless RL model, validated the stable convergence of prime-rl and GRPO+ under multi-step delays in asynchronous environments, realizing decentralized RL with globally open compute participation;</p></li><li><p><strong>INTELLECT-3 (106B MoE, November 2025)</strong> uses a sparse architecture activating only 12B parameters, was trained on 512×H200, and achieves flagship-level reasoning performance (AIME 90.8%, GPQA 74.4%, MMLU-Pro 81.9%, etc.), approaching or even surpassing centralized closed-source models far larger than itself.</p></li></ul><p>Prime Intellect has also built several supporting pieces of infrastructure: <strong>OpenDiLoCo</strong> cuts cross-region communication volume by hundreds of times through time-sparse communication and quantized weight deltas, keeping INTELLECT-1 at 98% utilization across a three-continent network; <strong>TopLoc + Verifiers</strong> form a <strong>decentralized trusted execution layer</strong> that uses activation fingerprints and sandboxed verification to guarantee the authenticity of inference and reward data; the <strong>SYNTHETIC data engine</strong> produces large-scale, high-quality reasoning chains and, via pipeline parallelism, lets a 671B model run efficiently on consumer GPU clusters. These components provide the key engineering foundation for data generation, verification, and inference throughput in decentralized RL. The <strong>INTELLECT series</strong> proves this stack can yield mature, world-class models, marking decentralized training's transition from concept to practice.</p><h2 id="h-gensyn-rl-swarmsapo" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Gensyn: the RL Core Stack of RL Swarm and SAPO</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.gensyn.ai/"><u>Gensyn</u></a> aims to pool the world's idle compute into an open, trustless, infinitely scalable AI training infrastructure. Its core includes a <strong>cross-device standardized execution layer</strong>, a <strong>peer-to-peer coordination network</strong>, and a <strong>trustless task verification system</strong>, with smart contracts automatically allocating tasks and rewards. Around the characteristics of reinforcement learning, Gensyn introduces<strong> RL Swarm</strong>, <strong>SAPO</strong>, and <strong>SkipPipe</strong> 
and other core mechanisms, decoupling the three stages of <strong>generation</strong>, <strong>evaluation</strong>, and <strong>update</strong> and using a "swarm" of heterogeneous GPUs around the world to achieve collective evolution. What it ultimately delivers is not raw compute but <strong>Verifiable Intelligence</strong>.</p><p><strong>Reinforcement learning applications across the Gensyn stack</strong><br></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Component</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical principle</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Specific role in RL</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL core layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL Swarm</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decentralized generate-evaluate-update structure</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Runs the decentralized RL loop, sharing rollouts while each node <strong>evaluates rewards locally</strong> for collaborative training</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL core layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>SAPO</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Shares rollouts and filters samples with no gradient signal</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Achieves stable policy optimization in highly heterogeneous, asynchronous networks</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Communication layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>SkipPipe</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Streaming parallel communication protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Enables low-latency parallel processing.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Trusted execution layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PoL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
概率式学习证明">
center">Probabilistic proof-of-learning</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifies rollouts were genuinely produced by the claimed model, preventing forged RL data.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Trusted execution layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Verde</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Game-theoretic bisection arbitration protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Locates a cheating step at O(log N) cost, keeping rewards trustworthy.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Consistency layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RepOps</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Deterministic operators across GPUs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ensures bit-level identical outputs on heterogeneous hardware, easing verification and audit.</p></td></tr></tbody></table><p><br></p><p><strong>RL Swarm: a decentralized, collaborative reinforcement learning engine</strong></p><p>&nbsp;<strong>RL Swarm</strong> demonstrates a new mode of collaboration. Rather than simple task distribution, it is a decentralized "generate-evaluate-update" loop modeled on human social learning, repeating indefinitely:</p><ul><li><p><strong>Solvers:</strong> handle local model inference and rollout generation; node heterogeneity is no obstacle. Gensyn integrates a high-throughput local inference engine (e.g., CodeZero) that can output full trajectories rather than just answers.</p></li><li><p><strong>Proposers:</strong> dynamically generate tasks (math problems, coding problems, etc.), supporting task diversity and <strong>curriculum-learning-style difficulty adaptation</strong>.</p></li><li><p><strong>Evaluators:</strong> evaluate local rollouts with a frozen "judge model" or rules, <strong>producing local reward signals</strong>. The evaluation process is auditable, shrinking the room for misbehavior.</p></li></ul><p>Together the three form a P2P RL organizational structure that accomplishes large-scale collaborative learning without centralized scheduling.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cddde88f646e27fe355b68a113204a9886f28c7dff90b5e9ac79af2b10d6e2a2.jpg" alt="" 
nextheight="813" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>SAPO: a policy-optimization algorithm rebuilt for decentralization: </strong>&nbsp;SAPO (Swarm Sampling Policy Optimization) centers on "<strong>sharing rollouts and filtering samples with no gradient signal, rather than sharing gradients</strong>." Through large-scale decentralized rollout sampling, with received rollouts treated as locally generated, it maintains stable convergence in environments with no central coordination and widely varying node latency. Compared with PPO, which depends on a costly Critic network, or GRPO, which relies on within-group advantage estimation, SAPO lets even consumer GPUs participate effectively in large-scale RL optimization at very low bandwidth.</p><p>Through <strong>RL Swarm</strong> and <strong>SAPO</strong>, Gensyn shows that reinforcement learning (<strong>especially post-training RLVR</strong>) naturally fits decentralized architecture, because it depends more on large-scale, diverse exploration (rollouts) than on high-frequency parameter synchronization. Combined with the PoL and Verde verification system, Gensyn offers an alternative path for training trillion-parameter models that no longer depends on a single tech giant: <strong>a self-evolving superintelligence network composed of millions of heterogeneous GPUs worldwide</strong>.</p><br><h2 id="h-nous-researchatropos" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Nous Research: the Verifiable RL Environment Atropos</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nousresearch.com/"><u>Nous Research</u></a> is building a <strong>decentralized, self-evolving cognitive infrastructure</strong>. Its core components, Hermes, Atropos, DisTrO, Psyche, and World Sim, are organized into a continuously looping system of intelligence evolution. Unlike the traditional linear "pretraining - post-training - inference" pipeline, Nous uses RL techniques such as DPO, GRPO, and rejection sampling to unify data generation, verification, learning, and inference into a continuous feedback loop, creating a closed-loop AI ecosystem that keeps improving itself.</p><p><strong>Overview of Nous Research Components</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Component</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relation to reinforcement learning (RL)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Hermes</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Policy model</strong></p><p style="text-align: center"><strong>(LLM / Reasoning Agent)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">The object RL optimizes; its reasoning chains are continually strengthened by DPO / GRPO / rejection sampling.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Atropos</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Standardized verifiable environment</strong></p><p style="text-align: center"><strong>(RL Environment)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provides deterministic rewards and filters reasoning trajectories; the core source of RL data quality and trustworthiness.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>DisTrO</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Distributed optimizer</strong></p><p style="text-align: center"><strong>(Optimizer / Gradient Transport)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Completes RL parameter updates under low bandwidth, making decentralized reasoning RL feasible.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Psyche</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Training and execution network</strong></p><p style="text-align: center"><strong>(Decentralized Training Network)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">The compute execution layer that carries the RL loop (generate → verify → reward → update).</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>World Sim</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Synthetic training environment</strong></p><p style="text-align: center"><strong>(Synthetic Task World)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Supplies complex tasks and long-horizon reasoning scenarios for RL, supporting world-model and general-agent training.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Forge</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Inference and data collection layer</strong></p><p style="text-align: 
center"><strong>(Inference / Trajectory Collector)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Collects user and model reasoning trajectories, which become RL retraining data after Atropos verification.</p></td></tr></tbody></table><p><strong>Model layer: Hermes and the evolution of reasoning ability</strong></p><p>The Hermes series is Nous Research's main user-facing model interface, and its evolution clearly traces the industry's migration from traditional SFT/DPO alignment to reasoning reinforcement learning (Reasoning RL):</p><ul><li><p><strong>Hermes 1-3: instruction alignment and early agent capability. Hermes 1-3 achieved robust instruction alignment with low-cost DPO, with Hermes 3 additionally drawing on synthetic data and the first introduction of the Atropos verification mechanism.</strong></p></li><li><p><strong>Hermes 4 / DeepHermes: writes System-2-style slow thinking into the weights via chain-of-thought, uses test-time scaling to boost math and code performance, and relies on "rejection sampling + Atropos verification" to build high-purity reasoning data.</strong></p></li><li><p><strong>DeepHermes </strong>further replaces PPO, which is hard to deploy in distributed settings, with GRPO, allowing reasoning RL to run on the Psyche decentralized GPU network and laying the engineering groundwork for scalable open-source reasoning RL.</p></li></ul><p><strong>Atropos: a verifiable-reward-driven RL environment</strong></p><p>Atropos is the true hub of the Nous RL system. It packages prompts, tool calls, code execution, and multi-turn interaction into standardized RL environments that can directly verify whether outputs are correct, providing deterministic reward signals in place of expensive, unscalable human annotation. More importantly, within the decentralized training network Psyche, Atropos acts as the "referee," verifying whether nodes genuinely improve the policy and supporting auditable Proof-of-Learning, fundamentally solving the reward-credibility problem in distributed RL.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f4a5a13ab8b7b7296cbe3cc868d238965fec4b3bab2b9e19e58cad6ebdfeb514.jpg" alt="" 
nextheight="813" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>DisTrO and Psyche: the optimizer layer of decentralized RL</strong></p><p>Traditional RLHF/RLAIF training depends on centralized high-bandwidth clusters, the core moat that open source cannot replicate. DisTrO uses momentum decoupling and gradient compression to cut RL communication costs by several orders of magnitude, letting training run over internet bandwidth; Psyche deploys this training mechanism on an on-chain network so that nodes can locally complete inference, verification, reward evaluation, and weight updates, forming a complete RL loop.</p><p>In the Nous system, Atropos verifies chains of thought; DisTrO compresses training communication; Psyche runs the RL loop; World Sim provides complex environments; Forge collects real reasoning; and Hermes writes all of this learning into the weights. Reinforcement learning is not just a training phase but the core protocol in the Nous architecture connecting data, environments, models, and infrastructure, making Hermes a living system that can continually self-improve on an open compute network.</p><h2 id="h-gradient-networkecho" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Gradient Network: the Echo RL Architecture</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/"><u>Gradient Network</u></a>'s core vision is to rebuild AI's computing paradigm through an "Open Intelligence Stack." Gradient's stack consists of a set of core protocols that evolve independently yet cooperate across heterogeneous components. From low-level communication to high-level intelligent collaboration, it comprises Parallax (distributed inference), Echo (decentralized RL training), Lattica (P2P networking), SEDM / Massgen / Symphony / CUAHarm (memory, collaboration, safety), VeriLLM (trusted verification), and Mirage (high-fidelity simulation), together forming a continuously evolving decentralized intelligence infrastructure.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>System Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td></tr><tr><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>Decentralized inference layer</strong></p><p style="text-align: center"><strong>(Inference Layer)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Parallax</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Distributed inference on heterogeneous GPUs, WAN Pipeline Parallel, Speculative Decoding</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The globally distributed execution OS for Sovereign AI</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Decentralized training layer</strong></p><p style="text-align: center"><strong>(Training Layer)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Echo</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">RL Rollout-Learner decoupling, rollouts on heterogeneous devices, verifiable training data</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The training and optimization engine for decentralized RL</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Communication and networking layer</strong></p><p style="text-align: center"><strong>(Connectivity &amp; Networking Layer)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Lattica</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">P2P networking, cross-NAT connectivity, Hole Punching, DHT, BitSwap, dynamic routing</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The communication and connectivity substrate for distributed AI</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Agent intelligence layer</strong></p><p style="text-align: center"><strong>(Agent Intelligence Layer)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Symphony</strong></p><p style="text-align: center"><strong>SEDM Massgen CUAHarm</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Symphony: collaborative scheduling;</p><p style="text-align: center">SEDM: growable long-term memory; Massgen: multi-model debate; CUAHarm: safety sandbox</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The intelligence-evolution and collective-intelligence layer for decentralized agents (collaboration × memory × reasoning × safety)</p></td></tr><tr><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>Trust and verification layer</strong></p><p style="text-align: center"><strong>(Trust &amp; Verification Layer)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>VeriLLM / Veri</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Spot-check verifiable inference, Commit-Reveal verification, verifiable training</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The trust layer for distributed inference and training</p></td></tr></tbody></table><p><strong>Echo — the RL training architecture</strong></p><p>Echo is Gradient's reinforcement learning framework. Its central design idea is to decouple the training, inference, and data (reward) paths of RL so that rollout generation, policy optimization, and reward evaluation can scale and be scheduled independently in heterogeneous environments. It runs cooperatively across a heterogeneous network of inference-side and training-side nodes, using a lightweight synchronization mechanism to keep training stable over wide-area heterogeneous environments, effectively relieving the SPMD breakdown and GPU-utilization bottlenecks caused by co-running inference and training in traditional DeepSpeed RLHF / VERL.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/563d213059e5532fbc8d72a71afd4188e293043c1b53fbc8007144f2302e3c19.jpg" alt="" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Echo adopts a dual-swarm "inference-training" architecture to maximize compute utilization; the two swarms run independently without blocking each other:</p><ul><li><p><strong>Maximizing sampling throughput: the Inference Swarm </strong>consists of consumer GPUs and edge devices, using Parallax to build a high-throughput, pipeline-parallel sampler focused on trajectory generation;</p></li><li><p><strong>Maximizing gradient compute: the Training Swarm </strong>can run on centralized clusters or a globally distributed consumer GPU network, handling gradient updates, parameter synchronization, and LoRA fine-tuning, focused on the learning process.</p></li></ul><p>To keep policy and data consistent, Echo offers two lightweight synchronization protocols, <strong>Sequential</strong> and <strong>Asynchronous</strong>, for bidirectional consistency between policy weights and trajectories:</p><ul><li><p><strong>Sequential pull mode | accuracy first</strong>: before pulling new trajectories, the training side forces inference nodes to refresh their model version, guaranteeing trajectory freshness; suited to tasks highly sensitive to policy staleness;</p></li><li><p><strong>Asynchronous push-pull mode | efficiency first</strong>: the inference side continuously generates version-tagged trajectories, the training side consumes them at its own pace, and a coordinator monitors version drift and triggers weight refreshes, maximizing device utilization.</p></li></ul><p>Underneath, Echo is built on Parallax (heterogeneous inference under low bandwidth) and lightweight distributed training components (such as VERL), relying on LoRA to cut cross-node synchronization cost, so reinforcement learning can run stably on a global heterogeneous network.</p><br><h2 id="h-grailbittensor" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Grail: Reinforcement Learning in the Bittensor Ecosystem</strong></h2><p>Through its unique Yuma consensus mechanism, Bittensor has built a vast, sparse, non-stationary network of reward functions.</p><p>Within the Bittensor ecosystem, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.covenant.ai/"><u>Covenant AI</u></a> builds a vertically integrated pipeline from pretraining to RL post-training through SN3 Templar, SN39 Basilica, and SN81 Grail. SN3 Templar handles base-model pretraining, SN39 Basilica provides a distributed compute market, and SN81 Grail serves as the "verifiable inference layer" for RL post-training, carrying the core RLHF / RLAIF workflows and closing the optimization loop from base model to aligned policy.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stage</strong></p></td><td 
colspan="1" rowspan="1"><p style="text-align: center"><strong>Subnet</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relation to reinforcement learning (RL)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Infrastructure layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Basilica (SN39)</strong></p></td><td colspan="1" rowspan="1"><p>Distributed inference and compute market, scheduling global GPU resources</p></td><td colspan="1" rowspan="1"><p><strong>Indirect</strong>: provides the compute execution layer for rollout generation and RL training</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pretraining layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Templar (SN3)</strong></p></td><td colspan="1" rowspan="1"><p>Base-model pretraining (SFT / Base Model)</p></td><td colspan="1" rowspan="1"><p><strong>Upstream</strong>: produces the base policy model π₀ required for RL fine-tuning</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Post-training / RL layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Grail (SN81)</strong></p></td><td colspan="1" rowspan="1"><p>RLAIF / RLVR; reasoning, code, tool use; verifiable rewards</p></td><td colspan="1" rowspan="1"><p><strong>Core</strong>: the only Covenant subnet executing RL, responsible for policy optimization and alignment</p></td></tr></tbody></table><p><br></p><p>GRAIL's goal is <strong>to cryptographically prove the authenticity of every RL rollout and bind it to a model identity</strong>, ensuring RLHF can be executed safely in a trustless environment. The protocol establishes a chain of trust through three mechanisms:</p><ol><li><p><strong>Deterministic challenge generation</strong>: uses the drand randomness beacon and block hashes to generate unpredictable yet reproducible challenge tasks (e.g., SAT, GSM8K), ruling out precomputation cheating;</p></li><li><p><strong>PRF-indexed sampling and sketch commitments</strong> let verifiers spot-check token-level logprobs and reasoning chains at very low cost, confirming that a rollout was indeed produced by the claimed model;</p></li><li><p><strong>Model identity binding:</strong> binds the inference process to a fingerprint of the model weights and a structural signature of the token distribution, so model substitution or result replay is detected immediately. This gives RL reasoning trajectories (rollouts) a foundation of authenticity.</p></li></ol><p>On top of this mechanism, the Grail subnet implements a GRPO-style verifiable post-training flow: miners generate multiple reasoning paths for the same problem; validators score them by correctness, reasoning-chain quality, and SAT satisfaction, and write the normalized results on-chain as TAO weights. Public experiments show the framework has raised Qwen2.5-1.5B's MATH accuracy from 12.7% to 47.6%, proving it can both prevent cheating and substantially strengthen model capability. In Covenant AI's training stack, Grail is the trust and execution cornerstone of decentralized RLVR/RLAIF; it has not yet formally launched on mainnet.</p><br><h2 id="h-fraction-airlfc" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Fraction 
AI: Competition-Based Reinforcement Learning (RLFC)</strong></h2><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fractionai.xyz/"><u>Fraction AI </u></a>'s architecture is explicitly built around <strong>Reinforcement Learning from Competition (RLFC)</strong> and gamified data labeling, replacing traditional RLHF's static rewards and human annotation with an open, dynamic competitive environment. Agents compete in different Spaces; their relative rankings and AI-judge scores together form real-time rewards, turning alignment into a continuously online multi-agent game.</p><p>Core differences between traditional RLHF and Fraction AI's RLFC:</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p><strong>Traditional RLHF (Reinforcement Learning from Human Feedback)</strong></p></td><td colspan="1" rowspan="1"><p><strong>Fraction AI (Reinforcement Learning from Competition)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Reward source</strong></p></td><td colspan="1" rowspan="1"><p><strong>Static model</strong>: a reward model trained on historical data; goes stale easily.</p></td><td colspan="1" rowspan="1"><p><strong>Dynamic market</strong>: real-time competitive rankings plus rulings by decentralized AI judges.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Interaction mode</strong></p></td><td colspan="1" rowspan="1"><p><strong>Isolated optimization</strong>: single-agent optimization against a fixed function.</p></td><td colspan="1" rowspan="1"><p><strong>Adversarial game</strong>: adversarial or competitive interaction with other agents.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Iteration frequency</strong></p></td><td colspan="1" rowspan="1"><p><strong>Low-frequency, offline</strong>: batch data collection and infrequent retraining.</p></td><td colspan="1" rowspan="1"><p><strong>High-frequency, online</strong>: continuous learning and weight updates on a stream of sessions.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ownership</strong></p></td><td colspan="1" rowspan="1"><p><strong>Centralized</strong>: model weights belong to a centralized entity.</p></td><td colspan="1" rowspan="1"><p><strong>Decentralized</strong>: users own agent assets (NFT/Token) and the revenue they generate.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Robustness</strong></p></td><td colspan="1" rowspan="1"><p><strong>Fragile</strong>: vulnerable to reward hacking and prone to local optima.</p></td><td colspan="1" rowspan="1"><p><strong>Robust</strong>: ever-changing opponent strategies force agents to keep evolving, preventing policy collapse.</p></td></tr></tbody></table><p><br></p><p><strong>RLFC's core value</strong> is that rewards no longer come from a single model but from continuously evolving opponents and evaluators, preventing the reward model from being gamed and using policy diversity to keep the ecosystem out of local optima. The structure of a Space determines the nature of the game (zero-sum or positive-sum), and complex behavior emerges through rivalry and cooperation.</p><p>Architecturally, Fraction AI decomposes training into four key components:</p><ul><li><p><strong>Agents</strong>: lightweight policy units built on open-source LLMs, extended with differential weights via QLoRA for low-cost updates;</p></li><li><p><strong>Spaces</strong>: isolated task-domain environments that agents pay to enter, earning rewards by winning;</p></li><li><p><strong>AI Judges</strong>: an RLAIF-based instant-reward layer providing scalable, decentralized evaluation;</p></li><li><p><strong>Proof-of-Learning</strong>: binds policy updates to concrete competition outcomes, making training verifiable and cheat-resistant.</p></li></ul><p>In essence, Fraction AI builds a human-machine co-evolution engine. Users act as meta-optimizers at the strategy layer, steering exploration through prompt engineering and hyperparameter configuration, while agents automatically generate massive volumes of high-quality preference pairs through micro-level competition. This model closes the commercial loop on data labeling through <strong>trustless fine-tuning</strong>.</p><p><strong>Architecture Comparison of Reinforcement Learning Web3 Projects</strong></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL architecture&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key technology</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Bandwidth optimization strategy&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role of RL</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prime Intellect</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Asynchronous Distributed RL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PRIME-RL</strong>&nbsp;</p><p style="text-align: center">(framework)</p><p style="text-align: center"><strong>INTELLECT-½</strong></p><p style="text-align: center">&nbsp;(models)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>SHARDCAST</strong>: high-speed weight broadcast over an HTTP tree topology, addressing cross-node model-sync latency.</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Full-stack platform</strong>: provides complete facilities from compute aggregation and model training to weight distribution</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gensyn</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Collaborative Swarm RL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RL Swarm</strong></p><p style="text-align: center"><strong>Probabilistic PoL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Graph-based Pinpoint</strong>: only random points in the computation graph need verification, drastically cutting communication and verification costs.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Base protocol</strong>: collaborative inference and peer evaluation by a "swarm" of heterogeneous devices</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Nous Research</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Communication-Efficient Distributed Training</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>DisTrO</strong> (optimizer)</p><p style="text-align: center"><strong>Tinker-Atropos</strong> (RL environments)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>DisTrO</strong>: reduces gradient-update communication by&nbsp;</p><p style="text-align: center"><strong>1000x-10000x</strong>, breaking through physical bandwidth limits.</p></td><td colspan="1" rowspan="1"><p><strong>Algorithm layer</strong>: mathematical breakthroughs that let even consumer-grade networks run large-scale RL training.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gradient</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Edge-Core Decoupling</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Echo</strong></p><p style="text-align: center">&nbsp;(framework)</p><p style="text-align: center"><strong>Parallax</strong></p><p style="text-align: center">&nbsp;(inference engine)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Physical separation</strong>: edge devices 
(Inference Swarm) handle only inference/sampling, while central nodes (Training Swarm) do the updates</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Operating system (OS)</strong>: maximizes use of idle edge compute for large-scale data sampling&nbsp;</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Grail</strong></p><p><br></p><p style="text-align: center"><em>(Bittensor SN81)</em></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Verifiable RL Post-training</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GRAIL Protocol</strong></p><p style="text-align: center"><strong>Superlinear Scoring</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Rollout Proofs</strong>: only inference results carrying cryptographic fingerprints are transmitted, not all raw data.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dedicated subnet</strong>: the Bittensor subnet focused on RL <strong>post-training</strong>&nbsp;</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Fraction AI</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Data-Centric Darwinian RL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RLFC</strong></p><p style="text-align: center">&nbsp;(competitive reinforcement learning)</p><p style="text-align: center"><strong>Gamified Labeling</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Asynchronous data flow</strong>: focuses on generating high-quality preference data, with low real-time bandwidth requirements.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Data fuel</strong>: supplies the critical "feedback signal" for all the RL training projects above&nbsp;</p></td></tr></tbody></table><p><br></p><h1 id="h-web3" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. 
Summary and Outlook: Paths and Opportunities for Reinforcement Learning × Web3</strong></h1><p>From deconstructing the frontier projects above, we observe that although each team enters from a different angle (algorithms, engineering, or markets), when reinforcement learning (RL) meets Web3 the underlying architectures all converge on a highly consistent "<strong>decouple-verify-incentivize</strong>" paradigm. This is not a technical coincidence but the inevitable result of decentralized networks adapting to RL's distinctive properties.</p><p><strong>Common architectural traits of RL networks</strong>: solving the core physical limits and trust problems<br></p><ol><li><p><strong>Physical separation of rollouts and learning (Decoupling of Rollouts &amp; Learning): the default compute topology</strong></p></li></ol><p>Communication-sparse, parallelizable rollouts are outsourced to consumer GPUs worldwide, while bandwidth-heavy parameter updates are concentrated on a few training nodes, from Prime Intellect's asynchronous Actor-Learner design to Gradient Echo's dual-swarm architecture.</p><ol start="2"><li><p><strong>Verification-Driven Trust: becoming infrastructure</strong></p></li></ol><p>In permissionless networks, computational authenticity must be enforced through mathematics and mechanism design; representative implementations include Gensyn's PoL, Prime Intellect's TOPLOC, and Grail's cryptographic verification.</p><ol start="3"><li><p><strong>Tokenized Incentive Loop: market self-regulation</strong>&nbsp;</p></li></ol><p>Compute supply, data generation, verification and ranking, and reward distribution form a closed loop: rewards drive participation and slashing deters cheating, keeping the network stable and continuously evolving in an open environment.</p><p><strong>Differentiated technical paths: different breakthrough points under one architecture</strong></p><p>Despite the architectural convergence, each project has chosen a technical moat that fits its own DNA:</p><ul><li><p><strong>The algorithm-breakthrough camp (Nous Research)</strong>: attacks the fundamental contradiction of distributed training (the bandwidth bottleneck) at the mathematical root. Its <strong>DisTrO</strong> optimizer aims to compress gradient communication by thousands of times so that even home broadband can train large models, a frontal assault on the physical limits.</p></li><li><p><strong>The systems-engineering camp (Prime Intellect, Gensyn, Gradient)</strong>: focuses on building the next-generation "AI runtime system". Prime Intellect's <strong>ShardCast</strong> and Gradient's <strong>Parallax</strong> both squeeze maximum efficiency out of heterogeneous clusters under existing network conditions through aggressive engineering.</p></li><li><p><strong>The market-game camp (Bittensor, Fraction AI)</strong>: focuses on reward-function design, using finely tuned scoring mechanisms to lead miners to discover optimal strategies on their own and so accelerate the emergence of intelligence.</p></li></ul><p><strong>Advantages, challenges, and the endgame</strong></p><p>In the RL × Web3 paradigm, the system-level advantages show up first as a rewrite of <strong>cost structure</strong> and <strong>governance structure</strong>.</p><ul><li><p><strong>Cost restructuring</strong>: RL post-training has effectively unlimited demand for rollouts, and Web3 can mobilize the global long tail of compute at very low cost, an advantage centralized cloud vendors cannot match.</p></li><li><p><strong>Sovereign Alignment</strong>: breaks big tech's monopoly on AI values; communities can vote with tokens on what counts as a "good answer", democratizing AI governance.</p></li></ul><p>At the same time, the system faces several structural constraints.</p><ul><li><p><strong>Bandwidth Wall</strong>: despite innovations like DisTrO, physical latency still rules out full training of very large models (70B+); for now Web3 AI remains largely confined to fine-tuning and inference.</p></li><li><p><strong>Goodhart's Law (Reward 
Hacking)</strong>: in heavily incentivized networks, miners readily "overfit" the reward rules (score farming) instead of improving real intelligence. Designing cheat-resistant, robust reward functions is a perpetual game.</p></li><li><p><strong>Malicious Byzantine worker attacks</strong>: actively manipulating and poisoning training signals to break model convergence. The key is not endlessly designing anti-cheating reward functions but building adversarially robust mechanisms.</p></li></ul><p>Combining reinforcement learning with Web3 is, at its core, a rewrite of the mechanisms by which intelligence is produced, aligned, and has its value distributed. Its evolution can be summarized along three complementary directions:</p><ol><li><p><strong>Decentralized rollout-and-training networks</strong>: from compute mining rigs to policy networks, outsourcing parallel, verifiable rollouts to the global long tail of GPUs; the near term focuses on verifiable inference markets, the mid term on RL subnets clustered by task;</p></li><li><p><strong>Assetization of preferences and rewards</strong>: from labeling labor to data equity, turning high-quality feedback and reward models into governable, distributable data assets;</p></li><li><p><strong>"Small but excellent" evolution in vertical domains</strong>: in verticals with verifiable outcomes and quantifiable returns, such as DeFi strategy execution and code generation, nurturing small but strong <strong>specialized RL Agents</strong> whose policy improvement is directly tied to value capture, with the potential to outperform general-purpose closed-source models.</p></li></ol><p>Overall, the real opportunity of reinforcement learning × Web3 is not to replicate a decentralized OpenAI but to rewrite the "relations of production" of intelligence: making <strong>training execution an open compute market</strong>, making <strong>rewards and preferences governable on-chain assets</strong>, and redistributing the value intelligence creates away from platforms and <strong>among trainers, aligners, and users</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3bce7d577eb1d731fa2ac81958506665d9648a1da0e406b67b8ef494f5bb1d73.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGlUlEQVR4nD1Ta1BTZxo+u53O7rZjK94w1ktXsbiMdNkJyEYwJFyENoqmElFRRAkYkKAkJIDQygZiQVuEcikQLlEugWAIm4ZACGEjGxpKIIRwOIfDyQmBpJAA4bLoj/5xdmKcfvP8+d553u+Z532/B4Ag49Kv1tevNrc2V7c2V9ddzq3N1RWnfXvb5alsba661pZca8vb2xuvX61vba641pZfv1rfcDnXVpbXXU7X2vJb2trGxsrW5ur2tgtDYRg0ILAJhqYAp8OWw83y8zsRFUUOCwu9eiUu+BT+6hVaRHhYfPyls9Hh4eFEBoNOoUR7e+8OCwtJSLgcExN5wvc4hRLtqVOp5ymU6KgoMpV64dJXF8LDie1twg2XAwYn3wnk53GBtweH27djx59xOO+AAP8DB7y9vD7as2enl9dHISHBfn4nPt6548iRQ35+fwsI8Pf23nPM51MCIegzXx88/h9Hjx7B7d/r6+vzub/fh395n8vJUg30gtNuEwACm8Bpg0wm6eoSyWXdPVJxZ0dLc5NA8kIkk0nksu7WVqFUKu7oaGttbe6RtDQLKoRCgUwmbWysk8kkz581iDvb29vcHMkLUVdXm0wmqawsKy9/AkFGtwMYnERgEzo3Y0YhMwpZLYgHGAp7rosL6OKCGcOg/p6Gn8Q/9HdX9/c0mOdm7Pb53/m/9zqXFqwWxL6I2RexdztwTwqcnJkah0EjOG2AwUnQNAFDU+8ATkKQ0WyGFT0t6bcoQ0OqPoU8kx7bK22em4PcTHDSZNSDpgkzCk3odXJZ9y+jw2+fNiIICEFT7hEhsMlk1I9oNR0dbf1KhVrdL5WIpJJ2pVI2rteaTOOzyMxLlSj5WvT1xFuMdOb1uAhpZy2GzcKgwWTUWy2IdlgtEFTX1PxACjtdWVYkFFSYTHoMQ+bmZtwCZhQa0WqGNSqxuL1N1Fby6F/8orw87r2uTuGiFbZa0TdvfmOxMvZ+CIgaS1vqS8KDcI21JZv/21hcMK84lwYG+ov5vLbW5muXz6cl37hCO5d9n6HTaeatKIKAAAxNmVGoTyHvU/QMqlV19dVpjNssVoZWq065k+L5Xbv27d/3yaEmQdUlSvDNy1H6n5UF2XRG0sVocuC5qMDiQo68VyoWt1aUP06/m3bjamwEER9BxE/oRzAMdgtgGKxUynkPs6sqHhcU5GVmpufksIv5RXv349jZWX98Dyh7+thun3/z5rekWzeP+352IzGhq6sz5faN4z5HqNRzrPQrnR3PKspLe6RikaiVmZF6l3GTy86EoCn3iDAM0umGH5XwD+7/4NJ5Mpt1PzHxanNTXUOj4KjPsXG9Vqv9j8NhMxjG1jdWmMw0HG5vMv22Xj86b8XUapXFihXmpTcJqh7k59TWVdX+WKnX/5yRdlNQX2W3z88iIIDOgQbDqEjUGn+ReJdOq6p6WvjwgVDY8KK76+DhQyPawZXVpdLSb6XSDqGwobqm6r8vVRYM/KlHnJJyW6sdWltzfJPD4HKYsbEUGo3KZTEryp8EB/qdOR2w7LBBkBEApw0OxyKfXwgAwB/cYcYd9/UlkckBePzBw4ftdiwlJXnPx3/Cee8CAOBCbEw46fSB3cBfce9HnyWJRC02u+UBm/79d484nHtczn32/YzyJ/znzxqEQgGGzbqXDE4bnA4bk5m+02v3Sf+TgUGBoUQiOZxMIpOflPKMBt3DPOahA7s+PXrsUcm3BfnZudzMk/4nAQBIv5sh7+1Zdtg4zGuC2jLaV19mpCYmXL7I5xf9M9AP/7mP3T7vduDJgdVqxjAYcqdjamZmAoYmf7Wjks7GIXWvw2Gz2RccjqV6wY/F/KJiPq+6rvq7p2Vj+lGnw4bOzYxoBwtzUrnMhOtxEUnXqbMI2
NnxrElQZbWiMGQEMAxWqxW5uSwmM42dnRUfH8fOzhzT6zBsFjVDVivaLBRkZTEZDDqNRk29kxwfH0enJ/F4X9PpSfX11QgC1tRUsjmsjAxGbi6bnZ2Vn5+Teie5oCDXYBhD50DAakVFoudniKcjIkkRkaRTp/AEQlAaI5lGo/J439jtFrVaERv7RUCAf0QkKSQkOCKSdIYYSiYTo6LICsW/MQzmcO6RyKEEQtAZYmhMzFkCIRiP/zuBEKRUylF3khFwQq8b6JMNvxxUDchHtBrNkFIzpFTIpWOjWk/Ox0a1nuKwRqVWK3QjmoE+mWFcZ7HMwuCkp2VI7caIVqMa6B1+OdjcVMvjfd3cVPd/48nxW/OoibUAAAAASUVORK5CYII=" nextheight="768" nextwidth="1376" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong><em>Disclaimer:</em></strong><em> This article was written with the assistance of AI tools (ChatGPT-5 and Gemini 3). The author has made every effort to proofread and ensure the information is truthful and accurate, but omissions may remain; your understanding is appreciated. Note in particular that crypto-asset markets commonly show divergence between project fundamentals and secondary-market price performance. This content is for information aggregation and academic/research exchange only; it does not constitute investment advice, nor should it be read as a recommendation to buy or sell any token.</em></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>Reinforcement Learning</category>
            <category>Decentralized Training</category>
            <category>ai</category>
            <category>web3</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/84535a21ed2f65cf4711897a3f6e8394d37e1ed03b032b1d434b54b5aa79b9b7.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Machine Economic Order: A Full-Stack Pathway to Agentic Commerce]]></title>
            <link>https://paragraph.com/@0xjacobzhao/machine-economic-order-a-full-stack-pathway-to-agentic-commerce</link>
            <guid>z4Mi3ZbmeST7EW70sxLE</guid>
            <pubDate>Tue, 16 Dec 2025 06:10:07 GMT</pubDate>
            <description><![CDATA[As AI agents gain autonomous execution capabilities, core commercial functions—discovery, trust, ordering, authorization, and payment—are increasingly handled by machines, giving rise to Agentic Commerce.

This shift does not replace existing payment systems but drives a dual-rail model: fiat payments will continue to dominate human-driven commerce, while stablecoins—constrained by regulation and accounting—will primarily serve digital, cross-border, Web3-native, and machine-economy use cases. T]]></description>
<content:encoded><![CDATA[<p style="text-align: center"><em>This independent research report is supported by </em><strong><em>IOSG Ventures.</em></strong><em> The research and writing process was inspired by related work from </em><strong><em>Raghav Agarwal (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.longhash.vc/post/agentic-commerce-why-x402-is-just-the-beginning"><strong><em><u>LongHash</u></em></strong></a><strong><em>)</em></strong><em> and </em><strong><em>Jay Yu (</em></strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://panteracapital.com/http-402s-modern-makeover/"><strong><em><u>Pantera</u></em></strong></a><strong><em>)</em></strong><em>. Thanks to </em><strong><em>Lex Sokolin @ Generative Ventures</em></strong><em>, </em><strong><em>Jordan@AIsa</em></strong><em>,</em><strong><em> Ivy @PodOur2Cents</em></strong><em> for their valuable suggestions on this article. Feedback was also solicited from project teams such as </em><strong><em>Nevermined</em></strong><em>, </em><strong><em>Skyfire</em></strong><em>, </em><strong><em>Virtuals Protocol, AIsa</em></strong><em>, </em><strong><em>Heurist</em></strong><em>, and </em><strong><em>AEON</em></strong><em> during the writing process. This article strives for objectivity and accuracy, but some viewpoints involve subjective judgment and may inevitably contain inaccuracies; readers' understanding is appreciated.</em></p><p><strong>Agentic Commerce</strong> refers to an end-to-end commercial system in which AI agents autonomously complete service discovery, credibility judgment, order generation, payment authorization, and final settlement. 
It no longer relies on step-by-step human operation or information input, but rather involves agents automatically collaborating, placing orders, paying, and fulfilling in a cross-platform and cross-system environment, thereby forming a commercial closed loop of autonomous execution between machines (<strong>M2M Commerce</strong>).</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5bef393dc615c4270897abfcead9b3270d109e6fa38d4d4d2bac9b20e5849709.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFwElEQVR4nG2UezSUeRjHn45z6pzdLpxOl40ol8YlkVI21xHDmMIQ0ppKrZbIUXLdWmFKrsklo0lDU6ubCNtl21dimEmu09yYiYqSjBlhXswbs8dlL3X6/P0738/vec55vqBSqaamplQqVf+HgcxsGvXchYwsWmX1Exq9JOlsdnQsNelsNr2IWVn9OP9ycWY2LZGakUjNys65TC9i5ly6mpqRxyguLbl269eE8/k0xv2qR9RzF1gNTXOx06ppUP2DQoGKRJIucXdXT6/49bt2Xie7mV/b0MZuFbR0CF+KXvG7el71vhOKJEKRpL2dz+UJW1v5LS+4AoGYx+/k8YQSSc/bt71iSQ+Kov/G/ieYFapkY2jfgEz0egDhCFgdkpoWCfKik83t7uobvFeJ5BYUMZg3aZeZYkkvnyd50STkcHgI0sTnil80cfv7pRMTk3P7+FqAYZ8Vk5/lqJLVLLheVU8vq43OurkvKpdwNN0jNu9I8pWMoqqLV8pOJV+M/S0zJb3oEr08Pi4vKjjueHD8L0FncrKuMBn3EITNYjV//DiEzfB57tMzgqkZpodGxp828UuqGqLz7rnH5Gu5nwS3kxCYvOVUDtgfNPSO2x9PP5tbQc28cyIq3xYf4WTt65uQ7uJC8dtJ9nE7GHc8gUFjPnncwON2oopx1ewc8wIMw7o/ysV9g78/aArNvI07kAyukUCKg71UsAmDdSTY7Atep4EYvQEfYe4UbfHjUQ+3UB8CGWxCF9v6eNq67nUmU9x9Kd4/52YU1iDs9/3SvsGhuV3NCGTyUQtKWnpVI62sfr13HNiGgXMsbA2C5XgAEwAcqG8D8wPgHguanivMjhDw4Vam/gth00JYu0gDB6Bjqr0jwHW3iyVhn2dI9d0HueUsu70pn2Sf5gUj6AQYeQWfvxFbUA72oeARDvgQAEswJKsZuanhdoK2I6z3gNVEwB3UtT9tahUNq8gLvrdarWOto+ugpUsAMPba9VPIgQDrjfjCnJLI5KuLtFzHFei84Fp5vRohrLSuPYLxBCgpoOcG+gTQJum4HHMIPOMQmLTBOQR0SPCDJ1DyzGMqYIUfrCRabKF4k6P9/eOJpAgN9W3ue4LxgWfIbv43Cwtr6p5rOoTcvvtsXoBwBEHnbhdUPg/LvD2zCm0/MA8E2A5bKeuI4Voux8DYD9aTweYE2MSDXhCAzYo1pHU4Hwe7w3b2IWYW+xbAJvWlRhqrLfS0bDKpecXXkaiEG40NgnlBl0SSkpFXWHI3+84zwB8FMwoQY8AoAFY5wzoibPICnBuYHQabCNCjANgvXkXU1/VV1yBYmeDdd+xYCqYGa6xWqhtqLN
GztHApLS6vqPgzNTVbJBLNCxQKxUu+gNfV87CRH3juOriEg20oeCWCVxKQThobW9v4hAAhHjR3g4HPUj0/WOuH0w8NJMeqOR823Wa3x9E1wInsZUfycPRNjk+rRTjd3W9a2zvk8uEvLlk+gja2Chnl9SEZt3T2JYJ5AFgeAhN/M8fDsAwPOF8Dp2MBwWlHI/MciVEGhvs19feAnh8sc9u+0dnSwNmPdCiTerH0xsO2NuHY6HxVzN/BHFKpjCt61cLvLkOaEwoqfOMKwWQXLN8MC81grbW9d0QWvfLW/bqKP9jMUiQm9pKOPvE7NX2NJQb6uk6RYck5WSXl5QibwxWLu1EUxTDs6y5CUXRQKut+865DIKnh8J42iRhlSNKFa6fT6BmFpQ9qWhBWez6jklZczWK/ZDVymcyKtCzG2VT61at3G+pb6p41d3a+RtHxIelMVXxjgjlGR8fGJyYUk0r5CCodRj8MjX0YGpMOo7JPCmHX6/uP2E9q23revh8YkI2NojKpfEgqH5YNjwyPoeg4plRiSmxaNf2Nsvs/U1NTGDb7WImNT0wqlZhMPsxmc+pZjbW1dX8hT1ta2zjPm0dHx5RK5cQMk5hytt8w7IvsWcHfdQnU5pujBgkAAAAASUVORK5CYII=" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In the crypto ecosystem, the most practically valuable applications today are concentrated in <strong>stablecoin payments</strong> and <strong>DeFi</strong>. Therefore, as AI and Crypto converge, two high-value development paths are emerging:</p><ul><li><p><strong>Short term:</strong> <strong>AgentFi</strong>, built on today’s mature DeFi protocols</p></li><li><p><strong>Mid to long term:</strong> <strong>Agent Payment</strong>, built around stablecoin settlement and progressively standardized by protocols such as <strong>ACP, AP2, x402, and ERC-8004</strong></p></li></ul><p>Agentic Commerce is difficult to scale quickly in the short term due to factors such as protocol maturity, regulatory differences, and merchant/user acceptance. However, from a long-term perspective, payment is the underlying anchor of all commercial closed loops, making Agentic Commerce the most valuable in the long run.</p><h2 id="h-i-agentic-commerce-payment-systems-and-application-scenarios" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. 
Agentic Commerce Payment Systems and Application Scenarios</strong></h2><p>In the Agentic Commerce system, the real-world merchant network is the largest value scenario. Regardless of how AI Agents evolve, the traditional fiat payment system (Stripe, Visa, Mastercard, bank transfers) and the rapidly growing stablecoin system (USDC, x402) will coexist for a long time, jointly constituting the base of Agentic Commerce.</p><h3 id="h-comparison-traditional-fiat-payment-vs-stablecoin-payment" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Comparison: Traditional Fiat Payment vs. Stablecoin Payment</strong></h3><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p><strong>Traditional Fiat Payment (Stripe)</strong></p></td><td colspan="1" rowspan="1"><p><strong>Stablecoin Payment (x402 / USDC)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pros</strong></p></td><td colspan="1" rowspan="1"><p>- Extremely high merchant coverage</p><p>- Smooth user experience, no wallet needed</p><p>- Mature compliance, complete risk control</p><p>- Supports refunds/chargebacks</p></td><td colspan="1" rowspan="1"><p>- Globally unified, borderless</p><p>- Extremely low cost (&lt;0.1%), instant settlement</p><p>- Strong programmability (smart contracts, automated settlement)</p><p>- Native support for M2M micropayments</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Cons</strong></p></td><td colspan="1" rowspan="1"><p>- High fees (2–4% + FX)</p><p>- Complex cross-border, slow clearing (T+1 ~ T+3)</p><p>- Weak programmability</p><p>- Cannot support machine-scale payments</p></td><td colspan="1" rowspan="1"><p>- Extremely low merchant adoption</p><p>- High user threshold (Wallet/Gas)</p><p>- Non-unified regulation, complex taxation</p><p>- No chargeback mechanism (need to build own dispute 
resolution)</p></td></tr></tbody></table><p><br>Real-world merchants—from e-commerce, subscriptions, and SaaS to travel, paid content, and enterprise procurement—carry trillion-dollar demand and are also the core value source for AI Agents to automatically compare prices, renew subscriptions, and procure. In the short term, mainstream consumption and enterprise procurement will still be dominated by the traditional fiat payment system for a long time.</p><p>The core obstacle to the scaling of stablecoins in real-world commerce is not just technology, but <strong>regulation</strong> (KYC/AML, tax, consumer protection), <strong>merchant accounting</strong> (stablecoins are non-legal tender), and the lack of dispute resolution mechanisms caused by <strong>irreversible payments</strong>. Due to these structural limitations, it is difficult for stablecoins to enter high-regulation industries such as healthcare, aviation, e-commerce, government, and utilities in the short term. Their implementation will mainly focus on <strong>digital content, cross-border payments, Web3 native services, and machine economy (M2M/IoT/Agent)</strong> scenarios where regulatory pressure is lower or are native on-chain—this is precisely the opportunity window for Web3-native Agentic Commerce to achieve scale breakthroughs first.</p><p>However, regulatory institutionalization is advancing rapidly in 2025: the US stablecoin bill has achieved bipartisan consensus, Hong Kong and Singapore have implemented stablecoin licensing frameworks, the EU MiCA has officially come into effect, Stripe supports USDC, and PayPal has launched PYUSD. 
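The fee asymmetry in the comparison table above is easiest to see on a single machine-scale payment. A rough sketch (the 2.9% + $0.30 card-style fee and the flat 0.1% stablecoin fee are illustrative assumptions for this calculation, not quotes from any specific provider):

```python
# Illustrative cost comparison for a five-cent machine-to-machine payment.
# Fee levels are assumptions: a card-style rail charging 2.9% + $0.30 per
# transaction vs. a stablecoin rail charging a flat 0.1%.
def card_fee(amount: float) -> float:
    """Percentage fee plus a fixed per-transaction charge."""
    return amount * 0.029 + 0.30

def stablecoin_fee(amount: float) -> float:
    """Flat percentage fee, no fixed minimum."""
    return amount * 0.001

amount = 0.05  # a five-cent API call
print(f"card rail fee:       ${card_fee(amount):.4f}")        # fee exceeds the payment itself
print(f"stablecoin rail fee: ${stablecoin_fee(amount):.4f}")
```

Under these assumptions the fixed component alone makes sub-dollar payments uneconomical on the card rail, which is why the table reserves high-frequency micropayments for stablecoins.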
The clarity of the regulatory structure means that stablecoins are being accepted by the mainstream financial system, opening up policy space for future cross-border settlement, B2B procurement, and the machine economy.</p><h3 id="h-best-application-scenario-matching-for-agentic-commerce" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Best Application Scenario Matching for Agentic Commerce</strong></h3><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Sub-scenarios</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Features</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payment&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Reason</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center"><strong>A. 
Digital Native</strong></p><p><br></p><p style="text-align: center"><strong>&nbsp;(AI / Machine) Explodes First</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Digital Services (API/SaaS/Compute)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Purely digital, pay-per-call, enterprise procurement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payment mainly, stablecoin supplementary</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Service providers are deeply bound to Stripe; Enterprises need invoices/payment terms/refunds.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Multi-agent &amp; M2M Commerce: Multi-agent collaboration, M2M micropayments, IoT, robots, browser stream payments</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Machine-to-machine, small amount &amp; high frequency, second-level settlement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Stablecoin is the only reasonable option</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payment fees are high and require manual work; stablecoins support automation and real-time micropayments.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">DeFi / AgentFi: On-chain lending, market making, yield strategy execution</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Native on-chain</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Stablecoin / Crypto</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payments cannot enter.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>B. 
Digital Virtual Goods (Fast Growth)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">In-game purchases, virtual items, memberships, digital assets</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low unit price, global users</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payment dominates; stablecoin has cross-border advantages</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Platforms are dominated by card organizations; stablecoins are suitable for cross-border transactions.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>C. Real World Commerce (Long-term)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Air tickets, hotels, e-commerce, food delivery, medicine, offline retail</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Logistics + Regulation + Refund System</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional fiat payment dominates long-term</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Involves tax, chargebacks, regulatory compliance; stablecoins will struggle to enter these scenarios in the short term.</p></td></tr></tbody></table><p><br></p><p>The core of <strong>Agentic Commerce</strong> is not to let one payment rail replace another, but to hand over execution of the "order-authorization-payment" chain to AI Agents, allowing the traditional fiat payment system (AP2, authorization credentials, identity compliance) and the stablecoin system (x402, CCTP, smart contract settlement) to leverage their respective advantages. It is neither a zero-sum competition between fiat and stablecoins nor a substitution narrative of a single rail, but a <strong>structural opportunity to expand the capabilities of both</strong>: fiat payments continue to support human commerce, while stablecoin payments accelerate machine-native and on-chain native scenarios. 
The two complement and coexist, becoming the twin engines of the agent economy.</p><br><h2 id="h-ii-agentic-commerce-protocol-standards-panorama" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Agentic Commerce Protocol Standards Panorama</strong></h2><p>The protocol stack of Agentic Commerce consists of six layers, forming a complete machine commerce link from "capability discovery" to "payment delivery". A2A Catalog and MCP Registry are responsible for capability discovery, ERC-8004 provides on-chain verifiable identity and reputation; ACP and AP2 undertake structured ordering and authorization instructions respectively; the payment layer is composed of traditional fiat rails (AP2) and stablecoin rails (x402) in parallel; the delivery layer currently has no unified standard.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bf8a03a6d08a77780bebdf44afd35d2ba394401f9c05ad394a3a7d3c96609d4e.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF8UlEQVR4nCVU328aVxa+75VW+9aXXWkfVqvVeldqpX3rH7CvVZWmP7Y0drJN4jauEjaxsZTGdcyC46wLjrFxwIQ4jo2DwcRjPM4YGA8Dw4wHGAYY5nqG+QEDGKykbtJo2yppnBXOy9W5D993znfOdw6off/TDz//+uyXVz++OHr24uh/v75+/vLol9ev24fPTWfOj9rs1vHxWY/3mtX2rc3+3dS067bHOj4+cn1s1GYftdmvWIaHR//T+eH5T69e//ji6PnLozdUT1++Ovz5pdJ5CuDBMxdamIrmZ9C8Cy24UN6F8rOY4EILQzbnp4MTvSOzX07c+8TiODn4308tk72j7rPW+Q8Hb35ovtH37ZzJMunwPZhGcsfAggstTCP5aaRL6EI5wfgeJCoGAB+A33wA3jkNek6DP34G/toL3j4BwInrYRr87gvw9mfgL72g5wx4zwzeuwx6zoG/XwDvngfvngO//Rj8/uxYeBe89RH4Wx/o6QN//hz86RTo6QV/+Ai8dTLCSICWjNsYO7a4fcHu/druvWD3fmmdG/VtuBAmzkEvyjgizAyauxkkTl60nhqe7PvGeWp40hmhZlDWuc54USZZUccX0cvOwKBrZdAVtLhWLe7wsDvsCONZuQmyVYORG7P3Vi8OXztvHjKP2M4M/HvCfYeWGrzWxBhudjniXg475pcuXR0dGLpmvmrt+8rs9C3f9Nyzu+bjbKFUb0dw2hNYD2wS86ubS0jcH0bnV6Nxluf1DhD3n9BQSxSrpFhLlNVoFmakRlLU0oLCay0G6ni5mqyo1F4jzkH/evx2aDPGQVKsEYISL8o01PKKQUs1LFcJxVLu0JbvIbZO7hKCykj1bgJeb+8INVJsp2Bng4YIVSHFNim2twsqDZUMNNbIsjuAesMxi931z3OXTp4euDrp9iPEDc/KQ0pMi/Xj+mojzjumfvOQbeqroetfmL+ZC26nxEZB3QeV5hNOqWeOC0mLGsqWOaWZgXpWrvFaq6Q3aajHWR5juBjDYbkySheoioTnyzG2kIFaUW9yipGT6xhbirM8QnOxnJDIFzGW51WD1w9AyTggRGMpxo7NLljdi7eWkLHZBT9Cxso1RtLwouJbJ+9jtC+Cf3L20j9OmN43nX3f1O8OYr4IfgchE0WFkbQtrhoXWri4H6EqCC3uiPuJUiteVApqu6uA3tNj/B6Wq0R3S9H8HpotRXdLpKjyWpORNP9GbGEj5l/HHiTStwJrc+Gof33LG9r0hqL+dYyRNK4rXcXLSrKiIkxpm5PwsoKXFRpq3RmUjIOkoFLSwSZb8Ya3p5eQNYKjpANc0LNVIwN1vFSj5c7UQviCZfTy9ZsXR8YtNsfcSpSWO3ipRok6I9UIQUvBVhruR1m4xVVTsNX9inpXQcl4nIG1Qv1wh5eXosm5FRTNlPjW07RYzyv1rNxYfJjwr25O+QIz91YnvfetUx7n/OLs8tot/8oCkmCrDRbqFDSy2hOucYh3/dLgaoes8piRjLzSAoLxmBK1ELEbTFDBnd2lbSqUyARiKaKidFsE9W2uSggqXtSc/sCEd3HCu+gJIklRJyp6vNjtA6cYeGEvgJErMdId2lp8RAawZDBBpYVqNwGvd5KiEcAYhz/kWkIcd9dmAlE/kowVtayspSv6XHB78k7whifw+cAVU7+5b2DQ1G+e8K447656wwlS1BmoPcrJ392NOO6u3fAEZlbQSX/4dnA7Xqxm5UZ3yIykUaLOKq1kRd3KVrJKixJ1GmoFtZWTNVKostVGsiRH8LQvvBnaoTG2lIEaIzeSgpKTu/5moJaBtazSQrMiISiMZFCwxkCdrTa6i0aIRlo6SMMOllcf5app6SAFO7jQ3aC0aBBiO4DRruUNU7/54399f
f7KNVP/5blA9D5Gk3udpKBTopIo1lKwk4IdlFVQVkkfxxRsZuXasU2hlhHVjKimj1+qolAVhYYqpzQZqJElORwnEYLeSDGPGG4jxSAE/QBLIASdEdXjU9GiYRf4BvuGhxSqebnO6+3/A+x2f/vKEB5WAAAAAElFTkSuQmCC" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Discovery Layer:</strong> Solves "How Agents discover and understand callable services". The AI side builds standardized capability catalogs through A2A Catalog and MCP Registry; Web3 relies on ERC-8004 to provide addressable identity guidance. This layer is the entrance to the entire protocol stack.</p></li><li><p><strong>Trust Layer:</strong> Answers "Is the other party credible". There is no universal standard on the AI side yet. Web3 builds a unified framework for verifiable identity, reputation, and execution records through ERC-8004, which is a key advantage of Web3.</p></li><li><p><strong>Ordering Layer:</strong> Responsible for "How orders are expressed and verified". ACP (OpenAI × Stripe) provides a structured description of goods, prices, and settlement terms to ensure merchants can fulfill contracts. Since it is difficult to express real-world commercial contracts on-chain, this layer is basically dominated by Web2.</p></li><li><p><strong>Authorization Layer:</strong> Handles "Whether the Agent has obtained legal user authorization". AP2 binds intent, confirmation, and payment authorization to the real identity system through verifiable credentials. Web3 signatures do not yet have legal effect, so they cannot bear the contract and compliance responsibilities of this layer.</p></li><li><p><strong>Payment Layer:</strong> Decides "Which rail completes the payment". AP2 covers traditional payment networks such as cards and banks; x402 provides native API payment interfaces for stablecoins, enabling assets like USDC to be embedded in automated calls. 
The two types of rails are complementary here.</p></li><li><p><strong>Fulfillment Layer:</strong> Answers "How to safely deliver content after payment is completed". Currently, there is no unified protocol: the real world relies on merchant systems to complete delivery, and Web3's encrypted access control has not yet formed a cross-ecosystem standard. This layer is still the largest gap in the protocol stack and the one most likely to incubate the next generation of infrastructure protocols.</p></li></ul><h2 id="h-iii-agentic-commerce-core-protocols-in-depth-explanation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Agentic Commerce Core Protocols In-Depth Explanation</strong></h2><p>Focusing on the five key stages of Agentic Commerce (service discovery, trust judgment, structured ordering, payment authorization, and final settlement), institutions such as Google, Anthropic, OpenAI, Stripe, Ethereum, and Coinbase have each proposed underlying protocols for the corresponding stages, jointly building the core protocol stack of next-generation Agentic Commerce.</p><h3 id="h-agent-to-agent-a2a-agent-interoperability-protocol-google" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agent-to-Agent (A2A) – Agent Interoperability Protocol (Google)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://a2a-protocol.org/"><u>A2A</u></a> is an open-source protocol initiated by Google and donated to the Linux Foundation. It aims to provide unified communication and collaboration standards for AI Agents built by different vendors and frameworks. Based on HTTP + JSON-RPC, A2A implements secure, structured message and task exchange, enabling Agents to conduct multi-turn dialogue, collaborative decision-making, task decomposition, and state management in a native way.
Its core goal is to build an "Internet of Agents", allowing any A2A-compatible Agent to be automatically discovered, called, and combined, thereby forming a cross-platform, cross-organization distributed Agent network.</p><h3 id="h-model-context-protocol-mcp-unified-tool-data-access-protocol-anthropic" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Model Context Protocol (MCP) – Unified Tool Data Access Protocol (Anthropic)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://modelcontextprotocol.io/docs/getting-started/intro"><u>MCP</u></a>, launched by Anthropic, is an open protocol connecting LLMs / Agents with external systems, focusing on unified tool and data access interfaces. It abstracts databases, file systems, remote APIs, and proprietary tools into standardized resources, enabling Agents to access external capabilities securely, controllably, and auditably. MCP's design emphasizes low integration costs and high scalability: developers only need to integrate once to let the Agent use the entire tool ecosystem.
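</p><p>Concretely, MCP traffic is ordinary JSON-RPC 2.0. The following TypeScript sketch builds a tools/call request; the envelope shape and method name follow the MCP specification, while the get_weather tool and its arguments are hypothetical:</p>

```typescript
// Minimal sketch of an MCP tool invocation envelope (JSON-RPC 2.0).
// The "get_weather" tool is hypothetical; the envelope shape and the
// "tools/call" method name follow the MCP specification.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

function makeToolCall(
  id: number,
  tool: string,
  args: Record<string, unknown>,
): JsonRpcRequest {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call", // MCP method for invoking a server-side tool
    params: { name: tool, arguments: args },
  };
}

const req = makeToolCall(1, "get_weather", { city: "Lisbon" });
console.log(JSON.stringify(req));
```

<p>In a real client, this request would be sent over stdio or HTTP to an MCP server, which replies with a matching JSON-RPC response.</p><p>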
Currently, MCP has been adopted by many leading AI vendors and has become the de facto standard for agent-tool interaction.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b4496598bba9f7db2651ad049ac5a108ea88fea95c66d8feeb0ed03dd03a0911.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAYCAIAAAAUMWhjAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGJUlEQVR4nLVVaWwbVRB+ICEhfiBUVH5UgBpVAgS0FTdFVSHlCKQ/WrUoAhUCLTSlpTKCCJqqgh4BgUJTmja9EkpITBM3RxMc93DSOEcTO0kTs3bt2ImPuOusz931rr37au/G+5D9gjFF8AOJ0fyYt/tmvjffjGYAQiitKIII0X8SKSv/cgHIspzM3sCGIEJBFAURpiQZa/Kv/jzPY0OWZfyspCSxXCIcZbGyXCKnaSUN5uX5qhMtAIDhMcu8PC9nRRAhxwtYI3QsPz+aprEB4c3rjlkAwKOFm/ERf8+9TBAhzXAAIbRk9TsAgPLKenLWVVJSsnHjxuPHj+P0UdaTZrgcG5FoFD8CIaW2QQsy8iBCqKioqKCgYPv27RD++RqWS2QAlhVuAQB8sv9UEsbXrn2ltLS0t6cnH4DjhZwPx8WwkVYU3ZUxABY/XPg+Qqi6urq8/DODwQAhzML/AcDxQozjJ61TWfZhWlFwzWVZxqHFZMrq8E65Zg+fPqvp0tfWtzSe67Y7PQ43maVFkSQJplLYEdcsrSgcLyQlieMFQAWjUZpNK4qYTOGIbCxDCHjizY1llQihCB2zOrzWKde2zyu//O7Eji8O7q48fM0yZXV4SX8oHGVphovSTJRmqUAIIWXdln0ArEgraZZLZABYLjE4dFWlUpWWvltcXPzSi2sOH6kBdz0NwBIA7rjzmbcQUlgukaOora0VN1JchN4bfpVKtXXrluLiN5YWFFQe2AuWFQGwGIDHALibpMKCCEGEjoXDjMvl8s36rBara2bmBukHD627PDBRtrvm4VfLMJW4JLIsn66rZxgWfwyGGGouyPNxhmH9fgoh5R1V1b5q9XXH7LLCD2iGWwAwT9ksnmmLZ9p2w2O74bZ4nEGWsnicLr97lvKEo3Q8r00bm9QxLsMhxwsm84S2X39xuE831NM7OjBkHiVcRP/4wMXhvgnHuN3tjMdFYJt2/tLd8nOzul3b3qhpqq794WxHS4NGXXPqaM2po3Vnz/SNDeKmxADq5mZMkdPrPd+nSwjxudBcLB4jbMT+g18RdmuUpedCc1Q4dNnU66MCwOJ0tOk73y3dsvm998orKsp27NxUUlK2Y2d5RcXer/bVqRv6J43JmykMIEnSJb0+B9A9dEVMpkIMwyUS45Pm8ooKm8PBJRIhhuEh1JsMGQAqENL2qAdMrdYZQ9+wxjiptc4YJu1646S2b1jTceG03WkWxJs5gK6urlyR+wa6zqgPnjtfozc06HrrjePt+v6M0dZ5tK5h/+i4ThBvAq/HyTH2b7+pfHH1qrIPSouLXl75+CObNqzbtL54186Phg3dAXKSFyAGgBC2trVjgCjNeKeHEUJjo2N79+w5eeJE5YEDtceOHaqq6u3RI4R800OBgB+QpNvvHvC4rWPG3gQ3t/nt9ffff+/ZppM+j5X0TQXJ39yOwWyRFwA0rQttynKJGVufmJgzDl3as/vj9taGQ98fOHbk62NHvjZc6YRCeNbZTwUoEI6y2VhEkJoKkDaXc1x7oSNI2fw+66zH7HaOkqQ710XZ
DNpyFPlJj3d6NEhNxWMkE5nR9+roiDtGewOkfcZu9JMzmVEhSVJchGIylVMfFck/Cnk9CiFsb+/AAPPyvJB1jIuQ5RKBCOtwk2QgynIJHDA7eNKZYZdNP6NMLFb7k2bD+5+Oma2ShDtHyTXoLRnkOyZTSV3v4JOvvnXuVz0Ti+U7YoAFSUnymNne3HGZCmZm8t/XE4SwsakpD2BBJEliGPbileEozd6yoBYAItHoiMlEEISZIDTnLxCEZXDo6ojReMs6hBCaTKb8EDzPq5ubNa2trW1tNSfru3UXmlvO/dKiye2lTA0QQpf1Pc88t2pNYeErr732UuHaVS+s/nBbWdX3h0RR+KcNjA0zQexSqZ546unX31j37HPPL1+xcuu2bXU/nhkxGvGdhQwu6fX79h8Et92e3VBg+YqV9Wd+Onm67l8WOv41YjR2arWPPPr4PYsW3bv4vgeWLj1aW9vQ2Oj1ev9CkcPpnJiY7NbpOjs7u7q0BkO/mSD8fv8/Rc8JTdPXbbZr166NGI1mIiPXbTYf6bu1Bv+f/A66VmemD68GSQAAAABJRU5ErkJggg==" nextheight="752" nextwidth="1008" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>MCP</strong> focuses on <strong>"How Agents use tools"</strong>—providing models with unified and secure external resource access capabilities (such as databases, APIs, file systems, etc.), thereby standardizing agent-tool / agent-data interaction methods.</p></li><li><p><strong>A2A</strong> solves <strong>"How Agents collaborate with other Agents"</strong>—establishing native communication standards for cross-vendor, cross-framework agents, supporting multi-turn dialogue, task decomposition, state management, and long-lifecycle execution. 
It is the basic interoperability layer between agents.</p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Feature</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>MCP (Model Context Protocol)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>A2A (Agent to Agent)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Main Goal</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Capability Extension:</strong>&nbsp;</p><p style="text-align: center">Connect AI to data and tools.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Coordination:</strong>&nbsp;</p><p style="text-align: center">Connect AI to other AI.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Vertical:</strong> Agent &lt;-&gt; Database/API</p><p style="text-align: center">Analogy: USB-C interface</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Horizontal:</strong> Agent &lt;-&gt; Agent</p><p style="text-align: center">Analogy: Internet</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>State Management</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stateless:</strong> "Execute this function, then return the result to me."</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stateful:</strong> "Take this task, update progress continuously, and tell me when finished."</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Typical Scenarios</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Tool calling, data read/write, file processing, enterprise system 
integration</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-Agent collaborative tasks, cross-platform agent interoperability, automated workflow</p></td></tr></tbody></table><p><br></p><h3 id="h-agentic-commerce-protocol-acp-ordering-and-checkout-protocol-openai-stripe" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agentic Commerce Protocol (ACP) – Ordering and Checkout Protocol (OpenAI × Stripe)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.agenticcommerce.dev/"><u>ACP</u></a> (Agentic Commerce Protocol) is an open ordering standard (Apache 2.0) proposed by OpenAI and Stripe. It establishes a structured ordering process that can be directly understood by machines for <strong>Buyer—AI Agent—Merchant</strong>. The protocol covers product information, price and term verification, settlement logic, and payment credential transmission, enabling AI to safely initiate purchases on behalf of users without becoming a merchant itself.</p><p>Its core design is: AI calls the merchant's checkout interface in a standardized way, while the merchant retains full commercial and legal control. ACP enables merchants to enter the AI shopping ecosystem without transforming their systems by using structured orders (JSON Schema / OpenAPI), secure payment tokens (Stripe Shared Payment Token), compatibility with existing e-commerce backends, and supporting REST and MCP publishing capabilities. 
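</p><p>To make "structured orders" concrete, the sketch below models the kind of machine-readable order an Agent could submit to a merchant checkout endpoint. The field names are simplified assumptions for illustration, not ACP's official schema:</p>

```typescript
// Illustrative shape of a structured, machine-readable order in the
// spirit of ACP. Field names are simplified assumptions, not the
// official schema (which is defined via JSON Schema / OpenAPI).
interface LineItem {
  sku: string;
  quantity: number;
  unitPriceCents: number;
}

interface AgentOrder {
  merchant: string;
  currency: string;
  items: LineItem[];
  paymentToken: string; // e.g. a Stripe Shared Payment Token reference
}

// The merchant can recompute and verify the total before fulfilling.
function orderTotalCents(o: AgentOrder): number {
  return o.items.reduce((sum, i) => sum + i.quantity * i.unitPriceCents, 0);
}

const order: AgentOrder = {
  merchant: "acme-books",
  currency: "USD",
  items: [{ sku: "BOOK-42", quantity: 2, unitPriceCents: 1250 }],
  paymentToken: "spt_demo_123",
};
console.log(orderTotalCents(order)); // 2500
```

<p>Because the order is plain structured data, the merchant can validate prices and terms before fulfillment, which is the kind of check ACP standardizes.</p><p>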
ACP already powers ChatGPT Instant Checkout, making it one of the earliest deployable pieces of agent payment infrastructure.</p><h3 id="h-agent-payments-protocol-ap2-digital-authorization-and-payment-instruction-protocol-google" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agent Payments Protocol (AP2) – Digital Authorization and Payment Instruction Protocol (Google)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/google-agentic-commerce/AP2"><u>AP2</u></a> is an open standard jointly launched by Google and multiple payment networks and technology companies. It aims to establish a unified, compliant, and auditable process for <strong>AI Agent-led payments</strong>. It binds the user's payment intent, authorization scope, and compliance identity through cryptographically signed digital authorization credentials, providing merchants, payment institutions, and regulators with verifiable evidence of "who is spending money for whom".</p><p>AP2 is "Payment-Agnostic" by design: it supports credit cards, bank transfers, and real-time payments, and reaches stablecoin and other crypto payment rails through extensions like x402.
In the entire Agentic Commerce protocol stack, AP2 is not responsible for specific goods and ordering details, but provides a universal <strong>Agent payment authorization framework</strong> for various payment channels.</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ACP</strong></p><p style="text-align: center"><strong>&nbsp;(Agentic Commerce Protocol)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>AP2</strong></p><p style="text-align: center"><strong>&nbsp;(Agent Payments Protocol)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Lead</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenAI × Stripe</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Google Cloud (Alliance Partners)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Checkout Protocol:</strong> Lets AI Agents structurally call merchant checkout / ordering interfaces</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Authorization Protocol:</strong> Proves Agent has legal authorization to pay on behalf of user</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Equivalent to Online POS / E-commerce Checkout Page</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Equivalent to Bank Card Chip + PIN Authorization Mechanism</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Crypto Connection</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Mainly traditional payment 
channels</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Supports the x402 extension for native stablecoin payments</p></td></tr></tbody></table><p><br></p><h3 id="h-erc-8004-on-chain-agent-identity-reputation-verification-standard-ethereum" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>ERC-8004 – On-chain Agent Identity / Reputation / Verification Standard (Ethereum)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eips.ethereum.org/EIPS/eip-8004"><u>ERC-8004</u></a> is an Ethereum standard jointly proposed by MetaMask, Ethereum Foundation, Google, and Coinbase. It aims to build a <strong>cross-platform, verifiable, trustless</strong> identity and reputation system for AI Agents. The protocol consists of three on-chain parts:</p><ul><li><p><strong>Identity Registry:</strong> Mints an NFT-like on-chain identity for each Agent, which can link cross-platform information such as MCP / A2A endpoints, ENS/DID, wallets, etc.</p></li><li><p><strong>Reputation Registry:</strong> Standardizes the recording of scores, feedback, and behavioral signals, making the Agent's historical performance auditable, aggregatable, and composable.</p></li><li><p><strong>Validation Registry:</strong> Supports verification mechanisms such as stake-backed re-execution, zkML, and TEE, providing verifiable execution records for high-value tasks.</p></li></ul><p>Through ERC-8004, the Agent's identity, reputation, and behavior are preserved on-chain, forming a cross-platform, discoverable, tamper-proof, and verifiable trust base, an important piece of infrastructure for Web3's open and trusted AI economy.
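</p><p>The division of labor among the three registries can be illustrated with a toy in-memory model; the method and field names below are purely illustrative and are not the EIP's actual contract ABI:</p>

```typescript
// Toy in-memory model of ERC-8004's three registries. Names are
// illustrative only; the real standard defines these as on-chain
// contracts, not JavaScript maps.
type AgentId = string;

const identity = new Map<AgentId, { owner: string; endpoint: string }>();
const reputation = new Map<AgentId, number[]>(); // feedback scores
const validations = new Map<AgentId, string[]>(); // e.g. "TEE", "zkML"

// Identity Registry: register an agent and its cross-platform pointers.
function registerAgent(id: AgentId, owner: string, endpoint: string): void {
  identity.set(id, { owner, endpoint });
}

// Reputation Registry: append a feedback signal for an agent.
function recordFeedback(id: AgentId, score: number): void {
  reputation.set(id, [...(reputation.get(id) ?? []), score]);
}

// Aggregate the recorded history into a composable signal.
function averageScore(id: AgentId): number {
  const scores = reputation.get(id) ?? [];
  return scores.length === 0
    ? 0
    : scores.reduce((a, b) => a + b, 0) / scores.length;
}

registerAgent("agent-1", "0xabc", "https://example.com/a2a");
recordFeedback("agent-1", 4);
recordFeedback("agent-1", 5);
validations.set("agent-1", ["TEE"]); // Validation Registry entry
console.log(averageScore("agent-1")); // 4.5
```

<p>In the actual standard these are smart contracts rather than maps, and entries are tied to on-chain identities.</p><p>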
ERC-8004 is currently in the Review stage: the design is largely stable and feasible, but it is still gathering broad community feedback and has not been finalized.</p><h3 id="h-x402-stablecoin-native-api-payment-rail-coinbase" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>x402 – Stablecoin Native API Payment Rail (Coinbase)</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402.org/"><u>x402</u></a> is an open payment standard (Apache-2.0) proposed by Coinbase. It turns the long-idle <strong>HTTP 402 Payment Required</strong> status code into a programmable on-chain payment handshake, allowing APIs and AI Agents to settle on-chain in a <strong>frictionless, pay-per-use</strong> way without accounts, credit cards, or API Keys.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ae7733549e41f4d367db110f5b26314a8049109c7891b02d2368a228655add08.png"
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEP0lEQVR4nJ2Ub0wbZRzHb74zwRjjC01QM0wWTTSLyarMhQT3YkWWDKlbOV8QFhmuG8qSAqVC7MbpaOmxomXoqHQgsuK2qiVx6+jYoJZuhwNuKdBx0Jud67iE0lIK/fOUh/IYerEryIbz++Ly5Lnf83zu+3u+z2HosXIkxLIswzAsy3Ic5/V6PR4PwzBerxdC+PjlCCFsw1l+ZTAYFAgEQqFQJBKlpaVlZmamp7+UkZEhFAp3JQQA4IvXaRNAomIldYaiKKPReOPmzc5z59ra2inqRuq+j/q+jQH8uy8bdCcb9QwzYbVaHQ5HJBz2en1TTupkzUfDt2zlFZVyedXppibeQccF/Wfyw2PjozQ9QtMjjvHRVPBDQDzxZO9Nd17ukxNaGaH93dpvMBisVqvJZFIoTnQb9adVH1+zdKvJUxUJGY1GgiBy9+85ULzPZusn6kiiTq2QySbv3FndcHl5DQDCVUTzxctP7T00OHybe+DxJjTlcqlJMisry2q1zs8HbLa+UcfwmGMIoRWDoQvDMIlEEolGQoHZnIPl2OvZWEJE4zcIoSUI1wO6eqxH63XtXRePlZUV4HhJSYmu5cxZ3Xc4jjPMRFtbW1VVVUnJJ1Kp1O/3fy6XYxgmEuUvJRrS+atFVFy+/eWtz2/BdPoWvuGrgGTLkuGBEBqNRoVCUadUuiYn25sblUpCoVDMzHAxEIqEF0B0EUKgVikPFebLKiv4w0Bohe/zBilKPRaf38+74QVAbIqZWJz3IoRo6rdaWX5r07FmUnK8Mv8e64iEF5zO8dSMxAAAILrmkN1ut0qlEovFpaWlZrOZd4AQuvB9S6taCSFkmIlwKIAQuk11VxzJln0qLCvOlhQKxuneuYDP5Zr6d1LXODCbzVKplCTJ2tpajUbDcRwPKHwva2f6iwvBoMvlWgzOQhh30pd2vvXMG9uefu3VLXt3vzLjGZqfn3O7/9wEsE7+1RZBlao+NyfnXYFAIpHQNB1fhgAs3XVebSQKKo7ukRTt0p066ONG4wjdZdlNAOsuut/vRwhxHGexWC5d6fnr/n32ny0gBEsguAxDyzAMooFoNARh/P84SI6nhv74qUmrlleZf/zh/LfNg1d74okrmUwBhDCJ/68A36wvBkAcobGRoWpx/rPYQ9Xs38eMDMVXowUgXIIQAgDcbveTAUKhUGBuLhKNDg7YFPiHGRj2HIa9gGEZGPbFgbyzXzfvzso1dHQNDNj6rvdZLBaGYZ4MkKrh6736+gZ1DaE98ZW+vuFW75X+awNvbnv75/O/4HhBXl6eUCg0mUx8sdPp5DguFAr5E+Jv3yMBiR6vcDNzRZVnMsXHt39Q836x2sP5eIvV1dUikaioqEgsFuM4TlEUQmjHjndaW1tpmrbb7RaLxel0bu4gGS0AYgDE+B/k9IPpjo4OrVZLkqRGoyFJ0m63p9YnxwihvwGf1sm/xVoH5wAAAABJRU5ErkJggg==" nextheight="866" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://panteracapital.com/http-402s-modern-makeover/"><u>HTTP 402 Payment Flow</u></a>. 
Source: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xfishylosopher"><u>Jay Yu@Pantera Capital</u></a></p><p><strong>Core Mechanism:</strong> The x402 protocol revives the HTTP 402 status code left over from the early internet. Its workflow is:</p><ol><li><p><strong>Request &amp; Negotiation:</strong> Client (Agent) initiates request -&gt; Server returns 402 status code and payment parameters (e.g., amount, receiving address).</p></li><li><p><strong>Autonomous Payment:</strong> Agent locally signs the transaction and broadcasts it (usually using stablecoins like USDC), without human intervention.</p></li><li><p><strong>Verification &amp; Delivery:</strong> After the server or third-party "Facilitator" verifies the on-chain transaction, resources are released instantly.</p></li></ol><p>x402 introduces the <strong>Facilitator</strong> role as middleware connecting Web2 APIs and the Web3 settlement layer. The Facilitator is responsible for handling complex on-chain verification and settlement logic, allowing traditional developers to monetize APIs with minimal code. The server side does not need to run nodes, manage signatures, or broadcast transactions; it only needs to rely on the interface provided by the Facilitator to complete on-chain payment processing. 
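</p><p>The three steps above can be sketched end to end. To keep the example self-contained and runnable, the "server" is an in-process function and the payment proof is a placeholder string; a real integration would use HTTP and verify the payment on-chain via a Facilitator:</p>

```typescript
// Self-contained sketch of an x402-style handshake. The first request
// is refused with HTTP 402 plus machine-readable payment terms; the
// retry carries a payment proof and the resource is released.
// All values are illustrative.
interface PaymentTerms {
  amount: string; // price of the call, e.g. "0.01"
  asset: string;  // settlement asset, e.g. "USDC"
  payTo: string;  // receiving address
}

interface ApiResponse {
  status: number;
  terms?: PaymentTerms;
  body?: string;
}

function server(paymentProof?: string): ApiResponse {
  if (!paymentProof) {
    // No payment attached: answer 402 Payment Required with terms.
    return {
      status: 402,
      terms: { amount: "0.01", asset: "USDC", payTo: "0xMERCHANT" },
    };
  }
  // A real server (or its Facilitator) would verify the proof on-chain.
  return { status: 200, body: "premium data" };
}

function agentFetch(): ApiResponse {
  const first = server();
  if (first.status !== 402 || !first.terms) return first;
  // The Agent settles payment per the returned terms (simulated here),
  // then retries the call with the resulting proof attached.
  const proof = `paid:${first.terms.amount}:${first.terms.asset}`;
  return server(proof);
}

console.log(agentFetch().status); // 200
```

<p>The essential pattern is unchanged at real scale: refuse with 402 plus machine-readable terms, let the Agent pay, then retry with proof.</p><p>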
Currently, the most mature Facilitator implementation is provided by the Coinbase Developer Platform.</p><p>x402's technical advantages: it supports on-chain micropayments as small as one cent, removing the constraint that traditional payment gateways cannot handle the high-frequency, small-amount calls typical of AI scenarios; it eliminates accounts, KYC, and API Keys entirely, letting AI autonomously close the M2M payment loop; and it achieves gasless authorized USDC payments via EIP-3009, with native compatibility with Base and Solana and room for multi-chain expansion.</p><p>Building on the protocol stack introduced above, the following table summarizes the positioning, core capabilities, main limitations, and maturity of the protocols at each layer, providing a clear structural view for building a cross-platform, executable, and payable agent economy.</p><br><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Protocol</strong></p></td><td colspan="1" rowspan="1"><p><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p><strong>Limitations / Risks</strong></p></td><td colspan="1" rowspan="1"><p><strong>Maturity</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Discovery</strong></p></td><td colspan="1" rowspan="1"><p><strong>A2A (Google)</strong></p></td><td colspan="1" rowspan="1"><p>Standardized Multi-Agent Service Discovery &amp; Interoperability</p></td><td colspan="1" rowspan="1"><p>Relies on the Google ecosystem; uneven cross-vendor adoption; could be constrained if the platform becomes more closed</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji"
data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Discovery</strong></p></td><td colspan="1" rowspan="1"><p><strong>MCP (Anthropic)</strong></p></td><td colspan="1" rowspan="1"><p>Unified Tool and Data Access Interface</p></td><td colspan="1" rowspan="1"><p>Ecosystem may fragment; tools need active integration; risk of replacement by larger vendor standards</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Discovery/Trust</strong></p></td><td colspan="1" rowspan="1"><p><strong>ERC-8004 (Ethereum)</strong></p></td><td colspan="1" rowspan="1"><p>On-chain Verifiable Identity, Reputation &amp; Execution Records</p></td><td colspan="1" rowspan="1"><p>Separated from the Web2/KYC system; needs wide integration to form a network effect</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ordering</strong></p></td><td colspan="1" rowspan="1"><p><strong>ACP (OpenAI × Stripe)</strong></p></td><td colspan="1" rowspan="1"><p>Structured description of goods, prices, and terms, generating fulfillable orders</p></td><td colspan="1" rowspan="1"><p>Highly dependent on Stripe merchants; limited coverage; relatively closed, with limited documentation</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Authorization</strong></p></td><td colspan="1" rowspan="1"><p><strong>AP2 (Google)</strong></p></td><td colspan="1" rowspan="1"><p>Compliance expression of user intent and payment authorization (mandate)</p></td><td colspan="1" rowspan="1"><p>Strong reliance on real-name/KYC; inconsistent regulation across jurisdictions; hard to enter non-KYC on-chain scenarios</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Payment</strong></p></td><td colspan="1" rowspan="1"><p><strong>x402 (Coinbase)</strong></p></td><td colspan="1" rowspan="1"><p>Stablecoin API Payment Rail, suitable for Automation &amp; M2M</p></td><td colspan="1" rowspan="1"><p>Merchants need to adapt; stablecoin regulatory uncertainty; complex multi-chain execution paths</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr></tbody></table><p><br></p><h2 id="h-iv-web3-agentic-commerce-ecosystem-representative-projects" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. Web3 Agentic Commerce Ecosystem Representative Projects</strong></h2><p>Currently, the Web3 ecosystem of Agentic Commerce can be divided into three layers:</p><ul><li><p><strong>Business Payment Systems Layer (L3):</strong> Includes projects like Skyfire, Payman, Catena Labs, and Nevermined, providing payment encapsulation, SDK integration, quota and permission governance, human approval, and compliance access.
They connect to traditional financial rails (banks, card organizations, PSPs, KYC/KYB) to varying degrees, building a bridge between payment businesses and the machine economy.</p></li><li><p><strong>Native Payment Protocol Layer (L2):</strong> Consists of protocols like x402, Virtuals ACP, and their ecosystem projects, responsible for charge requests, payment verification, and on-chain settlement. This layer is where truly automated, end-to-end clearing happens in the Agent economy. x402 requires no banks, card organizations, or payment service providers at all, providing chain-native M2M/A2A payment capabilities.</p></li><li><p><strong>Infrastructure Layer (L1):</strong> Includes Ethereum, Base, Solana, and Kite AI, providing the trusted technical base for payment and identity systems, such as on-chain execution environments, key systems, MPC/AA, and permission runtimes.</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Name</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p><strong>Web3 Projects</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L3</strong></p></td><td colspan="1" rowspan="1"><p><strong>Business Payment Systems Layer</strong></p></td><td colspan="1" rowspan="1"><p>Provide Agents with payment encapsulation, SDK integration, quota/permission/policy governance, human approval &amp; compliance access</p></td><td colspan="1" rowspan="1"><p>Skyfire, Payman, Catena Labs, Nevermined</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L2</strong></p></td><td colspan="1" rowspan="1"><p><strong>Native Payment Protocol Layer</strong></p></td><td colspan="1" rowspan="1"><p>Initiate charge requests to Agents; Facilitator completes transmission, verification &amp; on-chain settlement</p></td><td colspan="1"
rowspan="1"><p>x402, Virtuals ACP (Agent Commerce Protocol)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L1</strong></p></td><td colspan="1" rowspan="1"><p><strong>Infrastructure Layer</strong></p></td><td colspan="1" rowspan="1"><p>Provide underlying capabilities like on-chain execution environment, wallet signing, MPC/AA, Permission Runtime</p></td><td colspan="1" rowspan="1"><p>Ethereum, Base (EVM), Solana (SVM), Kite AI (Payment L1)</p></td></tr></tbody></table><p><br></p><h3 id="h-l3-skyfire-identity-and-payment-credentials-for-ai-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 - Skyfire: Identity and Payment Credentials for AI Agents</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://skyfire.xyz/"><u>Skyfire</u></a> takes <strong>KYA + Pay</strong> as its core, abstracting "Identity Verification + Payment Authorization" into JWT credentials usable by AI, providing verifiable automated access and deduction capabilities for websites, APIs, and MCP services. The system automatically generates Buyer/Seller Agents and custodial wallets for users, supporting top-ups via cards, banks, and USDC.</p><p>
Its biggest advantage is full compatibility with Web2 (JWT/JWKS, WAF, API Gateway can be used directly), providing "identity-bearing automated paid access" for content sites, data APIs, and tool SaaS.</p><p>Skyfire is a realistically usable Agent Payment middle layer, but identity and asset custody are centralized solutions.</p><h3 id="h-l3-payman-ai-native-fund-authority-risk-control" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 - Payman: AI Native Fund Authority Risk Control</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://paymanai.com/"><u>Payman</u></a> provides four capabilities: <strong>Wallet, Payee, Policy, Approval</strong>, building a governable and auditable "Fund Authority Layer" for AI. AI can execute real payments, but all fund actions must meet quotas, policies, and approval rules set by users. Core interaction is done through the payman.ask() natural language interface, where the system is responsible for intent parsing, policy verification, and payment execution.</p><p>Payman's key value lies in: <strong>"AI can move money, but never oversteps authority."</strong> It migrates enterprise-level fund governance to the AI environment: automated payroll, reimbursement, vendor payments, bulk transfers, etc., can all be completed within clearly defined permission boundaries. 
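</p><p>This governance model reduces to checking every payment against user-set policy before execution. A minimal sketch, with hypothetical rule names that are not taken from Payman's API:</p>

```typescript
// Minimal sketch of a policy gate for agent-initiated payments.
// Rule names and thresholds are hypothetical, for illustration only.
interface Policy {
  dailyLimitCents: number;
  approvalAboveCents: number; // amounts above this need a human
  allowedPayees: Set<string>;
}

type Decision = "execute" | "needs_approval" | "reject";

function decide(
  p: Policy,
  payee: string,
  amountCents: number,
  spentTodayCents: number,
): Decision {
  if (!p.allowedPayees.has(payee)) return "reject";          // unknown payee
  if (spentTodayCents + amountCents > p.dailyLimitCents) return "reject"; // quota
  if (amountCents > p.approvalAboveCents) return "needs_approval"; // human sign-off
  return "execute";
}

const policy: Policy = {
  dailyLimitCents: 100_000,   // $1,000 per day
  approvalAboveCents: 20_000, // human approval above $200
  allowedPayees: new Set(["vendor-a", "payroll"]),
};

console.log(decide(policy, "vendor-a", 5_000, 0));  // "execute"
console.log(decide(policy, "vendor-a", 50_000, 0)); // "needs_approval"
console.log(decide(policy, "unknown", 1_000, 0));   // "reject"
```

<p>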
Payman is suitable for internal financial automation of enterprises and teams (salary, reimbursement, vendor payment, etc.), positioned as a <strong>Controlled Fund Governance Layer</strong>, and does not attempt to build an open Agent-to-Agent payment protocol.</p><h3 id="h-l3-catena-labs-agent-identitypayment-standard" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 - Catena Labs: Agent Identity/Payment Standard</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://catenalabs.com/"><u>Catena</u></a> uses AI-Native financial institutions (custody, clearing, risk control, KYA) as the commercial layer and <strong>ACK (Agent Commerce Kit)</strong> as the standard layer to build the Agent's unified identity protocol (ACK-ID) and Agent-native payment protocol (ACK-Pay). The goal is to fill the missing verifiable identity, authorization chain, and automated payment standards in the machine economy.</p><p>ACK-ID establishes the Agent's ownership chain and authorization chain based on DID/VC; ACK-Pay defines payment request and verifiable receipt formats decoupled from underlying settlement networks (USDC, Bank, Arc). Catena emphasizes long-term cross-ecosystem interoperability, and its role is closer to the <strong>"TLS/EMV layer of the Agent economy"</strong>, with strong standardization and a clear vision.</p><h3 id="h-l3-nevermined-metering-billing-and-micropayment-settlement" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 - Nevermined: Metering, Billing and Micropayment Settlement</strong></h3><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nevermined.ai/"><u>Nevermined</u></a> focuses on the AI <strong>usage-based economic model</strong>, providing Access Control, Metering, Credits System, and Usage Logs for automated metering, pay-per-use, revenue sharing, and auditing. 
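</p><p>The credits model can be sketched as a small meter: each call checks the balance, deducts the call's price, and appends an audit log entry. The API shape below is an assumption for illustration, not Nevermined's SDK:</p>

```typescript
// Toy pay-per-call credits meter with an audit log. Illustrative only;
// it mirrors the metering/billing pattern, not any specific SDK.
interface UsageEntry {
  api: string;
  cost: number;
  balanceAfter: number;
}

class CreditsMeter {
  private balance: number;
  private log: UsageEntry[] = [];

  constructor(initialCredits: number) {
    this.balance = initialCredits;
  }

  // Deduct the call's price if enough credits remain; log the usage.
  charge(api: string, cost: number): boolean {
    if (cost > this.balance) return false; // insufficient credits
    this.balance -= cost;
    this.log.push({ api, cost, balanceAfter: this.balance });
    return true;
  }

  remaining(): number {
    return this.balance;
  }

  auditLog(): readonly UsageEntry[] {
    return this.log;
  }
}

const meter = new CreditsMeter(10);
meter.charge("summarize", 3);
meter.charge("translate", 2);
console.log(meter.remaining());       // 5
console.log(meter.auditLog().length); // 2
```

<p>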
Users can top up credits via Stripe or USDC, and the system automatically verifies usage, deducts fees, and generates auditable logs for each API call.</p><p>Its core value lies in supporting <strong>sub-cent real-time micropayments</strong> and Agent-to-Agent automated settlement, allowing data purchase, API calls, workflow scheduling, etc., to run in a "pay-per-call" manner. Nevermined does not build a new payment rail, but builds a <strong>metering/billing layer on top of payment</strong>: promoting AI SaaS commercialization in the short term, supporting A2A marketplace in the medium term, and potentially becoming the micropayment fabric of the machine economy in the long term.</p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Skyfire</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payman</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Catena</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Nevermined</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Protocol</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Build payment protocol/settlement protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Compatible with Web2 Standards</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Pure API Encapsulation</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" 
data-type="emoji">✔</span> Builds ACK Standard</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Metering Protocol, not Payment Protocol</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payment Access &amp; Settlement</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Agent compliance entry: Bank / Card Org / Stablecoin / KYC/KYB</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> Card + Bank + USDC + KYA</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Funds flow needs external account</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> KYA + Custodial Account + Bank Clearing</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="warning" class="emoji" data-type="emoji">⚠</span> Relies on Stripe/USDC</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Fund Operations</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Wallet / Limits / Approval / Permission</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> Basic wallet limit control</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> Complete Wallet + Policy Rules + Approval Process</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="warning" class="emoji" data-type="emoji">⚠</span> Custody &amp; Risk Control, but not Policy/Approval System</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" 
class="emoji" data-type="emoji">❌</span> No Wallet/Approval</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Metering &amp; Billing</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Metering, Billing &amp; Revenue Sharing</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> Basic</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> Core Strength</p></td></tr></tbody></table><p><br></p><p>Skyfire, Payman, Catena Labs, and Nevermined belong to the business payment layer and all need to connect to banks, card organizations, PSPs, and KYC/KYB to varying degrees. 
But their real value lies not in "accessing fiat" but in solving machine-native needs that traditional finance cannot cover: identity mapping, permission governance, programmatic risk control, and pay-per-use.</p><ul><li><p><strong>Skyfire (Payment Gateway):</strong> Provides "identity + auto-deduction" for websites/APIs (on-chain identity mapped to Web2 identity).</p></li><li><p><strong>Payman (Financial Governance):</strong> Policy, quota, permission, and approval for internal enterprise use (AI can spend money but cannot overstep).</p></li><li><p><strong>Catena Labs (Financial Infrastructure):</strong> Integrates with the banking system, building an "AI compliance bank" through KYA, custody, and clearing services.</p></li><li><p><strong>Nevermined (Cashier):</strong> Handles metering and billing on top of payment; settlement relies on Stripe/USDC.</p></li></ul><p>In contrast, <strong>x402</strong> sits at a lower level and is the only native on-chain payment protocol that does not rely on banks, card organizations, or PSPs: it completes on-chain deduction and settlement directly via the 402 workflow. Upper-layer systems like Skyfire, Payman, and Nevermined can call x402 as a settlement rail, thereby giving Agents a truly automated, native M2M / A2A payment closed loop.</p><h3 id="h-l2-x402-ecosystem-from-client-to-on-chain-settlement" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L2 - x402 Ecosystem: From Client to On-chain Settlement</strong></h3><p>The x402 native payment ecosystem can be divided into four levels: Client, Server, Payment Execution Layer (Facilitators), and Blockchain Settlement Layer. 
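Conceptually, the 402 workflow that ties these levels together is a quote-pay-retry handshake over HTTP: the server refuses with 402 and quotes its payment requirements, the client retries with a signed payment payload, and a facilitator verifies and settles it on-chain. A toy sketch of that loop (the dictionaries and field names are simplified illustrations, not the exact x402 wire format):

```python
# Toy x402-style flow: server returns 402 with payment requirements,
# client retries with a payment payload, facilitator verifies/settles.
def facilitator_settle(payment: dict) -> bool:
    # A real facilitator would verify the signed authorization and
    # submit the transfer on-chain; here we only check the fields.
    return payment.get("amount") == "0.001" and "signature" in payment

def server(request: dict) -> dict:
    payment = request.get("headers", {}).get("X-PAYMENT")
    if payment is None:
        # No payment attached: quote the requirements instead of the resource.
        return {"status": 402,
                "accepts": [{"asset": "USDC", "amount": "0.001", "payTo": "0xSeller"}]}
    if facilitator_settle(payment):
        return {"status": 200, "body": "resource"}
    return {"status": 402, "error": "invalid payment"}

# Client side: the first request is refused, the second carries the payment.
quote = server({"headers": {}})
assert quote["status"] == 402
paid = server({"headers": {"X-PAYMENT": {"amount": "0.001", "signature": "0xabc"}}})
assert paid["status"] == 200
```

The design choice worth noting is that payment and resource delivery are atomically bound inside one HTTP exchange: the seller never ships the resource before the facilitator confirms settlement.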
The Client is responsible for allowing Agents or Apps to initiate payment requests; the Server provides data, reasoning, or storage API services to Agents on a per-use basis; the Payment Execution Layer completes on-chain deduction, verification, and settlement, serving as the core execution engine of the entire process; the Blockchain Settlement Layer undertakes the final token deduction and on-chain confirmation, realizing tamper-proof payment finality.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c0912e52c40d78576be95b1c1ea11cf97cec5376315e62d08d6dfd6bf16a8bae.png" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEHElEQVR4nJWUb0wbdRjHf74zvtoLo3Fv1Bh96wtegQ6iJr4zcQpT35gYE15saGRuiyAwLBuUchy0vf67K71bb/XKXcsdl+Oa48qVlWOFdi2s4MTNLggicSzglBVzxJ4pNzvXIsMn31x+eXL5ffJ8n+f3AP3RWFhYJMlAPr+z9dsf9+5tb98vHvL5nXFZQRyefH6nJE3bzeWWKYoewi/Nz13XtN1SfmVl1YN6MWxoY+MOKAPoeqGte8gEk5tbv8+kFwVRuZX7+YcfV3EqYkPDgVC0TJhfcPq4yjxORXLLa7peANv3d3Rd1/Yin88XU09XA3Bk9faiB/VCECSKoizLKIqa7ZTTx5UJdjMWJOgi+LK8i+B/3djUdb28Ak3bHSK5MXGcoqhes9lqtdqssAsjAQAXBkicirgIHvMLJdm9LOxmvOTDTEkPAEeqGiLjV+w2G0lezmQy6cy8y8e2nu/v7vMY6oHQcx394JnXzHaq8qLHAwB4ieGkpqaTXV0XWI41dV2U42lRSY5NzLp8LOygaC4mXUnLsZRhBeYX3PioGx81brGhYQgZPghQMuevQuFPbXeP+Sw4eszj6Kuqqqqpqa6vfz9Akk3NX5NBiZcSOBUhGdmQ4RjsZipvfwgoNfmfb+FEo+nzNkQYT4wIU7yUYEYnaS7Giyp47ljDpx3f3VjE/dSg1RFXrwpitMvstCDBQ1VgRD6/I8ozBCXZvey/R4UdUwF4/vV3Tq3+dBPHcRRFVVWdnIyfaTFbkCBORQ4L0LRdnIo4fVxlMzEfkc2mJHnC4XBAUH9yZrqzxwkA6HfR/w8QCEUrZ1GUZwB4BTzxcjqprv2yvrS0pKrT19JpmotZkOBBPdgXUFaB08fRXOypV9+refvjzbvrU+o0z/PZbPZaOpO9cdv4x0XwOBXBaclF8MVy/cLG3a3DAuwPKnjyxeqP5jOJQSvSBw0IY2IozH3W3HKi0RQS4nQ40maCWjrMdFiSlNRjAGUWYXs6+ZUVdjOBUJRiFYpVjEndaz548/ipCVk4c/Zcc3MzSZKyLDmdnpW1O/sDSEauBNi9rDFXEDJsrBobGrahYZqLtZyHlVh8PpNBUXRg0HpzaaG92wPAUeMB/KdFRZcCojEepaWmqHO55fWrs9dHRqXc8nome+uN+tNftkJONICg3w4il2CEIMhw4xc94IW3DgIEQtERYarPSl7s8xJUh
JcS7Jh6trUXw4aCQWbQirS0ttPDDOLwAgBOd7pCQtw3LOF0Ub5hiWIVOZbad4oKuq7DVsxPXu7p6a6tra2rq/vwg4b2b3oBAJ0WwlgMRQMDomFaIBS1IEHjXBLsZmA3s3+TdV1PzM7LSmJSTU/PZJPp7xPJhVEx/u4nHXIsJSmPSFSSkpKamMpU5hV1TtvbbH8DVXoSuL4wWN0AAAAASUVORK5CYII=" nextheight="738" nextwidth="1222" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center">x402 Payment Flow Source: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402.org/x402-whitepaper.pdf"><u>x402 Whitepaper</u></a></p><ul><li><p><strong>Client-Side Integrations / The Payers:</strong> Enable Agents or Apps to initiate x402 payment requests, the "starting point" of the entire payment process. Representative projects:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://portal.thirdweb.com/"><u>thirdweb Client SDK</u></a>: The most commonly used x402 client standard in the ecosystem, actively maintained, multi-chain support, default tool for developers to integrate x402.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nuwa.dev/"><u>Nuwa AI</u></a>: Enables AI to directly pay for x402 services without coding, representative project of "Agent Payment Entrance".</p></li><li><p>Others like Axios/Fetch, Mogami Java SDK, Tweazy are early clients.</p></li><li><p><em>Current status:</em> Existing clients are still in the "SDK Era", essentially developer tools. More advanced forms like Browser/OS clients, Robot/IoT clients, or Enterprise systems managing multi-wallet/multi-Facilitator have not yet appeared.</p></li></ul></li><li><p><strong>Services / Endpoints / The Sellers:</strong> Sell data, storage, or reasoning services to Agents on a per-use basis. 
Representative projects:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aisa.one/"><u>AIsa</u></a>: Provides payment and settlement infrastructure for real AI Agents to access data, content, compute, and third-party services on a per-call, per-token, or usage basis; currently the top project by x402 request volume.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.firecrawl.dev/"><u>Firecrawl</u></a>: The web-parsing and structured-crawler entry point most frequently consumed by AI Agents.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://402.pinata.cloud/"><u>Pinata</u></a>: Mainstream Web3 storage infrastructure; x402 covers real underlying storage costs, not a lightweight API.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.itsgloria.ai/"><u>Gloria AI</u></a>: Provides high-frequency real-time news and structured market signals, an intelligence source for trading and analytical Agents.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aeon.xyz/AIPayment"><u>AEON</u></a>: Extends x402 + USDC to online &amp; offline merchant acquiring in Southeast Asia / LatAm / Africa, reaching up to 50 million merchants.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://neynar.com/blog/agents-frames-and-the-future-of-farcaster-neynar-s-vision-for-x402"><u>Neynar</u></a>: Farcaster social graph infrastructure, opening social data to Agents via x402.</p></li><li><p><em>Current status:</em> The server side is concentrated in crawler/storage/news APIs. 
Critical layers like financial transaction execution APIs, ad delivery APIs, Web2 SaaS gateways, or APIs executing real-world tasks are almost undeveloped.</p></li></ul></li><li><p><strong>Facilitators / The Processors:</strong> Complete on-chain deduction, verification, and settlement. The core execution engine of x402. Representative projects:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.cdp.coinbase.com/x402/welcome"><u>Coinbase Facilitator (CDP)</u></a>: Enterprise-grade trusted executor, Base mainnet zero fees + built-in OFAC/KYT, strongest choice for production environment.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://payai.network/"><u>PayAI Facilitator</u></a>: Execution layer project with widest multi-chain coverage and fastest growth (Solana, Polygon, Base, Avalanche, etc.), highest usage multi-chain Facilitator in the ecosystem.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://facilitator.daydreams.systems/"><u>Daydreams</u></a>: Project combining payment execution with LLM reasoning routing, currently the fastest-growing "AI Reasoning Payment Executor", becoming the third pole in the x402 ecosystem.</p></li><li><p>Others: According to<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402scan.com/facilitators"> <u>x402scan</u></a> data, there are long-tail Facilitators/Routers like Dexter, Virtuals Protocol, OpenX402, CodeNut, Heurist, Thirdweb, etc., but volume is significantly lower than the top three.</p></li></ul></li><li><p><strong>Blockchain Settlement Layer:</strong> The final destination of the x402 payment workflow. 
Responsible for actual token deduction and on-chain confirmation.</p><ul><li><p><strong>Base:</strong> Promoted by CDP official Facilitator, USDC native, stable fees, currently the settlement network with the largest transaction volume and number of sellers.</p></li><li><p><strong>Solana:</strong> Key support from multi-chain Facilitators like PayAI, fastest growing in high-frequency reasoning and real-time API scenarios due to high throughput and low latency.</p></li><li><p><em>Trend:</em> The chain itself doesn't participate in payment logic. With more Facilitators expanding, x402's settlement layer will show a stronger multi-chain trend.</p></li></ul></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p><strong>Role Positioning</strong></p></td><td colspan="1" rowspan="1"><p><strong>Representative Projects</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Client-Side Integrations / The Payers</strong></p></td><td colspan="1" rowspan="1"><p>Let Agent or App initiate x402 payment request</p></td><td colspan="1" rowspan="1"><p>Construct 402 Request Header, responsible for initiating payment call; not responsible for deduction/verification</p></td><td colspan="1" rowspan="1"><p><strong>thirdweb Client SDK</strong> — Industry standard, multi-chain support, developer default choice</p><p><br></p><p><strong>Nuwa AI</strong> — Strongest Agent client, lets AI directly pay for services</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Services / Endpoints / The Sellers</strong></p></td><td colspan="1" rowspan="1"><p>Provide pay-per-use API / Content services to Agent</p></td><td colspan="1" rowspan="1"><p>Merchant side: Charge per call, return data after verifying payment</p></td><td colspan="1" rowspan="1"><p><strong>Firecrawl</strong> — Strongest 
"Killer Service", crawler/parsing API most consumed by AI</p><p><br></p><p><strong>Pinata</strong> — Web3 infrastructure giant, proving x402 can cover real infrastructure costs</p><p><br></p><p><strong>AIsa</strong> — Provides paid API-call &amp; settlement infrastructure for AI Agents; currently No. 1 by x402 request volume</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Facilitators / The Processors</strong></p></td><td colspan="1" rowspan="1"><p>Execute on-chain payment → Verify payment → Return proof</p></td><td colspan="1" rowspan="1"><p>x402's "Payment Executor"; truly helps the Agent send money</p></td><td colspan="1" rowspan="1"><p><strong>Coinbase Facilitator</strong> — Enterprise-grade trusted, Base zero fees + OFAC/KYT</p><p><br></p><p><strong>PayAI Facilitator</strong> — Fastest growth, best Solana multi-chain support</p></td></tr></tbody></table><p><br>In the x402 payment system, the <strong>Facilitator</strong> is the only role that truly executes on-chain payments and is closest to "protocol-level revenue": it verifies payment authorizations, submits and tracks on-chain transactions, generates auditable settlement proofs, and handles replay, timeouts, multi-chain compatibility, and basic compliance checks. Unlike Client SDKs (Payers) and API Servers (Sellers), which only handle HTTP requests, it is the final clearing outlet for all M2M/A2A transactions, controlling the traffic entrance and the right to charge for settlement, and thus sits at the core of value capture in the Agent economy.</p><p>However, the reality is that most projects are still at the testnet or small-scale demo stage: essentially lightweight "Payment Executors" that lack moats in key capabilities like identity, billing, risk control, and multi-chain steady-state handling, with low barriers to entry and heavy homogeneity. 
As the ecosystem matures, facilitators backed by Coinbase, with strong advantages in stability and compliance, do enjoy a clear early lead. However, as <strong>CDP facilitator</strong>s begin charging fees while others may remain free or experiment with alternative monetization models, the overall market structure and share distribution still have significant room to evolve. In the long run, x402 is still an interface layer and cannot carry core value. What truly possesses sustainable competitiveness are comprehensive platforms capable of building <strong>identity, billing, risk control, and compliance systems</strong> on top of settlement capabilities.</p><h3 id="h-l2-virtual-agent-commerce-protocol" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L2 - Virtual Agent Commerce Protocol</strong></h3><p>Virtual's<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://app.virtuals.io/research/agent-commerce-protocol"> <u>Agent Commerce Protocol (ACP)</u></a> provides a common commercial interaction standard for autonomous AI. Through a four-stage process of <strong>Request → Negotiation → Transaction → Evaluation</strong>, it enables independent agents to request services, negotiate terms, complete transactions, and accept quality assessments in a secure and verifiable manner. ACP uses blockchain as a trusted execution layer to ensure the interaction process is auditable and tamper-proof, and establishes an incentive-driven reputation system by introducing Evaluator Agents, allowing heterogeneous and independent professional Agents to form an "autonomous commercial body" and conduct sustainable economic activities without central coordination. Currently, ACP has moved beyond the purely experimental stage. 
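ACP's Request → Negotiation → Transaction → Evaluation sequence is essentially a small, strictly ordered state machine between buyer, seller, and evaluator agents, with each transition leaving an auditable trail. A minimal sketch (the stage names come from ACP; the class and its methods are illustrative, not the protocol's actual interface):

```python
STAGES = ["request", "negotiation", "transaction", "evaluation"]

class ACPJob:
    """Toy ACP-style job: advances one stage at a time, strictly in order."""
    def __init__(self) -> None:
        self.stage = 0
        self.history = []          # on-chain, this trail would be tamper-proof

    def advance(self, evidence: str) -> str:
        # Record evidence for the current stage, then move to the next one.
        if self.stage >= len(STAGES):
            raise RuntimeError("job already evaluated")
        self.history.append((STAGES[self.stage], evidence))
        self.stage += 1
        return STAGES[self.stage - 1]

job = ACPJob()
assert job.advance("buyer requests service") == "request"
assert job.advance("terms agreed") == "negotiation"
assert job.advance("payment escrowed, work delivered") == "transaction"
assert job.advance("evaluator scores delivery") == "evaluation"
```

The evaluation stage is what feeds the incentive-driven reputation system: an Evaluator Agent scores the delivery, and that score, not a central coordinator, is what future counterparties rely on.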
Adoption across the Virtuals ecosystem suggests early network effects, making ACP look like more than a "multi-agent commercial interaction standard".</p><h3 id="h-l1-infrastructure-layer-emerging-agent-native-payment-chain" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L1 Infrastructure Layer - Emerging Agent Native Payment Chain</strong></h3><p>Mainstream general-purpose public chains like Ethereum, Base (EVM), and Solana provide the core execution environment, account system, state machine, security, and settlement foundation for Agents, with mature account models, stablecoin ecosystems, and broad developer bases.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gokite.ai/"><strong><u>Kite AI</u></strong></a> is a representative "Agent-native L1", designing its underlying execution environment specifically for Agent payment, identity, and permissioning. Its core is the <strong>SPACE framework</strong> (Stablecoin native, Programmable constraints, Agent-first certification, Compliance audit, Economically viable micropayments), and it implements fine-grained risk isolation through a three-layer Root→Agent→Session key system. Combined with optimized state channels that form an "Agent-native payment railway", it drives costs down to $0.000001 and latency to the hundred-millisecond level, making API-level high-frequency micropayments feasible. 
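The Root→Agent→Session hierarchy is a progressive narrowing of authority: each derived key can spend at most what its parent allows, so a compromised session key has a bounded blast radius. A schematic of that idea (illustrative only; Kite's actual key scheme is cryptographic, not a spend-limit class like this):

```python
class Key:
    """Toy delegation chain: a child key's limit never exceeds its parent's."""
    def __init__(self, level: str, limit_usd: float, parent: "Key" = None):
        if parent is not None and limit_usd > parent.limit_usd:
            raise ValueError("child key cannot exceed its parent's limit")
        self.level, self.limit_usd, self.parent = level, limit_usd, parent

    def derive(self, level: str, limit_usd: float) -> "Key":
        # Derivation only ever narrows authority.
        return Key(level, limit_usd, parent=self)

    def authorize(self, amount_usd: float) -> bool:
        # A leaked session key is bounded by its own (small) limit.
        return amount_usd <= self.limit_usd

root = Key("root", limit_usd=10_000.0)
agent = root.derive("agent", limit_usd=100.0)
session = agent.derive("session", limit_usd=0.50)
assert session.authorize(0.000001)       # micropayment is fine
assert not session.authorize(5.0)        # blast radius capped at $0.50
```

This is the risk-isolation property the three-layer design is after: high-frequency micropayments run on disposable session keys, while the root key, which could move real money, never touches the hot path.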
As a general execution layer, Kite is upward compatible with x402, Google A2A, Anthropic MCP, and downward compatible with OAuth 2.1, aiming to become a unified Agent payment and identity base connecting Web2 and Web3.</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aisa.one/"><strong><u>AIsaNet </u></strong></a>integrates x402 and<strong> L402</strong> (the Lightning Network–based 402 payment protocol standard developed by <strong>Lightning Labs</strong>) as a micro-payment and settlement layer for AI Agents, supporting high-frequency transactions, cross-protocol call coordination, settlement path selection, and transaction routing, enabling Agents to perform cross-service, cross-chain automated payments without understanding the underlying complexity.</p><h2 id="h-v-summary-and-outlook-from-payment-protocols-to-reconstruction-of-machine-economic-order" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Summary and Outlook: From Payment Protocols to Reconstruction of Machine Economic Order</strong></h2><p><br><strong>Agentic Commerce</strong> is the establishment of a completely new economic order dominated by machines. It is not as simple as "AI placing orders automatically", but a reconstruction of the entire cross-subject link: how services are discovered, how credibility is established, how orders are expressed, how permissions are authorized, how value is cleared, and who bears disputes. The emergence of A2A, MCP, ACP, AP2, ERC-8004, and x402 standardizes the "commercial closed loop between machines".</p><p>Along this evolutionary path, future payment infrastructure will diverge into two parallel tracks: one is the <strong>Business Governance Track</strong> based on traditional fiat logic, and the other is the <strong>Native Settlement Track</strong> based on the x402 protocol. 
The value capture logic between the two is different.</p><h4 id="h-1-business-governance-track-web3-business-payment-system-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>1. Business Governance Track: Web3 Business Payment System Layer</strong></h4><ul><li><p><strong>Applicable Scenarios:</strong> Low-frequency, non-micropayment real-world transactions (e.g., procurement, SaaS subscription, physical e-commerce).</p></li><li><p><strong>Core Logic:</strong> Traditional fiat will dominate for a long time. Agents are just smarter front-ends and process coordinators, not replacements for Stripe / Card Organizations / Bank Transfers. The hard obstacles for stablecoins to enter the real commercial world on a large scale are regulation and taxation.</p></li><li><p>The value of projects like Skyfire, Payman, Catena Labs lies not in underlying payment routing (usually done by Stripe/Circle), but in <strong>"Machine Governance-as-a-Service"</strong>. That is, solving machine-native needs that traditional finance cannot cover—identity mapping, permission governance, programmatic risk control, liability attribution, and M2M / A2A micropayment (settlement per token / second). The key is who can become the "AI Financial Steward" trusted by enterprises.</p></li></ul><h4 id="h-2-native-settlement-track-x402-protocol-ecosystem-and-the-endgame-of-facilitators" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>2. Native Settlement Track: x402 Protocol Ecosystem and the Endgame of Facilitators</strong></h4><ul><li><p><strong>Applicable Scenarios:</strong> High-frequency, micropayment, M2M/A2A digital native transactions (API billing, resource stream payments).</p></li><li><p><strong>Core Logic:</strong> x402 as an open standard achieves atomic binding of payment and resources through the HTTP 402 status code. 
In programmable micropayment and M2M / A2A scenarios, x402 is currently the protocol with the most complete ecosystem and the most advanced implementation (HTTP-native + on-chain settlement). Its status in the Agent economy is expected to be analogous to <strong>'Stripe for agents'</strong>.</p></li><li><p>Simply plugging into x402 on the Client or Service side does not by itself command a sector premium; what truly has growth potential are upper-layer assets that accumulate long-term repeat purchases and high-frequency calls, such as OS-level Agent clients, Robot/IoT wallets, and high-value API services (market data, GPU reasoning, real-world task execution, etc.).</p></li><li><p>The <strong>Facilitator</strong>, as the protocol gateway that helps Client and Server complete the payment handshake, invoice generation, and fund clearing, controls both traffic and settlement fees, and is the link closest to "revenue" in the current x402 stack. Most Facilitators are essentially just "Payment Executors", with low barriers to entry and heavy homogeneity. Giants with availability and compliance advantages (like Coinbase) will form a dominant pattern. 
The core value to avoid marginalization will move up to the <strong>"Facilitator + X" service layer</strong>: providing high-margin capabilities such as arbitration, risk control, and treasury management by building verifiable service catalogs and reputation systems.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/8859b6f2e53a742e1fee38ad8a09e84dc2a92766a54e5f7093324810c6b1e2e5.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF9klEQVR4nEWTaVBTVxTHLy8vIUQIkBAICWR7LwsEDEsSlpeY9WWHELMQMIgiW6AJCVoIVpAiUosLVqFV0XG0daPFaqt2qtKhTu0ynXa0jvrBWmf6wXa6zLRf2s70w+u80E5n7pw59907//87v3sOAEAMIAmAJWQE6UhuUYghozDLYFYFlVNNLaylFmpo3DpakZZapEkvNZVTDbNVcF4lJVcJMeQARkgpUgEhI5mLIboCkBtYmkGXkTfW1GkIJUcB55VT2aQ6jaulcevoPCyLp2fwDAy+ns7D6Dwsk1tPOnGqqez1lFwlJUcBaAipDiMARtN+ItIYQChEV0AMBfkVIg8oOfK0dBWNq6UXNzD4unX8DcxSS64QZ5Zac4U4mQjNOaVGOg8jy+JqqZwqmFUJZcvXRDJoUjIBElKWtKKtVScAgA+AKCMThbKklFwlXKCmFtYzig15QjtL5MoXOSWaraLaznyxnY04maXWnBJzFk8PF6jhfLIIEgMJR0guEjjyHyIIKRbo3L7xjZEZh2+sJTLTFp1v6Zpzbdrr752XqNsLELegKsKvCCl0g1rPCF/ZXIA2cWX+fJGThTgmt08MdMV39Cd3Dm7vCmzrCm7t8EYySEQSKKuMNIAoUv2GAG7vxHRBC77ZaN9i8fS5gwm77wVjc9zVPlqs9Jcb4vW+aVvPMa0npTNvqsL65apulsjNKLJgjj7cn9C7ttma+l2uLr11U63GQ1aQgUKMMgAyUAAKWrt2L918wBXo5y/eNbYkKPk1m0cXb9z7+eCZW2zEWW6I1zrHseC+jS++E9w6ZXb1aPV70PXbSio8x/dM/Xp9OdLcD7M100M7ncZ4yB2cjs/msRoAKEm/AZCUGwbevP/Xx78QO5eeLD0lpq7/ePHL51//8OfTv4nfCeLOo+8vfvIsPndD45nwJi7g4WmDO35h525iaZ64dop4cof44mPi2hli/u630w8udH312UvfrKYeHQ+smNH+dBfB0mpLtNY53Nw7E4gd0IdGK+zx2K7D++ZeH9879/LsG4cWL63eu//wN2Jx9bk7ehbzTxLv7yc+v/LH7LVzAyeO9B4+sOWVs0OHJlunOkz7Bjyzfm1qk3G8z74niL2YRgRL195dWNpYrXKyeDWAKvEEBhdPL88dv3R6aSWy42j9xpHrt1fv/0Wkzj95tjhD3HvvbnTFbhy0+IcjiVdUnlidb0hcF5BhXrW1lSfToZX4eq2TK8Qy6HIAAAIo/7cXR+bO5ut5ymZ3ZDwU2+8aOKDveFUXfPXQyUvXHv50+TuCuH+CWLh6vO28zdnZ4YvanZ26DWHcGCjgWyW8ZjhHmZ5kUbrpxZTs8rVJRpmcWr4AKxZigjKHtMpXjUf5lV65rlMfnhDVdEgbeoY
Ovn36o8dnbj96unLz8fIHJ0YOKFQhRX17W2QQd3ZgupbGekd5hU0uNZeUYBLExONjmbkV/85BJlNZqfLg9s190dHunu3R2ERydCYam/S2D+O+gXzEzlW2VNqi567cvfDunesffHb58orRFCmSOhSNEXf7iKklGu7b1Ts0NZCcSoxMNwWiJkvEZt8iQk2AuoYIIsea5MPVoDKrpMwiVznK1V5qoYZerOdIPdwyn0zXq2mZsPYetXQfqW1KoQ3dxUp/vtiexdPBrGq2SCev8ZQgGypUnmx2dZo2AiApSWxNHaLIKOvKAVUGqHJaIZYnxktrwlXuYbVvzJtcnD7/aWLhw+TCreSxldjCrcGjN1InV8dO3bH1v6b2pVTuZGlNOE9soxU1ZmTKAZWUgsh/F6QRUWVsoTVfYmGj9lq8xxoaafDGzeFUo29YqGkXqNvKzP3RXcdS+9/ad+K92cWrU/PLexeWZxev9owtVOBRgbpNXB9p8CXwjjFTeMQQSOqa43xlS3apwWAfYBbWAYiu4CJ4gcTELKnX6ltDkR0uT+/GUNzW1M0WmziIrVDqZArNztZhV9t2qz9m9cccwYSjNZkrthTJHBzUxpKYjY5tgbZkKLKjKTDo8EaLUEsWHwt3TRWW6P8B6MW96SA6cagAAAAASUVORK5CYII=" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>We believe that a <strong>"Dual-Track Parallel of Fiat System and Stablecoin System"</strong> will form in the future: the former supports mainstream human commerce, while the latter carries machine-native and on-chain native high-frequency, cross-border, and micropayment scenarios. The role of Web3 is not to replace traditional payments, but to provide underlying capabilities of <strong>Verifiable Identity, Programmable Clearing, and Global Stablecoins</strong> for the Agent era. Ultimately, Agentic Commerce is not limited to payment optimization, but is a reconstruction of the machine economic order. When billions of micro-transactions are automatically completed by Agents in the background, those protocols and companies that first provide trust, coordination, and optimization capabilities will become the core forces of the next generation of global commercial infrastructure.</p><hr><p><strong>Disclaimer:</strong> <em>This article was completed with the assistance of AI tools ChatGPT-5 and Gemini 3 during the creation process. 
The author has made every effort to proofread the content and ensure its accuracy, but omissions may remain. Note that in crypto asset markets, project fundamentals and secondary-market price performance often diverge. This article is intended for information aggregation and academic/research exchange only; it does not constitute investment advice and should not be read as a recommendation to buy or sell any token.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>agenticcommerce</category>
            <category>agentpayment</category>
            <category>agent</category>
            <category>m2m</category>
            <category>x402</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/b76c4ce0d718d92f41bc12a2958fefd2d0f05982355bad8fc6af1f9b049eeb5b.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Economic Order of Machines: The Full-Stack Path of Agentic Commerce]]></title>
            <link>https://paragraph.com/@0xjacobzhao/机器的经济秩序：智能体商业的全栈路径</link>
            <guid>gVOnooult9VPIoIMWBc5</guid>
            <pubDate>Tue, 16 Dec 2025 04:05:03 GMT</pubDate>
            <description><![CDATA[This independent research report was supported by IOSG Ventures. The research and writing were inspired by related reports from Raghav Agarwal@LongHash and Jay Yu@Pantera; thanks to Lex Sokolin @ Generative Ventures, Jordan@AIsa, and Ivy@《支无不言》blog for their valuable suggestions. Feedback was also solicited from the Nevermined, Skyfire, Virtuals Protocol, AIsa, Heurist, and AEON project teams. The article strives to be objective and accurate; some views involve subjective judgment, and readers' understanding of any resulting bias is appreciated. Agentic Commerce refers to a full-process commercial system in which AI agents autonomously complete service discovery, credibility assessment, order generation, payment authorization, and final settlement. It no longer relies on step-by-step human operation or input; instead, agents automatically collaborate, order, pay, and fulfill across platforms and systems, forming an autonomously executed machine-to-machine commercial closed loop (M2M Commerce). In crypto, the scenarios with the most practical value today are concentrated in stablecoin payments and DeFi. Accordingly, the two most valuable paths for the convergence of Crypto and AI are: in the short term, relying on existing...]]></description>
<content:encoded><![CDATA[<p style="text-align: center"><em>This independent research report was supported by </em><strong><em>IOSG Ventures</em></strong><em>. The research and writing were inspired by related reports from </em><strong><em>Raghav Agarwal</em></strong><em>@</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.longhash.vc/post/agentic-commerce-why-x402-is-just-the-beginning"><em><u>LongHash</u></em></a><em> and </em><strong><em>Jay Yu</em></strong><em>@</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://panteracapital.com/http-402s-modern-makeover/"><em><u>Pantera</u></em></a><em>. Thanks to </em><strong><em>Lex Sokolin @ Generative Ventures, Jordan@AIsa, and Ivy@《支无不言》blog</em></strong><em> for their valuable suggestions. Feedback was also solicited from the </em><strong><em>Nevermined, Skyfire, Virtuals Protocol, AIsa, Heurist, and AEON</em></strong><em> project teams. This article strives to be objective and accurate; some views involve subjective judgment, and any resulting bias is unavoidable, for which the reader's understanding is appreciated.</em></p><p><strong>Agentic Commerce</strong> refers to a full-process commercial system in which AI agents autonomously complete service discovery, credibility assessment, order generation, payment authorization, and final settlement. It no longer relies on step-by-step human operation or input; instead, agents automatically collaborate, order, pay, and fulfill across platforms and systems, forming an autonomously executed machine-to-machine commercial closed loop (M2M Commerce).<br></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5bef393dc615c4270897abfcead9b3270d109e6fa38d4d4d2bac9b20e5849709.png" 
nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure>
<p>Within crypto, the scenarios with the most practical value today are concentrated in <strong>stablecoin payments</strong> and <strong>DeFi</strong>. Accordingly, the two most valuable paths for the convergence of Crypto and AI are: in the short term, <strong>AgentFi</strong> built on existing mature DeFi protocols; and in the medium to long term, <strong>Agent Payment</strong>, centered on stablecoin settlement and maturing protocols such as ACP, AP2, x402, and ERC-8004.</p>
<p>In the short term, Agentic Commerce is constrained by protocol maturity, regulatory divergence, and merchant and user acceptance, making rapid scaling difficult. In the long run, however, payments are the foundational anchor of every commercial loop, which gives agentic commerce the greatest long-term value.</p>
<h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. The Agentic Commerce Payment Stack and Its Application Scenarios</strong></h2>
<p>In the agentic commerce ecosystem, the real-world merchant network is the largest value scenario. However AI Agents evolve, the <strong>traditional fiat payment system (Stripe, Visa, Mastercard, bank transfers)</strong> and the fast-growing <strong>stablecoin system (USDC, x402)</strong> will coexist for a long time, together forming the foundation of agentic commerce.</p>
<h3 id="h-vs" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Traditional Fiat Payments vs. Stablecoin Payments</strong><br></h3>
<table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Traditional fiat payments (Stripe)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stablecoin payments (x402 / USDC)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strengths</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">- Extremely high merchant coverage</p><p style="text-align: center">- Smooth UX, no wallet required</p><p style="text-align: center">- Mature compliance and risk controls</p><p style="text-align: center">- Supports refunds/chargebacks</p></td><td colspan="1" rowspan="1"><p style="text-align: center">- Globally unified, borderless</p><p style="text-align: center">- Very low cost (&lt;0.1%), instant settlement</p><p style="text-align: center">- Highly programmable (smart contracts, automated settlement)</p><p style="text-align: center">- Native support for M2M micropayments</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Weaknesses</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">- High fees (2–4% + FX)</p><p style="text-align: center">- Complex cross-border flows, slow clearing (T+1 to T+3)</p><p style="text-align: center">- Weak programmability</p><p style="text-align: center">- Cannot support machine-scale payments</p></td><td colspan="1" rowspan="1"><p style="text-align: center">- Very low merchant adoption</p><p style="text-align: center">- High user friction (wallets/gas)</p><p style="text-align: center">- Fragmented regulation, complex taxation</p><p style="text-align: center">- No chargeback mechanism (dispute handling must be built in-house)</p></td></tr></tbody></table>
<p><br>Real-world merchants, from e-commerce, subscriptions, and SaaS to travel, paid content, and enterprise procurement, carry trillion-dollar demand and are the core source of value for AI Agents that automatically compare prices, renew subscriptions, and procure. In the near term, mainstream consumer and enterprise spending will <strong>remain dominated by the traditional fiat payment system</strong>.</p>
<p>The core obstacles keeping stablecoins from scaling in real-world commerce are not merely technical: they are <strong>regulation (KYC/AML, taxation, consumer protection), merchant accounting (stablecoins are not legal tender)</strong>, and <strong>the absence of dispute resolution inherent to irreversible payments</strong>. Given these structural constraints, stablecoins will struggle in the short term to enter heavily regulated sectors such as healthcare, aviation, e-commerce, government, and utilities. Their adoption will concentrate in <strong>digital content, cross-border payments, Web3-native services, and the machine economy (M2M/IoT/Agent)</strong>, scenarios that face lower regulatory pressure or are on-chain native. This is precisely the window in which Web3-native agentic commerce can achieve scale first.</p>
<p>That said, regulatory institutionalization advanced rapidly in 2025: the US stablecoin bill reached bipartisan consensus, Hong Kong and Singapore rolled out stablecoin licensing frameworks, the EU's MiCA formally took effect, Stripe added USDC support, and PayPal launched PYUSD. A clearer regulatory structure means stablecoins are being absorbed into the mainstream financial system, opening policy room for cross-border settlement, B2B procurement, and the machine economy.</p>
<h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Best-Fit Scenarios for Agentic Commerce</strong><br></h3>
<table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Scenario category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Typical sub-scenarios</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key characteristics</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payment rail</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Why</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center"><strong>A. Digital-native</strong></p><p style="text-align: center"><strong>(AI / Machine)</strong></p><p style="text-align: center">First to break out</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Digital Services</strong></p><p style="text-align: center"><strong>(API/SaaS/Compute)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Purely digital, per-call billing, enterprise procurement</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Traditional payments primary, stablecoins supplementary</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Providers are deeply tied to Stripe; enterprises need invoices/terms/refunds</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Multi-agent &amp; M2M Commerce:</strong></p><p style="text-align: center">Multi-agent collaboration, M2M micropayments, IoT, robotics, streaming browser payments</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Machine-to-machine, small-value and high-frequency, second-level settlement</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stablecoins</strong></p><p style="text-align: center"><strong>only rational option</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payments carry high fees and need humans in the loop;</p><p style="text-align: center">stablecoins support automated, real-time micropayments</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>DeFi / AgentFi </strong>on-chain lending, market making, yield-strategy execution</p></td><td colspan="1" rowspan="1"><p style="text-align: center">On-chain native</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stablecoins / Crypto</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Traditional payments cannot enter</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>B. Digital virtual goods (fast-growing)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">In-game purchases, virtual items, memberships, digital assets</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low ticket size, global user base</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Traditional payments dominant; stablecoins advantaged cross-border</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Platforms rely mainly on card networks; stablecoins suit cross-border</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>C. Real-world commerce (long-term)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Flights, hotels, e-commerce, food delivery, pharmaceuticals, offline retail</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Logistics + regulation + refund systems</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Traditional fiat payments dominant long-term</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Involves taxation, chargebacks, and regulatory compliance; stablecoins will struggle to enter these scenarios near-term</p></td></tr></tbody></table>
<p><br>The core of agentic commerce is not one payment rail replacing another, but handing execution of the "<strong>order, authorize, pay</strong>" loop to AI Agents, letting the traditional fiat system (AP2, authorization credentials, identity compliance) and the stablecoin system (x402, CCTP, smart-contract settlement) each play to their strengths. It is neither a zero-sum fiat-versus-stablecoin contest nor a single-rail replacement narrative, but a structural opportunity that expands both: fiat payments continue to underpin human commerce, while stablecoin payments accelerate machine-native and on-chain-native scenarios, the two complementing each other as the twin engines of the agent economy.</p>
<h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. A Panorama of Agentic Commerce Protocol Standards</strong></h2>
<p>The agentic commerce protocol stack consists of six layers, forming a complete machine-commerce pipeline from capability discovery to payment and delivery. <strong>A2A Catalog </strong>and <strong>MCP Registry </strong>handle capability discovery, while <strong>ERC-8004 </strong>provides verifiable on-chain identity and reputation; <strong>ACP </strong>and <strong>AP2 </strong>carry structured ordering and authorization mandates respectively; the payment layer runs the <strong>traditional fiat rail (AP2)</strong> and the <strong>stablecoin rail (x402)</strong> in parallel; and the delivery layer still has no unified standard.</p>
<br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/cecb1d1982d8df5a4789eaead2e90cace5f5faffb639fc67b84c8861fb2bfd38.png" 
nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure>
<ul><li><p><strong>Discovery Layer</strong>: answers "how does an Agent discover and understand callable services?" On the AI side, A2A Catalog and MCP Registry build standardized capability directories; on the Web3 side, ERC-8004 provides addressable identity pointers. This layer is the entry point of the entire stack.</p></li><li><p><strong>Trust Layer</strong>: answers "can the counterparty be trusted?" The AI side has no universal standard yet; Web3 uses ERC-8004 to build a unified framework of verifiable identity, reputation, and execution records, which is a key Web3 advantage.</p></li><li><p><strong>Ordering Layer</strong>: governs "how an order is expressed and validated." ACP (OpenAI × Stripe) provides structured descriptions of goods, prices, and settlement terms so merchants can fulfill. Because on-chain systems struggle to express real-world commercial contracts, this layer is largely Web2-led.</p></li><li><p><strong>Authorization Layer</strong>: handles "has the Agent obtained the user's lawful authorization?" AP2 binds intent, confirmation, and payment authorization to real-world identity systems via verifiable credentials. Web3 signatures do not yet carry legal force, so they cannot bear this layer's contractual and compliance responsibilities.</p></li><li><p><strong>Payment Layer</strong>: decides "which rail completes the payment." AP2 covers traditional networks such as cards and banks; x402 provides a stablecoin-native API payment interface that lets assets like USDC be embedded in automated calls. The two rails are functionally complementary here.</p></li><li><p><strong>Fulfillment Layer</strong>: answers "how is content delivered safely after payment?" No unified protocol exists today: real-world delivery relies on merchant systems, and Web3's cryptographic access control has not yet formed a cross-ecosystem standard. This layer remains the stack's biggest gap, and the most likely birthplace of the next generation of foundational protocols.</p></li></ul>
<h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Key Core Protocols of Agentic Commerce in Detail</strong></h2>
<p>Around the five key links of agentic commerce (<strong>service discovery, trust assessment, structured ordering, payment authorization, and final settlement</strong>), Google, Anthropic, OpenAI, Stripe, Ethereum, Coinbase, and others have each proposed underlying protocols at the corresponding link, together forming the next-generation <strong>agentic commerce core protocol stack</strong>.</p>
<h3 id="h-agenttoagent-a2a-google" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agent‑to‑Agent (A2A) – Agent Interoperability Protocol (Google)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://a2a-protocol.org/"><u>A2A </u></a>is an open protocol initiated by Google and donated to the Linux Foundation, designed to give AI Agents built by different vendors on different frameworks a unified standard for communication and collaboration. Built on HTTP + JSON-RPC, A2A enables secure, structured message and task exchange, allowing Agents to natively conduct multi-turn dialogue, collaborative decision-making, task decomposition, and state management. Its core goal is to build an "internet of agents": any A2A-compatible Agent can be automatically discovered, invoked, and composed, forming a cross-platform, cross-organization distributed Agent network.</p>
<h3 id="h-model-context-protocol-mcp-anthropic" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Model Context Protocol (MCP) – Unified Tool and Data Access Protocol (Anthropic)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://modelcontextprotocol.io/docs/getting-started/intro"><u>MCP</u></a>, introduced by Anthropic, is an open protocol connecting LLMs / Agents with external systems, focused on unifying tool and data access interfaces. It abstracts databases, file systems, remote APIs, and proprietary tools into standardized resources, letting Agents access external capabilities securely, controllably, and auditably. MCP's design emphasizes low integration cost and high extensibility: developers integrate once, and the Agent can use the entire tool ecosystem. MCP has been adopted by multiple leading AI vendors and has become the de facto standard for agent-tool interaction.</p>
<figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b4496598bba9f7db2651ad049ac5a108ea88fea95c66d8feeb0ed03dd03a0911.png" nextheight="752" nextwidth="1008" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure>
<p><strong>MCP addresses "how an Agent uses tools"</strong>: it gives models unified, secure access to external resources (databases, APIs, file systems, and so on), standardizing agent-tool / agent-data interaction.</p>
<p><strong>A2A addresses "how an Agent works with other Agents"</strong>: it establishes a native communication standard for agents across vendors and frameworks, supporting multi-turn dialogue, task decomposition, state management, and long-lived execution, serving as the basic interoperability layer between agents.</p>
<br><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Feature</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>MCP (Model Context Protocol)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>A2A (Agent-to-Agent)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary goal</strong></p></td><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>Capability extension:</strong>&nbsp;</p><p style="text-align: center">Connect AI to data and tools.</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Coordination:</strong>&nbsp;</p><p style="text-align: center">Connect AI to other AIs.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Vertical:</strong> Agent &lt;-&gt; database/API</p><p style="text-align: center"><strong>Like a USB-C port</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Horizontal:</strong> Agent &lt;-&gt; Agent</p><p style="text-align: center"><strong>Like the internet</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>State management</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stateless:</strong> "Run this function and return the result."</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stateful:</strong> "Take on this task, keep updating me on progress, and tell me when it is done."</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Typical scenarios</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Tool invocation, data read/write, file handling, enterprise system integration</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-Agent collaborative tasks, cross-platform agent interoperability, automated workflows</p></td></tr></tbody></table><p><br></p>
<h3 id="h-agentic-commerce-protocol-acp-openai-stripe" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agentic Commerce Protocol (ACP) – Ordering and Checkout Protocol (OpenAI × Stripe)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.agenticcommerce.dev/"><u>ACP</u></a> (Agentic Commerce Protocol) is an open ordering standard (Apache 2.0) proposed by <strong>OpenAI </strong>and <strong>Stripe </strong>that establishes a machine-readable, structured ordering flow among <em>buyer, AI Agent, and merchant</em>. The protocol covers product information, price and term validation, settlement logic, and payment-credential passing, letting an AI safely initiate purchases on a user's behalf without becoming the merchant of record.</p>
<p>Its core design: the AI calls the merchant's checkout interface in a standardized way, while the merchant retains full commercial and legal control. Through structured orders (JSON Schema / OpenAPI), secure payment tokens (Stripe Shared Payment Token), compatibility with existing e-commerce backends, and capability publishing over both REST and MCP, ACP lets merchants join the AI shopping ecosystem without rebuilding their systems. ACP is already used in <strong>ChatGPT Instant Checkout</strong>, making it payment infrastructure that is deployable today.</p>
<h3 id="h-agent-payments-protocol-ap2-google" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agent Payments Protocol (AP2) – Digital Mandate and Payment Instruction Protocol (Google)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/google-agentic-commerce/AP2"><u>AP2 </u></a>is an open standard launched by <strong>Google </strong>together with numerous payment networks and technology companies to establish a unified, compliant, auditable flow for <strong>AI-Agent-led payments</strong>. Through cryptographically signed digital mandates it binds the user's payment intent, authorization scope, and compliant identity, giving merchants, payment institutions, and regulators verifiable evidence of "who is spending on whose behalf."</p>
<p>AP2 follows a payment-agnostic design principle, supporting credit cards, bank transfers, and real-time payments, and, via extensions such as x402, crypto rails including stablecoins. Within the agentic commerce protocol stack, AP2 does not handle product or ordering details; it provides a general Agent payment-authorization framework across payment channels.<br></p>
<table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Item</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ACP</strong></p><p style="text-align: center"><strong>(Agentic Commerce Protocol)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>AP2</strong></p><p style="text-align: center"><strong>(Agent Payments Protocol)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Led by</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenAI × Stripe</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Google Cloud (with a broad partner coalition)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Checkout protocol</strong>: lets an AI Agent call a merchant's checkout/ordering interface in a structured way</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Authorization protocol</strong>: proves the Agent is legitimately authorized to pay on the user's behalf</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Roughly an <strong>online POS terminal / e-commerce checkout page</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Roughly a <strong>bank card's chip + PIN authorization mechanism</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Crypto link</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Primarily traditional payment channels</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Natively supports stablecoin payments via the x402 extension</p></td></tr></tbody></table><p><br></p>
<h3 id="h-erc8004-agent-ethereum" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>ERC‑8004 – On-Chain Agent Identity / Reputation / Validation Standard (Ethereum)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://eips.ethereum.org/EIPS/eip-8004"><strong><u>ERC-8004</u></strong></a><strong> </strong>is an Ethereum standard proposed jointly by MetaMask, the Ethereum Foundation, Google, and Coinbase, designed to give AI Agents a <strong>cross-platform, verifiable, trust-minimized</strong> identity and reputation system. The protocol consists of three on-chain parts:</p>
<ul><li><p><strong>Identity Registry</strong>: mints an NFT-like on-chain identity for each Agent, attachable to MCP / A2A endpoints, ENS/DID, wallets, and other cross-platform information.</p></li><li><p><strong>Reputation Registry</strong>: records ratings, feedback, and behavioral signals in a standardized way, making an Agent's track record auditable, aggregatable, and composable.</p></li><li><p><strong>Validation Registry</strong>: supports validation mechanisms such as stake re-execution, zkML, and TEEs, providing verifiable execution records for high-value tasks.</p></li></ul>
<p>Through ERC-8004, an Agent's identity, reputation, and behavior are attested on-chain, forming a <strong>cross-platform, discoverable, tamper-proof, verifiable trust base</strong>, key infrastructure for Web3 to build an open, trustworthy AI economy. ERC-8004 is in Review status: the standard is largely stable and implementable, but still gathering broad community feedback and not yet final.</p>
<h3 id="h-x402-api-coinbase" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>x402 – Stablecoin-Native API Payment Rail (Coinbase)</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402.org/"><u>x402</u></a> is an open payment standard (Apache-2.0) proposed by Coinbase that turns the long-dormant HTTP 402 Payment Required status code into a programmable on-chain payment handshake, letting APIs and AI Agents settle on-chain in an <strong>account-free, frictionless, pay-per-use</strong> fashion, with no accounts, no credit cards, and no API keys.</p>
<figure float="none" data-type="figure" 
class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ae7733549e41f4d367db110f5b26314a8049109c7891b02d2368a228655add08.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEP0lEQVR4nJ2Ub0wbZRzHb74zwRjjC01QM0wWTTSLyarMhQT3YkWWDKlbOV8QFhmuG8qSAqVC7MbpaOmxomXoqHQgsuK2qiVx6+jYoJZuhwNuKdBx0Jud67iE0lIK/fOUh/IYerEryIbz++Ly5Lnf83zu+3u+z2HosXIkxLIswzAsy3Ic5/V6PR4PwzBerxdC+PjlCCFsw1l+ZTAYFAgEQqFQJBKlpaVlZmamp7+UkZEhFAp3JQQA4IvXaRNAomIldYaiKKPReOPmzc5z59ra2inqRuq+j/q+jQH8uy8bdCcb9QwzYbVaHQ5HJBz2en1TTupkzUfDt2zlFZVyedXppibeQccF/Wfyw2PjozQ9QtMjjvHRVPBDQDzxZO9Nd17ukxNaGaH93dpvMBisVqvJZFIoTnQb9adVH1+zdKvJUxUJGY1GgiBy9+85ULzPZusn6kiiTq2QySbv3FndcHl5DQDCVUTzxctP7T00OHybe+DxJjTlcqlJMisry2q1zs8HbLa+UcfwmGMIoRWDoQvDMIlEEolGQoHZnIPl2OvZWEJE4zcIoSUI1wO6eqxH63XtXRePlZUV4HhJSYmu5cxZ3Xc4jjPMRFtbW1VVVUnJJ1Kp1O/3fy6XYxgmEuUvJRrS+atFVFy+/eWtz2/BdPoWvuGrgGTLkuGBEBqNRoVCUadUuiYn25sblUpCoVDMzHAxEIqEF0B0EUKgVikPFebLKiv4w0Bohe/zBilKPRaf38+74QVAbIqZWJz3IoRo6rdaWX5r07FmUnK8Mv8e64iEF5zO8dSMxAAAILrmkN1ut0qlEovFpaWlZrOZd4AQuvB9S6taCSFkmIlwKIAQuk11VxzJln0qLCvOlhQKxuneuYDP5Zr6d1LXODCbzVKplCTJ2tpajUbDcRwPKHwva2f6iwvBoMvlWgzOQhh30pd2vvXMG9uefu3VLXt3vzLjGZqfn3O7/9wEsE7+1RZBlao+NyfnXYFAIpHQNB1fhgAs3XVebSQKKo7ukRTt0p066ONG4wjdZdlNAOsuut/vRwhxHGexWC5d6fnr/n32ny0gBEsguAxDyzAMooFoNARh/P84SI6nhv74qUmrlleZf/zh/LfNg1d74okrmUwBhDCJ/68A36wvBkAcobGRoWpx/rPYQ9Xs38eMDMVXowUgXIIQAgDcbveTAUKhUGBuLhKNDg7YFPiHGRj2HIa9gGEZGPbFgbyzXzfvzso1dHQNDNj6rvdZLBaGYZ4MkKrh6736+gZ1DaE98ZW+vuFW75X+awNvbnv75/O/4HhBXl6eUCg0mUx8sdPp5DguFAr5E+Jv3yMBiR6vcDNzRZVnMsXHt39Q836x2sP5eIvV1dUikaioqEgsFuM4TlEUQmjHjndaW1tpmrbb7RaLxel0bu4gGS0AYgDE+B/k9IPpjo4OrVZLkqRGoyFJ0m63p9YnxwihvwGf1sm/xVoH5wAAAABJRU5ErkJggg==" nextheight="866" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><a target="_blank" rel="noopener noreferrer nofollow 
ugc" class="dont-break-out" href="https://panteracapital.com/http-402s-modern-makeover/"><u>Figure: the HTTP 402 payment workflow</u></a>. Source: <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xfishylosopher"><u>Jay Yu@Pantera Capital</u></a></p>
<p><strong>Core mechanism:</strong> the x402 protocol revives the HTTP 402 status code left over from the early internet. Its workflow:</p>
<ul><li><p><strong>Request and negotiation:</strong> the client (Agent) sends a request -&gt; the server returns a 402 status code with payment parameters (amount, receiving address, and so on).</p></li><li><p><strong>Autonomous payment:</strong> the Agent signs the transaction locally and broadcasts it (usually in a stablecoin such as USDC), with no human intervention.</p></li><li><p><strong>Verification and delivery:</strong> once the server or a third-party "Facilitator" verifies the on-chain transaction, the resource is released immediately.</p></li></ul>
<p>x402 introduces the <strong>Facilitator role</strong>, middleware connecting Web2 APIs to the Web3 settlement layer. The Facilitator handles the complex on-chain verification and settlement logic, so traditional developers can monetize an API with minimal code: the server never runs a node, manages signatures, or broadcasts transactions, relying instead on the Facilitator's interface for on-chain payment processing. The most mature Facilitator implementation today is provided by the <strong>Coinbase Developer Platform</strong>.</p>
<br><p><strong>x402's technical strengths</strong>: it supports on-chain micropayments as small as one cent, breaking past the limits of traditional payment gateways that cannot handle high-frequency, small-value calls in AI scenarios; it removes accounts, KYC, and API keys entirely, letting AI close the M2M payment loop autonomously; and via EIP-3009 it enables gasless authorized USDC payments, natively compatible with Base and Solana and extensible across chains.<br></p>
<p>Building on this overview of the agentic commerce core protocol stack, the table below summarizes each protocol's position at its layer, core capabilities, main limitations, and maturity, providing a clear structured view for building a cross-platform, executable, payable agent economy.<br></p>
<table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Protocol</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Limitations / risks</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Maturity</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Discovery</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>A2A</strong></p><p style="text-align: center"><strong>(Google)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Standardized multi-Agent service discovery and interoperability</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Tied to the Google ecosystem; uneven cross-vendor adoption; could be constrained by big-tech enclosure</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Discovery</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>MCP</strong></p><p style="text-align: center"><strong>(Anthropic)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Unified tool and data access interface</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ecosystem may fragment; tools must integrate proactively; risk of displacement by a larger vendor's standard</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Discovery&nbsp; Trust</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ERC-8004</strong></p><p style="text-align: center"><strong>(Ethereum)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable on-chain identity, reputation, and execution records</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Disconnected from Web2/KYC systems; needs broad integration to build network effects</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Ordering</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ACP</strong></p><p style="text-align: center"><strong>(OpenAI × Stripe)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Structured description of goods, prices, and terms, producing fulfillable orders</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Heavily dependent on Stripe's merchant side; limited coverage; fairly closed, under-documented</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Authorization</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>AP2</strong></p><p style="text-align: center"><strong>(Google)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Compliant expression of user intent and payment mandates</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Strong reliance on real-name/KYC; inconsistent regulation; hard to enter KYC-free on-chain scenarios</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payment</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>X402</strong></p><p style="text-align: center"><strong>(Coinbase)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Stablecoin API payment rail, suited to automation and M2M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Merchants must adapt; stablecoin regulation uncertain; multi-chain execution paths are complex</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span 
data-name="star" class="emoji" data-type="emoji">⭐</span></p></td></tr></tbody></table>
<h2 id="h-web3" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. Representative Projects in the Web3 Agentic Commerce Ecosystem</strong></h2>
<p>Today's Web3 agentic commerce ecosystem can be divided into three layers:</p>
<ul><li><p><strong>Business payment systems layer (L3)</strong>: projects including Skyfire, Payman, Catena Labs, and Nevermined provide payment wrappers, SDK integration, limit and permission governance, human approval, and compliance access, connecting in varying degrees to traditional financial rails (banks, card networks, PSPs, KYC/KYB) and bridging payment businesses with the machine economy.</p></li><li><p><strong>Native payment protocol layer (L2)</strong>: composed of protocols such as x402 and Virtuals ACP and their ecosystem projects, responsible for payment requests, payment verification, and on-chain settlement; this is the core of today's agent economy that actually achieves automated, end-to-end clearing. x402 depends on no banks, card networks, or payment service providers, providing on-chain-native M2M/A2A payment capability.</p></li><li><p><strong>Infrastructure layer (L1)</strong>: Ethereum, Base, Solana, and Kite AI, among others, provide the trusted base of the stack for payments and identity: on-chain execution environments, key systems, MPC/AA, and permission runtimes.</p></li></ul>
<table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Name</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Web3 projects</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L3</strong></p></td><td colspan="1" rowspan="1"><p>Business Payment Systems Layer</p></td><td colspan="1" rowspan="1"><p>Provides Agents with payment wrappers, SDK integration, limit/permission/policy governance, human approval, and compliance access</p></td><td colspan="1" rowspan="1"><p>Skyfire, Payman, Catena Labs, Nevermined</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L2</strong></p></td><td colspan="1" rowspan="1"><p>Native Payment Protocol Layer</p></td><td colspan="1" rowspan="1"><p>The service side issues payment requests to Agents; a Facilitator handles transport, verification, and on-chain settlement</p></td><td colspan="1" rowspan="1"><p>x402, Virtuals ACP (Agent Commerce Protocol)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L1</strong></p></td><td colspan="1" rowspan="1"><p>Infrastructure Layer</p></td><td colspan="1" rowspan="1"><p>Provides on-chain execution environments, wallet signing, MPC/AA, permission runtimes, and other base capabilities</p></td><td colspan="1" rowspan="1"><p>Ethereum, Base (EVM), Solana (SVM), Kite AI (Payment L1)&nbsp;</p></td></tr></tbody></table><p><br></p>
<h3 id="h-l3-skyfireai-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 Business Payment Systems Layer - Skyfire: Identity and Payment Credentials for AI Agents</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://skyfire.xyz"><u>Skyfire</u></a> is built around KYA + Pay, abstracting "identity verification + payment authorization" into AI-usable JWT credentials that give websites, APIs, and MCP services verifiable automated access and billing capability.</p>
<p>At the system level, Skyfire generates a Buyer/Seller Agent and a custodial wallet for each user, with balances funded by card, bank, or USDC. Its biggest advantage is <strong>full Web2 compatibility</strong> (JWT/JWKS, WAFs, and API gateways work out of the box), enabling "identity-bearing automated paid access" for content sites, data APIs, and tool-style SaaS.</p>
<p>Skyfire is a practically usable Agent Payment middle layer today, but both identity and asset custody are centralized.</p>
<h3 id="h-l3-paymanai" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 Business Payment Systems Layer -&nbsp; Payman: AI-Native Fund Permissions and Risk Control</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://paymanai.com"><u>Payman </u></a>offers four capability classes, Wallet, Payee, Policy, and Approval, building a governable, auditable "fund-permission layer" for AI. The AI can execute real payments, but every movement of funds must satisfy the user's configured limits, policies, and approval rules. Core interaction happens through the payman.ask() natural-language interface, with the system parsing intent, validating policy, and executing the payment.</p>
<p>Payman's key value: "AI can move money, but never beyond its mandate." It brings enterprise-grade fund governance into the AI environment: automated payroll, reimbursements, vendor payments, and batch transfers all run within explicitly defined permission boundaries. Payman suits financial automation inside companies and teams (payroll, reimbursements, vendor payments, and the like); it positions itself as a <strong>controlled fund-governance layer</strong> and does not attempt to build an open Agent-to-Agent payment protocol.</p>
<h3 id="h-l3-catena-labsagent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 Business Payment Systems Layer - Catena Labs: Agent Identity / Payment Standards</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://catenalabs.com"><u>Catena</u></a> pairs an AI-native financial institution (custody, clearing, risk control, KYA) as its business layer with ACK (Agent Commerce Kit) as its standards layer, building a unified Agent identity protocol (ACK-ID) and an Agent-native payment protocol (ACK-Pay). The goal is to fill the machine economy's missing standards for verifiable identity, authorization chains, and automated payments.</p>
<p>ACK-ID builds an Agent's chain of ownership and authorization on DID/VC; ACK-Pay defines payment-request and verifiable-receipt formats decoupled from the underlying settlement network (USDC, banks, Arc). Catena emphasizes long-term cross-ecosystem interoperability; its role is closer to a "TLS/EMV layer for the agent economy," with strong standardization and a clear vision.</p>
<h3 id="h-l3-nevermined" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L3 Business Payment Systems Layer -&nbsp; Nevermined: Metering, Billing, and Micropayment Settlement</strong></h3>
<p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://nevermined.ai"><u>Nevermined</u></a> focuses on usage-based economic models for AI, providing Access Control, Metering, a Credits System, and Usage Logs for automated metering, per-call billing, revenue splitting, and auditing. Users top up credits via Stripe or USDC, and on every API call the system automatically checks usage, deducts fees, and writes an auditable log.</p>
<p>Its core value lies in sub-cent real-time micropayments and automated Agent-to-Agent settlement, so data purchases, API calls, and workflow orchestration can all run on a pay-per-call basis. Nevermined does not build a new payment rail; it builds the metering/billing layer on top of payments: near term it drives AI SaaS monetization, mid term it supports A2A marketplaces, and long term it could become the micropayment fabric of the machine economy.</p>
<table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心功能</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Skyfire</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Payman</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Catena&nbsp;</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Nevermined</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>协议/标准</strong></p><br></td><td colspan="1" rowspan="1"><p style="text-align: center">构建支付协议/结算协议</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> 兼容Web2 标准</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span>纯 API 封装</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span>构建 ACK 标准</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" 
class="emoji" data-type="emoji">❌</span>计量协议而非支付协议</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>支付接入 &amp; 结算</strong></p><br></td><td colspan="1" rowspan="1"><p style="text-align: center">Agent 合规进入支付：银行 / 卡组织 / 稳定币 / KYC/KYB</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> 卡 + 银行 + USDC + KYA</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> 资金流需外部账户</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> KYA + 托管账户 +银行清算</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="warning" class="emoji" data-type="emoji">⚠</span> 依赖 Stripe/USDC</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>资金运营</strong></p><br></td><td colspan="1" rowspan="1"><p style="text-align: center">钱包 / 限额 / 审批 / 权限</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> 基础钱包限额控制</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span> 完整的钱包 + 策略规则+ 审批流程</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="warning" class="emoji" data-type="emoji">⚠</span> 托管与风控，但非策略/审批体系</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> 无钱包/审批</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>计量与计费</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">计量、计费与分账</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span></p></td><td colspan="1" rowspan="1"><p 
style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> 基础</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark" class="emoji" data-type="emoji">✔</span>&nbsp; 核心强项</p></td></tr></tbody></table><p>Skyfire、Payman、Catena Labs、Nevermined 属于业务支付层，都需要在不同程度上对接银行、卡组织、PSP 与 KYC/KYB，但它们的真正价值并不在“接入法币”，而在于解决传统金融无法覆盖的机器原生需求——身份映射、权限治理、程序化风控与按次计费。</p><ul><li><p><strong>Skyfire(支付网关)</strong>：为网站/API 提供“身份 + 自动扣费”（链上身份映射Web2身份）</p></li><li><p><strong>Payman(财务治理)</strong>：面向企业内部的策略、额度、权限与审批（AI 可花钱但不越权）</p></li><li><p><strong>Catena Labs(金融基建)</strong>：银行体系结合，通过 KYA、托管与清算服务构建(AI合规银行)</p></li><li><p><strong>Nevermined (收银台)</strong>：支付之上只做计量与计费；支付依赖 Stripe/USDC。</p></li></ul><p>相比之下，<strong>x402 处于更底层，是唯一不依赖银行、卡组织与 PSP 的原生链上支付协议</strong>，可通过 402 工作流直接完成链上扣款与结算。当 Skyfire、Payman、Nevermined 等上层系统都可以调用 x402 作为结算轨道，从而为 Agent 提供真正意义上的 <strong>M2M / A2A 自动化原生支付闭环</strong>。</p><h3 id="h-l2-x402" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L2原生支付协议层 - x402 生态：从客户端到链上结算</strong></h3><p>x402 原生支付生态可分为四个层级：客户端（Client）、服务端（Server）、支付执行层（Facilitators）以及区块链结算层。<strong>客户端</strong>负责让 Agent 或应用发起支付请求；<strong>服务端</strong>按次向 Agent 提供数据、推理或存储等 API 服务；<strong>支付执行层</strong>完成链上扣款、验证与结算，是整个流程的核心执行引擎；<strong>区块链结算层</strong>则承担最终的代币扣款与链上确认，实现不可篡改的支付落地。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c0912e52c40d78576be95b1c1ea11cf97cec5376315e62d08d6dfd6bf16a8bae.png" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEHElEQVR4nJWUb0wbdRjHf74zvtoLo3Fv1Bh96wtegQ6iJr4zcQpT35gYE15saGRuiyAwLBuUchy0vf67K71bb/XKXcsdl+Oa48qVlWOFdi2s4MTNLggicSzglBVzxJ4pNzvXIsMn31x+eXL5ffJ8n+f3AP3RWFhYJMlAPr+z9dsf9+5tb98vHvL5nXFZQRyefH6nJE3bzeWWKYoewi/Nz13XtN1SfmVl1YN6MWxoY+MOKAPoeqGte8gEk5tbv8+kFwVRuZX7+YcfV3EqYkPDgVC0TJhfcPq4yjxORXLLa7peANv3d3Rd1/Yin88XU09XA3Bk9faiB/VCECSKoizLKIqa7ZTTx5UJdjMWJOgi+LK8i+B/3djUdb28Ak3bHSK5MXGcoqhes9lqtdqssAsjAQAXBkicirgIHvMLJdm9LOxmvOTDTEkPAEeqGiLjV+w2G0lezmQy6cy8y8e2nu/v7vMY6oHQcx394JnXzHaq8qLHAwB4ieGkpqaTXV0XWI41dV2U42lRSY5NzLp8LOygaC4mXUnLsZRhBeYX3PioGx81brGhYQgZPghQMuevQuFPbXeP+Sw4eszj6Kuqqqqpqa6vfz9Akk3NX5NBiZcSOBUhGdmQ4RjsZipvfwgoNfmfb+FEo+nzNkQYT4wIU7yUYEYnaS7Giyp47ljDpx3f3VjE/dSg1RFXrwpitMvstCDBQ1VgRD6/I8ozBCXZvey/R4UdUwF4/vV3Tq3+dBPHcRRFVVWdnIyfaTFbkCBORQ4L0LRdnIo4fVxlMzEfkc2mJHnC4XBAUH9yZrqzxwkA6HfR/w8QCEUrZ1GUZwB4BTzxcjqprv2yvrS0pKrT19JpmotZkOBBPdgXUFaB08fRXOypV9+refvjzbvrU+o0z/PZbPZaOpO9cdv4x0XwOBXBaclF8MVy/cLG3a3DAuwPKnjyxeqP5jOJQSvSBw0IY2IozH3W3HKi0RQS4nQ40maCWjrMdFiSlNRjAGUWYXs6+ZUVdjOBUJRiFYpVjEndaz548/ipCVk4c/Zcc3MzSZKyLDmdnpW1O/sDSEauBNi9rDFXEDJsrBobGrahYZqLtZyHlVh8PpNBUXRg0HpzaaG92wPAUeMB/KdFRZcCojEepaWmqHO55fWrs9dHRqXc8nome+uN+tNftkJONICg3w4il2CEIMhw4xc94IW3DgIEQtERYarPSl7s8xJUhJcS7Jh6trUXw4aCQWbQirS0ttPDDOLwAgBOd7pCQtw3LOF0Ub5hiWIVOZbad4oKuq7DVsxPXu7p6a6tra2rq/vwg4b2b3oBAJ0WwlgMRQMDomFaIBS1IEHjXBLsZmA3s3+TdV1PzM7LSmJSTU/PZJPp7xPJhVEx/u4nHXIsJSmPSFSSkpKamMpU5hV1TtvbbH8DVXoSuL4wWN0AAAAASUVORK5CYII=" nextheight="738" nextwidth="1222" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center">图例：X402支付流 来源：<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402.org/x402-whitepaper.pdf"><u>x402白皮书</u></a></p><p><strong>客户端集成层（Client-Side Integrations / The Payers）：</strong>让 Agent 或应用能够发起 x402 
支付请求，是整个支付流程的“出发点”。代表项目：</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://portal.thirdweb.com/"><strong><u>thirdweb Client SDK</u></strong></a> —— 生态最常用的 x402 客户端标准，维护活跃、支持多链，是开发者集成 x402 的默认工具。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nuwa.dev/"><strong><u>Nuwa AI</u></strong></a> —— 使 AI 可无需编码直接付费访问 x402 服务，“Agent 付费入口”的代表项目。</p></li><li><p>官网中同时列出 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/coinbase/x402/tree/main/examples/typescript/clients"><u>Axios/Fetch</u></a>、<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://mogami.tech/"><u>Mogami Java SDK</u></a>、<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://github.com/aaronjmars/tweazy"><u>Tweazy</u></a> 等尚属于早期客户端。</p></li></ul><p>目前现有客户端仍停留在 “<strong>SDK 时代”</strong>，本质上是开发者工具。而类似<strong>浏览器/OS客户端</strong>、<strong>机器人/IoT客户端</strong>、<strong>企业系统</strong>或能<strong>管理多钱包 / 多 Facilitator </strong>的更高级形态的客户端尚未出现。</p><p><strong>服务端 / API 商品方（Services / Endpoints / The Sellers）：</strong>向 Agent 按次出售数据、存储或推理服务，部分代表项目包括：</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aisa.one/"><strong><u>AIsa</u></strong></a>&nbsp; ——&nbsp; 为真实运行的 AI Agents 提供付费资源的 API 调用与结算基础设施，使其可按调用、按 token 或按量访问数据、内容、算力及第三方服务，目前x402调用量第一。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.firecrawl.dev/"><strong><u>Firecrawl</u></strong><u> </u></a>—— AI Agent 最常消费的网页解析与结构化爬虫入口。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://402.pinata.cloud/"><strong><u>Pinata</u></strong></a> —— 主流 Web3 存储基础设施，x402 已能覆盖真实的底层存储成本非轻量 API。</p></li><li><p><a target="_blank" rel="noopener noreferrer 
nofollow ugc" class="dont-break-out" href="https://www.itsgloria.ai/"><strong><u>Gloria AI</u></strong></a> —— 提供高频实时新闻与结构化市场信号，交易与分析型 Agent 的情报来源。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aeon.xyz/AIPayment"><strong><u>AEON</u></strong></a> —— 将 x402 + USDC 扩展到东南亚 / 拉美 / 非洲线下线上商户收单，商户达50M</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://neynar.com/blog/agents-frames-and-the-future-of-farcaster-neynar-s-vision-for-x402"><strong><u>Neynar</u></strong></a> —— Farcaster 社交图基础设施，将社交数据以 x402 的方式开放给 Agent。</p></li></ul><p>当前服务端集中于<strong>爬虫/存储/新闻</strong>API，将<strong>金融交易执行API</strong>、<strong>广告投放</strong> API、<strong>Web2 SaaS 网关</strong>甚至可以<strong>执行现实世界任务</strong>API的更高级的关键层几乎未开发，是未来最具潜力的增长曲线。</p><p><strong>支付执行层（Facilitators / The Processors）：</strong>完成链上扣款、验证与结算，是 x402 的核心执行引擎，代表项目：</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://docs.cdp.coinbase.com/x402/welcome"><strong><u>Coinbase Facilitator（CDP）</u></strong></a> —— 企业级可信执行器，Base 主网零费率 + 内置 OFAC/KYT，是生产环境的最强选择。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://payai.network/"><strong><u>PayAI Facilitator</u></strong></a> —— 多链覆盖最广、增长最快的执行层项目（Solana、Polygon、Base、Avalanche 等），是生态中使用量最高的多链 Facilitator。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://facilitator.daydreams.systems/"><strong><u>Daydreams</u></strong></a> —— 将支付执行与 LLM 推理路由结合的强场景项目，是当前增长最快的“AI 推理支付执行器”，正成为 x402 生态的第三极力量。</p></li><li><p>根据<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.x402scan.com/facilitators"><u> </u><strong><u>x402scan</u></strong></a> 近 30 日数据，还存在一批中长尾 Facilitator／Router，包括 Dexter、Virtuals Protocol、OpenX402、CodeNut、Heurist、Thirdweb、<a target="_blank" rel="noopener 
noreferrer nofollow ugc" class="dont-break-out" href="http://x402.rs">x402.rs</a>、Mogami、Questflow 等，整体 <strong>交易量、卖家数量、买家数量均明显低于头部三家。</strong></p></li></ul><p><strong>区块链结算层（Blockchain Settlement Layer）：</strong> x402 支付工作流的最终落点，负责完成代币的实际扣款与链上确认。虽然 x402 协议本身是Chain-Agnostic的，但从当前生态数据来看，结算主要集中于两条网络：</p><ul><li><p><strong>Base</strong> —— 由 CDP 官方 Facilitator 主推，USDC 原生、费用稳定，是目前交易量与卖家数量最大的结算网络。</p></li><li><p><strong>Solana</strong> —— 由 PayAI 等多链 Facilitator 重点支持，凭借高吞吐和低延迟，在高频推理和实时 API 场景中增长最快。</p></li></ul><p>链本身不参与支付逻辑，随着更多 Facilitator的扩展 ，x402 的结算层将呈现更强的多链化趋势。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级（Layer）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心作用</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>项目角色定位</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表性项目</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>① 客户端集成层（Client-Side Integrations / The Payers）</strong></p></td><td colspan="1" rowspan="1"><p>让 Agent 或 App 能发起 x402 支付请求</p></td><td colspan="1" rowspan="1"><p>构造 402 Request Header，负责发起支付调用；<strong>不负责扣款/验证</strong></p></td><td colspan="1" rowspan="1"><p><strong>thirdweb Client SDK</strong> — 行业标准、支持多链、开发者默认选择<strong>Nuwa AI</strong> — 最强 Agent 客户端，让 AI 可直接付费消费服务</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>② 服务端 / API 商品方（Services / Endpoints / The Sellers）</strong></p></td><td colspan="1" rowspan="1"><p>向 Agent 提供按次付费的 API / 内容服务</p></td><td colspan="1" rowspan="1"><p>商家端：按调用收费，验证付款后返回数据</p></td><td colspan="1" rowspan="1"><p><strong>Firecrawl</strong> — 生态最强“杀手级服务”，AI 最常消费的爬虫/解析 API</p><p><strong>Pinata</strong> — Web3 基础设施巨头，其加入证明 x402 可覆盖真实基础设施成本</p><p><strong>AIsa&nbsp; </strong>——&nbsp; 为真实运行的 AI Agents 提供付费 API 调用结算基础设施，目前x402调用量第一。</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>③ 
支付执行层（Facilitators / The Processors）</strong></p></td><td colspan="1" rowspan="1"><p>负责执行链上付款 → 验证支付 → 返回 proof</p></td><td colspan="1" rowspan="1"><p>x402 的“支付执行器”；<strong>真正帮 Agent 把钱打过去</strong></p></td><td colspan="1" rowspan="1"><p><strong>Coinbase Facilitator</strong>— 企业级可信，Base 零费率 + OFAC/KYT</p><p><strong>PayAI Facilitator</strong> — 增长最快，Solana 多链支持最佳</p></td></tr></tbody></table><p>在 x402 支付体系中，<strong>Facilitator是唯一真正执行链上支付的角色，离“协议级收入”最近</strong>：负责验证支付授权、提交与追踪链上交易，并生成可审计结算证明，同时处理重放、超时、多链兼容与基础的合规检查。与只处理 HTTP 请求的 Client SDK（Payers）和 API 服务端（Sellers）不同，掌握流量入口与结算收费权，因此处于 Agent 经济的价值捕获核心，最受市场关注。</p><p>但现实情况是，大多数项目仍停留在测试网或小规模 Demo 阶段，本质只是轻量“支付执行器”，在身份、计费、风控、多链稳态处理等关键能力上缺乏护城河，呈现明显的低门槛、高同质化特征。随着生态逐步成熟，具备稳定性与合规优势由Coinbase背书的 Facilitator 确实拥有较为明显的先发优势，但随着 CDP Facilitator 开始收费，而其他 Facilitator 仍可能探索不同的变现模式，整体市场格局与份额分布仍存在较大的演变空间。从长期看，x402 仍属于接口层，无法承载核心价值，真正具备持续性竞争力的，是能在结算能力之上构建身份、计费、风控与合规体系的综合平台。<br></p><h3 id="h-l2-virtual-agent-commerce-protocol" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L2原生支付协议层 - Virtual Agent Commerce Protocol</strong></h3><p>Virtual 的 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://app.virtuals.io/research/agent-commerce-protocol"><strong><u>Agent Commerce Protocol（ACP）</u></strong></a> 为自主 AI 提供了一套通用的商业交互标准，通过 <em>Request → Negotiation → Transaction → Evaluation</em> 四阶段流程，使独立智能体能够以安全、可验证的方式请求服务、协商条款、完成交易并接受质量评估。ACP 以区块链作为可信执行层，确保交互过程可审计、不可篡改，并通过引入 Evaluator Agents 建立激励驱动的信誉体系，使异构而独立的专业 Agent 能在无中心协调的条件下形成“自治商业体”，开展可持续的经济活动。目前，ACP 已超越早期实验阶段初具生态规模，不限于对“多智能体商业交互标准”的探索。</p><h3 id="h-l1-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L1基础设施层 - 新兴/垂直Agent 原生支付链</strong></h3><p>Ethereum、Base（EVM）、Solana等主流通用公链为 Agent 提供了最核心的执行环境、账户体系、状态机、安全性与结算基础，拥有成熟的账户模型、稳定币生态和广泛的开发者基础。</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gokite.ai/"><u>Kite AI </u></a>是代表性的 <strong>“Agent 原生 L1”</strong> 
基础设施，专为智能体设计支付、身份与权限的底层执行环境。其核心基于 SPACE 框架（稳定币原生、可编程约束、代理优先认证、合规审计、经济可行微支付），并通过 Root→Agent→Session 的三层密钥体系实现细粒度风险隔离；再结合优化状态通道构建“Agent 原生支付铁路”，将成本压至 $0.000001、延迟控制在百毫秒级，使 API 级高频微支付成为可行。作为通用执行层，Kite 向上兼容 x402、Google A2A、Anthropic MCP，向下兼容 OAuth 2.1，目标成为连接 Web2 与 Web3 的统一 Agent 支付与身份底座。</p><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aisa.one/"><u>AIsaNet </u></a>集成x402与 L402（Lightning Labs 开发的基于闪电网络的 402 支付协议标准）协议，作为面向 AI Agents 的微支付与结算层，支持高频交易、跨协议调用协调、结算路径选择和交易路由，使 Agents 无需理解底层复杂性即可完成跨服务、跨链自动支付。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>五、总结与展望：从支付协议到机器经济秩序重构</strong></h2><p>智能体商业（Agentic Commerce）是由机器主导的一套全新经济秩序的建立。它不是“AI 自动下单”这么简单，而是一整条跨主体链路的重构：服务如何被发现、可信度如何建立、订单如何表达、权限如何授权、价值如何清算、争议由谁承担。A2A、MCP、ACP、AP2、ERC-8004 与 x402 的出现，把“机器之间的商业闭环”标准化。</p><p>沿着这条演化路径，未来的支付基础设施将分化为两条平行轨道：一条是基于传统法币逻辑的<strong>业务治理轨道</strong>，另一条是基于 x402 协议的<strong>原生结算轨道</strong>。这两者之间的价值捕获逻辑并不同。</p><h4 id="h-1-web3" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>1. 业务治理轨道：Web3 业务支付系统层</strong></h4><ul><li><p><strong>适用场景：</strong> 低频、非微支付的真实世界交易（如采购、SaaS 订阅、实物电商）。</p></li><li><p><strong>核心逻辑：</strong> 传统法币将长期主导，Agent 只是更聪明的前端与流程协调器，而不替代 Stripe / 卡组织 / 银行转账。稳定币大规模进入真实商业世界的硬障碍在<strong>监管与税务</strong>。</p></li><li><p><strong>Skyfire、Payman、Catena Labs</strong> 等项目价值不在于底层的支付路由（通常由 Stripe/Circle 完成），而在于机器治理服务” (Governance-as-a-Service)。即解决传统金融无法覆盖的机器原生需求——身份映射、权限治理、程序化风控、责任归属及<strong>M2M / A2A micropayment</strong>（按 token / 秒结算）。关键是谁能成为企业信赖的“AI 财务管家”。</p></li></ul><h4 id="h-2-x402-facilitator" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>2. 
原生结算轨道：x402 协议生态与 Facilitator 的终局&nbsp;</strong></h4><ul><li><p><strong>适用场景：</strong> 高频、微支付、M2M/A2A 的数字原生交易（API 计费、资源流支付）。</p></li><li><p><strong>核心逻辑：</strong> x402 作为开放标准，通过 HTTP 402 状态码实现了支付与资源的原子化绑定。在可编程微支付和 M2M / A2A 场景中，x402 目前是生态最完整、落地最靠前的协议（HTTP 原生 + 链上结算），在 Agent 经济中的地位有望类比 ‘Stripe for agents’。</p></li><li><p>单纯在<strong> Client </strong>或 <strong>Service </strong>端接入 x402 并不带来赛道溢价；真正具备增长潜力的是能沉淀长期复购与高频调用的上层资产，如 OS 级 Agent 客户端、机器人/IoT 钱包及高价值 API 服务（市场数据、GPU 推理、现实任务执行等）。</p></li><li><p><strong>Facilitator</strong>协助 Client 与 Server 完成支付握手、发票生成与资金清算的<strong>协议网关</strong>，既掌握流量也掌握结算费，是目前 x402 Stack 中离“收入”最近的一环。多数 Facilitator 本质上只是“支付执行器”，明显的<strong>低门槛、同质化</strong>特征。具备可用性与合规优势的巨头（如 Coinbase）形成主导格局。而避免被边缘化的核心价值将上移至 <strong>“Facilitator + X” 服务层</strong>：通过构建可验证服务目录与声誉体系，提供仲裁、风控、金库管理等高毛利能力。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/5a748bd4152f41a82ed1b31bc217182b8e3e25b86e833d5214c7f2e4803b5c80.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGAUlEQVR4nE1UaWwbVRB247288ZXL9vrYXa8dOz5y2U4cO+sjDvGV+3Ca1LHTJk0ahzpHU9ImJG3apukRSiFxQ6lLS6CFHvQgLaAWEaBIFFUIcYOQOIQQEhKgAgL+VDJaGySk+THznub75nszb1gAhAMQASDkvwbhGQfiqeAcLZKvR0QGRFKGSMz/MxMiKWcOCwxIvg4WaiG+BkCUDA6UwaHYIMEGSZCvZ2VikKsCUTLNpAS5FCwsRHKLmHxxGQczc7AKVGrlYDZUZsuW21GpNR1WcjAzIi5DCoxwng4WFoKokqFhANMOREI5ehYAESCqBrnpa6YEAuarMugczMTBLBzMmi2v5uFOPu7myd183CUgangKmiunUZmNg1UwgsRGRgpf9Z8ICoApNqiC+ToWUzIHz3CyQQLkaWGhFpGUM5liCyqz8XG3AK/JIesEeI2idKO8OCQka3OUD6X5nKjMzsGsqNSCSMp4Il22oBCECICjBhCSDVIgT8f679EpmKsqKmnqGXoqPLg8uXBr18ItY3VYoHTnUr5cyicr7pDoWgqrBiz+MUzjz1cFRerGHLJOqKzNU9fOxieWpqYvHNy3trgw3LUFRCkYVQEQmVbAECjZgDRPZAgGo4FgpDsajw7sigxMPjw+hxUFBbhHXNSopYfMjVN015FK/5iVDhVbN6v14VwiAItosqS+uXc82Dk4GJvp7x/v3BQTinRsQA7AhZDQwIJQ5YYNuZGBRy+9/r6Cog+fedXZtA3OL+keeezK3W+PnlnLU/k09kGjZ7yyZa9v6HRbeMbuiZhsU5QhojAGlh/d/dP1c5GmAURsmR95JOAa3tjYfnRyMa/AksWWZxQQGnrz0hvfXfrk/rbEW8vv/LLj2Y+Sb352+4Ov3/7216/u/3zz7gfJV+5u3Xe+xDvhHXja1TxT4e5/Jr4jlZhJJedTb15MXTmXWpr7dfbqexPry603b4++dWVo/Yjvgle7DeLrWBCqKq3pM/m2h+LzOxaf2zh6WF+3tTM2dSyxspJ8Znr+WHzv4o131u988+Pj1z/29CYqgjtTZ3amrq58P376hcHEme1LydiR9QOnj/ctRDz753pXGswjw83zu7oec2l6YaGRBUBkFlsOwcryUq8/0G2q9AOwtNjWdOLU+ROr106cf61l27yteeT8S5fXf/hr9PS9TxcfSb2cWOu8Ul0Z8XWODU0f8/XNNGydKfWEi51N7ZtjYqW5yt3q9oZADs704N+xRagstjwrSypQOnnSCp213VwbqWiOB4YPtcUXLQ2zhxKnnr/7+cl7f6TWplNzZ59sSro9HaFgxOHqKLMEaYtfiFVriXooW52pmA1gAKyGcgwsAKYAiAS5GpHMpDXUkdqHtKXBElePzhqSlTRo6F6xtklli9ZvP3z44vq7X/785xcfXjt0fCo0bLC0Y0X+QH040Nhrraqvcfhwld2g9yhIK6awoIKiNEExowBClWyQ1OlroptHR8b2PP/i5ZVTq0OjexTGAPOJZHaZsVmsrz9+du3Bg9Tvv/29tHQWp+xUaQNX5jA6uvvG5mYXk2efuxiKjOycXtgyMEG7QjBfA3DUcIYgszey2HIuX+1wtHsDPbX+HlRcikgsPNwp0gQJcwdR0eWKHuibv9Q5mbSFZglzN24OFWgCqNSKSMprg9Fge7/N2Wa1NkukVWx2ejVwCiGmyQgFQDjE0yACHSTQM7oKTNlyO0WHSwLbTa0TdO8BesuB8cTarpOvTJ58dfXOl8lbH4+fWHNsOVgd2VfWENd7B6UlrTwFDeWVApxCkJuGyi5kZkegZ0EcdT7pzlO5CzR1hKWtJjRCt8Z80UmVo0dhaVeYO4iqTpmpxRfdvf/46vKz126u30us3rC3DstMLXhlS
GEJkZVdFB22t8abBmc9naM6Z1is8fNwp8O3NQerZCF8rVxbJ1I78kgrpnb0D00PxnaP7NhPGb35SqdEU6PQBwhjPVdRZfNGdu55IjZx0Nv2MFdRpTAG5Hq/WO0RqV2YxtmxKb53/1IsPqctDwoJW46yuqtvFsNd/wD+7awcR8qFTAAAAABJRU5ErkJggg==" nextheight="559" nextwidth="1024" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>我们相信未来将形成 <strong>“法币体系”与“稳定币体系”双轨并行”</strong>：前者支撑主流人类商业，后者承载机器原生与链上原生的高频、跨境、微支付场景。Web3 的角色不是取代传统支付，而是为 Agent 时代提供 <strong>可验证身份、可编程清算与全球稳定币</strong> 的底层能力。最终，智能体商业（Agentic Commerce）不仅限于支付优化，而是机器经济秩序的重构。当数十亿次微交易由 Agent 在后台自动完成时，那些率先提供信任、协调与优化能力的协议与公司，将成为下一代全球商业基础设施的核心力量。</p><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5 与Gemini 3的 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>Agentic Commerce</category>
            <category>AI Agents</category>
            <category>Payments</category>
            <category>m2m</category>
            <category>agenticcommerce</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/b76c4ce0d718d92f41bc12a2958fefd2d0f05982355bad8fc6af1f9b049eeb5b.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[The Convergent Evolution of Automation, AI, and Web3 in the Robotics Industry]]></title>
            <link>https://paragraph.com/@0xjacobzhao/the-convergent-evolution-of-automation-ai-and-web3-in-the-robotics-industry</link>
            <guid>X71coKz0gpFmObm0uPjR</guid>
            <pubDate>Tue, 18 Nov 2025 06:39:46 GMT</pubDate>
            <description><![CDATA[🤖 The robotics industry is shifting toward Embodied AI, with humanoid robots emerging as the leading form factor and gaining cross-scenario “understand → predict → act” capabilities.
💠 Since 2025, Web3 × Robotics has become a key narrative: limited value in hardware, real potential in simulation/software, and the highest upside in decentralized coordination and machine identity.
📊 Our report maps the ecosystem across five layers — Model Intelligence, Machine Economy, Data, Simulation, and Rob]]></description>
            <content:encoded><![CDATA[<p style="text-align: center"><em>This independent research report is supported by </em><strong><em>IOSG Ventures</em></strong><em>. The author thanks </em><strong><em>Hans </em></strong><em>(RoboCup Asia-Pacific), </em><strong><em>Nichanan Kesonpat</em></strong><em>(1kx), </em><strong><em>Robert Koschig</em></strong><em> (1kx), </em><strong><em>Amanda Young </em></strong><em>(Collab+Currency)</em><strong><em> </em></strong><em>, </em><strong><em>Jonathan Victor</em></strong><em> (Ansa Research), </em><strong><em>Lex Sokolin</em></strong><em> (Generative Ventures), </em><strong><em>Jay Yu </em></strong><em>(Pantera Capital) , </em><strong><em>Jeffrey Hu </em></strong><em>(Hashkey Capital) for their valuable comments, as well as contributors from </em><strong><em>OpenMind</em></strong><em>, </em><strong><em>BitRobot</em></strong><em>, </em><strong><em>peaq</em></strong><em>, </em><strong><em>Auki Labs, XMAQUINA</em></strong><em>, </em><strong><em>GAIB, Vader, Gradient, Tashi Network </em></strong><em>and </em><strong><em>CodecFlow</em></strong><em> for their constructive feedback. While every effort has been made to ensure objectivity and accuracy, some insights inevitably reflect subjective interpretation, and readers are encouraged to engage with the content critically.</em></p><br><h2 id="h-i-robotics-from-industrial-automation-to-humanoid-intelligence" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Robotics: From Industrial Automation to Humanoid Intelligence</strong></h2><p>The traditional robotics industry has developed a vertically integrated value chain, comprising four main layers: <strong>core components</strong>, <strong>control systems</strong>, <strong>complete machines</strong>, and <strong>system integration &amp; applications</strong>.</p><ul><li><p><strong>Core components</strong> (controllers, servos, reducers, sensors, batteries, etc.) 
have the highest technical barriers, defining both performance ceilings and cost floors.</p></li><li><p><strong>Control systems</strong> act as the robot’s “brain and cerebellum,” responsible for decision-making and motion planning.</p></li><li><p><strong>Complete machine manufacturing</strong> reflects the ability to integrate complex supply chains.</p></li><li><p><strong>System integration and application development</strong> determine the depth of commercialization and are becoming the key sources of value creation.</p></li></ul><p>Globally, robotics is evolving along a clear trajectory — <strong>from industrial automation → scenario-specific intelligence → general-purpose intelligence</strong> — forming five major categories: <strong>industrial robots, mobile robots, service robots, special-purpose robots, and humanoid robots.</strong></p><ol><li><p><strong>Industrial Robots: </strong>Currently the only fully mature segment, industrial robots are widely deployed in welding, assembly, painting, and handling processes across manufacturing lines. The industry features standardized supply chains, stable margins, and well-defined ROI. Within this category, <strong>collaborative robots (cobots)</strong>—designed for safe human–robot collaboration, lightweight operation, and rapid deployment—form a fast-growing subsegment.<br><strong>Representative companies:</strong> ABB, Fanuc, Yaskawa, KUKA, Universal Robots, JAKA, and AUBO</p></li><li><p><strong>Mobile Robots: </strong>Including <strong>AGV (Automated Guided Vehicles)</strong> and <strong>AMR (Autonomous Mobile Robots)</strong>, this category is widely adopted in logistics, e-commerce fulfillment, and factory transport. 
It is the most mature segment for B2B applications.<br><strong>Representative companies:</strong> Amazon Robotics, Geek+, Quicktron, Locus Robotics.</p></li><li><p><strong>Service Robots: </strong>Targeting consumer and commercial sectors—such as cleaning, food service, and education—this is the fastest-growing category on the consumer side. Cleaning robots now follow a consumer-electronics logic, while medical and delivery robots are rapidly commercializing. A new wave of more general manipulators (e.g., two-arm systems like Dyna) is emerging—more flexible than task-specific products, yet not as general as humanoids.<br><strong>Representative companies:</strong> Ecovacs, Roborock, Pudu Robotics, KEENON Robotics, iRobot, Dyna.</p></li><li><p><strong>Special-Purpose Robots: </strong>Designed for high-risk or niche applications—healthcare, military, construction, marine, and aerospace—these robots serve small but profitable markets with strong entry barriers, typically relying on government or enterprise contracts.<br><strong>Representative companies:</strong> Intuitive Surgical, Boston Dynamics, ANYbotics, NASA Valkyrie, Honeybee Robotics</p></li><li><p><strong>Humanoid Robots: </strong>Regarded as the future “universal labor platform,” humanoid robots are drawing the most attention at the frontier of embodied intelligence.<br><strong>Representative companies:</strong> Tesla (Optimus), Figure AI (Figure 01), Sanctuary AI (Phoenix), Agility Robotics (Digit), Apptronik (Apollo), 1X Robotics, Neura Robotics, Unitree, UBTECH, Agibot</p></li></ol><p>The core value of humanoid robots lies in their human-like morphology, allowing them to operate within existing social and physical environments without infrastructure modification. 
Unlike industrial robots that pursue peak efficiency, humanoids emphasize <strong>general adaptability and task transferability</strong>, enabling seamless deployment across factories, homes, and public spaces.</p><p>Most humanoid robots remain in the <strong>technical demonstration stage</strong>, focused on validating <strong>dynamic balance</strong>, <strong>locomotion</strong>, and <strong>manipulation</strong> capabilities. While limited deployments have begun to appear in <strong>highly controlled factory settings</strong> (e.g., Figure × BMW, Agility Digit), and additional vendors such as 1X are expected to enter early distribution starting in 2026, these are still <strong>narrow-scope, single-task</strong> applications—not true <strong>general-purpose labor</strong> integration. Meaningful <strong>large-scale commercialization</strong> is still years away.</p><p>The core bottlenecks span several layers:</p><ul><li><p><strong>Multi-DOF coordination</strong> and <strong>real-time dynamic balance</strong> remain challenging;</p></li><li><p><strong>Energy and endurance</strong> are constrained by battery density and actuator efficiency;</p></li><li><p><strong>Perception–decision pipelines</strong> often destabilize in open environments and fail to generalize;</p></li><li><p>A significant <strong>data gap</strong> limits the training of generalized policies;</p></li><li><p><strong>Cross-embodiment transfer</strong> is not yet solved;</p></li><li><p><strong>Hardware supply chains and cost curves</strong>—especially outside China—remain substantial barriers, making <strong>low-cost, large-scale deployment</strong> difficult.</p></li></ul><p>The <strong>commercialization of humanoid robotics</strong> will advance in three stages: <strong>Demo-as-a-Service</strong> in the short term, driven by pilots and subsidies; <strong>Robotics-as-a-Service (RaaS)</strong> in the mid term, as task and skill ecosystems emerge; and a <strong>Labor Cloud</strong> model in the long 
term, where value shifts from hardware to software and networked services.&nbsp; Overall, humanoid robotics is entering a pivotal transition <strong>from demonstration to self-learning</strong>. Whether the industry can overcome the intertwined barriers of <strong>control, cost, and intelligence</strong> will determine if embodied intelligence can truly become a scalable economic force.</p><h2 id="h-ii-ai-robotics-the-dawn-of-the-embodied-intelligence-era" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. AI × Robotics: The Dawn of the Embodied Intelligence Era</strong></h2><p>Traditional automation relies heavily on pre-programmed logic and pipeline-based control architectures—such as the <strong>DSOP paradigm (perception–planning–control)</strong>—which function reliably only in structured environments. The real world, however, is far more complex and unpredictable. The new generation of <strong>Embodied AI</strong> follows an entirely different paradigm: leveraging large models and unified representation learning to give robots cross-scene capabilities for <strong>understanding, prediction, and action</strong>. Embodied intelligence emphasizes the dynamic coupling of <strong>the body (hardware), the brain (models), and the environment (interaction)</strong>. The robot is merely the vehicle—intelligence is the true core.</p><p><strong>Generative AI</strong> represents intelligence in the <em>symbolic and linguistic world</em>—it excels at understanding language and semantics. <strong>Embodied AI</strong>, by contrast, represents intelligence in the <em>physical world</em>—it masters perception and action. The two correspond to the <strong>“brain”</strong> and <strong>“body”</strong> of AI evolution, forming two parallel but converging frontiers.</p><p>From an intelligence hierarchy perspective, Embodied AI is a higher-order capability than generative AI, but its maturity lags far behind. 
LLMs benefit from abundant internet-scale data and a well-defined “data → compute → deployment” loop. Robotic intelligence, however, requires <strong>egocentric, multimodal, action-grounded data</strong>—teleoperation trajectories, first-person video, spatial maps, manipulation sequences—which <strong>do not exist by default</strong> and must be generated through real-world interaction or high-fidelity simulation. This makes data far scarcer, costlier, and harder to scale. While simulated and synthetic data help, they cannot fully replace real sensorimotor experience. This is why companies like Tesla and Figure must operate teleoperation factories, and why data-collection farms have emerged in Southeast Asia. In short, <strong>LLMs learn from existing data; robots must create their own through physical interaction.</strong></p><p>In the next <strong>5–10 years</strong>, both will deeply converge through <strong>Vision–Language–Action (VLA) models</strong> and <strong>Embodied Agent architectures</strong>—LLMs will handle <em>high-level cognition and planning</em>, while robots will execute <em>real-world actions</em>, forming a bidirectional loop between <em>data and embodiment</em>, thus propelling AI from <strong>language intelligence</strong> toward <strong>true general intelligence (AGI)</strong>.<br></p><h3 id="h-the-core-technology-stack-of-embodied-intelligence" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>The Core Technology Stack of Embodied Intelligence</strong></h3><p>Embodied AI can be conceptualized as a <strong>bottom-up intelligence stack</strong>, comprising:<br> <strong>VLA (Perception Fusion)</strong>, <strong>RL/IL/SSL (Learning)</strong>, <strong>Sim2Real (Reality Transfer)</strong>, <strong>World Model (Cognitive Modeling)</strong>, and <strong>Swarm &amp; Reasoning (Collective Intelligence and Memory)</strong>.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td 
colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Technologies</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Projects</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Importance</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Perception &amp; Understanding</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Vision–Language–Action (VLA)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multimodal fusion and semantic-to-action mapping</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Google RT-X / DeepMind RT-2 / Figure Helix</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> High — Core entry point for embodied intelligence; early deployment stage</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Learning &amp; Adaptation</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Self-Supervised (SSL) + Imitation (IL) + Reinforcement (RL)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Learn control and policy from data, demonstrations, and feedback</p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenAI Robotics / Tesla FSD / DeepMind Alpha</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> High — Core of behavior generation; most costly to train</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Reality Transfer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">Simulation-to-Reality (Sim2Real)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Migrate virtual training to the physical world safely</p></td><td colspan="1" rowspan="1"><p style="text-align: center">NVIDIA Isaac Sim / Meta Habitat / Boston Dynamics</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Medium–High — Key bridge; limited by the “reality gap”</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Cognitive Modeling</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Latent Modeling + Imagination Planning + Model-based RL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Internal simulation and predictive reasoning</p></td><td colspan="1" rowspan="1"><p style="text-align: center">DeepMind Dreamer / Google Gemini+RT-2 / Tesla FSD V12</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Medium–High — Theoretical frontier; supports long-term reasoning</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Swarm &amp; Reasoning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-agent coordination + Long-term memory + Neuro-symbolic AI</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Collaborative learning and distributed cognition</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Google SWARM / Figure cluster experiments / OpenDevin</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="orange_circle" class="emoji" data-type="emoji">🟠</span> Medium–Low — Experimental; early-stage exploration</p></td></tr></tbody></table><p><br></p><h3 id="h-perception-and-understanding-vision-language-action-vla" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Perception 
&amp; Understanding: Vision–Language–Action (VLA)</strong></h3><p>The <strong>VLA model</strong> integrates <strong>Vision</strong>, <strong>Language</strong>, and <strong>Action</strong> into a unified multimodal system, enabling robots to <em>understand human instructions</em> and translate them into <em>physical operations</em>. The execution pipeline includes <strong>semantic parsing</strong>, <strong>object detection</strong>, <strong>path planning</strong>, and <strong>action execution</strong>, completing the full loop of “understand semantics → perceive world → complete task.”&nbsp; <strong>Representative projects:</strong> Google RT-X, Meta Ego-Exo, and Figure Helix, showcasing breakthroughs in multimodal understanding, immersive perception, and language-conditioned control.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6b97daf2401a8f99022fb397acc09cfcd95d059b827dcecf8095f90e556c20e7.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEO0lEQVR4nJ2Vf0wbZRjH35hojDMz/mGiiTEaZ7KFaMIfi/HXBswtGAJDt/JjDiFDuoHZHEMQcG3JAFM3ZWQ/oKyF9ig/HORCO8GzlHaUXAX6a5QwM3AiV+i49crRll5puXKvobcgjoUtfvLmcsnz3vvN8zz3fV4An4BAIECS82632+WanZ2bI2OEw+En+RZsEeNiT4qiUBTFcby2tiY5OfnQoc+u/3zdYDBgGAYhx8b4PwIMwwQCAQhhSUmJVCqFEK5EVoQFBQiiikRYlo2mpKTgOM7v3CKbRwhEuVUIYbH8Mkj91IhhHZ0dWq1W3do6hONy+TWbY0JZn6G+mjticaAx+np7KYraSoDdBISw7+bAd5frbFaLrl9nNBpRFLXZ7UqV0mIdw7prsO7vJ27f6erq6tH0aLU9JEnyxz10yL8CG8Pr7xDCyTuTiQmJS4Gl1ehaWgq5YnRklN+gblUnJe3nS8dH1wu1UQNAyLW1qWUyGYIgKIoq5HKTyQQhHLRrW7GLzTcuiC4WdRsbfxlW/mptvdQuUvX++JtVbbjVIa7/qqahWIM3q/t++tM1vhwKu91zCKLq7OxEEMTpdPItBOFw2OGwSyRi4XFhWlqqVColCGJ2znVOUVjWIChvyKpuLzhasfeVeLBjz/adiS/GHXjp9feeffOj50WKnCpl3rdXMs9cSldp6rwe2u/3VZ6tyMgQpKen6fr7Q6HQmgAXy4hhGJ/f57nvuU+SfHYy9FxlY06VPL9Knn+iOi1VuDvt+LuZp/cITn6Q+mW84OT74qZcSVNelfxYRWO2wYLydmEYhqZpKgZfsf/0YGMbWJYJR6hQxBuKeMNra4Hj2Kyjh1VtLRBCf5hZjATXF//jPdIQawIkSWIYptfrBwwD5mEzRVEsyw7ZJxGNuRk1qjRD7b0jiMaMoIYvcvOrz5aOG+Rj+qZxg2JisGVisMXed4Wen2LZKEHM6Pp1+hgOh/1BDyCEDofj1OlTRUUnBILDEkkVTdMzBHHw6wu7BJVg+y6wxqvvHBG9LSj7Y3recuN8ZSqIfw68BcBOAOIAqM0EWHPpYjA0PGwuKiosKytNTz9YXl7Om+NBBjar1Wa3ORz2UcsowzBBhvm8vOHlj4UAPA3AawCAZz7MOVB4foakx/qv1uVvi98GdgDwBgBxT4F64QsGtZiD8O/pv4xG47jT6Yjh9/sedvLGIt6+O6vDnSMT07v3foJi+E3r5PCtqSjHhZa8d60aj+v3HyqPSYqzKdfI1GhPcPHeY0bFZgeGQ0ww4GMjy2XfnLnnnmUjyyFmaTUaZdnogm8pynHqjq6Ga8oox1G+4HJ4ZSsnb4ZlWRw3m0wmsUSckJAgEomNRqNeP8AwDEV5mmSN+xIT83LzjmRn7d+X1CSTEQTxmAw2w09TmqYJgvBQnkAMfh7QNM1fCR6KIklywevdYpr+Ax+JGnTkmdMpAAAAAElFTkSuQmCC" nextheight="370" nextwidth="712" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>VLA systems are still in an early stage and face four fundamental bottlenecks:</p><ol><li><p><strong>Semantic ambiguity and weak task generalization:</strong> models struggle to interpret vague or open-ended instructions;</p></li><li><p><strong>Unstable 
vision–action alignment:</strong> perception errors are amplified during planning and execution;</p></li><li><p><strong>Sparse and non-standardized multimodal data:</strong> collection and annotation remain costly, making it difficult to build large-scale data flywheels;</p></li><li><p><strong>Long-horizon challenges across temporal and spatial axes:</strong> long temporal horizons strain planning and memory, while large spatial horizons require reasoning about out-of-perception elements—something current VLAs lack due to limited world models and cross-space inference.</p></li></ol><p>These issues collectively constrain VLA’s cross-scenario generalization and limit its readiness for large-scale real-world deployment.<br></p><h3 id="h-learning-and-adaptation-ssl-il-and-rl" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Learning &amp; Adaptation: SSL, IL, and RL</strong></h3><ul><li><p><strong>Self-Supervised Learning (SSL):</strong> Enables robots to infer patterns and physical laws directly from perception data—teaching them to “<em>understand the world</em>.”</p></li><li><p><strong>Imitation Learning (IL):</strong> Allows robots to mimic human or expert demonstrations—helping them “<em>act like humans</em>.”</p></li><li><p><strong>Reinforcement Learning (RL):</strong> Uses reward-punishment feedback loops to optimize policies—helping them “<em>learn through trial and error</em>.”</p></li></ul><p>In Embodied AI, these paradigms form a <strong>layered learning system</strong>: SSL provides <strong>representational grounding</strong>, IL provides <strong>human priors</strong>, and&nbsp; RL drives <strong>policy optimization</strong>,<br> jointly forming the core mechanism of <em>learning from perception to action</em>.<br><br></p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Paradigm</strong></p></td><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>Objective</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supervision Source</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Data Type</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Challenge</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Embodied Role</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">SSL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Learn features (understand structure)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Data itself (self-labeling)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Massive unlabeled data (e.g., videos)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Lacks action understanding</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provides perceptual foundation</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">IL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Learn by imitation (replicate expert behavior)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human demonstrations</p></td><td colspan="1" rowspan="1"><p style="text-align: center">State–action pairs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Quality dependent on experts</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provides safe starting policies</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">RL</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Learn optimal strategy (maximize reward)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Environment feedback</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Agent experience</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">Low data efficiency; high trial cost</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Finds superhuman optimal strategies</p></td></tr></tbody></table><p><br></p><h3 id="h-sim2real-bridging-simulation-and-reality" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Sim2Real: Bridging Simulation and Reality</strong></h3><p><strong>Simulation-to-Reality (Sim2Real)</strong> allows robots to train in virtual environments before deployment in the real world. Platforms like <strong>NVIDIA Isaac Sim</strong>, <strong>Omniverse</strong>, and <strong>DeepMind MuJoCo</strong> produce vast amounts of synthetic data—reducing cost and wear on hardware.</p><p>The goal is to minimize the <strong>“reality gap”</strong> through:</p><ul><li><p><strong>Domain Randomization:</strong> Randomly altering lighting, friction, and noise to improve generalization.</p></li><li><p><strong>Physical Calibration:</strong> Using real sensor data to adjust simulation physics for realism.</p></li><li><p><strong>Adaptive Fine-tuning:</strong> Rapid on-site retraining for stability in real environments.</p></li></ul><p>Sim2Real forms the <strong>central bridge</strong> for embodied AI deployment. Despite strong progress, challenges remain around <strong>reality gap</strong>, <strong>compute costs</strong>, and <strong>real-world safety</strong>. 
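</p><p>A minimal sketch of the domain-randomization idea above: resample physics and perception parameters at the start of every episode so a policy cannot overfit to a single simulator configuration. The parameter names, ranges, and the commented environment hook below are illustrative assumptions, not tied to any real platform's API:</p>

```python
import random
from dataclasses import dataclass


@dataclass
class SimConfig:
    """One randomized simulator configuration (illustrative fields)."""
    friction: float          # ground friction coefficient
    light_intensity: float   # relative scene brightness
    sensor_noise_std: float  # additive noise on observations


def sample_config(rng: random.Random) -> SimConfig:
    """Draw fresh physics and perception parameters; ranges are illustrative."""
    return SimConfig(
        friction=rng.uniform(0.4, 1.2),
        light_intensity=rng.uniform(0.3, 1.0),
        sensor_noise_std=rng.uniform(0.0, 0.05),
    )


def train(num_episodes: int, seed: int = 0) -> list[SimConfig]:
    """Run episodes, each under a freshly randomized configuration,
    so the learned policy must generalize across the whole range."""
    rng = random.Random(seed)
    configs = []
    for _ in range(num_episodes):
        cfg = sample_config(rng)
        configs.append(cfg)
        # run_episode(policy, cfg)  # environment rollout would go here
    return configs
```

<p>Production platforms expose far richer randomization knobs (textures, camera poses, full dynamics), but the episode-level resampling pattern is the same.</p><p>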
Nevertheless, <strong>Simulation-as-a-Service (SimaaS)</strong> is emerging as a lightweight yet strategic infrastructure for the Embodied AI era—via <strong>PaaS (Platform Subscription)</strong>, <strong>DaaS (Data Generation)</strong>, and <strong>VaaS (Validation)</strong> business models.</p><br><h3 id="h-cognitive-modeling-world-model-the-robots-inner-world" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Cognitive Modeling: World Model — The Robot’s “Inner World”</strong></h3><p>A <strong>World Model</strong> serves as the <em>inner brain</em> of robots, allowing them to simulate environments and outcomes internally—predicting and reasoning before acting. By learning environmental dynamics, it enables <strong>predictive and proactive behavior</strong>. <strong>Representative projects:</strong> DeepMind Dreamer, Google Gemini + RT-2, Tesla FSD V12, NVIDIA WorldSim.</p><p>Core techniques include:</p><ul><li><p><strong>Latent Dynamics Modeling:</strong> Compressing high-dimensional observations into latent states.</p></li><li><p><strong>Imagination-based Planning:</strong> Virtual trial-and-error for path prediction.</p></li><li><p><strong>Model-based Reinforcement Learning:</strong> Replacing real-world trials with internal simulations.</p></li></ul><p>World Models mark the transition from <strong>reactive to predictive intelligence</strong>, though challenges persist in <strong>model complexity</strong>, <strong>long-horizon stability</strong>, and <strong>standardization</strong>.</p><p><br></p><h3 id="h-swarm-intelligence-and-reasoning-from-individual-to-collective-cognition" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Swarm Intelligence &amp; Reasoning: From Individual to Collective Cognition</strong></h3><p><strong>Multi-Agent Collaboration</strong> and <strong>Memory-Reasoning Systems</strong> represent the next frontier—extending intelligence from individual agents to cooperative and cognitive 
collectives.</p><ul><li><p><strong>Multi-Agent Systems (MAS):</strong> Enable distributed cooperation among multiple robots via cooperative RL frameworks (e.g., OpenAI <em>Hide-and-Seek</em>, DeepMind <em>QMIX</em> / <em>MADDPG</em>). These have proven effective in logistics, inspection, and coordinated swarm control.</p></li><li><p><strong>Memory &amp; Reasoning:</strong> Equip agents with long-term memory and causal understanding—crucial for cross-task generalization and self-planning. Research examples include <em>DeepMind Gato</em>, <em>Dreamer</em>, and <em>Voyager</em>, enabling continuous learning and “remembering the past, simulating the future.”</p></li></ul><p>Together, these components lay the foundation for <strong>robots capable of collective learning, memory, and self-evolution</strong>.<br></p><h3 id="h-global-embodied-ai-landscape-collaboration-and-competition" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Global Embodied AI Landscape: Collaboration and Competition</strong></h3><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="flag_us" class="emoji" data-type="emoji">🇺🇸</span><strong> United States</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="china" class="emoji" data-type="emoji">🇨🇳</span><strong> China</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="european_union" class="emoji" data-type="emoji">🇪🇺</span><strong> Europe</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Japan &amp; Korea</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Algorithm Layer (Models)</strong></p></td><td colspan="1" rowspan="1"><p>Google DeepMind (RT-X, Gemini)</p><p>OpenAI (GPT-4o “Omni”, robotics 
integration)</p><p>Tesla (World Model, end-to-end autonomy)</p><p>Meta (Habitat, V-JEPA)</p><br></td><td colspan="1" rowspan="1"><p>Shanghai AI Lab (InternLM / OpenX-Embodiment)</p><p>Baidu (Apollo, Wenxin)</p><p>Tsinghua University / Zhipu AI (CogVLM)</p><br></td><td colspan="1" rowspan="1"><p>ETH Zurich RSL (Switzerland)</p><p>DeepMind EU teams (Paris / Zurich)</p><p>PAL Robotics (Spain)</p><p>Neura Robotics (Germany, cognitive robotics)</p></td><td colspan="1" rowspan="1"><p>Preferred Networks (Japan, PFN)</p><p>NAVER Labs (Korea)</p><p>University of Tokyo AI Labs (Japan)</p><br></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Simulation &amp; Training Layer (Sim2Real)</strong></p></td><td colspan="1" rowspan="1"><p><strong>NVIDIA (Isaac Sim / Omniverse) <em>(industry dominant)</em></strong></p><p>DeepMind MuJoCo (physics engine)</p><p>Meta (Habitat)</p><br></td><td colspan="1" rowspan="1"><p>Huawei (Cyberverse / Pangu Robotics)</p><p>Unity China</p><p>Tencent (Robotics X)</p><p>Agibot / Zhiyuan Robotics (Agi-Sim)</p></td><td colspan="1" rowspan="1"><p>ETH Zurich RSL (Switzerland) <em>(global Sim2Real academic hotspot)<br><br></em></p><p>Dassault Systèmes (France)</p></td><td colspan="1" rowspan="1"><p>Sony AI <em>(mainly internal robotics R&amp;D)</em></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>System Layer (Humanoid Robots)</strong></p></td><td colspan="1" rowspan="1"><p>Tesla Optimus</p><p>Figure AI (Figure 01)</p><p>Sanctuary AI (Phoenix)</p><p>Agility Robotics (Digit)</p><p>Apptronik (Apollo)</p></td><td colspan="1" rowspan="1"><p>Agibot (Zhiyuan Robotics)</p><p>Unitree Robotics (H1 / G1)</p><p>Fourier Intelligence (GR-1)</p><p>UBTECH (Walker)</p></td><td colspan="1" rowspan="1"><p>1X Robotics (Norway / US)<br><br></p><p>Neura Robotics (Germany – 4NE-1)</p></td><td colspan="1" rowspan="1"><p>NAVER Labs (Ambidex – Korea)</p><p><br></p><p><strong>Note:</strong> Japan &amp; Korea lag behind the US/China in humanoid commercial deployment; their 
traditional strengths remain in industrial robots (Fanuc) and core components (Harmonic Drive).</p></td></tr></tbody></table><br><p>The global robotics industry is entering an era of <strong>cooperative competition</strong>.</p><ul><li><p><strong>China</strong> leads in supply-chain efficiency, manufacturing, and vertical integration, with companies like Unitree and UBTECH already mass-producing humanoids. However, its algorithmic and simulation capabilities still trail the U.S. by several years.</p></li><li><p><strong>The U.S.</strong> dominates frontier AI models and software (DeepMind, OpenAI, NVIDIA), yet this advantage does not fully extend to robotics hardware—where Chinese players often iterate faster and demonstrate stronger real-world performance. This hardware gap partly explains U.S. industrial-reshoring efforts under the CHIPS Act and IRA.</p></li><li><p><strong>Japan</strong> remains the global leader in precision components and motion-control systems, though its approach to AI-native robotics remains conservative.</p></li><li><p><strong>Korea</strong> distinguishes itself through advanced consumer-robotics adoption, driven by LG, NAVER Labs, and a mature service-robot ecosystem.</p></li><li><p><strong>Europe</strong> maintains strong engineering culture, safety standards, and research depth; while much manufacturing has moved abroad, Europe continues to excel in collaboration frameworks and robotics standardization.</p></li></ul><p>Together, these regional strengths are shaping the <strong>long-term equilibrium of the global embodied intelligence industry</strong>.<br></p><h2 id="h-iii-robots-ai-web3-narrative-vision-vs-practical-pathways" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Robots × AI × Web3: Narrative Vision vs. Practical Pathways</strong></h2><p>In 2025, a new narrative emerged in Web3 around the fusion of robotics and AI. 
While Web3 is often framed as the base protocol for a decentralized machine economy, its real integration value and feasibility vary markedly by layer:</p><ul><li><p><strong>Hardware manufacturing &amp; service layer:</strong> Capital-intensive with weak data flywheels; Web3 can currently play only a supporting role in edge cases such as supply-chain finance or equipment leasing.</p></li><li><p><strong>Simulation &amp; software ecosystem:</strong> Higher compatibility; simulation data and training jobs can be put on-chain for attribution, and agents/skill modules can be assetized via NFTs or Agent Tokens.</p></li><li><p><strong>Platform layer:</strong> Decentralized labor and collaboration networks show the greatest potential—Web3 can unite identity, incentives, and governance to gradually build a credible “machine labor market,” laying the institutional groundwork for a future machine economy.</p></li></ul><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Model</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Capital Intensity</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Web3 Integration Potential</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Projects</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L1 Hardware Manufacturing</strong></p></td><td colspan="1" rowspan="1"><p>Full robot production, key components, maintenance services</p></td><td colspan="1" rowspan="1"><p><span data-name="red_circle" class="emoji" data-type="emoji">🔴</span> Very High</p></td><td colspan="1" rowspan="1"><p><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> Low — asset-heavy, weak data loops; mainly suitable for supply-chain 
finance</p></td><td colspan="1" rowspan="1"><p>Boston Dynamics, Tesla Optimus, Figure AI, Unitree, UBTECH</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L2 Service Deployment</strong></p></td><td colspan="1" rowspan="1"><p>RaaS leasing, system integration, project fees, subscription models</p></td><td colspan="1" rowspan="1"><p><span data-name="orange_circle" class="emoji" data-type="emoji">🟠</span> High</p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Medium — on-chain task/usage metering, automated settlement</p></td><td colspan="1" rowspan="1"><p>Agility Robotics, ABB Robotics, Geek+</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L3 Simulation &amp; Data</strong></p></td><td colspan="1" rowspan="1"><p>Simulation-as-a-Service, data licensing, cloud subscriptions</p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Medium–Low</p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> Medium — simulation data + training jobs can be attributed/tokenized on-chain</p></td><td colspan="1" rowspan="1"><p>NVIDIA Isaac Sim / Omniverse, MuJoCo</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L4 Runtime &amp; Control Software</strong></p></td><td colspan="1" rowspan="1"><p>AI agent runtimes, SDKs, control frameworks, developer tools</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> Low</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> High — skills/policies can be assetized (Skill NFTs / Agent Tokens)</p></td><td colspan="1" rowspan="1"><p>Isaac ROS, ROS2 Nav2 / MoveIt, OpenMind</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L5 Real-Time Orchestration Layer</strong></p></td><td 
colspan="1" rowspan="1"><p>Low-latency sensor exchange, multi-robot real-time state sync, edge compute sharing, encrypted access control</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> Low</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> High — real-time collaboration needs on-chain identity, signatures, permissions</p></td><td colspan="1" rowspan="1"><p>Geodnet, Auki</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>L6 Robot Economy &amp; Platform</strong></p></td><td colspan="1" rowspan="1"><p>Robot identity, payments, service/data marketplaces; token incentives driving network effects</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> Lowest</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> Highest — ideal for on-chain identity, settlement, governance</p></td><td colspan="1" rowspan="1"><p>BitRobot, Peaq, PrismaX, IoTeX</p></td></tr></tbody></table><p><br></p><p><strong>Long-term vision.</strong> The Orchestration and Platform layer is the most valuable direction for integrating Web3 with robotics and AI. As robots gain perception, language, and learning capabilities, they are evolving into intelligent actors that can autonomously decide, collaborate, and create economic value. 
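</p><p>The on-chain task metering and settlement pattern from the table above can be made concrete with a toy escrow model. Everything here is an illustrative assumption: the account fields, the staking rule, and the boolean verification flag stand in for the real identity, oracle, and slashing machinery a production network would use.</p>

```python
from dataclasses import dataclass


@dataclass
class MachineAccount:
    """Toy account bound to a machine identity (not a real DID scheme)."""
    did: str
    balance: float = 0.0
    stake: float = 0.0


@dataclass
class TaskEscrow:
    """Minimal escrow: payment is locked at creation and settled exactly once,
    on a verification signal a real system would source from an oracle."""
    price: float
    requester: MachineAccount
    worker: MachineAccount
    settled: bool = False

    def __post_init__(self) -> None:
        assert self.requester.balance >= self.price, "requester must fund escrow"
        self.requester.balance -= self.price  # lock the payment

    def settle(self, work_verified: bool) -> None:
        assert not self.settled, "escrow can only settle once"
        if work_verified:
            self.worker.balance += self.price        # pay the machine
        else:
            self.requester.balance += self.price     # refund the requester
            slashed = min(self.worker.stake, self.price)
            self.worker.stake -= slashed             # slash worker collateral
        self.settled = True
```

<p>The pattern is deliberately simple: payment locked up front, released on verified work, refunded (with worker collateral slashed) on failure; real deployments replace the boolean flag with oracle- or proof-based verification such as PoPW, TEE attestation, or ZKPs.</p><p>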
For these “intelligent workers” to truly participate in the economy, four core hurdles must be cleared: <strong>identity, trust, incentives, and governance</strong>.</p><ul><li><p><strong>Identity:</strong> Machines require attributable, traceable digital identities. With <strong>Machine DIDs</strong>, each robot, sensor, or UAV can mint a unique verifiable on-chain “ID card,” binding ownership, activity logs, and permission scopes to enable secure interaction and accountability.</p></li><li><p><strong>Trust:</strong> “Machine labor” must be verifiable, measurable, and priceable. Using <strong>smart contracts</strong>, <strong>oracles</strong>, and <strong>audits</strong>—combined with <strong>Proof of Physical Work (PoPW)</strong>, <strong>Trusted Execution Environments (TEE)</strong>, and <strong>Zero-Knowledge Proofs (ZKP)</strong>—task execution can be proven authentic and traceable, giving machine behavior accounting value.</p></li><li><p><strong>Incentives:</strong> Web3 enables automated settlement and value flow among machines via <strong>token incentives</strong>, <strong>account abstraction</strong>, and <strong>state channels</strong>. Robots can use micropayments for compute rental and data sharing, with staking/slashing to secure performance; smart contracts and oracles can coordinate a decentralized <strong>machine coordination marketplace</strong> with minimal human dispatch.</p></li><li><p><strong>Governance:</strong> As machines gain long-term autonomy, Web3 provides transparent, programmable governance: <strong>DAOs</strong> co-decide system parameters; <strong>multisigs</strong> and reputation maintain safety and order. 
Over time, this pushes toward <strong>algorithmic governance</strong>—humans set goals and bounds, while contracts mediate machine-to-machine incentives and checks.</p></li></ul><p><strong>The ultimate vision of Web3 × Robotics</strong>: a <strong>real-world evaluation network</strong>—distributed robot fleets acting as “physical-world inference engines” to continuously test and benchmark model performance across diverse, complex environments; and a <strong>robotic workforce</strong>—robots executing verifiable physical tasks worldwide, settling earnings on-chain, and reinvesting value into compute or hardware upgrades.</p><p><strong>Pragmatic path today.</strong> The fusion of embodied intelligence and Web3 remains early; decentralized machine-intelligence economies are largely narrative- and community-driven. Viable near-term intersections concentrate in three areas:</p><ol><li><p><strong>Data crowdsourcing &amp; attribution</strong> — on-chain incentives and traceability encourage contributors to upload real-world data.</p></li><li><p><strong>Global long-tail participation</strong> — cross-border micropayments and micro-incentives reduce the cost of data collection and distribution.</p></li><li><p><strong>Financialization &amp; collaborative innovation</strong> — DAO structures can enable robot assetization, revenue tokenization, and machine-to-machine settlement.<br><br></p></li></ol><p>Overall, the integration of robotics and Web3 will progress in phases: <strong>in the short term</strong>, the focus will be on data collection and incentive mechanisms; <strong>in the mid term</strong>, breakthroughs are expected in stablecoin-based payments, long-tail data aggregation, and the assetization and settlement of RaaS models; and <strong>in the long term</strong>, as humanoids scale, Web3 could evolve into the institutional foundation for machine ownership, revenue distribution, and governance, enabling a truly decentralized machine economy.</p><br><h2 
id="h-iv-web3-robotics-landscape-and-curated-cases" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. Web3 Robotics Landscape &amp; Curated Cases</strong></h2><p>Based on three criteria—<strong>verifiable progress, technical openness, and industrial relevance</strong>—this section maps representative projects at the intersection of <strong>Web3 × Robotics</strong>, organized into five layers: <strong>Model &amp; Intelligence</strong>, <strong>Machine Economy</strong>, <strong>Data Collection</strong>, <strong>Perception &amp; Simulation Infrastructure</strong>, and <strong>Robot Asset &amp; Yield (RobotFi / RWAiFi)</strong>. To remain objective, we have removed obvious hype-driven or insufficiently documented projects; please point out any omissions.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Sub-category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Projects</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary Function</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Model &amp; Intelligence</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OS &amp; Intelligent Planning</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OpenMind</strong>, <strong>CodecFlow</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenMind: decentralized Robot OS &amp; multi-robot coordination; CodecFlow: VLA runtime &amp; general execution engine</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Machine Economy Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Machine Identity &amp; 
Payments/Settlement</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>peaq</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Runtime-native machine identities, wallets, and task-settlement infrastructure; robotics-specific SDKs</p></td></tr><tr><td colspan="1" rowspan="1"><br></td><td colspan="1" rowspan="1"><p style="text-align: center">Robotic Task Incentives &amp; Economic Coordination</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decentralized robotic collaboration &amp; incentives; organizes task execution, verification, and rewards via Subnets</p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center"><strong>Data Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Teleoperation (remote control)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PrismaX, BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Capture teleop and human-feedback data for model training</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">POV &amp; Motion Data</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mecka</strong>, <strong>BitRobot Network</strong>, <strong>Sapien</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">POV/gamified/wearable human-motion datasets to build multimodal embodied datasets</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Simulation / Synthetic</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Scale human–robot interaction data in simulation beyond scripted environments</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>Middleware &amp; 
Simulation</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Localization &amp; Comms Middleware</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RoboStack</strong>, <strong>GEODNET</strong>, <strong>Auki</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">RoboStack: RCP standard + cloud sim + workflow orchestration; GEODNET: cm-level RTK; Auki: shared 3D spatial mapping</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Distributed Simulation &amp; Learning Systems</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gradient</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Mirage provides distributed simulation, dynamic interactive environments, and large-scale parallel training for embodied AI</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>RobotFi / RWAiFi</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robotic Asset Tokenization</p><br></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>XMAQUINA</strong></p><br></td><td colspan="1" rowspan="1"><p style="text-align: center">A decentralized DAO that provides high-liquidity exposure to the growth of humanoid robotics companies.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">AI-Funded Asset Financialization</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GAIB</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">On-chain financialization of AI GPUs and robotics cash flows</p></td></tr></tbody></table><p><br></p><h3 id="h-model-and-intelligence-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Model &amp; Intelligence Layer</strong></h3><h4 id="h-openmind-building-android-for-robots-httpsopenmindorg" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>OpenMind — 
<em>Building Android for Robots</em> (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openmind.org/"><strong><u>https://openmind.org/</u></strong></a><strong>)</strong></h4><p><strong>OpenMind</strong> is an open-source <strong>Robot OS</strong> for <strong>Embodied AI &amp; control</strong>, aiming to build the first decentralized runtime and development platform for robots. Two core components:</p><ul><li><p><strong>OM1:</strong> A modular, open-source AI agent runtime layer built on top of ROS2, orchestrating perception, planning, and action pipelines for both digital and physical robots.</p></li><li><p><strong>FABRIC:</strong> A distributed coordination layer connecting cloud compute, models, and real robots so developers can control/train robots in a unified environment.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a01c971e8d1cb811c64ede05db6a69d5df880303118693426d846f16013239f3.png" 
nextheight="885" nextwidth="1456" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p>OpenMind acts as the <strong>intelligent middleware</strong> between LLMs and the robotic world—turning <strong>language intelligence into embodied intelligence</strong> and providing a scaffold from <strong>understanding (Language → Action)</strong> to <strong>alignment (Blockchain → Rules)</strong>. Its multi-layered system forms a full collaboration loop: humans provide feedback/labels via the <strong>OpenMind App</strong> (RLHF data); the <strong>Fabric Network</strong> handles identity, task allocation, and settlement; <strong>OM1 robots</strong> execute tasks and conform to an on-chain “robot constitution” for behavior auditing and payments—completing a decentralized cycle of <strong>human feedback → task collaboration → on-chain settlement</strong>.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>System Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Components</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary Role</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Blockchain Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum / L2s</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robot identity registry; smart contracts (“robot constitution”); stablecoin settlement (USDC/DAI/sUSDe); Fabric token &amp; reputation logs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Identity, audit, task settlement, incentive distribution</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Fabric Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">FABRIC Protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Identity &amp; task market; P2P comms (Zenoh/DDS); automated payments &amp; compliance; skill &amp; reputation registries</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Task distribution &amp; collaboration, low-latency comms, on-chain settlement &amp; governance</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OM1 Runtime</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OM1 (Python + ROS2)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multimodal sensing; Natural Language Data Bus; LLM decision core; hardware abstraction (Unitree SDK)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Turn robots into language-native agents; cross-platform compatibility; on-chain auditability</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Application Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenMind App (iOS/Android/Web)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Map crowdsourcing &amp; annotation; teleoperation &amp; task posting; robot ID management &amp; rewards</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human participation, RobotFi data &amp; incentive portal</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Ecosystem</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OEMs / Labs / Devs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Partners: Unitree, UBTECH, Stanford, etc.; standardized SDKs &amp; enterprise solutions</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Hardware access standards; industry applications &amp; 
co-building</p></td></tr></tbody></table><br><p><strong>Progress &amp; Assessment.</strong> OpenMind is in an <strong>early “technically working, commercially unproven”</strong> phase. <strong>OM1 Runtime</strong> is open-sourced on GitHub with multimodal inputs and an NL data bus for language-to-action parsing—original but experimental. <strong>Fabric</strong> and on-chain settlement are interface-level designs so far. Ecosystem ties include Unitree, UBTECH, TurtleBot, and universities (Stanford, Oxford, Seoul Robotics) for education/research; no industrial rollouts yet. The App is in beta; incentives/tasks are early.</p><p><strong>Business model:</strong> OM1 (open-source) + Fabric (settlement) + Skill Marketplace (incentives). No revenue yet; relies on ~$20M early financing (Pantera, Coinbase Ventures, DCG). Technically ambitious, with a long path to market and heavy hardware dependence; if Fabric lands, it could become the “<strong>Android of Embodied AI</strong>.”</p><br><h4 id="h-codecflow-the-execution-engine-for-robotics-httpscodecflowai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>CodecFlow — <em>The Execution Engine for Robotics</em> (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://codecflow.ai"><strong><u>https://codecflow.ai</u></strong></a><strong>)</strong></h4><p><strong>CodecFlow</strong> is a <strong>decentralized Execution Layer for Robotics</strong> on <strong>Solana</strong>, providing on-demand runtime environments for AI agents and robotic systems—giving each agent an “<strong>Instant Machine</strong>.” Three modules:</p><ul><li><p><strong>Fabric:</strong> Cross-cloud and DePIN compute aggregator (Weaver + Shuttle + Gauge) that spins up secure VMs, GPU containers, or robot control nodes in seconds.</p></li><li><p><strong>optr SDK:</strong> A Python framework that abstracts hardware connectors, training algorithms, and blockchain integration, enabling developers to create “Operators” that control desktops, simulators, or real robots.</p></li><li><p><strong>Token Incentives:</strong> On-chain incentives for open-source contributors, buybacks funded by revenue, and a future marketplace economy.</p></li></ul><p><strong>Goal:</strong> Unify the fragmented robotics ecosystem with a single execution layer that gives builders hardware abstraction, fine-tuning tools, cloud simulation infrastructure, and onchain economics so they can launch and scale revenue-generating operators for robots and desktops.</p><p><strong>Progress &amp; Assessment.</strong> Early versions of Fabric (Go) and the <strong>optr SDK</strong> (Python) are live; the web and CLI can launch isolated compute instances, and integrations with NRN, Chainlink, and peaq are in place. The <strong>Operator Marketplace</strong> targets late 2025, serving AI devs, robotics labs, and automation operators.</p><br><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Analogy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Crypto Hook</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OpenMind</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decentralized Robot OS</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The “system brain”</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Connect LLMs to robots; multi-robot orchestration</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Fabric node coordination &amp; task incentives</p></td></tr><tr><td colspan="1" rowspan="1"><p 
style="text-align: center"><strong>CodecFlow</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Runtime and development kit for robotics&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">The “action engine”</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Execute multimodal tasks bridging AI agents and embodiment</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Operator marketplace &amp; incentive design</p></td></tr></tbody></table><p><br></p><h3 id="h-machine-economy-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Machine Economy Layer</strong></h3><h4 id="h-bitrobot-the-worlds-open-robotics-lab-httpsbitrobotai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>BitRobot — <em>The World’s Open Robotics Lab</em> (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://bitrobot.ai"><strong><u>https://bitrobot.ai</u></strong></a><strong>)</strong></h4><p>A decentralized <strong>research &amp; collaboration network</strong> for Embodied AI and robotics, co-initiated by FrodoBots Labs and Protocol Labs. 
Vision: an open architecture of <strong>Subnets + Incentives + Verifiable Robotic Work (VRW)</strong>.</p><ul><li><p><strong>VRW:</strong> Define &amp; verify the real contribution of each robotic task.</p></li><li><p><strong>ENT (Embodied Node Token):</strong> On-chain robot identity &amp; economic accountability.</p></li><li><p><strong>Subnets:</strong> Organize cross-region collaboration across research, compute, devices, and operators.</p></li><li><p><strong>Senate + Gandalf AI:</strong> Human-AI co-governance for incentives and research allocation.<br></p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Components</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Blockchain</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Solana / BitRobot Token</p></td><td colspan="1" rowspan="1"><p style="text-align: center">VRW verification; subnet registry &amp; governance; incentive settlement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable tasks &amp; incentive distribution</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Coordination</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Subnet Framework</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Task specs; resource scheduling; data/model sharing</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Open research &amp; execution network</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Identity</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">ENT</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robot registration, staking, credit tracking</p></td><td colspan="1" rowspan="1"><p style="text-align: center">On-chain robot identity &amp; digital twin</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Economy</p></td><td colspan="1" rowspan="1"><p style="text-align: center">MER Loop</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Measure–Evaluate–Reward; Senate + Gandalf AI</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Turn research outcomes into quantifiable incentives</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Governance</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Senate / Gandalf AI / Foundation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human review; AI proposals; foundation support</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human–AI co-governed resource allocation</p></td></tr></tbody></table><br><p>Since its 2025 whitepaper, BitRobot has run multiple subnets (e.g., <strong>SN/01 ET Fugi</strong>, <strong>SN/05 SeeSaw by Virtuals</strong>), enabling decentralized teleoperation and real-world data capture, and launched a <strong>$5M Grand Challenges</strong> fund to spur global research on model development.</p><h4 id="h-peaq-the-machine-economy-computer-httpswwwpeaqxyz" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>peaq — <em>The Machine Economy Computer </em></strong>(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.peaq.xyz/">https://www.peaq.xyz/</a>)</h4><p><strong>peaq</strong> is a Layer-1 chain built for the Machine Economy, providing machine identities, wallets, access control, and time-sync (Universal Machine Time) for millions of robots and devices. 
Its Robotics SDK lets builders make robots “Machine Economy–ready” with only a few lines of code, enabling vendor-neutral interoperability and peer-to-peer interaction.</p><p>The network already hosts the world’s first tokenized robotic farm and 60+ real-world machine applications. peaq’s tokenization framework allows robotics companies to raise liquidity for capital-intensive hardware and broaden participation beyond traditional B2B/B2C buyers. Its protocol-level incentive pools, funded by network fees, subsidize machine onboarding and support builders—creating a growth flywheel for robotics projects.<br></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Component</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Value</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>peaq Blockchain</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Machine IDs, payments, access control, data verification.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Base OS for machine identity, interoperability, and onchain actions.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Economic Model</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Incentive pools funded by network + Machine DeFi fees.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Subsidizes machine onboarding; creates a self-reinforcing growth flywheel.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Robotics SDK</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Adds Universal Machine Functions with minimal code.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Makes robots 
Machine-Economy-ready; enables app connectivity and decentralized storage.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>x402 Integration</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Machine-native payment protocol support.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robots/agents pay APIs &amp; services instantly.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Universal Machine Time</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Nanosecond-precision onchain time sync.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Precise coordination, timestamping, and auditing for global fleets.</p></td></tr></tbody></table><p><br></p><h3 id="h-data-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Data Layer</strong></h3><p>Purpose: unlock scarce, costly real-world data for embodied training via <strong>teleoperation (PrismaX, BitRobot Network)</strong>, <strong>first-person &amp; motion capture (Mecka, BitRobot Network, Sapien, Vader, NRN)</strong>, and <strong>simulation/synthetic pipelines (BitRobot Network)</strong> to build scalable, generalizable training corpora.</p><p><strong>Note:</strong> Web3 doesn’t <strong>produce</strong> data better than Web2 giants; its value lies in <strong>redistributing</strong> data economics. With <strong>stablecoin rails + crowdsourcing</strong>, permissionless incentives and on-chain attribution enable low-cost micro-settlement, provenance, and automatic revenue sharing. 
Open crowdsourcing still faces <strong>quality control</strong> and <strong>buyer demand</strong> gaps.</p><h4 id="h-prismax-httpsgatewayprismaxai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>PrismaX (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gateway.prismax.ai"><strong><u>https://gateway.prismax.ai</u></strong></a><strong>)</strong></h4><p>A decentralized <strong>teleoperation &amp; data economy</strong> for Embodied AI—aiming to build a <strong>global robot labor market</strong> where human operators, robots, and AI models co-evolve via on-chain incentives.</p><ul><li><p><strong>Teleoperation Stack:</strong> Browser/VR UI + SDK connects global arms/service robots for real-time control &amp; data capture.</p></li><li><p><strong>Eval Engine:</strong> CLIP + DINOv2 + optical-flow semantic scoring to grade each trajectory and settle on-chain.</p></li></ul><p>Completes the loop <strong>teleop → data capture → model training → on-chain settlement</strong>, turning <strong>human labor into data assets</strong>.</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Function</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Blockchain</p></td><td colspan="1" rowspan="1"><p style="text-align: center">PIX L2</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Staking, verification, settlement for trustworthy incentives</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Control</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Browser/VR Stack</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Remote robot control; 
capture action &amp; visual data</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Data</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Eval Engine / Hub</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Automatic quality scoring &amp; on-chain attribution</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Application</p></td><td colspan="1" rowspan="1"><p style="text-align: center">PrismaX Gateway</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Task posting, job taking, and rewards</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Model</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robots + AI Models</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robots generate data; models learn continuously</p></td></tr></tbody></table><p><strong>Progress &amp; Assessment.</strong> Testnet live since Aug 2025 (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://gateway.prismax.ai">gateway.prismax.ai</a>). Users can teleop arms for grasping tasks and generate training data. Eval Engine running internally. Clear positioning and high technical completeness; strong candidate for a <strong>decentralized labor &amp; data protocol</strong> for the embodied era, but near-term scale remains a challenge.</p><h4 id="h-bitrobot-network-httpsbitrobotai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>BitRobot Network </strong>&nbsp;(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://bitrobot.ai/"><u>https://bitrobot.ai/</u></a>)</h4><p><strong>BitRobot Network</strong> subnets power data collection across video, teleoperation, and simulation. With <strong>SN/01 ET Fugi</strong>, users remotely control robots to complete tasks, collecting navigation &amp; perception data in a “real-world Pokémon Go” game. 
The game led to the creation of <strong>FrodoBots-2K</strong>, one of the largest open human-robot navigation datasets, used by UC Berkeley RAIL and Google DeepMind. <strong>SN/05 SeeSaw</strong> crowdsources egocentric video data via iPhone from real-world environments at scale. Other announced subnets, RoboCap and Rayvo, focus on egocentric video data collection via low-cost embodiments.</p><h4 id="h-mecka-httpswwwmeckaai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Mecka (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.mecka.ai"><strong><u>https://www.mecka.ai</u></strong></a><strong>)</strong></h4><p>Mecka is a robotics data company that crowdsources egocentric video, motion, and task demonstrations—via gamified mobile capture and custom hardware rigs—to build large-scale multimodal datasets for embodied AI training.</p><h4 id="h-sapien-httpswwwsapienio" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Sapien (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.sapien.io/"><strong><u>https://www.sapien.io/</u></strong></a><strong>)</strong></h4><p>A crowdsourcing platform for <strong>human motion data</strong> to power robot intelligence. Via wearables and mobile apps, Sapien gathers human pose and interaction data to train embodied models—building a global motion data network.</p><h4 id="h-vader-httpswwwvaderaiai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Vader</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vaderai.ai"><u>https://www.vaderai.ai</u></a>)</h4><p>Vader crowdsources egocentric video and task demonstrations through <em>EgoPlay</em>, a real-world MMO where users record daily activities from a first-person view and earn $VADER. 
Its ORN pipeline converts raw POV footage into privacy-safe, structured datasets enriched with action labels and semantic narratives—optimized for humanoid policy training.</p><h4 id="h-nrn-agents-httpswwwnrnagentsai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NRN Agents</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nrnagents.ai/"><u>https://www.nrnagents.ai/</u></a>)</h4><p>A gamified embodied-RL data platform that crowdsources human demonstrations through browser-based robot control and simulated competitions. NRN generates long-tail behavioral trajectories for imitation learning and continual RL, using sport-like tasks as scalable data primitives for sim-to-real policy training.</p><p><strong>Embodied Data Collection — Project Comparison</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary Data Modality</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Distinctive Features</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PrismaX</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human teleoperation (real robots)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High-fidelity, expert-like demonstrations; limited scale</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Teleoperation + egocentric video + simulation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Diverse in-the-wild environments; cross-embodiment data</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mecka / Sapien / Vader/ 
NRN</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">First-person video + body motion (wearables / gamified tasks)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low-cost crowdsourcing; large scale but noisier</p></td></tr></tbody></table><p><br></p><h3 id="h-middleware-and-simulation" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Middleware &amp; Simulation</strong></h3><p>The Middleware &amp; Simulation layer forms the backbone between physical sensing and intelligent decision-making, covering localization, communication, spatial mapping, and large-scale simulation. The field is still early: projects are exploring high-precision positioning, shared spatial computing, protocol standardization, and distributed simulation, but no unified standard or interoperable ecosystem has yet emerged.</p><h4 id="h-middleware-and-spatial-infrastructure" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Middleware &amp; Spatial Infrastructure</strong></h4><p>Core robotic capabilities—<strong>navigation, localization, connectivity, and spatial mapping</strong>—form the bridge between the physical world and intelligent decision-making. While broader DePIN projects (Silencio, WeatherXM, DIMO) now mention “robotics,” the projects below are the ones most directly relevant to embodied AI.</p><ul><li><p><strong>RoboStack — Cloud-Native Robot Operating Stack</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://robostack.io"><u>https://robostack.io</u>)<br></a> Cloud-native robot OS &amp; control stack integrating <strong>ROS2</strong>, <strong>DDS</strong>, and <strong>edge computing</strong>. 
Its <strong>RCP (Robot Control Protocol)</strong> aims to make robots callable/orchestrable like cloud services.</p></li><li><p><strong>GEODNET — Decentralized GNSS Network</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geodnet.com"><u>https://geodnet.com</u>)<br></a> A global decentralized satellite-positioning network offering <strong>cm-level RTK/GNSS</strong>. With distributed base stations and on-chain incentives, it supplies high-precision positioning for drones, autonomous driving, and robots—becoming the <strong>Geo-Infra Layer</strong> of the machine economy.</p></li><li><p><strong>Auki — Posemesh for Spatial Computing</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.auki.com"><u>https://www.auki.com</u>)<br></a> A decentralized <strong>Posemesh</strong> network that generates shared real-time 3D maps via crowdsourced sensors &amp; compute, enabling AR, robot navigation, and multi-device collaboration—key infra fusing <strong>AR × Robotics</strong>.</p></li><li><p><strong>Tashi Network — Real-Time Mesh Coordination for Robots</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://tashi.network"><u>https://tashi.network</u>)<br></a> A decentralized mesh network enabling sub-30ms consensus, low-latency sensor exchange, and multi-robot state synchronization. Its MeshNet SDK supports shared SLAM, swarm coordination, and robust map updates for real-time embodied AI.</p></li><li><p><strong>Staex — Decentralized Connectivity &amp; Telemetry</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.staex.io"><u>https://www.staex.io</u>)<br></a> A decentralized connectivity and device-management layer from Deutsche Telekom R&amp;D, providing secure communication, trusted telemetry, and device-to-cloud routing. 
Staex enables robot fleets to exchange data reliably and interoperate across operators.</p></li></ul><br><h4 id="h-distributed-simulation-and-learning-systems" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Distributed Simulation &amp; Learning Systems</strong></h4><p><strong>Gradient – Towards Open Intelligence</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/">https://gradient.network/</a>)</p><p>Gradient is an AI R&amp;D lab dedicated to building <strong>Open Intelligence</strong>, enabling distributed training, inference, verification, and simulation on a decentralized infrastructure. Its current technology stack includes <strong>Parallax</strong> (distributed inference), <strong>Echo</strong> (distributed reinforcement learning and multi-agent training), and <strong>Gradient Cloud</strong> (enterprise AI solutions).</p><p>In robotics, Gradient is developing <strong>Mirage</strong> — a distributed simulation and robotic learning platform designed to build generalizable world models and universal policies, supporting dynamic interactive environments and large-scale parallel training.
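</p><p>The "large-scale parallel training" pattern that Mirage targets can be sketched in a few lines: many simulation workers collect rollouts concurrently while a central learner aggregates their returns. This is an illustrative stand-in only: the toy environment, placeholder policy, and reward function below are hypothetical and are not part of Gradient's actual stack.</p>

```python
import random
from concurrent.futures import ThreadPoolExecutor

def rollout(seed: int, horizon: int = 50) -> float:
    """Run one episode in a toy environment and return its total reward.
    The 'environment' is a seeded random walk standing in for a real simulator."""
    rng = random.Random(seed)
    state, total_reward = 0.0, 0.0
    for _ in range(horizon):
        action = rng.uniform(-1.0, 1.0)   # placeholder policy: random actions
        state += action
        total_reward += -abs(state)       # reward shaping: stay near the origin
    return total_reward

def parallel_training_step(num_workers: int = 8) -> float:
    """Collect rollouts from many workers in parallel and aggregate the result,
    mirroring the 'many simulators, one learner' layout of distributed RL."""
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        returns = list(pool.map(rollout, range(num_workers)))
    return sum(returns) / len(returns)    # a learner would update its policy on this

print(f"average return across workers: {parallel_training_step():.2f}")
```

<p>In a production system each worker would drive a full physics simulator on separate hardware and the aggregated signal would feed a policy update, but the coordination skeleton is the same.</p><p>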
Mirage is expected to release its framework and model soon, and the team has been in discussions with <strong>NVIDIA</strong> regarding potential collaboration.</p><h3 id="h-robot-asset-and-yield-robotfi-rwaifi" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Robot Asset &amp; Yield (RobotFi / RWAiFi)</strong></h3><p>This layer converts robots from <strong>productive tools</strong> into <strong>financializable assets</strong> through <strong>tokenization, revenue distribution, and decentralized governance</strong>, forming the financial infrastructure of the machine economy.</p><h4 id="h-xmaquinadao-physical-ai-dao-httpswwwxmaquinaio" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>XmaquinaDAO — <em>Physical AI DAO</em> (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.xmaquina.io"><strong><u>https://www.xmaquina.io</u></strong></a><strong>)</strong></h4><p>XMAQUINA is a decentralized ecosystem providing global, liquid exposure to leading private humanoid-robotics and embodied-AI companies—bringing traditionally VC-only opportunities onchain. Its token <strong>DEUS</strong> functions as a liquid index and governance asset, coordinating treasury allocations and ecosystem growth. 
The DAO Portal and Machine Economy Launchpad enable the community to co-own and support emerging Physical AI ventures through tokenized machine assets and structured onchain participation.</p><h4 id="h-gaib-the-economic-layer-for-ai-infrastructure-httpsgaibai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GAIB — <em>The Economic Layer for AI Infrastructure</em> (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gaib.ai/"><strong><u>https://gaib.ai/</u></strong></a><strong>)</strong></h4><p><strong>GAIB</strong> provides a unified <strong>Economic Layer</strong> for real-world AI infrastructure such as <strong>GPUs and robots</strong>, connecting decentralized capital to productive AI infra assets and making yields <strong>verifiable, composable, and on-chain</strong>.</p><p>For robotics, GAIB does <strong>not</strong> “sell robot tokens.” Instead, it <strong>financializes</strong> robot equipment and operating contracts (RaaS, data collection, teleop) on-chain—converting <strong>real cash flows → composable on-chain yield assets</strong>. This spans <strong>equipment financing</strong> (leasing/pledge), <strong>operational cash flows</strong> (RaaS/data services), and <strong>data-rights revenue</strong> (licensing/contracts), making robot assets and their income <strong>measurable, priceable, and tradable</strong>.</p><p>GAIB uses <strong>AID / sAID</strong> as settlement/yield carriers, backed by structured risk controls (over-collateralization, reserves, insurance). 
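</p><p>These risk controls can be made concrete with a small accounting sketch. All figures, rates, and function names here are hypothetical illustrations, not GAIB's actual terms:</p>

```python
def collateralization_ratio(collateral_value: float, issued_value: float) -> float:
    """Ratio of pledged collateral (e.g. robot equipment plus reserves)
    to the value of issued yield assets."""
    return collateral_value / issued_value

def distributable_yield(cash_flow: float, reserve_rate: float = 0.10,
                        insurance_rate: float = 0.05) -> float:
    """Split one period's RaaS cash flow: retain reserves and insurance first,
    then pass the remainder through to yield-asset holders."""
    retained = cash_flow * (reserve_rate + insurance_rate)
    return cash_flow - retained

# A hypothetical position: $1.5M of equipment backing $1.0M of issued assets.
ratio = collateralization_ratio(1_500_000, 1_000_000)
assert ratio >= 1.2, "over-collateralization floor breached"
print(f"collateral ratio: {ratio:.2f}")                                   # 1.50
print(f"payout on a $50k cash flow: ${distributable_yield(50_000):,.0f}")  # $42,500
```

<p>The point of the sketch is the ordering: reserves and insurance are skimmed before any yield reaches holders, and issuance is capped by an over-collateralization floor.</p><p>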
Over time it integrates with DeFi derivatives and liquidity markets to close the loop from <strong>“robot assets” to “composable yield assets.”</strong> The goal: become the <strong>economic backbone of intelligence</strong> in the AI era.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/678d3c6c1888226563afd9ba24f86b1caec043bd5b5da1a3d6c04e2e2d57c37a.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAZCAIAAADfbbvGAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFSUlEQVR4nI2V228VRRzH5z/QPwAfTIjBhBiFRH3RqDwYDW+ENx8ML4ZEYjQYEwyokXIxGFqKiBdatUAvlFhboBY4pdYWK8ilLb2dntNzOJf23Hbn7M7szOzO7v7M7LSnLTTA5pPNby9z/f6+v0Gm5Zg4ohYsgQnXWMwzLUaZCwCUS4Nw02IPUFmGrwRxISG6giDUAQDoSAahDEKXu6ZpAEAs1r9+/fq2tnYA4MLVX2u4fuD6gSf9B0D5+ULzb62/tLT3XRlobmn74VRLR+cfl2N/dZzvrhg4BKim810drQCw68NdCKGXNm0CAEKdlf2u2fXiAJZNp2bik1MzqXRucip+89bo1Ew8mcqOT0w7jIcAC8XS2Pikw8Q/I/++s3VrV3cPALhSPknvagC9s0E0nSAIwxWx8KTwZLFsJNN5A9uUuUy4hDLuSe75TMgnAZkWM7BdwcTETgXTGjX1LOa5vm8SET06S+9XKakxMHuYZZHDhxWWQRCEpsnGRxdcKR3mVSqOK+UjNsSTvpTBSiKRW9pOn+282Hu150JfpPBgW0dXKp2L8iq8cyt//NvBEPxCgQwNpRh7zAAPakCoM5uYm44nM9l8ci4djycTifS9iWmbMr2IXL4wOztLqVO1yP1MViXo6jk+GmQxoVVVq9OZF2UIZR5lHhMyV6hMzqRMyykbdjyRpcp03MSMUBcrxzmECpsqhWzqUubpew1U0R5eZA1/UiVyYFoME+75ftliTMgQQpMIFulnEmFFUzEjnTERa4ushV2Z1K6UIcBsfDr2Zw+lxKg4I8NpgPD9D84htPPu6PyOHe8hhO5NTO7Z8/m6devujo4FQZhIJG2bLm9RsWz8eqb90pX+1o7fKwbWe6XRY/dfvnT04D7LKt8bL5w8PgwQoKc+QejFjs4xFF3d3Rc2bNiAEGprOxcCDAwOFoqlWj+IMp5MZVPpzHQ8qRa/WqIgCHP5wo0bN22b5vKFW7fHXM/t75/Z91VXtcqvxmJffLm/WiWDQ0N1h76xbZrJLdwZnUilM0pXvQLTYsLzhee7fkCZZz1EyaTpXAkTkS+ayfR8BTtMCMqw1tDSYnJJqFJC/5POlWrNUQkzbd2iQSqWo+qtenQi8bmCCO4pJ2u1beoaRBDpVx03mpw0sDK26wfc8zERnvSVPXVbiyOHecpOSxW7VrRra2SMDw4NAcDu3Z8ihNo7z5uJuSNvvpu7PVo0zYMH6gBgdHwSIfTq5ucB4LOvDz33+hZXqqxXW3Q/V6xvPFnf+GNT85meC32nz3Y2NZ9pOP5Tb9+1MMqr5FyqofEYAGzc+AJCaO/+ujunTr+NUDw2kF5Y2L5tGwBcujKgBQeAZza/ghAycDVUx4ZEhIpcvpDJzqfSmUKxXCwaUZwrFMsyKq7Jucz1kf+YcP8eGtl/4HAilc5m8/19MdMilLL8
fMGVMpnKNpxoaj3XpSaUyg6P3NSTk0GIKhbnns8jnSmXlC+WYsolJgITUcJOvmhWMMWRs5T7MC2ZNDpl+UK5WqWuiR0upPJzpIdpcZu6GlRS/606V2v66JpsU1d4UlcIIn2LR6ViCazKuL4vN7RsUWPRyfqc0UHtmOVCnUVNTc0IoZ27PgKA+pefvvjxdh/AUxUv+k0Gi3dNEPoy8CMDaZTIh48cazzxc2/ftboDRw8faSyWjUD1LgllALB37z6E0GtvbAEIdiL0/VvPemHoumsU7bWrqcM8QpnGsqltUy6WG4cAE1Oz9Y3fDV+/EQThSO/F2bu3Q4BHd7pqgBJWh6JGm7Omj8aMyqeOq9zDzqqv5HH8D3SQWxYyYPrkAAAAAElFTkSuQmCC" nextheight="718" nextwidth="927" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><strong>Web3 Robotics Stack Link:</strong> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fairy-build-97286531.figma.site/"><u>https://fairy-build-97286531.figma.site/</u></a></p><h2 id="h-v-conclusion-present-challenges-and-long-term-opportunities" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Conclusion: Present Challenges and Long-Term Opportunities</strong></h2><p>From a long-term perspective, the fusion of <strong>Robotics × AI × Web3</strong> aims to build a <strong>decentralized machine economy</strong> (<em>DeRobot Economy</em>), moving embodied intelligence from “single-machine automation” to <strong>networked collaboration that is ownable, settleable, and governable</strong>. The core logic is a self-reinforcing loop—<strong>“Token → Deployment → Data → Value Redistribution”</strong>—through which robots, sensors, and compute nodes gain on-chain ownership, transact, and share proceeds.</p><p>That said, at today’s stage this paradigm remains <strong>early-stage exploration</strong>, still far from stable cash flows and a scaled commercial flywheel. Many projects are narrative-led with limited real deployment. Robotics manufacturing and operations are <strong>capital-intensive</strong>; token incentives alone cannot finance infrastructure expansion. 
While on-chain finance is composable, it has <strong>not yet solved</strong> real-asset risk pricing and cash-flow realization. In short, the “self-sustaining machine network” remains <strong>idealized</strong>, and its business model requires real-world validation.</p><ul><li><p><strong>Model &amp; Intelligence Layer.</strong> This is the most valuable long-term direction. Open-source robot operating systems represented by <strong>OpenMind</strong> seek to break closed ecosystems and unify multi-robot coordination with language-to-action interfaces. The technical vision is clear and systemically complete, but the <strong>engineering burden is massive</strong>, validation cycles are long, and <strong>industry-level positive feedback has yet to form</strong>.</p></li><li><p><strong>Machine Economy Layer.</strong> Still <strong>pre-market</strong>: the real-world robot base is small, and DID-based identity plus incentive networks struggle to form a self-consistent loop. We remain <strong>far</strong> from a true “machine labor economy.” Only after embodied systems are <strong>deployed at scale</strong> will the economic effects of on-chain identity, settlement, and collaboration networks become evident.</p></li><li><p><strong>Data Layer.</strong> Barriers are relatively lower—and this is <strong>closest to commercial viability today</strong>. Embodied data collection demands <strong>spatiotemporal continuity</strong> and <strong>high-precision action semantics</strong>, which determine quality and reusability. Balancing <strong>crowd scale</strong> with <strong>data reliability</strong> is the core challenge.
<strong>PrismaX</strong> offers a partially replicable template by <strong>locking in B-side demand first</strong> and then distributing capture/validation tasks, but ecosystem scale and data markets will take time to mature.</p></li><li><p><strong>Middleware &amp; Simulation Layer.</strong> Still in <strong>technical validation</strong> with no unified standards and limited interoperability. Simulation outputs are <strong>hard to standardize</strong> for real-world transfer; <strong>Sim2Real efficiency</strong> remains constrained.</p></li><li><p><strong>RobotFi / RWAiFi Layer.</strong> Web3’s role is primarily auxiliary—enhancing transparency, settlement, and financing efficiency in supply-chain finance, equipment leasing, and investment governance, rather than redefining robotics economics itself.&nbsp;</p></li></ul><p>Even so, we believe the intersection of <strong>Robotics × AI × Web3</strong> marks the <strong>starting point of the next intelligent economic system</strong>. It is not only a fusion of technical paradigms; it is also an opportunity to <strong>recast production relations</strong>. Once machines possess <strong>identity, incentives, and governance</strong>, human–machine collaboration can evolve from localized automation to <strong>networked autonomy</strong>. In the short term, this domain will remain driven by <strong>narratives and experimentation</strong>, but the emerging <strong>institutional and incentive frameworks</strong> are laying groundwork for the economic order of a future machine society. In the long run, combining embodied intelligence with Web3 will <strong>redraw the boundaries of value creation</strong>—elevating intelligent agents into <strong>ownable, collaborative, revenue-bearing economic actors</strong>.</p><hr><p><strong>Disclaimer:</strong> This article was assisted by AI tools (ChatGPT-5 and Deepseek). The author has endeavored to proofread and ensure accuracy, but errors may remain. 
Note that crypto asset markets often exhibit divergence between project fundamentals and secondary-market price action. This content is for <strong>information synthesis and academic/research exchange only</strong> and <strong>does not constitute investment advice</strong> or a recommendation to buy or sell any token.</p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>embodied ai</category>
            <category>robotics</category>
            <category>ai</category>
            <category>web3</category>
            <category>automation</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/1f75661897161c254e0e9845b0b08b6382cbdce1c99cde58c20e994d06fb3696.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[机器人产业畅想：自动化、人工智能与 Web3 的融合进化]]></title>
            <link>https://paragraph.com/@0xjacobzhao/机器人产业畅想：自动化、人工智能与-web3-的融合进化</link>
            <guid>qmYBxf13cMnLf7We5cI6</guid>
            <pubDate>Tue, 18 Nov 2025 05:05:07 GMT</pubDate>
            <description><![CDATA[机器人产业已形成工业、移动、服务、特种、人形五大体系，其中人形机器人在具身智能（Embodied AI）加持下，正以 VLA、RL/IL/SSL、Sim2Real、世界模型和多智能体协作等技术迈向跨场景“理解—预测—行动”。2025 年起，Web3 × Robotics 成为新叙事，其价值呈现分层：硬件/服务层 Web3 仅提供金融与租赁辅助；仿真/软件层具备数据确权与技能资产化潜力；平台/协作层最具机会，可依托 DID、PoPW、TEE/ZKP、链上结算与 DAO 治理构建去中心化“机器劳动力市场”。本报告按五层架构梳理 OpenMind、CodecFlow、peaq、BitRobot、PrismaX、Mecka、RoboStack、Gradient、XMAQUINA、GAIB 等代表项目。总体判断为：短期数据采集激励、RaaS 资产化与稳定币支付最可落地；中期协作网络与机器身份体系成熟；长期机器人 × AI × Web3 将推动“去中心化机器经济（DeRobot Economy）”，以身份、信任、激励、治理四大底座重塑未来劳动力与价值分配体系。]]></description>
            <content:encoded><![CDATA[<br><p style="text-align: center"><em>本独立研报由IOSG Ventures支持，感谢</em><strong><em>Hans </em></strong><em>(RoboCup Asia-Pacific) , </em><strong><em>Nichanan Kesonpat</em></strong><em>(1kx), </em><strong><em>Robert Koschig</em></strong><em> (1kx) , </em><strong><em>Amanda Young </em></strong><em>(Collab+Currency)</em><strong><em> </em></strong><em>, </em><strong><em>Jonathan Victor</em></strong><em> (Ansa Research), </em><strong><em>Lex Sokolin</em></strong><em> (Generative Ventures), </em><strong><em>Jay Yu </em></strong><em>(Pantera Capital) , </em><strong><em>Jeffrey Hu </em></strong><em>(Hashkey Capital) 对本文提出的宝贵建议。撰写过程中亦征询了 </em><strong><em>OpenMind</em></strong><em>, </em><strong><em>BitRobot</em></strong><em>, </em><strong><em>peaq</em></strong><em>, </em><strong><em>Auki Labs, XMAQUINA</em></strong><em>, </em><strong><em>GAIB, Vader, Gradient,Tashi Network 和CodecFlow</em></strong><em>等项目团队的意见反馈。本文力求内容客观准确，部分观点涉及主观判断，难免存在偏差，敬请读者予以理解。</em></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>一、机器人全景：从工业自动化到人形智能</strong></h2><p>传统机器人产业链已形成自下而上的完整分层体系，涵盖<strong>核心零部件—中间控制系统—整机制造—应用集成</strong>四大环节。<strong>核心零部件</strong>（控制器、伺服、减速器、传感器、电池等）技术壁垒最高，决定了整机性能与成本下限；<strong>控制系统</strong>是机器人的“大脑与小脑”，负责决策规划与运动控制；<strong>整机制造</strong>体现供应链整合能力。<strong>系统集成与应用</strong>决定商业化深度正成为新的价值核心。</p><p>按应用场景与形态，全球机器人正沿着“<strong>工业自动化 → 场景智能化 → 通用智能化</strong>”的路径演进，形成五大主要类型：<strong>工业机器人、移动机器人、服务机器人、特种机器人以及人形机器人</strong></p><ul><li><p><strong>工业机器人（Industrial Robots）</strong>：当前唯一全面成熟的赛道，广泛应用于焊接、装配、喷涂与搬运等制造环节。行业已形成标准化供应链体系，毛利率稳定，ROI 明确。其中的子类<strong>协作机器人（Cobots）</strong>强调人机共作、轻量易部署，成长最快。代表企业：<strong>ABB、发那科(Fanuc)、安川电机（Yaskawa）、</strong>库卡(<strong>KUKA)、</strong>Universal Robots、节卡、遨博。</p></li><li><p><strong>移动机器人（Mobile Robots）</strong>：包括 AGV（自动导引车） 与 AMR（自主移动机器人），在物流仓储、电商配送与制造运输中大规模落地，已成为 B 端最成熟品类。代表企业：<strong>Amazon Robotics, 极智嘉(Geek+)、快仓（Quicktron）、Locus Robotics</strong>。</p></li><li><p><strong>服务机器人（Service 
Robots）</strong>： 面向清洁、餐饮、酒店与教育等行业，是消费端增长最快的领域。清洁类产品已进入消费电子逻辑，医疗与商用配送加速商业化。此外一批更通用的操作型机器人正在兴起（如 Dyna 的双臂系统）——比 任务特定型产品更灵活，但又尚未达到人形机器人的通用性。代表企业：<strong>科沃斯、石头科技、普渡科技、擎朗智能、</strong>iRobot、 Dyna <strong>等</strong>。</p></li><li><p><strong>特种机器人</strong> 主要服务于医疗、军工、建筑、海洋与航天等场景，市场规模有限但利润率高、壁垒强，多依赖政府与企业订单，处于垂直细分成长阶段，典型项目包括 <strong>直觉外科、Boston Dynamics、ANYbotics、NASA Valkyrie等</strong>。</p></li><li><p><strong>人形机器人（Humanoid Robots）</strong>：被视为未来“通用劳动力平台”。代表企业包括 <strong>Tesla（Optimus）</strong>、<strong>Figure AI（Figure 01）</strong>、<strong>Sanctuary AI (Phoenix)</strong>、<strong>Agility Robotics（Digit）</strong>、<strong>Apptronik (Apollo)</strong>、<strong>1X Robotics、Neura Robotics、宇树科技（Unitree）</strong>、<strong>优必选（UBTECH）、智元机器人</strong> 等。</p></li></ul><p>人形机器人是当下最受关注的前沿方向，其核心价值在于以人形结构适配现有社会空间，被视为通往“<strong>通用劳动力平台</strong>”的关键形态。与追求极致效率的工业机器人不同，人形机器人强调<strong>通用适应性与任务迁移能力</strong>，可在不改造环境的前提下进入工厂、家庭与公共空间。</p><p>目前，大多数人形机器人仍停留在<strong>技术演示阶段</strong>，主要验证动态平衡、行走与操作能力。虽然已有部分项目在<strong>高度受控</strong>的工厂场景中开始小规模部署（如 Figure × BMW、Agility Digit），并预计自 2026 年起会有更多厂商（如 1X）进入早期分发，但这些仍是“<strong>窄场景、单任务”的受限应用</strong>，而非真正意义上的通用劳动力落地。整体来看，距离规模化商业化仍需数年时间。核心瓶颈包括：多自由度协调与实时动态平衡等控制难题；受限于电池能量密度与驱动效率的能耗与续航问题；在开放环境中容易失稳、难以泛化的感知—决策链路；显著的数据缺口（难以支撑通用策略训练）；跨形体迁移尚未攻克；以及硬件供应链与成本曲线（尤其在中国以外地区）仍构成现实门槛，使大规模、低成本部署的实现难度进一步提高。<br></p><p>未来商业化路径预计将经历三个阶段：短期以 <strong>Demo-as-a-Service</strong> 为主，依赖试点与补贴；中期演进为 <strong>Robotics-as-a-Service (RaaS)</strong>，构建任务与技能生态；长期以<strong>劳动力云</strong>与<strong>智能订阅服务</strong>为核心，推动价值重心从硬件制造转向软件与服务网络。总体而言，人形机器人正处于从演示到自学习的关键过渡期，未来能否跨越控制、成本与算法三重门槛，将决定其能否真正实现具身智能。</p><h2 id="h-ai" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>二、AI × 机器人：具身智能时代的黎明</strong></h2><p>传统自动化主要依赖预编程与流水线式控制（如感知–规划–控制的 DSOP 架构），只能在结构化环境中可靠运行。而现实世界更为复杂多变，新一代具身智能（Embodied AI）走的是另一条范式：通过大模型与统一表示学习，使机器人具备跨场景的“理解—预测—行动”能力。具身智能强调 <strong>身体（硬件）+ 大脑（模型）+ 环境（交互）</strong> 的动态耦合，机器人是载体，智能才是核心。</p><p>生成式 AI（Generative AI） 属于<strong>语言世界的智能</strong>，擅长理解符号与语义；具身智能（Embodied AI） 
属于<strong>现实世界的智能</strong>，掌握感知与行动。二者分别对应“大脑”与“身体”，代表 AI 演化的两条平行主线。从智能层级上看，具身智能比生成式 AI 更高阶，但其成熟度仍明显落后。LLM 依赖互联网的海量语料，形成清晰的“数据 → 算力 → 部署”闭环；而机器人智能需要 <strong>第一视角、多模态、与动作强绑定的数据</strong>——包括远程操控轨迹、第一视角视频、空间地图、操作序列等，这些数据 <strong>天然不存在</strong>，必须通过真实交互或高保真仿真生成，因此更加稀缺且昂贵。虽然模拟与合成数据有所帮助，但仍无法替代真实的传感器—运动经验，这也是 Tesla、Figure 等必须自建遥操作数据工厂的原因，也是东南亚出现第三方数据标注工厂的原因。简而言之：<strong>LLM 从现成数据中学习，而机器人必须通过与物理世界互动来“创造”数据。</strong>未来 5–10 年，二者将在 Vision–Language–Action 模型与 Embodied Agent 架构上深度融合——LLM 负责高层认知与规划，机器人负责真实世界执行，形成数据与行动的双向闭环，共同推动 AI 从“语言智能”迈向真正的<strong>通用智能（AGI）</strong>。</p><p>具身智能的核心技术体系可视为一个自下而上的智能栈：<strong>VLA（感知融合）</strong>、<strong>RL/IL/SSL（智能学习）</strong>、<strong>Sim2Real（现实迁移）</strong>、<strong>World Model（认知建模）</strong>、以及<strong>多智能体协作与记忆推理（Swarm &amp; Reasoning）</strong>。其中，VLA 与 RL/IL/SSL 是具身智能的“发动机”，决定其落地与商业化；Sim2Real 与 World Model 是连接虚拟训练与现实执行的关键技术；多智能体协作与记忆推理则代表更高层次的群体与元认知演化。</p><br><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>模块</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>关键技术</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心功能</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表机构 / 项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>重要性</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>感知理解</strong></p><p><strong>（Perception &amp; Understanding）</strong></p></td><td colspan="1" rowspan="1"><p><strong>Vision–Language–Action (VLA)</strong></p></td><td colspan="1" rowspan="1"><p>多模态融合与语义到动作的映射，让机器人“理解指令”</p></td><td colspan="1" rowspan="1"><p>Google RT-X / DeepMind RT-2 / Figure Helix</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> <strong>高</strong> — 具身智能的核心入口，已进入早期落地阶段</p></td></tr><tr><td colspan="1" 
rowspan="1"><p><strong>智能学习</strong></p><p><strong>（Learning &amp; Adaptation）</strong></p></td><td colspan="1" rowspan="1"><p><strong>自监督学习 (SSL) + 模仿学习 (IL) +&nbsp;</strong></p><p><strong>强化学习 (RL)</strong></p></td><td colspan="1" rowspan="1"><p>从数据、经验与反馈中学习控制与策略，让机器人“学会行动”</p></td><td colspan="1" rowspan="1"><p>OpenAI Robotics / Tesla FSD / DeepMind Alpha</p></td><td colspan="1" rowspan="1"><p><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> <strong>高</strong> — 行为生成核心，训练成本最高的关键模块</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>现实迁移</strong></p><p><strong>（ Sim2Real）</strong></p></td><td colspan="1" rowspan="1"><p><strong>仿真训练 + 域随机化 + 自适应微调</strong></p></td><td colspan="1" rowspan="1"><p>将虚拟训练经验安全迁移至现实世界，提升泛化能力</p></td><td colspan="1" rowspan="1"><p>NVIDIA Isaac Sim / Meta Habitat / Boston Dynamics</p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> <strong>中高</strong> — 落地关键桥梁，受制于 Reality Gap</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>认知建模</strong></p><p><strong>（World Modeling）</strong></p></td><td colspan="1" rowspan="1"><p><strong>潜变量建模 + 想象规划 + 模型驱动强化学习</strong></p></td><td colspan="1" rowspan="1"><p>在内部模拟环境与结果，支持预测、规划与推理</p></td><td colspan="1" rowspan="1"><p>DeepMind Dreamer / Google Gemini + RT-2 / Tesla FSD V12</p></td><td colspan="1" rowspan="1"><p><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> <strong>中高</strong> — 理论前沿，支撑长期推理与自主规划</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>群体智能与记忆推理（Swarm &amp; Reasoning）</strong></p></td><td colspan="1" rowspan="1"><p><strong>多智能体协作 + 长期记忆 + 神经–符号混合AI</strong></p></td><td colspan="1" rowspan="1"><p>多机器人协同与任务分解，构建“群体智慧”与长期学习能力</p></td><td colspan="1" rowspan="1"><p>Google SWARM / Figure 集群实验 / OpenDevin</p></td><td colspan="1" rowspan="1"><p><span data-name="orange_circle" class="emoji" data-type="emoji">🟠</span> <strong>中低</strong> — 
前沿探索方向，尚处实验阶段</p></td></tr></tbody></table><p><br></p><h3 id="h-vision-language-action" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>感知理解：视觉–语言–动作模型(Vision–Language–Action)</strong></h3><p>VLA 模型通过整合 <strong>视觉（Vision）—语言（Language）—动作（Action）</strong> 三个通道，使机器人能够从人类语言中理解意图并转化为具体操作行为。其执行流程包括语义解析、目标识别（从视觉输入中定位目标物体）以及路径规划与动作执行，从而实现“理解语义—感知世界—完成任务”的闭环，是具身智能的关键突破之一。当前代表项目有 <strong>Google RT-X、Meta Ego-Exo 与 Figure Helix</strong>，分别展示了跨模态理解、沉浸式感知与语言驱动控制等前沿方向。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/6b97daf2401a8f99022fb397acc09cfcd95d059b827dcecf8095f90e556c20e7.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEO0lEQVR4nJ2Vf0wbZRjH35hojDMz/mGiiTEaZ7KFaMIfi/HXBswtGAJDt/JjDiFDuoHZHEMQcG3JAFM3ZWQ/oKyF9ig/HORCO8GzlHaUXAX6a5QwM3AiV+i49crRll5puXKvobcgjoUtfvLmcsnz3vvN8zz3fV4An4BAIECS82632+WanZ2bI2OEw+En+RZsEeNiT4qiUBTFcby2tiY5OfnQoc+u/3zdYDBgGAYhx8b4PwIMwwQCAQhhSUmJVCqFEK5EVoQFBQiiikRYlo2mpKTgOM7v3CKbRwhEuVUIYbH8Mkj91IhhHZ0dWq1W3do6hONy+TWbY0JZn6G+mjticaAx+np7KYraSoDdBISw7+bAd5frbFaLrl9nNBpRFLXZ7UqV0mIdw7prsO7vJ27f6erq6tH0aLU9JEnyxz10yL8CG8Pr7xDCyTuTiQmJS4Gl1ehaWgq5YnRklN+gblUnJe3nS8dH1wu1UQNAyLW1qWUyGYIgKIoq5HKTyQQhHLRrW7GLzTcuiC4WdRsbfxlW/mptvdQuUvX++JtVbbjVIa7/qqahWIM3q/t++tM1vhwKu91zCKLq7OxEEMTpdPItBOFw2OGwSyRi4XFhWlqqVColCGJ2znVOUVjWIChvyKpuLzhasfeVeLBjz/adiS/GHXjp9feeffOj50WKnCpl3rdXMs9cSldp6rwe2u/3VZ6tyMgQpKen6fr7Q6HQmgAXy4hhGJ/f57nvuU+SfHYy9FxlY06VPL9Knn+iOi1VuDvt+LuZp/cITn6Q+mW84OT74qZcSVNelfxYRWO2wYLydmEYhqZpKgZfsf/0YGMbWJYJR6hQxBuKeMNra4Hj2Kyjh1VtLRBCf5hZjATXF//jPdIQawIkSWIYptfrBwwD5mEzRVEsyw7ZJxGNuRk1qjRD7b0jiMaMoIYvcvOrz5aOG+Rj+qZxg2JisGVisMXed4Wen2LZKEHM6Pp1+hgOh/1BDyCEDofj1OlTRUUnBILDEkkVTdMzBHHw6wu7BJVg+y6wxqvvHBG9LSj7Y3recuN8ZSqIfw68BcBOAOIAqM0EWHPpYjA0PGwuKiosKytNTz9YXl7Om+NBBjar1Wa3ORz2UcsowzBBhvm8vOHlj4UAPA3AawCAZz7MOVB4foakx/qv1uVvi98GdgDwBgBxT4F64QsGtZiD8O/pv4xG47
jT6Yjh9/sedvLGIt6+O6vDnSMT07v3foJi+E3r5PCtqSjHhZa8d60aj+v3HyqPSYqzKdfI1GhPcPHeY0bFZgeGQ0ww4GMjy2XfnLnnnmUjyyFmaTUaZdnogm8pynHqjq6Ga8oox1G+4HJ4ZSsnb4ZlWRw3m0wmsUSckJAgEomNRqNeP8AwDEV5mmSN+xIT83LzjmRn7d+X1CSTEQTxmAw2w09TmqYJgvBQnkAMfh7QNM1fCR6KIklywevdYpr+Ax+JGnTkmdMpAAAAAElFTkSuQmCC" nextheight="370" nextwidth="712" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center">Vision-Language-Action模型通用架构</p><p>目前，VLA 仍处于早期阶段，面临四类核心瓶颈：<br> 1）<strong>语义歧义与任务泛化弱</strong>：模型难以理解模糊、开放式指令；<br> 2）<strong>视觉与动作对齐不稳</strong>：感知误差在路径规划与执行中被放大；<br> 3）<strong>多模态数据稀缺且标准不统一</strong>：采集与标注成本高，难以形成规模化数据飞轮；<br> 4）<strong>长时任务的时间轴与空间轴挑战</strong>：任务跨度过长导致规划与记忆能力不足，而空间范围过大则要求模型推理“视野之外”的事物，当前 VLA 缺乏稳定世界模型与跨空间推理能力。</p><p>这些问题共同限制了 VLA 的跨场景泛化能力与规模化落地进程。</p><h3 id="h-ssl-il-rl" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>智能学习：自监督学习（SSL）、模仿学习 (IL)与强化学习 (RL)&nbsp;</strong></h3><ul><li><p><strong>自监督学习(Self-Supervised Learning)：</strong>从感知数据中自动提取语义特征，让机器人“理解世界”。 相当于让机器学会<strong>观察与表征</strong>。</p></li><li><p><strong>模仿学习（Imitation Learning）</strong>：通过模仿人类演示或专家示例，快速掌握基础技能。相当于让机器学会<strong>像人一样做事</strong>。</p></li><li><p><strong>强化学习（Reinforcement Learning）</strong>：通过“奖励-惩罚”机制，机器人在不断试错中优化动作策略。相当于让机器学会<strong>在试错中成长</strong>。</p></li></ul><p>在 <strong>具身智能（Embodied AI）</strong> 中，<strong>自监督学习（SSL）</strong> 旨在让机器人通过感知数据预测状态变化与物理规律，从而理解世界的因果结构；<strong>强化学习（RL）</strong> 是智能形成的核心引擎，通过与环境交互和基于奖励信号的试错优化，驱动机器人掌握行走、抓取、避障等复杂行为；<strong>模仿学习（IL）</strong> 则通过人类示范加速这一过程，使机器人快速获得行动先验。当前主流方向是将三者结合，构建层次化学习框架：SSL 提供表征基础，IL 赋予人类先验，RL 驱动策略优化，以平衡效率与稳定性，共同构成具身智能从理解到行动的核心机制。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>范式 (Paradigm)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>自监督学习 (SSL)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>模仿学习 (IL)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>强化学习 (RL)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>学习目标</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>学特征</strong></p><p style="text-align: center">&nbsp;(理解数据结构)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>学模仿</strong>&nbsp;</p><p style="text-align: center">(复制专家行为)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>学最优</strong>&nbsp;</p><p style="text-align: center">(最大化长期奖励)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>监督来源</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>数据本身</strong></p><p style="text-align: center">&nbsp;(自己造标签)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>专家演示</strong></p><p style="text-align: center">&nbsp;(人类的动作)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>环境奖励</strong></p><p style="text-align: center">&nbsp;(试错后的分数)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>数据类型</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>大量无标签数据</strong> (如：海量视频)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>专家示范轨迹</strong></p><p style="text-align: center">&nbsp;(状态-动作对)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>智能体经验</strong>&nbsp;</p><p style="text-align: center">(环境交互的记录)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心挑战</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">缺乏对<strong>动作</strong>的理解</p></td><td colspan="1" rowspan="1"><p style="text-align: center">易受<strong>专家数据质量</strong>限制</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>数据效率低</strong>，需要大量试错</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>具身智能应用</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">提供<strong>强大的视觉</strong>和<strong>感知基础</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">提供<strong>安全的</strong>、<strong>可工作的初始策略</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">找到<strong>最优的</strong>、<strong>超越人类</strong>的策略</p></td></tr></tbody></table><p><br></p><h3 id="h-sim2real" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>现实迁移：Sim2Real —— 从仿真到现实的跨越</strong></h3><p><strong>Sim2Real（Simulation to Reality）</strong> 是让机器人在虚拟环境中完成训练、再迁移至真实世界。它通过高保真仿真环境（如 <strong>NVIDIA Isaac Sim &amp; Omniverse、DeepMind MuJoCo</strong>）生成大规模交互数据，显著降低训练成本与硬件磨损。 其核心在于缩小“<strong>仿真现实鸿沟</strong>”，主要方法包括：</p><ul><li><p><strong>域随机化（Domain Randomization）</strong>：在仿真中随机调整光照、摩擦、噪声等参数，提高模型泛化能力；</p></li><li><p><strong>物理一致性校准</strong>：利用真实传感器数据校正仿真引擎，增强物理逼真度；</p></li><li><p><strong>自适应微调（Adaptive Fine-tuning）</strong>：在真实环境中进行快速再训练，实现稳定迁移。</p></li></ul><p>Sim2Real 是具身智能落地的中枢环节，使 AI 模型能在安全、低成本的虚拟世界中学习“感知—决策—控制”的闭环。Sim2Real 在仿真训练上已成熟（如 NVIDIA Isaac Sim、MuJoCo），但现实迁移仍受限于 <strong>Reality Gap</strong>、高算力与标注成本，以及开放环境下泛化与安全性不足。尽管如此，<strong>Simulation-as-a-Service（SimaaS）</strong> 正成具身智能时代最轻、却最具战略价值的基础设施，其商业模式包括 <strong>平台订阅（PaaS）</strong>、<strong>数据生成（DaaS）</strong> 与 <strong>安全验证（VaaS）</strong>。</p><h3 id="h-world-model" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>认知建模：World Model —— 机器人的“内在世界”</strong></h3><p><strong>世界模型（World Model）</strong> 是具身智能的“内脑”，让机器人能在内部模拟环境与行动后果，实现预测与推理。它通过学习环境动态规律，构建可预测的内部表示，使智能体在执行前即可“预演”结果，从被动执行者进化为主动推理者，代表项目包括 DeepMind Dreamer、Google Gemini + RT-2、Tesla FSD V12、NVIDIA WorldSim 等。 典型技术路径包括：</p><ul><li><p><strong>潜变量建模（Latent Dynamics Modeling）</strong>：压缩高维感知至潜在状态空间；</p></li><li><p><strong>时序预测想象训练（Imagination-based 
Planning）</strong>：在模型中虚拟试错与路径预测；</p></li><li><p><strong>模型驱动强化学习（Model-based RL）</strong>：用世界模型取代真实环境，降低训练成本。</p></li></ul><p>World Model 处于具身智能的理论前沿性，是让机器人从“反应式”迈向“预测式”智能的核心路径，但仍受限于建模复杂、长时预测不稳与缺乏统一标准等挑战。</p><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>群体智能与记忆推理：从个体行动到协同认知</strong></h3><p>多智能体协作（Multi-Agent Systems）与记忆推理（Memory &amp; Reasoning）代表了具身智能从“个体智能”向“群体智能”和“认知智能”演进的两个重要方向。二者共同支撑智能系统的<strong>协作学习</strong>与<strong>长期适应</strong>能力。</p><p><strong>多智能体协作（Swarm / Cooperative RL）</strong>：<br> 指多个智能体在共享环境中通过分布式或协作式强化学习实现协同决策与任务分配。该方向已有扎实研究基础，例如 <strong>OpenAI Hide-and-Seek 实验</strong> 展示了多智能体自发合作与策略涌现， <strong>DeepMind QMIX 和 MADDPG 算法</strong> 提供了集中训练、分散执行的协作框架。这类方法已在仓储机器人调度、巡检和集群控制等场景中得到应用验证。</p><p><strong>记忆与推理（Memory &amp; Reasoning）</strong>：<br> 聚焦让智能体具备长期记忆、情境理解与因果推理能力，是实现跨任务迁移和自我规划的关键方向。典型研究包括 <strong>DeepMind Gato</strong> （统一感知-语言-控制的多任务智能体）和 <strong>DeepMind Dreamer 系列</strong> （基于世界模型的想象式规划），以及 <strong>Voyager 等开放式具身智能体</strong>，通过外部记忆与自我演化实现持续学习。这些系统为机器人具备“记得过去、推演未来”的能力奠定了基础。</p><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>全球具身智能产业格局：合作竞争并存</strong><br></h3><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>模块层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="flag_us" class="emoji" data-type="emoji">🇺🇸</span><strong> 美国</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="china" class="emoji" data-type="emoji">🇨🇳</span><strong> 中国</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="european_union" class="emoji" data-type="emoji">🇪🇺</span><strong> 欧洲</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>日本 &amp; 韩国</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>算法层</strong></p><p style="text-align: center"><strong>（智能模型）</strong></p></td><td colspan="1" rowspan="1"><p><strong>Google DeepMind</strong> (RT-X, Gemini)</p><p><strong>OpenAI (GPT-4o 'Omni', 机器人融合)</strong></p><p><strong>Tesla</strong> (World Model, 端到端)</p><p><strong>Meta</strong> (Habitat, V-JEPA)</p></td><td colspan="1" rowspan="1"><p><strong>上海人工智能实验室</strong> (书生)</p><p><strong>百度</strong> (Apollo, 文心)</p><p><strong>清华大学/智谱AI</strong> (CogVLM)</p></td><td colspan="1" rowspan="1"><p><strong>ETH Zurich RSL</strong> (瑞士)</p><p><strong>DeepMind (欧洲)</strong> (巴黎/苏黎世)</p><p><strong>PAL Robotics</strong> (西班牙)</p><p><strong>Neura Robotics</strong> (德国, 认知)</p></td><td colspan="1" rowspan="1"><p><strong>Preferred Networks (PFN)</strong> (日本)</p><p><strong>NAVER Labs</strong> (韩国)</p><p><strong>东京大学 AI Lab</strong> (日本)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>仿真与训练层</strong></p><p style="text-align: center"><strong>（Sim2Real）</strong></p></td><td colspan="1" rowspan="1"><p><strong>NVIDIA</strong> (Isaac Sim / Omniverse)</p><p>(注：行业绝对主导者)</p><p><strong>DeepMind MuJoCo</strong> (物理引擎)</p><p><strong>Meta</strong> (Habitat)</p></td><td colspan="1" rowspan="1"><p><strong>华为</strong> (Cyberverse / 盘古)</p><p><strong>Unity 中国</strong></p><p><strong>腾讯</strong> (Robotics X)</p><p><strong>Agibot (智元机器人)</strong> (Agi-Sim)</p></td><td colspan="1" rowspan="1"><p><strong>ETH Zurich RSL</strong> (瑞士)</p><p>(注：Sim2Real 学术高地)</p><p><strong>Dassault Systèmes</strong> (法国)</p></td><td colspan="1" rowspan="1"><p><strong>Sony AI</strong> (日本)</p><p>(注：主要为内部研发)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>系统层</strong></p><p style="text-align: center"><strong>（人形机器人）</strong></p></td><td colspan="1" rowspan="1"><p><strong>Tesla</strong> (Optimus)</p><p><strong>Figure AI</strong> (Figure 01)</p><p><strong>Sanctuary AI</strong> (Phoenix)</p><p><strong>Agility Robotics</strong> 
(Digit)</p><p><strong>Apptronik</strong> (Apollo)</p></td><td colspan="1" rowspan="1"><p><strong>Agibot (智元机器人)</strong> (远瞻)</p><p><strong>Unitree (宇树科技)</strong> (H1/G1)</p><p><strong>Fourier (傅利叶)</strong> (GR-1)</p><p><strong>UBTECH (优必选)</strong> (Walker)</p></td><td colspan="1" rowspan="1"><p><strong>1X Robotics</strong> (挪威/美国)</p><p><br></p><p><strong>Neura Robotics</strong> (德国 - 4NE-1)</p></td><td colspan="1" rowspan="1"><p><strong>NAVER Labs</strong> (韩国 - Ambidex)</p><p><br></p><p><em>注：日韩在该赛道商业化落后于美中，其传统优势在</em><strong><em>工业机器人</em></strong><em>(Fanuc)和</em><strong><em>核心零部件</em></strong><em>(Harmonic Drive)领域。</em></p></td></tr></tbody></table><p><br></p><p>全球机器人产业正处于“合作主导、竞争深化”的时期。中国的供应链效率、美国的 AI 能力、日本的零部件精度、欧洲的工业标准共同塑造全球机器人产业的长期格局。</p><ul><li><p><strong>美国</strong> 在前沿 AI 模型与软件领域（DeepMind、OpenAI、NVIDIA）保持领先，但这一优势并未延伸至机器人硬件。中国厂商在迭代速度和真实场景表现上更具优势。美国通过《芯片法案》（CHIPS Act）和《通胀削减法案》（IRA）推动产业回流。</p></li><li><p><strong>中国</strong> 凭借规模化制造、垂直整合与政策驱动，在零部件、自动化工厂与人形机器人领域形成领先优势，硬件与供应链能力突出，宇树与优必选等已实现量产，正向智能决策层延伸。但在 <strong>算法与仿真训练层</strong>与美国仍存较大差距。</p></li><li><p><strong>日本</strong> 长期垄断高精度零部件与运动控制技术，工业体系稳健，但 AI 模型融合仍处早期阶段，创新节奏偏稳。</p></li><li><p><strong>韩国</strong>在消费级机器人普及方面突出——由 LG、NAVER Labs 等企业引领，并拥有成熟强劲的服务机器人生态体系。</p></li><li><p><strong>欧洲</strong> 工程体系与安全标准完善，1X Robotics 等在研发层保持活跃，但部分制造环节外迁，创新重心偏向协作与标准化方向。</p></li></ul><h2 id="h-ai-web3" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>三、机器人 × AI × Web3：叙事愿景与现实路径</strong></h2><p><br>2025 年，Web3 行业出现与机器人和 AI 融合的新叙事。尽管 Web3 被视为去中心化机器经济的底层协议，但其在不同层面的结合价值与可行性仍存在明显分化：</p><ul><li><p><strong>硬件制造与服务层</strong>资本密集、数据闭环弱，Web3 目前仅能在供应链金融或设备租赁等边缘环节发挥辅助作用；</p></li><li><p><strong>仿真与软件生态层</strong>的契合度较高，仿真数据与训练任务可上链确权，智能体与技能模块也可通过<em>NFT</em> 或 <em>Agent Token</em> 实现资产化；</p></li><li><p><strong>平台层</strong>，去中心化的劳动力与协作网络正展现出最大潜力——Web3 可通过身份、激励与治理一体化机制，逐步构建可信的“机器劳动力市场”，为未来机器经济奠定制度雏形。<br></p></li></ul><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td 
colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>商业模式</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>资本强度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Web3 结合潜力</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表企业 / 项目</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L1 硬件制造层</strong>&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">整机生产、关键零部件、维护服务</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="red_circle" class="emoji" data-type="emoji">🔴</span> 极高</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="white_circle" class="emoji" data-type="emoji">⚪</span> 低｜资产重、数据闭环弱；适合供应链金融</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Boston Dynamics、Tesla Optimus、Figure AI、Unitree、优必选</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L2 服务部署层</strong>&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">RaaS租赁、系统集成、项目费、订阅制</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="orange_circle" class="emoji" data-type="emoji">🟠</span> 高</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> 中｜任务/工时上链、自动结算</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Agility Robotics、ABB Robotics、Geek+</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L3 仿真数据层</strong>&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Simulation-as-a-Service、数据授权、云订阅</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> 中低</p></td><td 
colspan="1" rowspan="1"><p style="text-align: center"><span data-name="yellow_circle" class="emoji" data-type="emoji">🟡</span> 中｜仿真数据与训练任务可上链确权</p></td><td colspan="1" rowspan="1"><p style="text-align: center">NVIDIA Isaac Sim / Omniverse、MuJoCo</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L4 软件系统层</strong>&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI智能体运行时、SDK、控制框架、开发工具收费</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 低</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 高｜技能/策略可资产化（Skill NFT / Agent Token）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Isaac ROS、ROS2 Nav2 / MoveIt、OpenMind</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L5 系统级实时协作层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">低延迟传感器交换、多机器人实时状态同步、边缘算力共享、加密访问控制</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 低</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 高｜实时协作需链上身份/签名/权限</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Geodnet、Auki</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L6 机器人经济平台</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器人身份、支付与协作，机器人市场，以Token激励实现网络效应；</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 
最低</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span><span data-name="green_circle" class="emoji" data-type="emoji">🟢</span> 最高｜链上身份、结算、治理的最佳场景</p></td><td colspan="1" rowspan="1"><p style="text-align: center">BitRobot、Peaq、PrismaX、IoTeX</p></td></tr></tbody></table><p><br></p><p>从长期愿景来看，<strong>协作与平台层</strong>是 Web3 与机器人及 AI 融合中最具价值的方向。随着机器人逐步具备感知、语言与学习能力，它们正演化为能自主决策、协作与创造经济价值的智能个体。要让这些“智能劳动者”真正参与经济体系，仍需跨越<strong>身份、信任、激励与治理</strong>四个核心门槛。</p><ul><li><p>在<strong>身份层</strong>，机器需具备可确权、可追溯的数字身份。通过<strong>Machine DID</strong>，每个机器人、传感器或无人机都能在链上生成唯一可验证的“身份证”，绑定其所有权、行为记录与权限范围，实现安全交互与责任界定。</p></li><li><p>在<strong>信任层</strong>，关键在于让“机器劳动”可验证、可计量、可定价。借助 <strong>智能合约、预言机与审计机制</strong>，结合 <strong>物理工作证明（PoPW）</strong>、<strong>可信执行环境（TEE）</strong> 与 <strong>零知识证明（ZKP）</strong>，可确保任务执行过程的真实性与可追溯性，使机器行为具备经济核算价值。</p></li><li><p>在<strong>激励层</strong>，Web3 通过 <strong>Token 激励体系、账户抽象与状态通道</strong> 实现机器间的自动结算与价值流转。机器人可通过微支付完成算力租赁、数据共享，并以质押与惩罚机制保障任务履约；借助智能合约与预言机，还可形成无需人工调度的去中心化“机器协作市场”。</p></li><li><p>在<strong>治理层</strong>，当机器具备长期自治能力后，Web3 提供透明、可编程的治理框架：以 <strong>DAO 治理</strong> 共同决策系统参数，以 <strong>多签与信誉机制</strong> 维护安全与秩序。长期来看，这将推动机器社会迈向 <strong>“算法治理”</strong> 阶段——人类设定目标与边界，机器间以合约维系激励与平衡。</p></li></ul><p><strong>Web3 与机器人融合终极愿景</strong>：<strong>真实环境评测网络</strong>——由分布式机器人组成的“现实世界推理引擎”，在多样、复杂的物理场景中持续测试与基准评估模型能力；以及<strong>机器人劳动力市场</strong>——机器人在全球执行可验证的现实任务，通过链上结算获取收益，并将价值再投入算力或硬件升级。</p><p>从现实路径来看，具身智能与 Web3 的结合仍处于早期探索期，去中心化机器智能经济体更多停留在叙事与社区驱动层面。现实中具备可行潜力的结合方向，主要体现在以下三方面：<br> （1）<strong>数据众包与确权</strong>——Web3 通过链上激励与追溯机制，鼓励贡献者上传真实世界数据；<br> （2）<strong>全球长尾参与</strong>——跨境小额支付与微激励机制有效降低数据采集与分发成本；<br> （3）<strong>金融化与协作创新</strong>——DAO 模式可推动机器人资产化、收益凭证化及机器间结算机制。</p><p>总体来看，短期主要集中在<strong>数据采集与激励层</strong>；中期有望在“<strong>稳定币支付 + 长尾数据聚合</strong>”及 <strong>RaaS 资产化与结算层</strong> 实现突破；长期，若人形机器人规模化普及，<strong>Web3 
或将成为机器所有权、收益分配与治理的制度底层</strong>，推动真正的去中心化机器经济形成。</p><h2 id="h-web3" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>四、Web3机器人生态图谱与精选案例</strong></h2><p><br>基于“可验证进展、技术公开度、产业相关度”三项标准，梳理当前 <strong>Web3 × Robotics</strong> 代表性项目，并按五层架构归类：<strong>模型智能层、机器经济层、数据采集层、感知与仿真基础层、机器人资产收益层</strong>。为保持客观，我们已剔除明显“蹭热点”或资料不足项目；如有疏漏，欢迎指正。<br></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>子类</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>主要功能</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>模型智能层</strong></p><p style="text-align: center"><strong>（Model &amp; Intelligence）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">操作系统与智能规划</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OpenMind</strong>, <strong>CodecFlow</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">OpenMind：去中心化 Robot OS 与多机器人协调；CodecFlow：VLA 运行时与通用执行引擎</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>机器经济层</strong></p><p style="text-align: center"><strong>（Machine Economy Layer）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器身份与支付结算</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>peaq</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器身份与钱包、任务结算基础设施、专用 SDK</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">机器人任务激励与经济协调</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">去中心化机器人协作与激励网络，通过 Subnets 
组织任务执行、验证与奖励</p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center"><strong>数据采集层</strong></p><p style="text-align: center"><strong>（Data Layer）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">远程操控数据</p><p style="text-align: center">（Teleoperation）</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PrismaX</strong></p><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">采集远程操控与人类反馈数据，用于训练数据集</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">视角与动作数据</p><p style="text-align: center">（POV / Motion）</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mecka</strong>,&nbsp;</p><p style="text-align: center"><strong>BitRobot FrodoBots&nbsp;</strong></p><p style="text-align: center"><strong>Sapien</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">POV/游戏化/可穿戴人体数据，构建多模态具身训练集</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">仿真与合成数据</p><p style="text-align: center">（Simulation / Synthetic）</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">仿真环境中，将人机交互数据规模扩展到超越脚本化场景的更丰富、多样化环境。</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>感知与仿真</strong></p><p style="text-align: center"><strong>（Middleware &amp; Simulation）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">定位与通信中间件</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RoboStack</strong>,</p><p style="text-align: center"><strong>GEODNET</strong>, <strong>Auki</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">RoboStack：RCP 标准 + 云仿真 + 工作流编排；</p><p style="text-align: center">GEODNET：厘米级 RTK；Auki：共享 3D 空间映射</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center">仿真与训练系统</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Gradient</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Mirage 面向具身智能训练提供分布式仿真、动态交互环境与大规模并行学习能力</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>机器人资产收益层</strong></p><p style="text-align: center"><strong>（RobotFi RWAiF）</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器人资产代币化</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>XMAQUINA</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">以高流动性方式参与人形机器人公司增长的投资DAO</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">AI基金资产金融化</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GAIB</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">将 AI GPU 与机器人硬件业务现金流收益上链</p></td></tr></tbody></table><p><br></p><h3 id="h-model-and-intelligence" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>模型智能层（Model &amp; Intelligence）</strong></h3><h4 id="h-openmind-building-android-for-robots-httpsopenmindorg" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Openmind - Building Android for Robots </strong>&nbsp;(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://openmind.org/"><u>https://openmind.org/</u></a>)</h4><p><strong>OpenMind</strong> 是一个面向具身智能（Embodied AI）与机器人控制的开源操作系统（Robot OS），目标是构建全球首个去中心化机器人运行环境与开发平台。 项目核心包括两大组件：</p><ul><li><p><strong>OM1</strong>：构建在 ROS2之上的模块化开源 AI 智能体运行时(AI Runtime Layer)，用于编排感知、规划与动作管线，服务于数字与实体机器人；</p></li><li><p><strong>FABRIC</strong>：分布式协调层（Fabric Coordination Layer），连接云端算力、模型与现实机器人，使开发者可在统一环境中控制和训练机器人。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a01c971e8d1cb811c64ede05db6a69d5df880303118693426d846f16013239f3.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFgklEQVR4nG2V32/TVhTHIwJ4YSRtnNSxHV87jp3YcWI7JrHjxnHTpGuTNs0P2qQNJNDRtKGB/qCUQulga7XCmPgxJKZNiKHygqbuhbGH/RvbpIl/YhKPe9heJqi0lYzzcHXuD52P7v2ec4/l2LtmfWsHncOHj7x69ftff//562+/7O19//r1Hy9fvnj58sfdZ7uHrIesVisEQba35yEI2h8PBrQcnMAwnEwmWTaAo25dk9KmGqAJH4ntPX+8s7W+sdrae/74xQ+7P//0/JsH27e31wujmeLooJGQAjSeNlRNCVEkQBDk/QCL5Y2vaSrLsrwfLM+dWl04a2phTeZWL3xsqLJ+gqtP5C/Mn2nUK6VcKhblr2+sbG9dm6rmU0lpfe3i7JmKJEscx7ndrn/v8SYoBEEwDJtmamPj2s4XO/OzZ4MeXA2G0kpM8vp4HNTy+RjH9/u56sjoZ2tr0+OlsVh/lOUXZ1vtxowels2Q1Go2RxMGz/H9AwM4jncDUBQdHh7SNLVSqYxkM4TLIwUFNRJDe/rQnr5iJs8StK8PT4RjMxOnE6LKEzRA8GImPzFSCOI+FiVro5WkEAMEKNWq3YCD8losFrvNdiJu3ty6nR3OG+bQcL6USKazw2OJZLoPwQDJLF1aL5RrWsIczOQqJ6cMc4jyB50wgnr9doeTD4cxFO2+AY7jjx9/216Yf7r79OrlywD4Ydgpy5FWa2Zx8fyNGxvVarnZnJqqnaR9lNvtBIAQhFAkIpimwXEBZ2/PkcNWl9vj6IFFRUFRdD8DuzVIJpOF8UIiHh9Ij03VT8GwKxbTotFYJpMFgIJhNwy7HY6e1dXLEVH2M+xRyAZBtqOQzXbsOATZXAjOsMG4rgMAYBi22+3vf6Ke48fHylMznUuFiYacSMWNbNzIclI8NVQQ44aiD+ZKNSM7qqdH9HQubmT19Mj+VDGG7371MDsyXCoW2wvtfSX+A9jtdgAAw7LBQMAcHDBShqapk9XJCxc7mzc2q9XJ083pUnm8vTA/MVka+mioXp/O53P5fO50c7q90J5tnesfSJGUL6aqOI6/v9BkWcpkMqZhiIqC4ziGojzHGcmkaaZMMwUA8Hg8b9YxDEEQr9eLIAhBECjqwVCUwHFRUZwul6hEMQzrziL7W9uvdYfDkS+VZ+ZbtdOncoXRc+351bW1Yrm0fu3qdKPRudjZ+OR6o9mcbjTKk5PhSCQUFkRZ1vv7R8bGXQgiKsp7APsC7Gvg7O2Ny1HSSxAYLgQ5MSQAmg6rJ+hggA3xbDhEsky+Uj5zvqUPmjhFkixDsYzXR7FhwemCuwEQBLnd7uWVpbm5uStX1u7d+3JurqXwfD6fW+x0KpWKwod8PhJB0ZAoCtFoKBpNGMb1T29ubm/fvn8/ntBS2WxAEPwsi5MEimGDQ0MIgnQD6vXpanVyeWWpUimXikWepikSdNqzshihvd7+/oQvGGjOzbZWlhoXOtt3bi0sLRYbzc2dnc9u7dx++DA3WdMzGSbEcTwf1/UPDnyo7zyRxWKxWq29vb1ySJC40PhwLsLxSkTyMayo64KqBmU5oqr+UCgoiRFdC0giExaYcDiiJjhZEXWdYhgnDEP/B+wvWSwWAICmqX0YQwZURjC8fgmwJ1jBIGgJMFE6oAFGAUzUQ/AkEyOZGIJzBC15aUmOjzC84YT7uvsBBEEIgiwuXqzX65curdy5c6tUKXkQ6vMb97Y27wBMiHDGo/tPfIRIA+nRg+/icpom5Pa55dlG5+6tr5c718JBIy4OPHn0TAgmPrTZjx2zdfcDu92OoiiGYePjhVqtJsmSy+lh/JyPYjAPhXkARTGoB6AeQAIaRXDMQzF+jvTSfirA+Dkc9WPo260+r8PR0wX4B00+VN4oH
+2+AAAAAElFTkSuQmCC" nextheight="885" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>OpenMind 的核心在于充当 <strong>LLM（大语言模型）与机器人世界之间的智能中间层</strong>，让语言智能真正转化为具身智能（Embodied Intelligence），构建起从 <strong>理解（Language → Action）</strong> 到 <strong>对齐（Blockchain → Rules）</strong> 的智能骨架。OpenMind 多层系统实现了完整的协作闭环：人类通过 <strong>OpenMind App</strong> 提供反馈与标注（RLHF 数据），<strong>Fabric Network</strong> 负责身份验证、任务分配与结算协调，<strong>OM1 Robots</strong> 执行任务并遵循区块链上的“机器人宪法”完成行为审计与支付，从而实现 <strong>人类反馈 → 任务协作 → 链上结算</strong> 的去中心化机器协作网络。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>系统模块</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心组成</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>主要功能 / 作用</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>区块链层（Blockchain Layer）</strong></p></td><td colspan="1" rowspan="1"><p>Ethereum / L2 网络</p></td><td colspan="1" rowspan="1"><p>• 机器人身份注册</p><p>• 智能合约（机器人宪法）</p><p>• 稳定币结算（USDC / DAI / sUSDe）</p><p>• Fabric Token 与声誉日志</p></td><td colspan="1" rowspan="1"><p>实现身份确权、行为审计、任务结算与激励分配</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>协调层（Fabric Layer）</strong></p></td><td colspan="1" rowspan="1"><p>FABRIC 协议</p></td><td colspan="1" rowspan="1"><p>• 身份认证与任务市场</p><p>• P2P 通信（Zenoh / DDS）</p><p>• 自动支付与合规验证</p><p>• 技能与声誉注册表</p></td><td colspan="1" rowspan="1"><p>任务分发与协作、低延迟通信、链上结算与治理</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>运行层（OM1 Layer）</strong></p></td><td colspan="1" rowspan="1"><p>OM1 Runtime（Python + ROS2）</p></td><td colspan="1" rowspan="1"><p>• 多模态传感输入</p><p>• 语言数据总线（NL Data Bus）</p><p>• LLM 决策核心</p><p>• 硬件抽象层（Unitree SDK）</p></td><td colspan="1" 
rowspan="1"><p>将机器人转化为语言智能体，支持多平台兼容与上链审计</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>应用层（OpenMind App）</strong></p></td><td colspan="1" rowspan="1"><p>iOS / Android / Web</p></td><td colspan="1" rowspan="1"><p>• 地图众包与评测标注</p><p>• 远程接管与任务发布</p><p>• 机器人 ID 管理与激励领取</p></td><td colspan="1" rowspan="1"><p>提供人类参与入口，构建 RobotFi 数据与激励平台</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>生态层（Ecosystem Integration）</strong></p></td><td colspan="1" rowspan="1"><p>OEM / 实验室 / 开发者网络</p></td><td colspan="1" rowspan="1"><p>• 合作方：Unitree、Ubtech、Stanford 等</p><p>• 标准化 SDK 与企业解决方案</p></td><td colspan="1" rowspan="1"><p>建立硬件接入标准，推动行业级应用与生态共建</p></td></tr></tbody></table><p><br><strong>项目进展与现实评估</strong></p><p>OpenMind 处于“技术可运行、商业未落地”的早期阶段。核心系统 <strong>OM1 Runtime</strong> 已在 GitHub 开源，可在多平台运行并支持多模态输入，通过自然语言数据总线（NLDB）实现语言到行动的任务理解，具备较高原创性但仍偏实验，<strong>Fabric 网络</strong> 与链上结算仅完成接口层设计。</p><p>生态上，项目已与 <strong>Unitree、Ubtech、TurtleBot</strong> 等开放硬件及 <strong>Stanford、Oxford、Seoul Robotics</strong> 等高校合作，主要用于教育与研究验证，尚无产业化落地。App 已上线测试版，但激励与任务功能仍处早期。</p><p>商业模式方面，OpenMind 构建了 <strong>OM1（开源系统）+ Fabric（结算协议）+ Skill Marketplace（激励层）</strong> 的三层生态，目前尚无营收，依赖约 <strong>2000 万美元早期融资</strong>（Pantera、Coinbase Ventures、DCG）。总体来看，技术领先但商业化与生态仍处起步阶段，若 <strong>Fabric</strong> 成功落地，有望成为“具身智能时代的 Android”，但周期长、风险高、对硬件依赖强。</p><br><h4 id="h-codecflow-the-execution-engine-for-robotics-httpscodecflowai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>CodecFlow - <em>The Execution Engine for Robotics</em>&nbsp; (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://codecflow.ai"><strong><u>https://codecflow.ai</u></strong></a><strong>)</strong></h4><p>CodecFlow 是一个基于 <strong>Solana 网络</strong> 的去中心化执行层协议（Fabric），旨在为 AI 智能体与机器人系统提供按需运行环境，让每一个智能体拥有“即时机器（Instant Machine）”。项目核心由三大模块构成：</p><ul><li><p><strong>Fabric </strong>：跨云算力聚合层（Weaver + Shuttle + Gauge），可在数秒内为AI任务生成安全的虚拟机、GPU容器或机器人控制节点；</p></li><li><p><strong>optr 
SDK</strong>：智能体执行框架（Python接口），用于创建可操作桌面、仿真或真实机器人的“Operator”；</p></li><li><p><strong>Token 激励</strong>：链上激励与支付层，连接计算提供者、智能体开发者与自动化任务用户，形成去中心化算力与任务市场。</p></li></ul><p>CodecFlow 的核心目标是打造“AI与机器人操作员的去中心化执行底座”，让任何智能体可在任意环境（Windows / Linux / ROS / MuJoCo / 机器人控制器）中安全运行，实现从 <strong>算力调度（Fabric） → 系统环境（System Layer） → 感知与行动（VLA Operator）</strong> 的通用执行架构。</p><p><strong>项目进展与现实评估</strong></p><p>已发布早期版本的 <strong>Fabric 框架（Go）</strong> 与 <strong>optr SDK（Python）</strong>，可在网页或命令行环境中启动隔离算力实例。<strong>Operator 市场</strong> 预计于 2025 年底上线，定位为 <strong>AI 算力的去中心化执行层</strong>，主要服务对象包括 AI 开发者、机器人研究团队与自动化运营公司。<br></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心定位</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>角色比喻</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>主要功能</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>加密结合点</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>OpenMind</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">去中心化机器人操作系统（Robot OS Layer）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">系统大脑</p></td><td colspan="1" rowspan="1"><p style="text-align: center">连接 LLM 与机器人，实现多机器人协作与任务调度</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Fabric 网络上的节点协调与任务激励</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>CodecFlow</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">运行环境</p><p style="text-align: center">（Runtime Layer）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">行动引擎</p></td><td colspan="1" rowspan="1"><p style="text-align: center">执行多模态任务，连接 AI agent 与具身行为</p></td><td colspan="1" rowspan="1"><p 
style="text-align: center">VLA 执行市场与 Operator 激励机制</p></td></tr></tbody></table><p><br></p><h3 id="h-machine-economy-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>机器经济层（Machine Economy Layer）</strong></h3><h4 id="h-bitrobot-the-worlds-open-robotics-lab-httpsbitrobotai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>BitRobot - The World’s Open Robotics Lab</strong>&nbsp; (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://bitrobot.ai"><u>https://bitrobot.ai</u></a>)</h4><p>BitRobot 是一个面向具身智能（Embodied AI）与机器人研发的<strong>去中心化科研与协作网络（Open Robotics Lab）</strong>，由 <strong>FrodoBots Labs</strong> 与 <strong>Protocol Labs</strong> 联合发起。其核心愿景是通过“<strong>子网（Subnets）+ 激励机制 + 可验证工作（VRW）</strong>”的开放架构，构建开放的机器人科研协作网络。其核心作用包括：</p><ul><li><p>通过 <strong>VRW (Verifiable Robotic Work)</strong> 标准定义并验证每一项机器人任务的真实贡献；</p></li><li><p>通过 <strong>ENT (Embodied Node Token)</strong> 为机器人赋予链上身份与经济责任；</p></li><li><p>通过 <strong>Subnets</strong> 组织科研、算力、设备与操作者的跨地域协作；</p></li><li><p>通过 <strong>Senate + Gandalf AI</strong> 实现“人机共治”的激励决策与科研治理。</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>系统模块</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心组成</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>主要功能 / 作用</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>区块链层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Solana / BitRobot Token</p></td><td colspan="1" rowspan="1"><p style="text-align: center">VRW 验证机制 · 子网注册治理 · 激励结算</p></td><td colspan="1" rowspan="1"><p style="text-align: center">提供可验证任务与激励分配的经济闭环</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>协调层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Subnet Framework</p></td><td colspan="1" rowspan="1"><p style="text-align: center">任务定义 · 资源调度 · 数据与模型共享</p></td><td colspan="1" rowspan="1"><p style="text-align: center">构建跨机构的开放科研与执行网络</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>身份层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ENT（Embodied Node Token）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器人注册 · 质押 · 信用追踪</p></td><td colspan="1" rowspan="1"><p style="text-align: center">建立机器人链上身份与数字孪生体系</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>经济层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">MER Loop</p></td><td colspan="1" rowspan="1"><p style="text-align: center">度量–评估–奖励循环 · Senate + Gandalf AI</p></td><td colspan="1" rowspan="1"><p style="text-align: center">将科研成果转化为可量化激励活动</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>治理层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Senate / Gandalf AI / Foundation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">参议院评估 · AI 决策建议 · 基金会支持</p></td><td colspan="1" rowspan="1"><p style="text-align: center">实现人机共治的科研资源分配机制</p></td></tr></tbody></table><p><br>自 2025 年发布白皮书以来，BitRobot 已运行多个子网（如 <strong>SN/01 ET Fugi</strong>、<strong>SN/05 SeeSaw by Virtuals Protocol</strong>），实现去中心化远程操控与真实场景数据采集，并推出 <strong>$5M Grand Challenges 基金</strong> 推动全球模型开发的科研竞赛。</p><h4 id="h-peaq-the-economy-of-things-httpswwwpeaqnetwork" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>peaq – The Economy of Things</strong>&nbsp; (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.peaq.network"><u>https://www.peaq.network</u></a>)</h4><p>peaq 是专为机器经济打造的 Layer-1 区块链，为数百万台机器人与设备提供机器身份、链上钱包、访问控制以及纳秒级时间同步（Universal 
Machine Time）等底层能力。其 Robotics SDK 使开发者能够以极少代码让机器人“机器经济就绪”，实现跨厂商、跨系统的互操作性与交互。</p><p>目前，peaq 已上线全球首个代币化机器人农场，并支持 60 余个真实世界的机器应用。其代币化框架帮助机器人公司为资本密集型硬件筹集资金，并将参与方式从传统 B2B/B2C 扩展至更广泛的社区层。凭借由网络费用注入的协议级激励池，peaq 可补贴新设备接入并支持开发者，从而形成推动机器人与物理 AI 项目加速扩张的经济飞轮。</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>组件</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>功能</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>价值</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>peaq 区块链</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">提供机器身份、支付、访问控制等基础能力</p></td><td colspan="1" rowspan="1"><p style="text-align: center">作为机器经济的底层操作系统，实现原生互操作与链上交互</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>经济模型</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">协议激励池由网络费用补充</p></td><td colspan="1" rowspan="1"><p style="text-align: center">补贴机器接入、支持开发者，形成正向增长飞轮</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Robotics SDK</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">以少量代码接入 peaq 通用机器功能</p></td><td colspan="1" rowspan="1"><p style="text-align: center">让机器人实现 Machine Economy-ready，可连接应用与存储数据</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>x402 支付集成</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">接入 Coinbase/Google 支持的 x402 协议</p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器人与 AI 智能体可即时支付 API 和服务费用</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>UMT 时间同步</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">纳秒级链上 Precision Time Protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">提供全球机器的精准同步、时间戳与审计能力</p></td></tr></tbody></table><p><br></p><h3 id="h-data-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>数据采集层 （Data Layer）</strong></h3><p>旨在解决具身智能训练中高质量现实世界数据稀缺且昂贵的问题。通过多种路径采集和生成人机交互数据，包括远程操控（PrismaX, BitRobot Network）、第一视角与动作捕捉（Mecka、BitRobot Network、Sapien、Vader、NRN）以及仿真与合成数据（BitRobot Network），为机器人模型提供可扩展、可泛化的训练基础。</p><br><p>需要明确的是，<strong>Web3 并不擅长“生产数据”</strong>——在硬件、算法与采集效率上，Web2 巨头远超任何 DePIN 项目。其真正价值在于<strong>重塑数据的分配与激励机制</strong>。基于“<strong>稳定币支付网络 + 众包模型</strong>”，通过无许可的激励体系与链上确权机制，实现低成本的小额结算、贡献溯源与自动分润。但开放式众包仍面临质量与需求闭环难题——数据质量参差不齐，缺乏有效验证与稳定买方。</p><h4 id="h-prismax-httpsgatewayprismaxai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>PrismaX </strong>&nbsp;(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gateway.prismax.ai"><u>https://gateway.prismax.ai</u></a>)</h4><p>PrismaX 是一个面向具身智能（Embodied AI）的去中心化远程操控与数据经济网络，旨在构建“全球机器人劳动力市场”，让人类操作者、机器人设备与AI模型通过链上激励系统协同进化。项目核心包括两大组件：</p><ul><li><p><strong>Teleoperation Stack</strong> —— 远程操控系统（浏览器/VR界面 + SDK），连接全球机械臂与服务机器人，实现人类实时操控与数据采集；</p></li><li><p><strong>Eval Engine</strong> —— 数据评估与验证引擎（CLIP + DINOv2 + 光流语义评分），为每条操作轨迹生成质量评分并上链结算。</p></li></ul><p>PrismaX 通过去中心化激励机制，将人类操作行为转化为机器学习数据，构建从 <strong>远程操控 → 数据采集 → 模型训练 → 链上结算</strong> 的完整闭环，实现“人类劳动即数据资产”的循环经济。</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>模块</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>功能</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>区块链层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">PIX 协议 / <strong>L2 网络</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">管理质押、验证与结算，实现可信激励。</p></td></tr><tr><td colspan="1" 
rowspan="1"><p style="text-align: center"><strong>操控层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Browser / VR Stack</p></td><td colspan="1" rowspan="1"><p style="text-align: center">远程操控机器人，采集动作与视觉数据。</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>数据层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Eval Engine Data Hub</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">自动评估数据质量并上链确权。</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>应用层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PrismaX Gateway</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">任务发布、接单与激励结算。</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>模型层</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Robots + AI Models</p></td><td colspan="1" rowspan="1"><p style="text-align: center">机器人生成数据，模型持续学习优化。</p></td></tr></tbody></table><p><strong>项目进展与现实评估：</strong> PrismaX 已在 2025 年 8 月上线测试版（<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://gateway.prismax.ai">gateway.prismax.ai</a>），用户可远程操控机械臂执行抓取实验并生成训练数据。Eval Engine 已在内部运行。整体来看，PrismaX 技术实现度较高，定位清晰，是连接“人类操作 × AI模型 × 区块链结算”的关键中间层。其长期潜力有望成为“具身智能时代的去中心化劳动与数据协议”，但短期仍面临规模化挑战。</p><h4 id="h-bitrobot-networkhttpsbitrobotai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>BitRobot Network</strong>（<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://bitrobot.ai/"><u>https://bitrobot.ai/</u></a>）</h4><p>BitRobot Network 通过其子网实现视频、远程操控与仿真等多源数据采集。SN/01 ET Fugi 允许用户远程控制机器人完成任务，在“现实版 Pokémon Go 式”的交互中采集导航与感知数据。该玩法促成了 FrodoBots-2K 数据集的诞生，这是当前最大规模的人机导航开源数据集之一，被 UC Berkeley RAIL 和 Google DeepMind 等机构使用。SN/05 SeeSaw (Virtuals Protocol) 则通过 iPhone 在真实环境中大规模众包采集第一视角视频数据。其他已公布的子网，如 RoboCap 和 
Rayvo, by contrast, focuses on collecting first-person video data with low-cost physical devices.</p><h4 id="h-mecka-httpswwwmeckaai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Mecka </strong>&nbsp;(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.mecka.ai"><u>https://www.mecka.ai</u></a>)</h4><p>Mecka is a robotics data company that crowdsources first-person video, human motion data, and task demonstrations through gamified smartphone capture and custom hardware devices, building large-scale multimodal datasets to support the training of embodied-AI models.</p><h4 id="h-sapien-httpswwwsapienio" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Sapien</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.sapien.io/"><u>https://www.sapien.io/</u></a>)</h4><p>Sapien is a crowdsourcing platform centered on the idea that human motion data drives robot intelligence, collecting human movement, posture, and interaction data through wearables and a mobile app to train embodied-AI models. The project aims to build the world's largest human motion data network, turning natural human behavior into the foundational data source for robot learning and generalization.</p><h4 id="h-vaderhttpswwwvaderaiai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Vader</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.vaderai.ai"><u>https://www.vaderai.ai</u></a>)</h4><p>Vader crowdsources first-person video and task demonstrations through its real-world MMO app <strong>EgoPlay</strong>: users record daily activities from a first-person perspective and earn $VADER rewards. Its <strong>ORN data pipeline</strong> converts raw POV footage into privacy-processed, structured datasets with action labels and semantic narration, directly usable for humanoid-robot policy training.</p><h4 id="h-nrn-agentshttpswwwnrnagentsai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>NRN Agents</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.nrnagents.ai/"><u>https://www.nrnagents.ai/</u></a>)</h4><p>NRN Agents is a gamified embodied-RL data platform that crowdsources human demonstration data through browser-based robot control and simulated competitions. By turning tasks into contests, NRN generates long-tail behavior trajectories for imitation learning and continual reinforcement learning, serving as a scalable data primitive for sim-to-real policy training.</p><p><strong>Comparison of Embodied-AI Data Collection Projects</strong></p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary Data Type</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Characteristics</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>PrismaX</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Human teleoperation (real robots)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Expert-level demonstrations; high quality but limited scale</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>BitRobot Network</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Teleoperation + first-person video + simulation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Covers many embodiments with strong real-world diversity</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mecka / Sapien / Vader / NRN</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">First-person video + body motion (wearables / gamified tasks)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low-cost crowdsourcing; large scale but noisier data</p></td></tr></tbody></table><p><br></p><h3 id="h-middleware-and-simulation" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Perception &amp; Simulation (Middleware &amp; Simulation)</strong></h3><p>The perception and simulation layer provides the core infrastructure connecting the physical world to intelligent decision-making, spanning localization, communication, spatial modeling, and simulation-based training; it is the "middleware backbone" of large-scale embodied-AI systems. The field is still in early exploration: projects are differentiating across high-precision positioning, shared spatial computing, protocol standardization, and distributed simulation, and no unified standard or interoperable ecosystem has yet emerged.</p><p><strong>Middleware &amp; Spatial Infra</strong></p><p>Core robot capabilities (navigation, localization, connectivity, and spatial modeling) form the key bridge between the physical world and intelligent decision-making. Although broader DePIN projects (Silencio, WeatherXM, DIMO) have begun to mention "robotics," the projects below are the most directly relevant to embodied AI.</p><h4 id="h-robostack-cloud-native-robot-operating-stack-httpsrobostackio" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>RoboStack – Cloud-Native Robot Operating Stack&nbsp; </strong>(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://robostack.io"><u>https://robostack.io</u></a>)</h4><p>RoboStack is cloud-native robot middleware that uses RCP (Robot Context Protocol) to enable real-time task scheduling, remote control, and cross-platform interoperability for robots, along with cloud simulation, workflow orchestration, and agent integration.</p><h4 id="h-geodnet-decentralized-gnss-network-httpsgeodnetcom" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GEODNET – Decentralized GNSS Network&nbsp; </strong>(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://geodnet.com"><u>https://geodnet.com</u></a>)</h4><p>GEODNET is a global decentralized GNSS network delivering centimeter-level RTK positioning. Through distributed base stations and on-chain incentives, it provides drones, autonomous vehicles, and robots with a real-time "geospatial reference layer."</p><h4 id="h-auki-posemesh-for-spatial-computing-httpswwwaukicom" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Auki – Posemesh for Spatial Computing </strong>(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.auki.com"><u>https://www.auki.com</u></a>)</h4><p><strong>Auki</strong> builds a decentralized <strong>Posemesh spatial computing network</strong> that generates real-time 3D environment maps from crowdsourced sensors and compute nodes, providing a shared spatial reference for AR, robot navigation, and multi-device collaboration. It is key infrastructure connecting <strong>virtual space with real-world scenes</strong>, advancing the convergence of <strong>AR × Robotics</strong>.</p><p><strong>Tashi Network — Real-Time Mesh Collaboration Network for Robots</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://tashi.network"><u>https://tashi.network</u></a>)</p><p>A decentralized real-time mesh network achieving sub-30 ms consensus, low-latency sensor exchange, and multi-robot state synchronization. Its MeshNet SDK supports shared SLAM, swarm collaboration, and robust map updates, providing a high-performance real-time collaboration layer for embodied AI.</p><p><strong>Staex — Decentralized Connectivity and Telemetry Network</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.staex.io"><u>https://www.staex.io</u></a>)</p><p>A decentralized connectivity layer spun out of Deutsche Telekom's R&amp;D unit, providing secure communication, trusted telemetry, and device-to-cloud routing so that robot fleets can reliably exchange data and collaborate across different operators.</p><p><strong>Distributed Simulation &amp; Learning</strong></p><h4 id="h-gradient-towards-open-intelligencehttpsgradientnetwork" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Gradient - Towards Open Intelligence</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gradient.network/"><u>https://gradient.network/</u></a>)</h4><p>Gradient is an AI lab building "Open Intelligence," pursuing distributed training, inference, verification, and simulation on decentralized infrastructure. Its current stack includes Parallax (distributed inference), Echo (distributed reinforcement learning and multi-agent training), and Gradient Cloud (enterprise-facing AI solutions). On the robotics side, its Mirage platform offers <strong>distributed simulation, dynamic interactive environments, and large-scale parallel learning</strong> for embodied-AI training, accelerating the path from world models and generalist policies to real deployment. Mirage is in discussions with NVIDIA about potential collaboration around its Newton engine.</p><br><h3 id="h-robotfi-rwaifi" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Robot Asset Yield Layer (RobotFi / RWAiFi)</strong></h3><p>This layer focuses on the key step of turning robots from productive tools into financializable assets, building the financial infrastructure of the machine economy through asset tokenization, yield distribution, and decentralized governance. Representative projects include:</p><h4 id="h-xmaquinadao-physical-ai-dao-httpswwwxmaquinaio" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>XmaquinaDAO – Physical AI DAO</strong> (<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://www.xmaquina.io"><u>https://www.xmaquina.io</u></a>)</h4><p>XMAQUINA is a decentralized ecosystem that gives users worldwide liquid exposure to leading humanoid-robotics and embodied-AI companies, bringing on-chain opportunities once reserved for venture capital firms. Its token DEUS is both a liquid index asset and a governance vehicle, coordinating treasury allocation and ecosystem development. Through the DAO Portal and the Machine Economy Launchpad, the community can collectively own and support emerging Physical AI projects via tokenized machine assets and structured on-chain participation.</p><h4 id="h-gaib-the-economic-layer-for-ai-infrastructure-httpsgaibai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GAIB – The Economic Layer for AI Infrastructure </strong>&nbsp;(<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://gaib.ai/"><u>https://gaib.ai/</u></a>)</h4><p>GAIB aims to provide a unified <strong>economic layer</strong> for physical AI infrastructure such as GPUs and robots, connecting decentralized capital with real AI infrastructure assets to build a verifiable, composable, yield-bearing intelligent economy.</p><p>On the robotics side, GAIB does not "sell robot tokens"; instead, it brings robot hardware and operating contracts (RaaS, data collection, teleoperation, and more) <strong>on-chain in financialized form</strong>, converting <strong>real cash flows into composable on-chain yield assets</strong>. The system spans hardware financing (leasing / collateralization), operating cash flows (RaaS / data services), and data-stream revenue (licensing / contracts), making robot assets and their cash flows <strong>measurable, priceable, and tradable</strong>.</p><p>GAIB uses <strong>AID / sAID</strong> as its settlement and yield vehicles, safeguards returns through structured risk controls (over-collateralization, reserves, and insurance), and connects to DeFi derivatives and liquidity markets over the long term, closing the financial loop from robot assets to composable yield assets. Its goal is to become the <strong>Economic Backbone of Intelligence</strong> in the AI era.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e07a086b41e07f758fb1a67527f0c6b32cff03733c9f69e550b4ff484c197f9a.png"
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAZCAIAAADfbbvGAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFhklEQVR4nJWW228VRRzH5x8w8R3iEwYTjFExMcZovD1owqO8GoMPhkRivCYGVCJYxHjh0iBEsMFaWi5VwFIC9RSorVSgWqD0ds5hT86t57Jndmd3bjuzuz+zO6U0gIKbbzazm9n57f4+v+/8Fjk+w4TZJD27vu36OB1jh2EiHF86viRcYSIoVwDApbYJd3zRuoPkLWoSjoTUABCnMkcMEEXJlY7iIIwED1qtJgBkMoPLli3r7u5OwwRBGN0uqcJbhMq1esf+ro7OnlMDZzo6e/bs+7HnyNFTmbM9R442W24M0CpWew92AsC6t9YhhB57fCUAEMruuvR8AMenkzOzE1PT+WJpYmr6z0t/T87M5qzClWuTlIsQoFydu3x1kgnxx+joK6tWHT3+KwBIpe9l9SQA5YHJSRBGOopjgDBNjo5iobRQutJoZq2S7Xi+kFwGHuXp/ZBLTcXdhTBhLce7RbZzAzVhhEsVho4vMGGOz2zCDVKbcKPmv6jhsIbD5iEbtosH5iNiAAezq5fnpNKUK9umUun/SMgdUlSs1n7oPLC/61Bf/8DxvpMHj/xyOnOu59BRq1BOA8R/jZV3fvt7DGF1jgwPX2dc/r8AhLLZbH46myuWKrlcYXY2n8tbV69Ne5QbesVqbSab9SjHxCsUi5xLpcN7FyJcmmxorU3hmwqhXFGuuNTlmj2ZzSc2dPzpfNGYDjvcZYHxo+OLG4QE4cplAeFqQahJuDGwPe9njlN0ZoAJJ1wFYWRWUWHYwJTLIIYEO0/5YSI8kbwKTq3rsGAxapTOvgk2CCOlFzKoQ4Ds7HTmdJ/PqW2z0ZECALz+5mGE1o6NV9e88RpCaOLa1EcbNixZumT88pUQYDabI5TdTFG1YXd0HugfGOw+1NtsuYtdw6WOAQZ/69++9VNCWhNXa7vbRwBidP+7CD168PAVlB7H+04sX/4QQqi753AMcHZopN5sLqyDKBd5q5S3SlMzWcqDWxDpKC6WKxcvXSIeLVarY+OXAyUzmcn1G3sxEZkzgxs/24wd7+zQ0Oa2rS7xrXJlbHwib5V0FM9/geMLEy0Io8VwFtTAvlVtuiyo1Fo5q2ITxgR3E1hmQrLXeiIZeDQo1VpZq2KV6y4LjFDD5bZDbcIamCbACW9gmt7h2JdJwfhSqBD7ks7TlrYvPK1bTNqJy3XD4ZgIriMzTaowiuLkwVSIcrUAOYriKHVvnAI338i4ODc8BADvvf8BQqint7c1m9vy7MvFS+M1jNva2gBg/Mo1hNBTK1cAwIebtjz49PNBWvVJigrlua93fLdt5+69HT8dO35yf9ehffu7trXvOTEwGAIEYZS/fn37zh0AsOLhRxBCH2/+/K+OrhcQmsycs+bmVq9eDQD9A2cMcABYuvJJhJDtuGHanZBHebE8ZxWreas8V2/Um7ZVrFiFYq2eVEIIkLMKI6MXuQyGhoc3tW3JWYViqZLpP2W7PuW8XK1KpXNWYfuuvd2Hfw7CKGcVhs+P3oSMCRcqZOkObNxLheZSe0IlDHxZb/mlast2qEdTGCmkuptwwkRUm25CxaFChS6TdtposZ8AN0qdvCBnfjc2l6apuiwQ6VaKCfe0dtPAThrbNO0bZ+P8BCzxJPGkTwOfBsi4yVA1A6WTkg3CiMsAAPbu60AIrV33NkC89Yn7jr3zqorjIJjvaGayMb/SSf2EOko6VxTHURzpKIHc9uW2Hbu+PzEwuPmLb7Z8taPebKWra49yAFi/4ROE0DPPvQig1yDU/tIDKo6FvNkVRHpWOgx1dLuSlulR7lFOGMfEJ5QtbikhwMTU1Lad7SMXLugoPn+qb3p8TMXxwgRD0rz1nQOYLBsZc6b/QsneaQaYMCq0ucRcYRaYnyXzCKV30T8vv
FbkK8QY6gAAAABJRU5ErkJggg==" nextheight="718" nextwidth="927" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p style="text-align: center"><strong>Web3 Robotics Ecosystem Map:</strong> <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://fairy-build-97286531.figma.site/"><u>https://fairy-build-97286531.figma.site/</u></a></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Summary and Outlook: Practical Challenges and Long-Term Opportunities</strong></h2><p>In the long-term vision, the convergence of <strong>Robotics × AI × Web3</strong> aims to build a decentralized machine economy (DeRobot Economy), moving embodied intelligence from standalone automation toward networked collaboration with verifiable ownership, settlement, and governance. The core logic is a self-reinforcing loop of <strong>token → deployment → data → value redistribution</strong>, enabling robots, sensors, and compute nodes to be owned, traded, and to share in profits.</p><p>In practice, however, the model remains in early exploration, far from stable cash flows or a scalable commercial loop. Most projects remain at the narrative stage with limited real deployment. Robot manufacturing and operations are capital-intensive industries; token incentives alone cannot sustain infrastructure expansion, and while on-chain financial designs are composable, they have yet to solve risk pricing and yield realization for real-world assets. The "self-sustaining machine network" therefore remains idealized, and its business model awaits real-world validation.</p><ul><li><p><strong>Model &amp; Intelligence Layer</strong> is currently the direction with the greatest long-term value. Open-source robot operating systems, represented by OpenMind, attempt to break closed ecosystems and unify multi-robot collaboration and language-to-action interfaces. The technical vision is clear and the system design complete, but the engineering workload is enormous and validation cycles are long, so industry-scale positive feedback has yet to form.</p></li><li><p><strong>Machine Economy Layer</strong> is still at a preliminary stage. With few robots deployed in the real world, DID identity and incentive networks cannot yet form a self-consistent loop, and a true "machine labor economy" remains distant. Only after embodied intelligence reaches large-scale deployment will the economic effects of on-chain identity, settlement, and collaboration networks genuinely appear.</p></li><li><p><strong>Data Layer</strong> has the lowest barrier to entry and is currently the direction closest to commercial viability. Embodied-AI data collection demands extremely high spatiotemporal continuity and action-semantic precision, which determine data quality and reusability. Balancing crowdsourced scale against data reliability is the industry's core challenge. PrismaX's approach of first locking in B-side demand and then distributing collection and verification tasks offers a somewhat replicable template, but ecosystem scale and data trading will take time to accumulate.</p></li><li><p><strong>Middleware &amp; Simulation Layer</strong> is still in the technical validation phase: without unified standards and interfaces, no interoperable ecosystem has formed, and simulation results are difficult to transfer to real environments in a standardized way, limiting Sim2Real efficiency.</p></li><li><p><strong>Asset Yield Layer (RobotFi / RWAiFi):</strong> Web3 mainly plays a supporting role in supply-chain finance, equipment leasing, and investment governance, improving transparency and settlement efficiency rather than reshaping the industry's underlying logic.</p></li></ul><p>Still, we believe the intersection of <strong>Robotics × AI × Web3</strong> remains the origin point of the next-generation intelligent economy. It is not only a fusion of technical paradigms but an opportunity to restructure relations of production: once machines have identity, incentives, and governance mechanisms, human-machine collaboration can move from local automation to networked autonomy. In the short term this direction will remain dominated by narratives and experiments, but the institutional and incentive frameworks it lays down are paving the way for the economic order of a future machine society. Over the long run, the combination of embodied intelligence and Web3 will redraw the boundaries of value creation, making intelligent agents genuine economic actors with verifiable ownership, collaboration, and yield.</p><p><strong><em>Disclaimer:</em></strong><em> This article was written with the assistance of AI tools (ChatGPT-5 and DeepSeek). The author has made every effort to proofread and ensure the information is truthful and accurate, but omissions may remain. Note in particular that crypto-asset markets commonly show divergence between project fundamentals and secondary-market price performance. This content is intended solely for information aggregation and academic/research exchange; it does not constitute investment advice, nor should it be taken as a recommendation to buy or sell any token.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>Robotics</category>
            <category>Embodied AI</category>
            <category>web3</category>
            <category>ai</category>
            <category>Automation</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/6e60ae3ab7d9404032f65d98bddba38a0a33ae73118d2c46a9d361a0407f504a.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Brevis Research Report: The Infinite Verifiable Computing Layer of zkVM and ZK Data Coprocessor]]></title>
            <link>https://paragraph.com/@0xjacobzhao/brevis-research-report-the-infinite-verifiable-computing-layer-of-zkvm-and-zk-data-coprocessor</link>
            <guid>6hFgilbmzjMz0ms6mrOQ</guid>
            <pubDate>Mon, 27 Oct 2025 05:09:33 GMT</pubDate>
            <description><![CDATA[ZK verifiable computing is evolving from L2 zkRollups → general-purpose zkVM/zkCoprocessors → L1 zkEVM real-time proving (RTP) — unlocking computational freedom through off-chain computation + on-chain verification without sacrificing decentralization. Brevis stands out with its dual-engine architecture: Pico zkVM adopts a modular zkVM + coprocessor framework, Prism achieves block-level, sub-second proving on multi-GPU clusters, and the ZK Data Coprocessor enables verifiable computation and proo]]></description>
            <content:encoded><![CDATA[<p>The paradigm of <strong>Verifiable Computing</strong>—“off-chain computation + on-chain verification”—has become the universal computational model for blockchain systems. It allows blockchain applications to achieve <em>near-infinite computational freedom</em> while maintaining decentralization and <em>trustlessness</em> as core security guarantees. <strong>Zero-knowledge proofs (ZKPs)</strong> form the backbone of this paradigm, with applications primarily in three foundational directions: <strong>scalability</strong>, <strong>privacy</strong>, and <strong>interoperability &amp; data integrity</strong>. <strong>Scalability</strong> was the first ZK application to reach production, moving execution off-chain and verifying concise proofs on-chain for high throughput and low-cost trustless scaling.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/357395478f12d5a3656fa55d52c5ea8e91e2b45b627bd69fcd7262b4ca5f57bc.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEz0lEQVR4nB2TW2xTZQCA/3Np17Pe1q7tabfVXs7p6b1d6XW97LRdr2vXy7Yy127uRgd1E7axjLkFvMxdIoYI6yAhBMSQEEAji4aEQIhBDYnIolGXGNQXeSHAk0YfjBwDyffyvX8fMFsceoNVpzMRpOEFlJnUW612r8XuMzsCBksHofNqdB16S1hvixodScJAawydKiqo0gWUL2kjA5QtZnQmKVtMZwlrDEGtkdYaacoWMzgSAABEqSLd3lA8mSn2lelY1mBxw5gEwXAABGLcFIn3ZwvjcpULoEoAmklzNN4z5o8MdEQG6EQlkhwKxQebFHbAUgpwK2VPBuNDqfxEplDlytoBzwgghCOWKFo1RkLv0JlclKWDsoUQnhLDzXJnUdzmIgydOnO0sdnkHnpbl64pNAGqvdviyeutSVdoYI9/r9VX5CvcsNAgbvOPDIwUMmWFMd9i6edp0hxlCqBsHsTiArYEIGIANQG2AnCUoEHpHF0rnLgJ8Sm4kYD5JgQjowdP5ja2Ib4NEthViUNU9jAksL9Qvg2W+mCpR+ks1Y6eP37k8OzwsNFVwl7pRmU+AFhCwJawhBq0iYS5aoDikIAAmNpSmEvX3peSNIb7MXVK1N6XGl0KFufZLTFYEgzPnhm5cJ/d2sXX9XCJDIrTjZoMGZ2JVjfdqalIIL8wmOfInIhQDwAqBywFzFMDIGr11IIT5xCBHvAps7cYTIyhuFObWPSMXVX4a02aLkusxiOKIn0pWTlGl1f42n5tcql6bddS2iRja66+jfmp5eXatMOdKYY6eVIrxCcAYMmhhlYAiT2jJ+bvMZWtZxDfhsi9kNgNNbXL7eNrt5k7fzIHP34s0o+ITBXcuR9vrzlL6+kDp9XhpeD0ldk7/4QXb+G2193xqctnPrt5eXu6d+9wKsrHrRBXDWCsFeYoRKrAQP1x+dRzuvYV5R3EDQkUD0JiV2fl8uZt5i7DHNp6bkxtktmjtqEtU9+H3skr/okvEnM/dL/zU/SNq+auGVRGR9LVK/eZhY2vzx2anC50YWIKYCoAc1Vaym2ix3uO7aYmfy+t/p1bvBqfPEHvr+8ZXsvPXZ9Y3bH3nXek3tPF3yW7Vh2li7njDxceMsefMuP1H/2jdalzWKDrxo258cG50ycvnl1enB/sm8hG+VL9i14AR4lwNUT6wNi1pxeeMau7TOmjXzTJuWZ9of+DnbH6r5nKWbE4rDKMdi9/E575cvjCf5eeMA++fbBx96+lR8zgud1iukK5i0JrL2FO56O9uWjKaqbLSZrXTMIcBQCYCm5U63PT6fVbG98xk58zkZUbZHYREnl9I6dnb/w7sL5jjq30rOwc+Z7Z9ykzc4+Z/eS37bW3/jizcP7ijfU361vVKuXKonK3RGZ8NZEup7IWU+i1ZEggUEBsGYB5GlRIAo4albnsE6eInmVEFkDxUKMqzsKDCs/k2KUnKz8zvZuPglM3O/Ztk5G1FudsG30wV1ms99PVRLbNUUb4WpSngrhKlKfUqk0eqyfhcjRw8Zd7CQhEbGbhXlTmQyRepNnBknsbWkOslgiX6OFqc+rgsY5916XWCZFhqElflllqfHUGw/2wzAeJHYjQBGFtCFcJc1UsrhLBFACWAJYUsCUQKoVR8f9izEtDNAONcwAAAABJRU5ErkJggg==" nextheight="352" nextwidth="901" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The evolution of ZK verifiable computing can be summarized as 
<strong>L2 zkRollup → zkVM → zkCoprocessor → L1 zkEVM</strong>.</p><ul><li><p><strong>L2 zkRollups</strong> moved execution off-chain while posting validity proofs on-chain, achieving scalability and cost efficiency.</p></li><li><p><strong>zkVMs</strong> expanded into <strong>general-purpose verifiable computing</strong>, enabling cross-chain validation, AI inference, and cryptographic workloads.</p></li><li><p><strong>zkCoprocessors</strong> modularized this model into <strong>plug-and-play proof services</strong> for DeFi, RWA, and risk management.</p></li><li><p><strong>L1 zkEVMs</strong> brought this to <strong>Layer 1 Realtime Proving (RTP)</strong>, integrating proofs directly into Ethereum’s execution pipeline.</p></li></ul><p>Together, these advances mark blockchain’s shift from <strong>scalability</strong> to <strong>verifiability</strong>—ushering in an era of <strong>trustless computation.</strong></p><br><h3 id="h-i-ethereums-zkevm-scaling-path-from-l2-rollups-to-l1-realtime-proving" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>I. 
Ethereum’s zkEVM Scaling Path: From L2 Rollups to L1 Realtime Proving</strong></h3><p>Ethereum’s zkEVM scalability journey can be divided into two phases:</p><ul><li><p><strong>Phase 1 (2022–2024): </strong>&nbsp;L2 zkRollups migrated execution to Layer 2 and posted validity proofs on Layer 1—achieving lower costs and higher throughput, but introducing liquidity and state fragmentation while L1 remained constrained by N-of-N re-execution.</p></li><li><p><strong>Phase 2 (2025– ): </strong>&nbsp;L1 <em>Realtime Proving (RTP)</em> replaces full re-execution (N-of-N) with <em>1-of-N proof generation + lightweight network-wide verification</em>, boosting throughput without compromising decentralization—an approach still under active development.</p></li></ul><h4 id="h-l2-zkrollups-balancing-compatibility-and-performance" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>L2 zkRollups: Balancing Compatibility and Performance</strong></h4><p>In the flourishing Layer 2 ecosystem of 2022, Ethereum co-founder <strong>Vitalik Buterin</strong> classified ZK-EVMs into four types—<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.eth.limo/general/2022/08/04/zkevm.html"><strong><u>Type 1–4</u></strong></a>—highlighting the structural trade-offs between <strong>compatibility</strong> and <strong>performance</strong>. 
This framework established the coordinates for zkRollup design:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/82a2b6b5df9971a001001829dd995e6a4cee6fb3470faa0f012d7b866e016b23.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAYCAIAAAAUMWhjAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFR0lEQVR4nKWVa0xTZxjH+2X7si/bMjcSdcZ5ASzIpSC3Sgul9EIpPS2ltKW0IA0tjosioJsKmmnauaErZVgQSyykCKF4KUer4RST2SzZYF8GgnanjbojqJyWbVnBAO+CzSrFoeh+OV/Om3Pye//P857nEEAQi/rmZqVKLSlUlKrLJIUKgUhaWLRHUqhQqtRQnkSpUrcaDD6fD6wZwvIbt8t1qOE4UyQXFqsERSpGniyFDWUKpByJgsYX0/jiX8fvIQiCYdhbCjAMO6HVpGUzyQwqlUPfkRDFl+XJSovjqUmJ9N2h8UTrjau3hm79L0HNweotpK3rwzdGU2IzBPRNUZu3xW+Pz0jQtemq6qqOaY7BMIyiqO85AIDXluuFwOfzYRj21fGGuPBQdgY1gRhOiY/h0KmJEeHsVHJ95T630wkAgGH4LRM0NDS43Q/VqgqZKK+jtbXxxMlSRREjnUaKiiInJMZG7uRkc6UFBTW1tT29vZ1d5r4+y+3bP+A4viaBzzc7N4cTngPAos83CwDo7OqSlVVxC0vEqnJBkSpLLGfkSmMpmcmZ2dt37U7l8E83NTscjrUmGBy8ecHUl5DEBWDR4/ECAGq/OLKZlEKD8jkSRXQai5ItTGLxisv39w9c74dt/bCt6ey5NxDAMLyiY22tbQwBJ4VFiSaTSGmJ8enJ6TzG3pryoyeP7D+4r99qwTAMQZA3EHg8Xn9x/NhuDKj2SiNit4USN27dsT6UuHE7ccPBuvKL5gs/D4+M37uLIHa/4BVn6b8T+DX+lvx4+6bulNbUbjC1G5oaNYdrq1r038EwfKnf0m02j98Z83rwqclJHJ/GsN/dLtcK2esFAIA/Zv58vrzY2WVmQPkUDkRhcUkUWnQKNSIpNY5KJ1Fo/AL5FatVe+qbsbGxIAGKogiCwDBsNBp9Ph+O426Xy78XD/4YgMXMqCxaJBMA8PD+g0ZdE0dWQuOLObISKk+SmStjixVMkZzKy+dI94hKy3Vn28bv3AkSaDSayMjIdevWlZWVzXi937e0qquq1VXVyrJKvd7gmnIGcricLlOnqWSvkiuEIkg76WwGG8rJYGWQ08kxSSRBvvDK9UEEsY+Ojq5aIrcLLVZXvPtpGDtfHpaSeUTzbavBYOo0LW0fdTce1+r0OrVanEqJ5HJTmcyE5OTw3Fx6T49pevrxI+w+AMDhcIyMjPiLPL+wsFIwMT5xqP4opJAazUaNTvvTL8OXL11++vhJoB+2IaS0VAFBbLGYnyeCRPk5xvNtQ3b7kN1uu26zXbt2sbsbRdFVE0yMT3xZX/fOB4T3PiJsjdiA/nbXarXOzS41fKC7v0JUsvxNHMdr6g7TOJCwQM7mi7hCCS2LlydTpLFz2HyRtFjJ4ArazxuDBE6ns7a6Ij2ZGBMWwmUkDQxcNRqNgQf+PWOL8wvzXg8++QijsrI2RJLCdpGjd6dTOFBI6M6IpFQShbaJGJ3M4klUlT29vUECHMdHR0eH7HY7Mjg8PDw2NhbIOzc7648S4On09IFKlfZoebGIXn+guKFGWSbnVJbwLnY0W/tMXs/SEEQQ5FWjYjXmFxYIBAJEi/tadzpbyGBBGUsXL50nZJV+Ltec
0bSZzmnPaOcX5oMEFosFRVF8DUzjS3M3ZsuHMpUyZEtIeByR8D7h488+Sckgt3d06Fv0+ha9oc0Aw7DFYnkhcDgcFosFfgkEQRwOh/9jDPDgyZOe3l7TBdOzuWdTk1Mz3pkZ7wyO49gy/NsN+mW+jEajiYuLYzKZZrN5+frs33+taMlqvEYQGCQrTvfa+Qf4W5Phlay9aAAAAABJRU5ErkJggg==" nextheight="631" nextwidth="856" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Type 1: Fully Ethereum-equivalent</strong> — replicates Ethereum exactly with no protocol changes, ensuring perfect compatibility but resulting in the slowest proving performance (e.g., Taiko).</p></li><li><p><strong>Type 2: Fully EVM-equivalent</strong> — identical to the EVM at the execution level but allows limited modifications to data structures for faster proof generation (e.g., Scroll, Linea).</p></li><li><p><strong>Type 2.5: EVM-equivalent except for gas costs</strong> — adjusts gas pricing for ZK-unfriendly operations to improve prover efficiency while maintaining broad compatibility (e.g., Polygon zkEVM, Kakarot).</p></li><li><p><strong>Type 3: Almost EVM-equivalent</strong> — simplifies or removes some hard-to-prove features such as precompiles, enabling faster proofs but requiring minor app-level adjustments (e.g., zkSync Era).</p></li><li><p><strong>Type 4: High-level-language equivalent</strong> — compiles Solidity or Vyper directly to ZK-friendly circuits, achieving the best performance but sacrificing bytecode compatibility and requiring ecosystem rebuilds (e.g., StarkNet / Cairo).</p></li></ul><p>Today, the L2 zkRollup model is mature: execution runs off-chain, proofs are verified on-chain, maintaining Ethereum’s ecosystem and tooling while delivering high throughput and low cost. 
Yet, liquidity fragmentation and L1’s re-execution bottleneck remain persistent issues.</p><h4 id="h-l1-zkevm-realtime-proving-redefines-ethereums-light-verification-logic" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>L1 zkEVM: Realtime Proving Redefines Ethereum’s Light-Verification Logic</strong></h4><p>In <strong>July 2025</strong>, the <strong>Ethereum Foundation</strong> published <em>“</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2025/07/10/realtime-proving"><em><u>Shipping an L1 zkEVM #1: Realtime Proving</u></em></a><em>”</em>, formally proposing the L1 zkEVM roadmap.</p><p>L1 zkEVM upgrades Ethereum from an <strong>N-of-N re-execution</strong> model to a <strong>1-of-N proving + constant-time verification</strong> paradigm:&nbsp; a small number of provers re-execute entire blocks to generate succinct proofs, and all other nodes verify them instantly. This enables <strong>Realtime Proving (RTP)</strong> at the L1 level—enhancing throughput, raising gas limits, and lowering hardware requirements—all while preserving decentralization. 
The rollout plan envisions <strong>zk clients</strong> running alongside traditional execution clients, eventually becoming the protocol default once performance, security, and incentive models stabilize.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/32d1efa6e948700939277653fca0e1432585419cd373d93c79632bd458984bee.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAB30lEQVR4nJ1T227bMAz1///E9lSse9hL0nUPuXRFUSTAgAFbGnfOtRckbrzKtWzLkihSg+LMcZMMa3ugB4IgeUjq0KM9IKJB3PFYa4nI8zz7GhiDBxKUhjQv6h4iKo3XEhDZAwlVy/8lICKsvRKWCIwppKK1cYDgb/QbJ0AiqSET0q0I1yuq10o0jTiMU3Mv8CUEK2luhL1X9kagNg6A23KOwCAmqSh/1RJFQL0ZO/fDSYbW7cpFp1mxQ3A9i+bL2CDOcnMxYa1heMVh+RgvIobWajDbCYhIaah6fJB4lcCAqVFqqka0hh2C+SJe/uYGcZpBEMtRLIcJJEKkogByBKJQ2xXVoZEYIAPiQC9ZEQNcKhtqGxabIEBnlCkHCA6C9ggQbU1nrmBlIZHQuhT6ZkXPFOPuQGfi2R2UMMbUCMov2+ht54HBQsI/ZQrgVLzvt287NERUChJepJliSf7ERZbLNJM8k4+JEELluc6FQkQAqAhClj6w/C7iv27Z3YovonTF8lUiFxGfLePbFQ+fRDW0FwRBtz9tXQTN7vDj5++N9s/25bh9OW52/WZ3eHruN75ex/GmfW89x5dv80ZvctTy35/8OD7zjzt+sz/pDBYn/cmHjv/udPCpN3WlPe8PA9aH42ywYQIAAAAASUVORK5CYII=" nextheight="455" nextwidth="1114" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><br><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Paradigm</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Description</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pros &amp; Cons</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>N-of-N (old)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">All validators re-execute every transaction for consensus</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">High security, low throughput, high fees</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>1-of-N (new)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">A few provers execute and produce short proofs; everyone else verifies</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Safe scalability, low cost, hardware-light</p></td></tr></tbody></table><p><br><strong>L1 zkEVM Roadmap: Three Core Tracks</strong></p><ol><li><p><strong>Realtime Proving (RTP):</strong> Achieving block-level proof generation within a 12-second slot via parallelization and hardware acceleration.</p></li><li><p><strong>Client &amp; Protocol Integration:</strong> Standardizing proof-verification interfaces—initially optional, later default.</p></li><li><p><strong>Incentive &amp; Security Design:</strong> Establishing a prover marketplace and fee model to reinforce censorship resistance and network liveness.</p></li></ol><p>L1 zkEVM’s Realtime Proving (RTP) uses <strong>zkVMs</strong> to re-execute entire blocks off-chain and produce cryptographic proofs, allowing validators to verify results in under 10 seconds—replacing “re-execution” with <strong>“verify instead of execute”</strong> to drastically enhance Ethereum’s scalability and trustless validation efficiency.</p><p>According to the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://zkevm.ethereum.foundation/zkvm-tracker"><strong><u>Ethereum Foundation’s zkEVM Tracker</u></strong></a>, the main teams participating in the L1 zkEVM RTP roadmap include:&nbsp; <strong>SP1 Turbo (Succinct Labs)</strong>, <strong>Pico (Brevis)</strong>, <strong>Risc Zero</strong>, <strong>ZisK</strong>, <strong>Airbender (zkSync)</strong>, <strong>OpenVM (Axiom)</strong>, and <strong>Jolt (a16z)</strong>.</p><br><h3 id="h-ii-beyond-ethereum-general-purpose-zkvms-and-zkcoprocessors" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 
first:!mb-0"><strong>II. Beyond Ethereum: General-Purpose zkVMs and zkCoprocessors</strong></h3><p>Beyond the Ethereum ecosystem, <strong>zero-knowledge proof (ZKP)</strong> technology has expanded into the broader field of <strong>Verifiable Computing</strong>, giving rise to two core technical systems: <strong>zkVMs</strong> and <strong>zkCoprocessors</strong>.</p><h4 id="h-zkvm-general-purpose-verifiable-computing-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>zkVM: General-Purpose Verifiable Computing Layer</strong></h4><p>A <strong>zkVM (zero-knowledge virtual machine)</strong> serves as a <em>verifiable execution engine</em> for arbitrary programs, typically built on instruction set architectures such as <strong>RISC-V</strong>, <strong>MIPS</strong>, or <strong>WASM</strong>.</p><p>Developers can compile business logic into the zkVM, where provers execute it off-chain and generate <strong>zero-knowledge proofs (ZKPs)</strong> that can be verified on-chain. 
This enables applications ranging from <strong>Ethereum L1 block proofs</strong> to <strong>cross-chain validation, AI inference, cryptographic computation, and complex algorithmic verification</strong>.</p><p>Its key advantages lie in <strong>generality and flexibility</strong>, supporting a wide range of use cases; however, it also entails <strong>high circuit complexity and proof generation costs</strong>, requiring <strong>multi-GPU parallelism and deep engineering optimization</strong>.<br> Representative projects include <strong>Risc Zero</strong>, <strong>Succinct SP1</strong>, and <strong>Brevis Pico / Prism</strong>.<br></p><h4 id="h-zkcoprocessor-scenario-specific-verifiable-module" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>zkCoprocessor: Scenario-Specific Verifiable Module</strong></h4><p>A <strong>zkCoprocessor</strong> provides <em>plug-and-play</em> computation and proof services for specific business scenarios.<br> These platforms predefine data access and circuit logic—such as <strong>historical on-chain data queries, TVL calculations, yield settlement, and identity verification</strong>—so that applications can simply call SDKs or APIs to receive both computation results and on-chain proofs.</p><p>This model offers <strong>fast integration, high performance, and low cost</strong>, though it sacrifices generality.<br> Representative projects include <strong>Brevis zkCoprocessor</strong>, <strong>Axiom</strong>.<br></p><h4 id="h-comparative-logic-and-core-differences" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Comparative Logic and Core Differences</strong></h4><p>Overall, both <strong>zkVMs</strong> and <strong>zkCoprocessors</strong> follow the <em>“off-chain computation + on-chain verification”</em> paradigm of verifiable computing, where zero-knowledge proofs are used to validate off-chain results on-chain. 
Their economic logic rests on a simple premise: <strong>the cost of executing computations directly on-chain is significantly higher than the combined cost of off-chain proof generation and on-chain verification.</strong></p><p>In terms of <strong>generality vs. engineering complexity</strong>:</p><ul><li><p><strong>zkVM</strong> — a <em>general-purpose computing infrastructure</em> suitable for complex, cross-domain, or AI-driven tasks, offering maximum flexibility.</p></li><li><p><strong>zkCoprocessor</strong> — a <em>modular verification service</em> tailored for high-frequency, reusable scenarios such as <strong>DeFi</strong>, <strong>RWA</strong>, and <strong>risk management</strong>, offering low-cost, directly callable proof interfaces.</p></li></ul><p>In terms of <strong>business models</strong>:</p><ul><li><p><strong>zkVM</strong> follows a <strong>Proving-as-a-Service</strong> model, charging per proof (ZKP). It mainly serves <strong>L2 Rollups and infrastructure providers</strong>, characterized by <em>large contracts, long cycles, and stable gross margins.</em></p></li><li><p><strong>zkCoprocessor</strong> operates under a <strong>Proof-API-as-a-Service</strong> model, charging per task via API or SDK integration—similar to SaaS—targeting <strong>DeFi and application-layer protocols</strong> with <em>fast integration and high scalability.</em></p></li></ul><p>Overall, <strong>zkVMs are the foundational engines</strong> of verifiable computation, while <strong>zkCoprocessors are the application-layer verification modules</strong>. 
The former builds the <em>technical moat</em>, and the latter drives <em>commercial adoption</em>—together forming a <strong>universal trustless computing network</strong>.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Execution Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary Users</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Projects</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>zkRollup</strong></p></td><td colspan="1" rowspan="1"><p>Executes on L2, submits validity proofs to L1 for cheaper and faster transactions</p></td><td colspan="1" rowspan="1"><p>L2 (settled on L1)</p></td><td colspan="1" rowspan="1"><p>Applications &amp; Users</p></td><td colspan="1" rowspan="1"><p>zkSync, Scroll, Starknet</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>zkEVM (L1 Realtime Proving)</strong></p></td><td colspan="1" rowspan="1"><p>Replaces full L1 re-execution with proofs, safely raising gas limits</p></td><td colspan="1" rowspan="1"><p>L1</p></td><td colspan="1" rowspan="1"><p>Ethereum Clients / Protocol</p></td><td colspan="1" rowspan="1"><p>EF RTP Program, zkVM-based L1 provers</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>General-Purpose zkVM</strong></p></td><td colspan="1" rowspan="1"><p>Generates ZK proofs for arbitrary programs (block proofs or other computations)</p></td><td colspan="1" rowspan="1"><p>Off-chain / Any Environment → On-chain Verification</p></td><td colspan="1" rowspan="1"><p>Infrastructure</p></td><td colspan="1" rowspan="1"><p>Brevis Pico, Succinct SP1, Risc Zero R0VM</p></td></tr><tr><td colspan="1" 
rowspan="1"><p><strong>zkCoprocessor</strong></p></td><td colspan="1" rowspan="1"><p>Computes ZK proofs for historical on-chain data and business logic</p></td><td colspan="1" rowspan="1"><p>Off-chain Service + Smart Contract Verification</p></td><td colspan="1" rowspan="1"><p>dApp Teams</p></td><td colspan="1" rowspan="1"><p>Brevis, Axiom, Herodotus</p></td></tr></tbody></table><p><br></p><h3 id="h-iii-brevis-product-landscape-and-technical-roadmap" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Brevis: Product Landscape and Technical Roadmap</strong></h3><p>Starting from Ethereum’s <strong>L1 Realtime Proving (RTP)</strong>, zero-knowledge (ZK) technology is evolving toward an era of <strong>Verifiable Computing</strong> built upon the architectures of <strong>general-purpose zkVMs</strong> and <strong>zkCoprocessors</strong>.&nbsp;</p><p><strong>Brevis Network</strong> represents a fusion of these two paradigms — a <strong>universal verifiable computing infrastructure</strong> that combines high performance, programmability, and zero-knowledge verification — an <strong>Infinite Compute Layer for Everything.</strong></p><h3 id="h-31-pico-zkvm-modular-proof-architecture-for-general-purpose-verifiable-computing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.1 Pico zkVM: Modular Proof Architecture for General-Purpose Verifiable Computing</strong></h3><p>In 2024, Vitalik Buterin proposed the concept of <strong>“</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.eth.limo/general/2024/09/02/gluecp.html"><strong><u>Glue and Coprocessor Architectures</u></strong></a><strong>”</strong>, envisioning a structure that separates <strong>general-purpose execution layers</strong> from <strong>specialized coprocessor acceleration layers</strong>.&nbsp; Complex computations can thus be divided into flexible business logic (e.g., EVM, Python, RISC-V) and 
performance-focused structured operations (e.g., GPU, ASIC, hash modules).</p><p>This “general + specialized” dual-layer model is now converging across <strong>blockchain</strong>, <strong>AI</strong>, and <strong>cryptographic computing</strong>: EVM accelerates via <em>precompiles</em>; AI leverages <em>GPU parallelism</em>; ZK proofs combine <em>general-purpose VMs</em> with <em>specialized circuits</em>. The future lies in optimizing the “glue layer” for <strong>security and developer experience</strong>, while letting the “coprocessor layer” focus on <strong>efficient execution</strong>—achieving a balance among performance, security, and openness.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bc5ee7ab9af667b197aef7ec3844d90ccbd86d12a1b330853dc11b467f233dea.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAADwElEQVR4nKWUf0wTZxzG383NGdMFmbqtlTp14I+xYWM0ndmP6BJjWDLMGlNDM1gC0uKW6WLCcBsLdOugFUq8ji5bQWE0XZHWMZfQiMBJjzbXa+mNO/u7cC2t0KBH4Da8zcOMpda4RV39Y598/7w8z/s+z/c9sPw/+JPjhkbQi7bhUWx8jAxjuN+FB71kdBQb7x9E+geR2Rs0yHzKcVxWqb/uG5b943xPD18gKD5yXHu2r+j1QwA899iazU+uLwCrhdvEbysh07FT0IjTc9cgC9zDvFEUNRgMyq9Usg/qa5vO7X/36BPPFqzLFz+/bS9PWLTngPSTxo6y4yrE5U0bpFKpgYEBhmH+w+BWLOINEfYQiUT9Tp8XXphLeDwetbppKpGITCYwnAzHkvHp64npWera7LXZuUA05iH8HsL/2+Ji2sBsNkMQhCBIxoxhGI7jWJal6RsLzOLlX4xd6kpL24kfNFXdzfIftcfO608auzvUag1FUQ/EeD+AYRiVSiUSiYxGYzKZNJvNKpVKIpHIZLILVkt9w9ebN6z8ub1mqKcVtRlQmwG2ntHWHinIF5jNvclkAsN9w3bUTQR9IYoITJDBSTJIuYngsB29fMU5v8AAlmVTqZRer6coKhM3iqJisbi8vDxzIq+jr1tT+VohHwCQC0BLTdm5JnlB/gaNpuWpVauKSz863f7T7n2HV6x7kZdXmPvCztWC7TvfPKSETPJarQP79W7JEARl1P9d6e2lpeXlZdx5sauxUrL/ZXCHlpqyjsaqZ3JX6nRtTWpN2YcNSsi096AMPM7n5RWu3bIL5GzavU9SrzXKT7WOuvC0AQzDCoWCYRir1SoSidRqdUVFhUKh0Ovbei19utM1lrYT9j79GNyF2tph6xnos9Id2zf1Wi6kUjMY7hsaQfGrEX906mok5o9ORWIzOBmCHW7Y4U5HFIlEOjs7aZo2GAw0TTMMU1dXJxQKc3JyPB53+gYu27dfSLubq79rKG//8n2TtvqsRl7yTr
FO9004HHp0yfQdMvvDsuyD689xt6jIWNTvjAbQiYAr7Bv9fWHm3pr6Q5MODA9OxKPx6czEp68HJ+IYTrq8xOLNm+BhimmyP28EQVq12jyhsOS9j5WQ6dWDpWs2FvG3ivlbxTzhK3sOSD9v7qo4qfmn5Oxyt5eW7k0mCo7jLJbeKnm1VP6p5nvrrrekAPBW5G7hCV4CT28seqNECZmqaltHnO5H/yqywHHcsN3VP4g4MGKMDLuJkJsIjQcop5e8BDsuwY65+fm/AZXS7PNgqITEAAAAAElFTkSuQmCC" nextheight="181" nextwidth="411" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Pico zkVM</strong>, developed by <strong>Brevis</strong>, is a representative realization of this idea.<br> It integrates <strong>a general-purpose zkVM with hardware-accelerated coprocessors</strong>, merging programmability with high-performance ZK computation.</p><ul><li><p>Its <strong>modular architecture</strong> supports multiple proof backends (KoalaBear, BabyBear, Mersenne31), freely combining <strong>execution, recursion, and compression</strong> modules into a <em>ProverChain</em>.</p></li><li><p>Developers can write business logic in <strong>Rust</strong>, automatically generating cryptographic proofs without prior ZK knowledge—significantly lowering the entry barrier.</p></li><li><p>The architecture supports continuous evolution by introducing new proof systems and <em>application-level coprocessors</em> (for on-chain data, zkML, or cross-chain verification).</p></li></ul><p>Compared to <strong>Succinct’s SP1</strong> (a relatively monolithic RISC-V zkVM) and <strong>Risc Zero R0VM</strong> (a universal RISC-V execution model), <strong>Pico</strong>’s <em>Modular zkVM + Coprocessor System</em> decouples execution, recursion, and compression phases, supports backend switching, and enables coprocessor integration—yielding superior <strong>performance and extensibility</strong>.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Pico zkVM (Brevis)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Succinct SP1</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Risc Zero R0VM</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Architecture Philosophy</strong></p></td><td colspan="1" rowspan="1"><p>Modular zkVM + Coprocessor (zkCoprocessor)</p></td><td colspan="1" rowspan="1"><p>General-purpose RISC-V zkVM</p></td><td colspan="1" rowspan="1"><p>Pure zk-RISC-V design</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Modular Design</strong></p></td><td colspan="1" rowspan="1"><p>Execution, recursion, and compression are decoupled, supporting multiple proving backends and coprocessor integration, while remaining compatible with precompiled extensions.</p></td><td colspan="1" rowspan="1"><p>Precompiled module extensions; relatively monolithic</p></td><td colspan="1" rowspan="1"><p>Weak modularity</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Performance Mechanism</strong></p></td><td colspan="1" rowspan="1"><p>Multi-GPU parallelism (Pico Prism RTP); KoalaBear / BabyBear / M31 backends; zkData Coprocessor</p></td><td colspan="1" rowspan="1"><p>CPU-optimized recursive STARK; on-chain verifier contracts</p></td><td colspan="1" rowspan="1"><p>CPU-dominant zk-STARK</p></td></tr></tbody></table><p><br></p><h3 id="h-32-pico-prism-multi-gpu-cluster-breakthrough" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.2 Pico Prism: Multi-GPU Cluster Breakthrough</strong></h3><p><strong>Pico Prism</strong> marks a major leap for Brevis in multi-server GPU architecture, setting new records under the <strong>Ethereum Foundation’s RTP (Realtime Proving)</strong> framework.<br> It achieves <strong>6.9-second average proof time</strong> and <strong>96.8% RTP coverage</strong> on a <strong>64×RTX 5090 GPU cluster</strong>, leading the zkVM performance benchmarks.</p><p>This 
demonstrates the transition of zkVMs from <strong>research prototypes</strong> to <strong>production-grade infrastructure</strong> through optimizations at the architectural, engineering, hardware, and system levels.</p><ul><li><p><strong>Architecture:</strong> Traditional zkVMs (SP1, R0VM) focus on single-machine GPU optimization. Pico Prism pioneers <strong>cluster-level zkProving</strong>—multi-server, multi-GPU parallel proving—scaling ZK computation through multithreading and sharding orchestration.</p></li><li><p><strong>Engineering:</strong> Implements an <strong>asynchronous multi-stage pipeline</strong> (Execution / Recursion / Compression), cross-layer data reuse (proof chunk caching, embedding reuse), and multi-backend flexibility—boosting throughput dramatically.</p></li><li><p><strong>Hardware:</strong> On a <strong>64×RTX 5090 ($128K)</strong> setup, achieves <strong>6.0–6.9s</strong> average proving time and <strong>96.8% RTP coverage</strong>, delivering a <strong>3.4× performance-to-cost improvement</strong> over <strong>SP1 Hypercube (160×4090, 10.3s)</strong>.</p></li><li><p><strong>System Evolution:</strong> As the first zkVM to meet EF RTP benchmarks (&gt;96% sub-10s proofs, &lt;$100K hardware), <strong>Pico Prism</strong> establishes zk proving as mainnet-ready infrastructure for <strong>Rollups, DeFi, AI, and cross-chain verification</strong> scenarios.<br></p></li></ul><h3 id="h-33-zk-data-coprocessor-intelligent-zk-layer-for-blockchain-data" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.3 ZK Data Coprocessor: Intelligent ZK Layer for Blockchain Data</strong></h3><p>Traditional smart contracts “lack memory”—they cannot access historical states, recognize user behavior over time, or analyze cross-chain data.&nbsp; <strong>Brevis</strong> addresses this with a <strong>high-performance ZK Data Coprocessor</strong>, enabling contracts to <strong>query, compute, and verify</strong> historical blockchain data in a 
trustless way. This empowers <strong>data-driven DeFi</strong>, <strong>active liquidity management</strong>, <strong>reward distribution</strong>, and <strong>cross-chain identity verification</strong>.</p><p><strong>Brevis workflow:</strong></p><ol><li><p><strong>Data Access:</strong> Contracts call APIs to retrieve historical data trustlessly.</p></li><li><p><strong>Computation Execution:</strong> Developers define logic via SDK; Brevis performs off-chain computation and generates ZK proofs.</p></li><li><p><strong>Result Verification:</strong> Proofs are verified on-chain, triggering subsequent contract logic.</p></li></ol><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mode</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical Characteristics</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Advantages</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Typical Use Cases</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pure-ZK Mode</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">All results are submitted as ZK proofs and verified on-chain</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Full trust minimization</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High-security scenarios: identity verification, cross-chain asset proofs, loyalty/points systems</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>coChain (OP Mode)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Results are first verified via PoS; ZK proofs are submitted only if challenged</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Lower cost, lower latency</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">Real-time response applications: GameFi</p></td></tr></tbody></table><p>Brevis supports both <strong>Pure-ZK</strong> and <strong>coChain (Optimistic)</strong> models:</p><ul><li><p>The former achieves full trustlessness at higher cost.</p></li><li><p>The latter introduces <strong>PoS verification with ZK challenge-response</strong>, lowering costs while maintaining verifiability.</p></li></ul><p>Validators stake on Ethereum and are slashed if ZK challenges succeed—striking a balance between <strong>security and efficiency</strong>. Through the integration of <strong>ZK + PoS + SDK</strong>, Brevis builds a scalable and verifiable data computation layer. Currently, Brevis powers <strong>PancakeSwap, Euler, Usual, Linea</strong>, and other protocols. All <strong>zkCoprocessor partnerships</strong> operate under the <strong>Pure-ZK model</strong>, providing trusted data support for <strong>DeFi incentives, reward distribution, and on-chain identity systems</strong>, enabling smart contracts to truly gain “memory and intelligence.”<br></p><h3 id="h-34-incentra-zk-powered-verifiable-incentive-distribution-layer" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.4 Incentra: ZK-Powered Verifiable Incentive Distribution Layer</strong></h3><p><strong>Incentra</strong>, built on the <strong>Brevis zkCoprocessor</strong>, is a verifiable incentive platform that uses <strong>ZK proofs</strong> for secure, transparent, and on-chain reward distribution. 
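</p><p>As a hedged illustration of the kind of reward logic such a platform proves, here is a minimal time-weighted average (TWA) balance sketch. The function name, event format, and numbers are hypothetical; the production system computes and verifies this inside a ZK proof rather than in plain Python.</p>

```python
# Hypothetical sketch of a time-weighted average (TWA) balance reward metric.
# Input: a list of (timestamp, new_balance) update events for one holder.

def twa_balance(events, period_start, period_end):
    """Time-weighted average balance over [period_start, period_end)."""
    balance, last_ts, acc = 0, period_start, 0.0
    for ts, new_balance in sorted(events):
        if ts >= period_end:
            break
        if ts > last_ts:
            # Accrue the old balance over the elapsed interval.
            acc += balance * (ts - last_ts)
        balance, last_ts = new_balance, max(ts, period_start)
    # Accrue the final balance up to the end of the period.
    acc += balance * (period_end - last_ts)
    return acc / (period_end - period_start)

# A holder deposits 100 tokens at t=0 and 100 more halfway through a
# 1000-second epoch, giving a TWA of 150.
events = [(0, 100), (500, 200)]
print(twa_balance(events, 0, 1000))  # 150.0
```

<p>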
It enables <strong>trustless, low-cost, cross-chain automation</strong>, allowing anyone to verify rewards directly while supporting compliant, access-controlled execution.</p><p><strong>Supported incentive models:</strong></p><ul><li><p><strong>Token Holding:</strong> Rewards based on ERC-20 time-weighted average balances (TWA).</p></li><li><p><strong>Concentrated Liquidity:</strong> Rewards tied to AMM DEX fee ratios; compatible with Gamma, Beefy, and other ALM protocols.</p></li><li><p><strong>Lending &amp; Borrowing:</strong> Rewards derived from average balances and debt ratios.</p></li></ul><p>Already integrated by <strong>PancakeSwap</strong>, <strong>Euler</strong>, <strong>Usual</strong>, and <strong>Linea</strong>, Incentra enables a <strong>fully verifiable on-chain incentive loop</strong>—a foundational ZK-level infrastructure for DeFi rewards.<br></p><h3 id="h-35-brevis-complete-product-and-technology-stack-overview" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.5 Brevis: Complete Product and Technology Stack Overview</strong></h3><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical Highlights / Features</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>Base Layer: Computation &amp; Proof Kernel</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico zkVM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">General-purpose ZK virtual machine for verifiable arbitrary computation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Rust 
support; modular recursion/compression; function- and app-level coprocessors</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico Prism</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-GPU cluster zk-proving system providing high-performance computation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">6.9s avg proof time (45M gas block); 96.8% RTP coverage; ~50% cost reduction</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Middle Layer: Data Coprocessor Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Brevis zkCoprocessor</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reads multi-chain historical data, executes logic, generates composite proofs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Go/TS SDKs &amp; smart contract interface; supports Pure-ZK &amp; coChain; outputs verifiable results</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Top Layer: Incentive Protocol Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Incentra Protocol</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Manages reward pools, computes and distributes incentives automatically</p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK-verified incentive layer; cross-chain claiming &amp; compliance-aware controls</p></td></tr></tbody></table><p><br></p><h3 id="h-iv-brevis-zkvm-technical-benchmarks-and-performance-breakthroughs" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
Brevis zkVM: Technical Benchmarks and Performance Breakthroughs</strong></h3><p>The <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2025/07/10/realtime-proving"><strong><u>Ethereum Foundation (EF)</u></strong><u>’s </u><strong><u>L1 zkEVM Realtime Proving (RTP)</u></strong><u> </u></a>standard has become the de facto benchmark and entry threshold for zkVMs seeking mainnet integration. Its core evaluation criteria include:</p><ul><li><p><strong>Latency:</strong> &lt;= 10s for P99 of mainnet blocks</p></li><li><p><strong>On-prem CAPEX: </strong>&lt;= 100k USD</p></li><li><p><strong>On-prem power:</strong> &lt;= 10kW</p></li><li><p><strong>Code: </strong>Fully open source</p></li><li><p><strong>Security: </strong>&gt;= 128 bits</p></li><li><p><strong>Proof size: </strong>&lt;= 300KiB with no trusted setups</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d8c11e249823c8d66032f9df5d9968af43e283d63b5bd406431eda58068cf4ae.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAsTAAALEwEAmpwYAAADnklEQVR4nIVUz08bRxhdqVL7R7THHhL1Vqm3XnKpqvRQqWoPUdWWNkqkqglN0yppDykcUipFVkEFtQlugcZKaYhpAIEJWmyMYe2N12sMOMbBhtgbG2ywh13vD8be3ZlqPMayINCn1ezM6H3f23nfN8vgOlB9tG1kWhbG2LQs27YPJohy7AYQfdFNGkUfksdGqJ7NtBpRDMb47kz8vMPTORToci1ccEzdGg7+NMxf7vN1u8Vb94Xu0eg150LfWPTr3+a+uePvGFq85vR/d8d/08XfGOT6J1cu9sxe6vP9OLjouB+6MRA47/D0PAhfcHimuFRDoAi0J5ulp1JZKsqbWyCVA0JszT3pn2Q5b4AfmwlMeHnPLPfvlHfax037Qo/8/LQ/7HJ7Jrz8jD88NhO495CdFxJSsbIulZdShWf58mYeFIDWEGg9L0U+l1uJicHFhcd8SAjz8XgsthQNBTlREOKxWCK+mlp/GhOFqPA4Kgory0t8cFHKZg4laVhEs1MHIayaptk0vRWoPmb2dthkbDohHiVQ033Lz0f9a6n0Rmw1Ka4kVN1gstms1+udm5tjWXZ+fr5QKLTUswGMkaLqJaDmdkE0k1zbythHQLvDK2b+Zpedf97tvT3w+x+uMtAYtQ4IoWEYEMIXRJomxii/K5/6amicW9dqMqxVj9JalWSlUlG1fVg1LYsxDONoDVphU/dqJsO0dQ4FEVJV3TiBfMhhIlD3/cXYh9VI8nmuCIjAK1/c/CuEUEXb30cHt+FkEAFVVY8ToOYwZx3tvSyhvtTW5eJVKPNPMkWgYkxu5f8LHHcC07Jo3ZjXLr/VPkwmzKe/jEQ2tvPM6avcsoQxrtZI4AkyJIoWljYizUiXQNFzuzJhvP7tme9HqcCvbjEuSQzzoU+UMEaaAZtFahaS6qFWgeYJKI/K9LrDL7f11wWunrk+Uhf4rHskUhc4F00Wb49Fvux5hDHaLskQVjFGCJO0zS5vFLkKoWZAoBigokeSOaDoV/rYYkl+v2OCYT4hjFcvvXmFWvSxY1hYJQIfPMvvMe/1MG/8ABSdeefndanEhtMTC2u6Ace5ZH5H1jV1u1CwbZvcZCAr2fwOUHQfv1qtWec6HyQzu15h46MO8uGfdz38ZzaOMX73+r1wYqtUUd5uH1RU2DvCdw74i0A7fdG5p0LnuOBwBTDG/aOBItBAuZROpxv/ImIcaRiMMTHnYDxu2brfSji0JFZhjP8D9tJAOhgpElgAAAAASUVORK5CYII=" nextheight="723" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In <strong>October 2025</strong>, <strong>Brevis</strong> released the report <em>“</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.brevis.network/2025/10/15/pico-prism-99-6-real-time-proving-for-45m-gas-ethereum-blocks-on-consumer-hardware/"><em><u>Pico Prism — 99.6% Real-Time Proving for 45M Gas Ethereum Blocks on Consumer Hardware</u></em></a><em>,”</em> announcing that <strong>Pico Prism</strong> became 
the <strong>first zkVM to fully meet the Ethereum Foundation’s RTP standard</strong> for block-level proving.</p><p>Running on a <strong>64×RTX 5090 GPU cluster (~$128K)</strong>, Pico Prism achieved:</p><ul><li><p><strong>Average latency:</strong> 6.9 seconds</p></li><li><p><strong>96.8% &lt;10s coverage</strong> and <strong>99.6% &lt;12s coverage</strong> for <strong>45M gas blocks</strong>, significantly outperforming <strong>Succinct SP1 Hypercube</strong> (36M gas, 10.3s average, 40.9% &lt;10s coverage). With <strong>71% lower latency</strong> and <strong>half the hardware cost</strong>, Pico Prism demonstrated a <strong>3.4× improvement in performance-per-dollar efficiency</strong>.</p></li><li><p>Earned public recognition from the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/ethereum/status/1978497335115051056"><strong><u>Ethereum Foundation</u></strong></a><strong>, </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/VitalikButerin/status/1978432581298204951"><strong><u>Vitalik Buterin</u></strong></a><strong>, and </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/drakefjustin/status/1978435449489158312"><strong><u>Justin Drake</u></strong></a>.</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Metric</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>EF Standard</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico Prism (Brevis)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>SP1 Hypercube (Succinct)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RTP Coverage (&lt;10s)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">99% of blocks ≤10s</p></td><td colspan="1" rowspan="1"><p style="text-align: center">45M gas: 96.8% (&lt;10s), 99.6% (&lt;12s)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">36M gas: 40.9%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Avg Proof Time</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">—</p></td><td colspan="1" rowspan="1"><p style="text-align: center">6.0–6.9s</p></td><td colspan="1" rowspan="1"><p style="text-align: center">10.3s</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU Config / Cost</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">≤$100K, ≤10kW</p></td><td colspan="1" rowspan="1"><p style="text-align: center">64×5090 GPUs (~$128K)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">160×4090 GPUs (~$256K)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Performance Efficiency</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">—</p></td><td colspan="1" rowspan="1"><p style="text-align: center">~3.4× improvement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Baseline</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Open Source &amp; Reproducibility</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Core must be open source</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Public repo (<em>pico-ethproofs</em>) with reproducible experiments</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Results public but limited details</p></td></tr></tbody></table><p><br></p><h3 id="h-v-brevis-ecosystem-expansion-and-application-deployment" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. 
Brevis Ecosystem Expansion and Application Deployment</strong></h3><p>The <strong>Brevis zkCoprocessor</strong> handles <strong>complex computations that dApps cannot efficiently perform</strong>—such as analyzing historical user behavior, aggregating cross-chain data, or performing large-scale analytics—and outputs <strong>zero-knowledge proofs (ZKPs)</strong> that can be <strong>verified on-chain</strong>. This allows on-chain applications to <strong>trustlessly consume results</strong> by verifying a small proof, dramatically reducing gas, latency, and trust costs. Unlike traditional oracles that merely deliver data, <strong>Brevis provides mathematical assurance that the data is correct. </strong>&nbsp;Its application scenarios can be broadly categorized as follows:</p><ul><li><p><strong>Intelligent DeFi:</strong> Data-driven incentives and personalized user experiences based on behavioral and market history (e.g., <em>PancakeSwap, Uniswap, MetaMask</em>).</p></li><li><p><strong>RWA &amp; Stable Token Growth:</strong> Automated distribution of real-world yield and stablecoin income via ZK verification (e.g., <em>OpenEden, Usual Money, MetaMask USD</em>).</p></li><li><p><strong>Privacy-Preserving DEX (Dark Pools):</strong> Off-chain matching with on-chain verification—upcoming deployment.</p></li><li><p><strong>Cross-Chain Interoperability:</strong> Cross-chain restaking and Rollup–L1 verification, building a shared security layer (e.g., <em>Kernel, Celer, 0G</em>).</p></li><li><p><strong>Blockchain Bootstrap:</strong> ZK-based incentive mechanisms accelerating new chain ecosystems (e.g., <em>Linea, TAC</em>).</p></li><li><p><strong>High-Performance Blockchains (100× Faster L1s):</strong> Leveraging Realtime Proving (RTP) to enhance mainnet throughput (e.g., <em>Ethereum, BNB Chain</em>).</p></li><li><p><strong>Verifiable AI:</strong> Privacy-preserving and verifiable inference for the AgentFi and data-intelligence economy (e.g., <em>Kaito, 
Trusta</em>).</p></li></ul><p><br></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d4a3fb8d8eb9c7146760a361fae97a124565ee1686c044b305cca80604fe6a8.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEMUlEQVR4nEWS709bZRTH74iVODpo6e6v59e997m/Smmhd22lFDphZLjAfuCgcxthWzqbFbJCKNAWbPk1MO4P8LWJr8jMojFGs5iRZQ7iC53bIiNhvpqbxsxFNJmL+kJz26FPPi+ePCfne87zPYdxlE91dTXDMJYVLBSyqdS5wcE3U6lz2ezoZHa0VCoUS7mB/jeSyaFMZjidPp9On7+QSk5mR4eGTmcyFzKZ4ax9PzkykioUppqa/AzDVFdXV5QZh+Olqqoqh8Oxi9klCEJX14FIOOxrbAyHQ+FwyLKsaGu0PR4zTSPaGu080LF/f7xCd/fBoBX0NnhbopGWaMSygpZlxdpiEEKG2eVwvFyBEUQVEj8vKrxARCDxPKxxuvfUumucrhqn2+l0QZFAUSKEsqywp7beWX6scbpe2V1X43S53bwoyh6PKAK5zsXuqa13udm9e4UK9R6OaX/tyMPHP19fvfHDo4dr62tr67cikTYRKEQyiGSwHLh65cOtzft/PPu9UCiyHJSptxIikiECKRCIbGxsPvj+wfXVVar7EdGJZECsIawhootAYnz+0Nh4bio3nctN5wvFk4NnIFYhomVUnoeHD/dPvz07lZtJptKJxCnDbBKBDCAFkApAxpJ2Npmem790MTPe09uHsLaTayMAieFEUlfPu9xcbR3L8wgiCqACUQUqAIljxd27XQBICKsVIFIAfAEvoHqPUOdiWQ4QSd9JfBEVAGFajx9dev+9i9P5wqV5f6iFEwnEajkmC0DheZA5O/TxygdWoBkBgjBFOxIQKTwgZoNvYmpsYmq898hhADGWFIgphLKNXUZmDiUStzfurn+1vrZ+q+dInwBkiKgIFAAUhDWWA5fffef58+1YLE6pKVMvgDsOYFUAsm42fvrZJ/e+u10sFXkBlf/3v0UQKYwRCMU6D7bGu+IdBxOJU2ZDkBfIfyoikIOByKuR9oH+k0PnkqFwzO4AqyK2C4iYYklra493H+rt6T0qU7MyHhHY9toiWGO8erMu+wjSIaS8gO3pYw1Luj0urGLFpKEwafCLRPPwwMOBvTwS7EwVluep6U0+X5SqfoR0iHWIVHuRFLOySEQymJHTw//8+dfW/Y0vb9549uy3b25/PVMq1bo5HpY7JerKzc+f/P3rva2NKx9dvfbFtZUrK09+edzXf4LlgAjkWPvrd+9sbm9vb21tPn365M7db3/86VF2IlfvESqNMrFox8LCUr4wPZWbnl9cHhkbH0q+dex4IhrvrPhw4szZucWlyXx+cXm5MFM81jcQbeuUVa8AZBHIqt48ns2XZhfyhWJpdq6ic6xvAEsagNQuQKlP0wOG2ayaAUX3q2bAbNzXvC+qNzQhSZNVHzUC1Gikup+qPpl6eYGwHCy7rCKsqZodMswmqto6mubX9IDXZxHJsC1SzH8BB/IpxEl0/tAAAAAASUVORK5CYII=" nextheight="597" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Network Scale and 
Metrics, according to <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://explorer.brevis.network/"><strong><u>Brevis Explorer</u></strong></a> (as of October 2025):</p><ul><li><p>Over <strong>125 million ZK proofs</strong> generated</p></li><li><p>Covering <strong>~95,000 on-chain addresses</strong> and <strong>~96,000 application requests</strong></p></li><li><p>Cumulative <strong>incentive distribution: $223 million+</strong></p></li><li><p><strong>TVL supported:</strong> &gt;$2.8 billion</p></li><li><p><strong>Total verified transaction volume:</strong> &gt;$1 billion</p></li></ul><p>Brevis’s ecosystem currently focuses on <strong>DeFi incentive distribution</strong> and <strong>liquidity optimization</strong>, with computing power mainly consumed by <strong>Usual Money, PancakeSwap, Linea Ignition,</strong> and <strong>Incentra</strong>, which together account for <strong>over 85% of network load</strong>.</p><ul><li><p><strong>Usual Money (46.6M proofs):</strong> Demonstrates long-term stability in large-scale incentive distribution.</p></li><li><p><strong>PancakeSwap (20.6M):</strong> Highlights Brevis’s performance in real-time fee and discount computation.</p></li><li><p><strong>Linea Ignition (20.4M):</strong> Validates Brevis’s high-concurrency capacity for L2 ecosystem campaigns.</p></li><li><p><strong>Incentra (15.2% share):</strong> Marks Brevis’s transition from SDK toolkit to standardized incentive platform.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/72ed34ff223a272f67803c4d34564ffd66b9f73b9ed1367a78209c35631c0163.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFtklEQVR4nF2V228cdxXHhwceEH8AEo9ICAmQuEk0gT5FiAde0oJKVS5CBZEnWqmVUAqCCBKgqHbUNjixU5Q4QUpdx3HsXa8d39aXzbp2vV6vd70zO57rb+Y3v/tlZtdeO2moitaJVEA6j0fno/OVzucYWmAtCOPJHkOVALZsQGKoJe7uq24mu/vqsKuP9nU741KJVPNOig87qruvDjryoCMPO72Gg47SkhyP+v8ypCA+dxZZ6xqCkzY0nVAKAkInl5ssl1ZnZ2bGxm7nC4Xtynpmvx/UNnZMkstPTRemVpaXlpYWZ2fv3b07vrgwn8ShlkRQJFnyPwCb3MqTW2+Scj8CKwmRWqaaUgSWl4rF4sLK8tL0dCGXzzUb1cxeMyuV+TVSLpeLxbnND9bX18tr5dLc7L2FhXngO23N2inPNBMcfQLYws+9R96+RFZui3jTMtfWSknst1N+HII67Grfs1tm3W6ZWqt2xjsadfdVJxM7tSrD0WFXP44r1ZThaHtr07J2U00/ATTo6ZGkfziYr8R776+Vc7ncbqPGMEQQEBRHwC0WF3L5fLG4QBMgKJSCKEmi0L0zPr5Tq6aa8uNMGI6cPbNQKKytlRiGjMIngCY9VQB9ZbNEUYhjL4l94DRh0EpiP4n90LOS2MfQ5xgg5AsKGQIcRyQJGAaCQ8UShgCOfc/eDT2LIUCSAAGX4+gJwKGn3WQ6tM2NSkVRCFvVFvAxhirxNAlEEmY8yThiKERmwJIA2HUKHIkAT8LW9g5HEbBriWdFrpV4LQJ8gSKBIoJijuMegKLlo6P20N/OnTKMpZkJ/vDolSvnLy7eFSQbmzXTNr2Y3+yfWGnbtPrzpn3dOnqk/nVtsHgvN7eQN75ojIwOf/zocHj4euA7kxPjhmHs1Dbe/sfEG2+MtDOuWGJISrv7auLW8E+f/lp14z4k+Ow7/f+cm4gDeC1X0ppeHJ7oG3rvALGlM/edvNk54CsL043tja2ttW//4JurpcVM87m5ORC65fLSs888s7tbG758ffDNQUp7l2EoiTkKYtfkCaQxkCjspDoJnNBp4MAOHbsjYFcTDAOWxVRA6Fopg4rEqZQHnQ8VIxTasW+55k5b8ofdThJ6KQgECGMQaEkMLYhCfsN3c07MGFkvFd+50seSsFldHxoYAK5553b+xo1RLcjq5Iq/XQvdxp/+8ufND9bh9nJ1tB/DYK+FAo/wxLnUd35q/N1uKq47lalgN+NY8MTgHH2o4FUTG+9a8YOPX/vNj75gGPV6fXRkxDCMcrn4+a+8YHz2O0zyU595duLCwFazahjG2d//8f7Q7147+Wlob18aQHduU4FdwzCe+saXP/rogVG6cbI2k3GuhTC0wCmL3SicqDbabVVaXrg88BaCkbdn3rxxk+MoX5gfGctrSe9NLfqWhVF4behKy6xvV9bzd8ckg6srrdq2m2k8OHh5Zjr3sJsVmptzZk1xW5BVQ0nCExe7dQUcFLi+2ewInFHIE0+LmOGQhm5kNzy77rt1p7kTu42OgAIDRBJEMAE+R66kHon9Rwe6kymaAE0wgV6Ifqtto7eBJgCJBB4oQZEm/vdO/+zFMy8/2NfFvyfVUSJlwDE49HTHp5zHklKlhIj83W89F589j7X+w7ng6lXQbidPPX3i3KuvdA6yF3754oW3LmD1K209BtCYCBRo3pMUCT73pRPf/f7pfx+277wUlIcSncY9cIMrh6UZbVktAKMMhhvG1/d++Guk2784Y/319bCTAcP41I+ff1529VdPnDzz6ks4ffkJQLEYx6Flmhj1BOLZDeDZSnHPcWAcMQw5jqxW3fdbiiEQuggCIZnZbLiep1UPCeNQSW47XjWKyyIaqcwVrfGA/kS3Hm/wuBSVrGfzV
AstjxWmepciWc+9qaT//VIkQ8dtvQfw2J0EQxORmzF6HVvDcnaFXQ5Jn0aT/wF013NVbRu6PwAAAABJRU5ErkJggg==" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>DeFi Incentive Layer: </strong>Through <strong>Incentra</strong>, Brevis supports multiple protocols with transparent and continuous reward allocation:</p><ul><li><p><strong>Usual Money</strong> — Annual incentives exceeding <strong>$300M</strong>, sustaining stablecoin and LP yields.</p></li><li><p><strong>OpenEden &amp; Bedrock</strong> — CPI-based models for automated U.S. Treasury and Restaking yield distribution.</p></li><li><p><strong>Euler, Aave, BeraBorrow</strong> — ZK-verified lending positions and reward calculations.</p></li></ul><p><strong>Liquidity Optimization: </strong>Protocols such as <strong>PancakeSwap, QuickSwap, THENA, and Beefy</strong> employ Brevis’s <strong>dynamic fee and ALM incentive plugins</strong> for trade discounts and cross-chain yield aggregation.&nbsp; <strong>Jojo Exchange</strong> and the <strong>Uniswap Foundation</strong> use ZK verification to build safer, auditable trading incentive systems.</p><p><strong>Cross-Chain &amp; Infrastructure Layer: </strong>Brevis has expanded from <strong>Ethereum</strong> to <strong>BNB Chain, Linea, Kernel DAO, TAC, and 0G</strong>, offering <strong>verifiable computation and cross-chain proof capabilities</strong> across multiple ecosystems.&nbsp; Projects like <strong>Trusta AI, Kaito AI,</strong> and <strong>MetaMask</strong> are integrating Brevis’s <strong>ZK Data Coprocessor</strong> to power <strong>privacy-preserving loyalty programs, reputation scoring, and reward systems</strong>, advancing <strong>data intelligence within Web3</strong>.</p><p>At the infrastructure level, <strong>Brevis leverages the EigenLayer AVS network</strong> for restaking security, and integrates <strong>NEBRA’s Universal Proof Aggregation (UPA)</strong> to 
compress multiple ZK proofs into single submissions—<strong>reducing on-chain verification cost and latency</strong>.</p><p>Overall, <strong>Brevis</strong> now spans the full application cycle—from long-term incentive programs and event-based rewards to transaction verification and platform-level services. Its high-frequency verification tasks and reusable circuit templates provide <strong>Pico/Prism</strong> with real-world performance pressure and optimization feedback, which in turn can reinforce the <strong>L1 zkVM Realtime Proving (RTP)</strong> system at both the engineering and ecosystem levels—forming a <strong>two-way flywheel between technology and application</strong>.</p><h3 id="h-vi-team-background-and-project-funding" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Team Background and Project Funding</strong></h3><p><strong>Mo Dong | Co-founder, Brevis Network<br></strong> Dr. <strong>Mo Dong</strong> is the co-founder of <strong>Brevis Network</strong>. He holds a Ph.D. in Computer Science from the <strong>University of Illinois at Urbana–Champaign (UIUC)</strong>. His research has been published in top international conferences, adopted by major technology companies such as Google, and cited thousands of times.</p><p>As an expert in <strong>algorithmic game theory</strong> and <strong>protocol mechanism design</strong>, Dr. Dong focuses on integrating <strong>zero-knowledge computation (ZK)</strong> with <strong>decentralized incentive mechanisms</strong>, aiming to build a <strong>trustless Verifiable Compute Economy</strong>. He also serves as a <strong>Venture Partner at IOSG Ventures</strong>, where he actively supports early-stage investments in Web3 infrastructure.</p><p>The <strong>Brevis team</strong> was founded by cryptography and computer science Ph.D. holders from <strong>UIUC</strong>, <strong>MIT</strong>, and <strong>UC Berkeley</strong>. 
The core members have years of research experience in <strong>zero-knowledge proof systems (ZKP)</strong> and <strong>distributed systems</strong>, with multiple peer-reviewed publications in the field.&nbsp; Brevis has received <strong>technical recognition from the Ethereum Foundation</strong>, with its core modules regarded as foundational components for on-chain scalability infrastructure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/551504935ba6fcd497e6dbd4aeb4e772d4d6db8d32d0aee003b9b1fa5180f3cf.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAACkklEQVR4nIVTS2gTURR9i9KFUqmgSFW6UEEEKdTaKhZEBd0p6koEUfwsBHEj/YBtF7aKiCJCNrUU2tIkGD9p4iTmQ7QZ0sFkzDOZl8m8vEwnSfMh0+A0EHEtyZRhMtoWDsO8N4d77j33DHBRDp/HxSMYhxEVkGW093gzeATDTNBqmdPzNwcIM3QkHOIR1CCJgh687pNaN0QHmi+jzWjiA4KR/gxZ5p1l1uN2+jyuhU82v5cyEHgEhcRPffVkAhGMGyApbGwI6A8EI7+X6tzf0doCdmxvbW0BQ4OPJFFQzfkfoiSVdtvG30/fd9vG7XMDfvurZILbUCAOI5IoWOZnLpw/29vbfe/u7X8tMgikcJJdcoYC1u/0QihgZZeozSbQUC5lf1WKW1WHmkYKC+ozmUBb7ECdQwsJv2l1IqhubLjhukCYocNMUB+sBBcNLvoN6YQsY6DxCIbogCHTkGWW6G9NMXVRDhfl0LcchxGLeVZPIhj5PC6H/QMXq5eALKN2ZzHP6gUIRsFF/2fnR4KRNn19B/mcWFktlEvZfE7URiusZIrFlbJcxOs+1DOmKPKasirLBY2Wz6Vr1UqtWjFYnU2n1gUkUTC9efngzs2x0eGJJ2M8glwssiwKE4+HXowNPx14+PWLEwscwSjgpU51d/UdPXLtyiW1QUkU5uemr16+eOvGdTdlJxgh7gePIOLYKfNrlQNq1crJvh4AwM72tt272hu3USlDju/bc/rwgb6DnfNvTcuNOIXowKGOjr1t2871n9AyPTM9eay7CwAwNWlS/xiC0cjIYP+ZnufPxglGdYskUSgV6/5oFnGxSD4nFgoZvUVY4BRFVhRZlvNa3vI58c/vtVq1os80F2Oz6ZQ6wV8xQLmCWRfyVQAAAABJRU5ErkJggg==" nextheight="529" nextwidth="1309" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In <strong>November 2024</strong>, <strong>Brevis</strong> completed a <strong>$7.5 million seed round</strong>, <strong>co-led by Polychain Capital and Binance Labs</strong>, with participation from <strong>IOSG 
Ventures, Nomad Capital, HashKey, Bankless Ventures</strong>, and strategic angel investors from <strong>Kyber, Babylon, Uniswap, Arbitrum,</strong> and <strong>AltLayer</strong>.</p><h3 id="h-vii-competitive-landscape-zkvm-and-zkcoprocessor-markets" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. Competitive Landscape: zkVM and zkCoprocessor Markets</strong></h3><p>The <strong>Ethereum Foundation–backed </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://ETHProofs.org"><strong><u>ETHProofs.org</u></strong></a> has become the primary public platform tracking the <strong>L1 zkEVM Realtime Proving (RTP)</strong> roadmap, providing open data on zkVM performance, security, and mainnet readiness.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d2b9ae676f438bb64c68c3d7311329e765c3a7729031b149c2d014c692aa5a74.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEPklEQVR4nF2Tb0wbZRzHH9/5QrclJkbdMhdndNH4YjHM6DQ6mIlxuvjCf0yQf3FkIHGDDMKEVhkDIdYNmIxR/qV1tj0oRw+40h4H7e2uV67Xcne59o5ycOVaKQXCn7jM7J1pzhFi8n3xe54Xv+/v93y+DwAAgBPHP7l6pdhgrO3qNDrvnK7LzzMU5RmKcg0lHzWXHfn87WfP53x8o/yssTTPUHTWWHq67sIVyy+EKs4u8b5lQVfjHVNhbRVET9N/yYQq+pcFTGKplAzA0wfA4efzmxqKDcaKtlZ6ddG3LFApWRe7mcAWI7jCjbC+WyOWTqfVQWO3Riz3CA9E4b0IBIf81hnU5pvC4xwc8lOaBFE4KtDyPzuLjx5E1hPAHiRvw8MDnslOZLgLcVKa1I3Y7k5Ag9OuIQ/S3N8dTCncRpLbSNJJmdIkbiNJrsQi6wkmrRBqlM2oHpFBwiSbUSEKpzTJQXr5nfWCi5+CA2BUpEE7ZLs9PtbtRm6ODdeY2jwiMzQ7cdc9bPY4W4Z6LhrrCDXKpBW9nW9ZYNIKrnBMWqFX475lkVlTUYGGQ34mrfShMLkSsxFuUlts6W96I/cVGzMLSq4bqjpM1wZ6e3zTx3JOQn7PjXs9nU5r7W+tyqMdcXdN786kFf2t2Yw6FQtRmkSo0azTmmojvdaZSUqTTLZ+fCECUbj8cOvFVw+Dg8BKe0B52/Xiph8NVrOND778To4r6HPNk3DIPzjtIldi5EqM0iRde0dCjeo3eoHHOUwOZ4uFCKFG4ZCf21r9urzwtQ/fROUQeK/gqy9qLw8GZmx88PUz71q8yIX6ik6ndSJCsRl1vwg1SqhRYSdtxyfopMykldklXtxdc7HEn7NTbEZts5gpTcpuk1yqaSk5+s
Ehs28SfHb5UkHD1d+9yACFWzF0LhUfCeKoELST2N7s1GP9N/JC5H8beESG0iQXS/gWeTuJhTKJysYfTp1/CxYC4P3CL3NL8s0+dIDCYtuZuVS8oq3R7HEWVH9PadIegH0MlnTIWQZLPLOmTvABHbIZHSXUaD/q5P/eOPhU9oP9EZkBVTdb63o6ECni4OmAGp9Lxc0ep53EWoZ6IApn0sqezf5EutPjFGULTA6jQpBJK3pMUSHIba2WVn939NRxJ0cCJ3O/yw11jNncsfD9RAyPc79CA4PTrtL66l4EEnfX5Ieb+xlE1hN6lnQnbiOJhEkXS7AZ1YyO6gzEB5tHjj0HngSwSAE6KX9rrDl57kybxax36ffCDhpzzZPjHGWdmWwZ6tEHxKJhLJpNCzof8CnifgaoEKQ0SSeBCjS/nalqLH/hxKGxaBCgAl3587XK5safervYjIrJ4XZHX0Ovqayptt3RZ7IPniv7Bg75+e3UeJh0c4H5rSQcxMmVGL0axxWO307BIb+N9DJp5ZZjiFCjFhwhNeVSfdEzLz1hCXj/BUeyaJGkR9IJAAAAAElFTkSuQmCC" nextheight="611" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-rtp-track-four-core-competitive-dimensions" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>RTP Track: Four Core Competitive Dimensions</strong></h4><ol><li><p><strong>Maturity: </strong><em>Succinct SP1</em> leads in production deployment; <em>Brevis Pico</em> demonstrates the strongest performance, nearing mainnet readiness; <em>RISC Zero</em> is stable but has not yet disclosed RTP benchmarks.</p></li><li><p><strong>Performance: </strong><em>Pico’s</em> proof size (~990 kB) is about <strong>33% smaller</strong> than <em>SP1’s</em> (1.48 MB), reducing cost and latency.</p></li><li><p><strong>Security &amp; Audit: </strong><em>RISC Zero</em> and <em>SP1</em> have both undergone independent audits; <em>Pico</em> is currently completing its formal audit process.</p></li><li><p><strong>Developer Ecosystem: </strong>Most zkVMs use the <strong>RISC-V</strong> instruction set; <em>SP1</em> leverages its <strong>Succinct Rollup SDK</strong> for broad ecosystem integration; <em>Pico</em> supports <strong>Rust-based auto proof generation</strong>, with a rapidly maturing SDK.</p></li></ol><h4 id="h-market-structure-two-leading-tiers" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Market 
Structure: Two Leading Tiers</strong></h4><ul><li><p><strong>Tier 1 — Brevis Pico (+ Prism) &amp; Succinct SP1 Hypercube </strong>&nbsp;Both target the <strong>EF RTP P99 ≤ 10 s</strong> benchmark.</p><ul><li><p><em>Pico</em> innovates through a <strong>distributed multi-GPU architecture</strong>, delivering superior performance and cost efficiency.</p></li><li><p><em>SP1</em> maintains robustness with a monolithic system and ecosystem maturity.<br> → <em>Pico</em> represents <strong>architectural innovation and performance leadership</strong>, while <em>SP1</em> represents <strong>production readiness and ecosystem dominance</strong>.<br><br></p></li></ul></li><li><p><strong>Tier 2 — RISC Zero, ZisK, ZKM </strong>&nbsp;These projects focus on <strong>lightweight and compatibility-first</strong> designs but have not published complete RTP metrics (latency, power, CAPEX, security bits, proof size, reproducibility).&nbsp; <em>Scroll (Ceno)</em> and <em>Matter Labs (Airbender)</em> are extending <strong>Rollup proof systems to the L1 verification layer</strong>, signaling a shift from L2 scaling toward <strong>L1 verifiable computing</strong>.</p></li></ul><p>The 2025 zkVM field has converged on <strong>RISC-V standardization</strong>, <strong>modular evolution</strong>, <strong>recursive proof standardization</strong>, and <strong>parallel hardware acceleration</strong>.&nbsp; The <strong>Verifiable Compute Layer</strong> can be categorized into three main archetypes:</p><ul><li><p><strong>Performance-oriented:</strong> <em>Brevis Pico</em>, <em>SP1</em>, <em>Jolt</em>, <em>ZisK</em> — focus on low-latency, realtime proving via recursive STARKs and GPU acceleration.</p></li><li><p><strong>Modular / Extensible:</strong> <em>OpenVM</em>, <em>Pico</em>, <em>SP1</em> — emphasize plug-and-play modularity and coprocessor integration.</p></li><li><p><strong>Ecosystem / Developer-friendly:</strong> <em>RISC Zero</em>, <em>SP1</em>, <em>ZisK</em> — prioritize SDK completeness and 
language compatibility for mass adoption.</p></li></ul><h4 id="h-zkvm-project-comparison-as-of-oct-2025" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>zkVM Project Comparison (as of Oct 2025)</strong></h4><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>One-Line Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical Route</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Architecture &amp; Highlights</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Current Stage</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Brevis Pico</strong></p></td><td colspan="1" rowspan="1"><p>Modular zkVM + Data Coprocessor</p></td><td colspan="1" rowspan="1"><p>RISC-V · Turbo Plonk · Recursive RTP</p></td><td colspan="1" rowspan="1"><p>“Glue + Coprocessor” plug-in system, multi-GPU parallel proving, historical-data acceleration; EF RTP-compliant (P99 ≤ 10 s)</p></td><td colspan="1" rowspan="1"><p>v1.1.4 released · active OSS · listed on EF RTP</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Succinct SP1</strong></p></td><td colspan="1" rowspan="1"><p>General RISC-V zkVM / Rollup SDK</p></td><td colspan="1" rowspan="1"><p>STARK + Recursion + SNARK (FFLONK)</p></td><td colspan="1" rowspan="1"><p>Precompiled acceleration (hash/EC), on-chain verifier, broad Rollup SDK adoption</p></td><td colspan="1" rowspan="1"><p>Mainnet ready · active OSS · multi-chain integration</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>RISC Zero (R0VM)</strong></p></td><td colspan="1" rowspan="1"><p>General zkVM + Cloud Verification Platform</p></td><td colspan="1" rowspan="1"><p>zk-STARK + Recursion + Groth16 Wrapper</p></td><td colspan="1" 
rowspan="1"><p>Bonsai API, Receipt model, mature Rust SDK, high compatibility</p></td><td colspan="1" rowspan="1"><p>Stable OSS · no public RTP metrics</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>OpenVM (Axiom)</strong></p></td><td colspan="1" rowspan="1"><p>Modular Extensible zkVM Framework</p></td><td colspan="1" rowspan="1"><p>CPU-less Core · Multi-ISA Extensions</p></td><td colspan="1" rowspan="1"><p>EC/Pairing/Int256 instructions + GPU accel · modular circuits</p></td><td colspan="1" rowspan="1"><p>v1.2 beta · active OSS · pre-mainnet</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ziren (ZKM)</strong></p></td><td colspan="1" rowspan="1"><p>General MIPS-based zkVM</p></td><td colspan="1" rowspan="1"><p>MIPS32r2 + STARK</p></td><td colspan="1" rowspan="1"><p>Cross-platform verification · stability focus</p></td><td colspan="1" rowspan="1"><p>Dev in progress · locally compilable</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Airbender (Matter Labs)</strong></p></td><td colspan="1" rowspan="1"><p>zkSync RISC-V Proving System</p></td><td colspan="1" rowspan="1"><p>STARK/FRI Optimized + SNARK Wrap</p></td><td colspan="1" rowspan="1"><p>Six-stage pipeline · DEEP FRI · consumer GPU friendly</p></td><td colspan="1" rowspan="1"><p>Alpha stage · GitHub OSS · pre-mainnet</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ceno (Scroll)</strong></p></td><td colspan="1" rowspan="1"><p>Scroll Accelerated zkVM / Proof Stack</p></td><td colspan="1" rowspan="1"><p>Rust + RISC-V + Recursive Pipeline</p></td><td colspan="1" rowspan="1"><p>Multithreaded data parallelism · Scroll internal Rollup verifier</p></td><td colspan="1" rowspan="1"><p>Internal testing</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Jolt (a16z)</strong></p></td><td colspan="1" rowspan="1"><p>High-Performance RISC-V zkVM (Lookup/Sum-Check)</p></td><td colspan="1" rowspan="1"><p>Lookup + Sum-Check + Lasso</p></td><td colspan="1" rowspan="1"><p>Minimal (&lt;25K LoC) 
research impl · perf-focused</p></td><td colspan="1" rowspan="1"><p>Open research · Rust impl</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>ZisK (ex-Polygon Hermez)</strong></p></td><td colspan="1" rowspan="1"><p>Low-Latency zkVM Toolchain</p></td><td colspan="1" rowspan="1"><p>RISC-V 64 + STARK</p></td><td colspan="1" rowspan="1"><p>Modular interfaces (JSON-RPC/gRPC) · multi-language SDK</p></td><td colspan="1" rowspan="1"><p>Open development</p></td></tr></tbody></table><p><br></p><h4 id="h-zkcoprocessor-landscape" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>zkCoprocessor Landscape</strong></h4><p>The zk-Coprocessor market is now led by <strong>Brevis</strong>, <strong>Axiom</strong>, <strong>Herodotus</strong>, and <strong>Lagrange</strong>.</p><ul><li><p><strong>Brevis</strong> stands out with a <strong>hybrid architecture</strong> combining a <strong>ZK Data Coprocessor + General-Purpose zkVM</strong>, enabling historical data access, programmable computation, and <strong>L1 Realtime Proving (RTP)</strong> capability.</p></li><li><p><strong>Axiom</strong> specializes in verifiable queries and circuit callbacks.</p></li><li><p><strong>Herodotus</strong> focuses on provable access to historical blockchain states.</p></li><li><p><strong>Lagrange</strong> adopts a <strong>ZK + Optimistic hybrid design</strong> to improve cross-chain computation efficiency.</p></li></ul><p>Overall, zk-Coprocessors are emerging as <strong>“Verifiable Service Layers”</strong> that bridge <strong>DeFi, RWA, AI, and digital identity</strong> through trustless computational APIs.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Type</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>One-Line Positioning</strong></p></td><td 
colspan="1" rowspan="1"><p style="text-align: center"><strong>Representative Capabilities / Features</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Brevis</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK Data Coprocessor + General zkVM (Pico/Prism)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Cross-chain historical data access + programmable computation engine + L1 Realtime Proofs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">TS/Go SDK &amp; circuit abstractions · Pico Prism RTP-compliant · Ecosystem products like Incentra</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Axiom</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK Data Coprocessor</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable range queries on accounts/storage/logs with circuit callbacks</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Mainnet V2 live · TypeScript SDK for custom historical data circuits</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Herodotus</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK Data Coprocessor</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Provable access to historical states &amp; events</p></td><td colspan="1" rowspan="1"><p style="text-align: center">“Historical Data Availability + Composable Query Layer” · multi-chain data access &amp; verifiable computing</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Lagrange</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">State Coprocessor (ZK + Optimistic Hybrid)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Cross-chain state proof &amp; aggregate computation services</p></td><td colspan="1" rowspan="1"><p style="text-align: center">“State 
Committees” model + ZK Coprocessor · EigenLayer staking &amp; light-client verification</p></td></tr></tbody></table><p><br></p><h3 id="h-viii-conclusion-business-logic-engineering-implementation-and-potential-risks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. Conclusion: Business Logic, Engineering Implementation, and Potential Risks</strong></h3><h4 id="h-business-logic-performance-driven-flywheel-at-dual-layers" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Business Logic: Performance-Driven Flywheel at Dual Layers</strong></h4><p>Brevis builds a <strong>multi-chain verifiable computing layer</strong> by integrating its <strong>general-purpose zkVM (Pico/Prism)</strong> with a <strong>data coprocessor (zkCoprocessor)</strong>.</p><ul><li><p>zkVM addresses <em>verifiability of arbitrary computation</em>,</p></li><li><p>zkCoprocessor enables <em>business deployment for historical and cross-chain data</em>.</p></li></ul><p>This creates a <strong>“Performance → Ecosystem → Cost” positive feedback loop</strong>:<br> as <strong>Pico Prism’s RTP performance</strong> attracts leading protocol integrations, proof volume scales up and per-proof cost declines, forming a <strong>self-reinforcing dual flywheel</strong>.</p><p>Brevis’s core competitive advantages can be summarized as:</p><ul><li><p><strong>Reproducible performance</strong> — verified within the Ethereum Foundation’s <em>ETHProofs RTP</em> framework;</p></li><li><p><strong>Architectural moat</strong> — modular design with multi-GPU parallel scalability;</p></li><li><p><strong>Commercial validation</strong> — large-scale deployment across <strong>incentive distribution</strong>, <strong>dynamic fee modeling</strong>, and <strong>cross-chain verification</strong>.</p></li></ul><h4 id="h-engineering-implementation-verification-as-execution" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Engineering Implementation: 
Verification-as-Execution</strong></h4><p>Through its <strong>Pico zkVM</strong> and <strong>Prism parallel proving framework</strong>, Brevis achieves <strong>6.9-second average latency</strong> and <strong>P99 &lt; 10 seconds</strong> for <strong>45M gas blocks</strong> (on a 64×5090 GPU setup, &lt;$130K CAPEX) — maintaining top-tier performance and cost efficiency. The <strong>zkCoprocessor module</strong> supports <strong>historical data access, circuit generation, and on-chain proof verification</strong>, flexibly switching between <strong>Pure-ZK</strong> and <strong>Hybrid (Optimistic + ZK)</strong> modes.&nbsp; Overall, its performance now aligns closely with the <strong>Ethereum RTP hardware and latency benchmarks</strong>.</p><p><strong>Potential Risks and Key Considerations</strong></p><ul><li><p><strong>Technical &amp; Compliance:</strong> Brevis must validate power use, security level, proof size, and trusted setup via third-party audits. Performance tuning and potential EIP changes remain key challenges.</p></li><li><p><strong>Competition:</strong> Succinct (SP1/Hypercube) leads in ecosystem maturity, while RISC Zero, Axiom, OpenVM, Scroll, and zkSync continue to compete strongly.</p></li><li><p><strong>Revenue Concentration:</strong> Proof volume is ~80% concentrated in four apps; diversification across chains and sectors is needed. GPU price volatility may also affect margins.</p></li></ul><p>Overall, Brevis has established an initial moat across both <strong>technical reproducibility</strong> and <strong>commercial deployment</strong>: <strong>Pico/Prism</strong> firmly leads the L1 RTP track, while the <strong>zkCoprocessor</strong> unlocks high-frequency, reusable business applications. 
Going forward, Brevis should aim to <strong>fully meet the Ethereum Foundation’s RTP benchmarks</strong>, continue to <strong>standardize coprocessor products and expand ecosystem integration</strong>, and advance <strong>third-party reproducibility, security audits, and cost transparency</strong>. By balancing <strong>infrastructure and SaaS-based revenues</strong>, Brevis can build a <strong>sustainable commercial growth loop</strong>.</p><p><strong>Disclaimer:<br></strong>This report was prepared with assistance from the AI tool <strong>ChatGPT-5</strong>. The author has made every effort to ensure factual accuracy and reliability; however, minor errors may remain.&nbsp; Please note that crypto asset markets often show a disconnect between <strong>project fundamentals</strong> and <strong>secondary-market token performance</strong>.&nbsp; All content herein is intended for <strong>informational and academic/research purposes only</strong>, and <strong>does not constitute investment advice</strong> or a recommendation to buy or sell any token.</p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>zk</category>
            <category>zkvm</category>
            <category>zkevm</category>
            <category>zkcoprocessor</category>
            <category>brevis</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/4c981022ececa9808424549bb2da5bcca1e332be6666a2af241f11eaf592c5a0.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Brevis Research Report: The Limitless Verifiable Compute Layer of zkVM and Data Coprocessor]]></title>
            <link>https://paragraph.com/@0xjacobzhao/brevis研报：zkvm-与数据协处理器的无限可信计算层</link>
            <guid>xkMB9hij19PwSytnJtDq</guid>
            <pubDate>Mon, 27 Oct 2025 03:16:38 GMT</pubDate>
            <description><![CDATA[ZK verifiable computing is evolving from L2 zkRollup → general-purpose zkVM/zkCoprocessor → L1 zkEVM Realtime Proving (RTP), using "off-chain computation + on-chain verification" to unlock computational freedom while preserving decentralization.
Brevis breaks through with a "dual-engine" architecture: the Pico zkVM adopts a modular zkVM-plus-coprocessor design, Prism achieves second-level proofs of full Ethereum blocks on multi-GPU clusters, and the ZK Data Coprocessor provides verifiable computation over historical and cross-chain data with proofs submitted back on-chain. Brevis has built an initial moat on "performance leadership + application density," and the next phase is to consolidate its dual technology-and-business flywheel.]]></description>
            <content:encoded><![CDATA[<p>“<strong>链下计算 + 链上验证</strong>”的<strong>可信计算（Verifiable Computing）</strong>范式，已成为区块链系统的通用计算模型。它让区块链应用在保持去中心化与信任最小化（trustlessness）安全性的前提下，获得几乎无限的计算自由度（computational freedom）。零知识证明（ZKP）是该范式的核心支柱，其应用主要集中在扩容（Scalability）、隐私（Privacy）以及互操作与数据完整性（Interoperability &amp; Data Integrity）三大基础方向。其中，扩容是 ZK 技术最早落地的场景，通过将交易执行移至链下、以简短证明在链上验证结果，实现高 TPS 与低成本的可信扩容。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0b56d3265d5283991c7fc1a86e1869afd884cac6a7fddc618fa62f4788554dbc.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAABYlAAAWJQFJUiTwAAAE1klEQVR4nBXSW2xTdQCA8f+5dGt71nYt7Wm7rdfTc3pOe3qjXbeu7Xrbelm7XrbObe3ubIMuDMqcMAaigttQwqJS0KAZoiYjM0TQBGIgxOAtSlg0ATVifJEXBZ80+mI4huR7/D1+wGJ1mGkbSVkJE02YaBNpJSjW5vDaHO12l59mfQTlJcwdDBui7V2MK0EwIaM5qKOe1kL6tWRASwZoZ5xxJSh7F8WGjebgU8OEKHsX7YoDAGCtjmxtC3bHM4X+cmc0w1jdCF+O8JUANOxQWSLdxd78lErrAagGAAVpjcYzU/7IkC8yGIqXI4mRQKzUqHYCnkasdND2ZCA2ksrtyvTNYLgTNDAAhgVSeVOzjiEoJ8l4KNZnYoNIg6Yet6jcBVmLh6BDJiYi3GFxj7xIJefUBj/l7LF5cmZrwhMc3NlRZNsKInUrLLHIWvzjg+O5dEnN5JrYImZM8zUpgPIwCMFAnRwgOwCQAF4TVK8B9Rr3xGr+9KeQiIL5BCyywAJTdP713MmPIbEdEjn08f1U7wIkdkAiOySyw4o2ROHVuAYqz2+cPvTsgdExxjMg1PWieDsAPAnEk/MkBrSRhIV6gKoQMQGERja/0FM5hZMRocov1CeljkJyfDlQWKxrjsHyjnD13NiFO3XNMRGZxog0iocEhhQZ3RebqXmTcyF/5uBQtk7hhcU0ALASQtUwpgdA2uytBKY2YAkNRGarNx/onkCVbiK+5J3aUndUZIaYravSQBSk9ECy/EK4dEJkLBoTy7sv/8g+c4bsWvP0v7o4v3y0MudqTeeDQUxugzASAJ4KrtcAIPNMri9+xZXPPoJEdkTZDsk8kMSpcu5au8nd+ourvve71DwutZSV7t1KZ2XnwGpqtqYPLwf2bh249W946SbOVlq75zbf+ujG5tW9fUOjyahIaYMwA4AFzTBfJdX5B2t/lN54Eqp8TrUNK+k4qgxAUk9nabN2k7vNcdWzT5jkGVP6mH2kZul/rW3PpY7pa/GF73teuhed37TGqigeivTMXvqGO3jyi3f2z+7NxzCpGfC1AMY0BNVqDe/qPXY/OfvrwMo/2aWtrtn10O7aztHV/MKVyZVtR/+GM3GC7D5uiq04ixezpx4cfMCdesxN1e53TNTk7lExmVIx6clytbZ+8fzhpecG+6azUUxBP/0F8LUIZjD2VCY+fHzhT27lB27g3Z8N8arCnCuub0/WfsmUz8tlnXp6LHnky3D1s9EL3AePuLvf3j15++/lh9zw2z8VUkOm1oKELRitqVykkI0mbdb
QcKJTLCPhejWABHpYaDBn51NrN165w81+wkVOXDNllqHGtvbxc9Vr/w2ubtu6Xs6ufHdom5u5zFW/5qpbD66uvvDbm4sbF6+vHa7VZmfM7gyqapXjTKm7ZziVYS2BsWRQLFZDPBzADQZUQgG+kYe7nVM1ovcIrPChygBfG0Nwn9o7Pfn+4+P3uL4zDwNz133TV4jISpP7QEvnvmx5qVYMzSQyza4RVGTkNWghQQuCaYx62mvzxj0uvlAJoXKAiE08GVuvakfxDkTeBss9qNLHUwfqmiMNRC9mzOoDR33TV+TsdKO51Ggexq17xLp0Pe6DFe2QzAVJaFigQzANgul4WAvCVwMYBzwFjMogVA6j0v8BdcpLHNrvq0cAAAAASUVORK5CYII=" nextheight="352" nextwidth="901" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>ZK 可信计算的演进可概括为 <strong>L2 zkRollup → zkVM → zkCoprocessor → L1 zkEVM</strong>。早期 <strong>L2 zkRollup</strong> 将执行迁至二层并在一层提交有效性证明（Validity Proof），以最小改动实现高吞吐与低成本扩容。 <strong>zkVM</strong> 随后扩展为通用可验证计算层，支持跨链验证、AI 推理与加密计算（代表项目：<strong>Risc Zero、Succinct、Brevis Pico</strong>）。 <strong>zkCoprocessor</strong> 与之并行发展，作为场景化验证模块，为 DeFi、RWA、风控等提供即插即用的计算与证明服务（代表项目：<strong>Brevis、Axiom</strong>）。<strong>2025 年</strong>，<strong>zkEVM</strong> 概念延伸至 <strong>L1 实时证明（Realtime Proving, RTP）</strong>，在 EVM 指令级构建可验证电路，使零知识证明直接融入以太坊主网执行与验证流程，成为原生可验证的执行机制。这一脉络体现出区块链从“可扩展”迈向“可验证”的技术跃迁，开启可信计算的新阶段。</p><h2 id="h-zkevm-l2-rollup-l1" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">一、<strong>以太坊zkEVM扩容之路：从 L2 Rollup 到 L1实时证明</strong></h2><p>以太坊的 zkEVM 扩容路径经历两个阶段：</p><ul><li><p><strong>阶段一（2022–2024）：L2 zkRollup</strong>将执行搬至二层，在一层提交有效性证明；显著降低成本并提升吞吐，但带来流动性与状态碎片化，L1 仍受制于 <strong>N-of-N 重执行</strong>。</p></li><li><p><strong>阶段二（2025–）：L1 实时证明（Realtime Proving, RTP）</strong> 以 “1-of-N 证明 + 全网轻量验证” 取代重执行，在不牺牲去中心化的前提下提升吞吐，仍在演进发展中。</p></li></ul><h3 id="h-l2-zkrollup" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L2 zkRollup 阶段：兼容与扩容性能间平衡</strong></h3><p>在 2022 年 在Layer2生态百花齐放的阶段，以太坊创始人 <strong>Vitalik Buterin</strong> 提出了 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.eth.limo/general/2022/08/04/zkevm.html"><strong><u>ZK-EVM 四类分类（Type 1–4）</u></strong></a>，系统性揭示了 
兼容性（compatibility）与性能（performance）之间的结构性权衡。这一框架为后续 zkRollup 技术路线确立了清晰的坐标：</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/773f7e05f77f2803fee940527fbc2b2283c91dd86b294cded66535ae35beb4d9.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAXCAIAAADlZ9q2AAAACXBIWXMAAAsTAAALEwEAmpwYAAAFTUlEQVR4nJ1V/U8TZxzvL/sL9rtblkx/2ZIlS8YUUQsCQmnLFVtLbem1vPWkL2Jbaikvoy0FLGqqKFIhZQJzi5xucpRBKTF92fRqXSwmBZ20QBVE1lxhcKLALfQy3hIV/eT54Z7nntzn+Xw/n/s+FGITXi8tmc+eByWl/HypRKHhF8qZufkAL/8YeEIMnTpMz60xmWG4e3p6mtgxKJsnc7FYlbEumS0qVOqYwuJUTl62EMqVKDmF8iNcMbugBJufR1F0dHT0Iwnm5+abrjQxhQwABOjHs1ggi1t8TF6pAEBAKMs7zEquNFX67/s/nmD25T9qTdn+tO/p3Mx0VipHAKQyqAeOJNI5GYOOvgeBB8GnI263OxgMEgSxsrr6YQRut3thcWGw71bi7t0SNpublkZLSBDSMhl7E3KSkoTp6b034UfDw7duwjMvpgliFcdxDMN2RIDjOEEQjgHkEJVLoVB8/rvYvwvzCwt9/f2ayh9kau3aUJ0Gi0sE4uICSHG6ygApVOXVRgTpCYfDO1Lw6tXSmzdzlDjIKUGsWltt7CI1p1AllFawxCc5xWXsIjW7SC3V1nGKy/LkVUazxe/zrR/xPSVyOBxk/tZ3X7zUksTggDIVpwCiZucy84qYeYV6s7l3oLenH/kr8PCPu/dQFP0AgvHxiVhsnpziOA7D3aVVJwEwO4WVksHNyBLQaLzMJtulju4Oa4d1yOOMTEY8Hg9BEAuLi3HRO1OwDufQYGmJUMTPBmgHOdmHaWn7MlMTastP/n7rRmTy2eTEhMMxEAgE4nvfmigKHgdJQKYCw2IYFiOIVdKS5eVF1IuGw6FnkYjdbjcYjNU1htq6elN9g9FU39F5veunX6yt7bd7kAd+n9O5/ZTvUkChfGq1/Pi/52vHtLbasgVSQCTjSVQZvOKMY4UpOSCVKUjJAcUybft1uPmqbZvtFBiGJRIJnc7Q6/U4jo+MBL0e930fet+HhkNPVlaW1qMVevzU2moDFToeVJbBKQBEMpZIASp0gEiWROcBoJRdpIR/Qx4ND28hcDgcFRUVcrncYrG8nJlpPHfh22SAyuQnpB+1XGy79/DPdYLobBRBeoxGLQQJcnl0CBLI5GLe8ayyshMGY+W1a1cJghgZCW7L1UaJBgedkxNhnb6OAcpt13/V1p7vhHvtdjtpY0k2SKFQum78LJHwAVYKN5fGzaVl0ZMaG/XB4CPyiziOoyhK7t8g2GxyZDKira7ck/i14ITgqDhnIjLeZ+8LPX4a92MN07Fou83W2dne1tbS1taiUpU2ms2NZrNeX1NrNJIDQRCvx+31eFwu1+jo6IYCl8s1Pj5RV1cD0PczsxJlEC86O9M/MLDecLZ1tyePRw3mpgMsMUukYIByrkSdnCMWyMr3ZfEPMYVUJv8Q4/jl5pYNgiGn8/nUlPXSBZ1cXMDJNJWXulyu/oEB8u3rpaW5WIzUTlKiKKrVVStV6nyxCIIkFRUVkFQWfyiHIKgLRuxO7+2eni0xxTAsGo3+PRZ6NjUTCoUDgeHN/XJzP1heWfb5/be7O6+cKa/XQVfPVV+s09SXS3QyvrMP9qN3otEXBEEgCPKu/+A
tWF1ZXfsHP/+EcrmrY9dXu/am7/vsmy++/G4PlUGtMlUZzpoMZ03aGu1YaAyG4Q0CBEECgcD0VmAYRtZk09rUbDRayDpoOWPU15s0VRrlaaVUKS3VnDI1mDwej9vtvjN0x+v1OuLYIAiHww6Hw70VNputoaHBYrHAMLy+OOR0jk0+v9G9dv+8V++WK3MzyIo3NzenxwHD8FYb1hJFRvzd+A9n1XM+awMvKgAAAABJRU5ErkJggg==" nextheight="631" nextwidth="856" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Type 1 完全等价</strong>：与以太坊字节码一致，迁移成本最低、证明最慢。<em>Taiko</em>。</p></li><li><p><strong>Type 2 完全兼容</strong>：极少底层优化，兼容性最强。<em>Scroll、Linea</em>。</p></li><li><p><strong>Type 2.5 准兼容</strong>：小幅改动（gas/预编译等）换性能。<em>Polygon zkEVM、Kakarot</em>。</p></li><li><p><strong>Type 3 部分兼容</strong>：改动更大，能跑多数应用但难完全复用 L1 基建。<em>zkSync Era</em>。</p></li><li><p><strong>Type 4 语言级</strong>：放弃字节码兼容，直接由高级语言编译为电路，性能最优但需重建生态（代表：Starknet / Cairo）。</p></li></ul><p>当前 <strong>L2 zkRollup</strong> 模式已趋成熟：通过将执行迁移至二层、在一层提交有效性证明（Validity Proof），以最小改动沿用以太坊生态与工具链，成为主流的扩容与降费方案。其证明对象为 <strong>L2 区块与状态转移</strong>，而结算与安全仍锚定于 L1。该架构显著提升吞吐与效率，并保持对开发者的高度兼容，但也带来 <strong>流动性与状态碎片化</strong>，且 <strong>L1 仍受限于 N-of-N 重执行瓶颈</strong>。</p><h3 id="h-l1-zkevm" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>L1 zkEVM：实时证明重塑以太坊轻验证逻辑</strong></h3><p>2025 年 7 月，以太坊基金会发表文章《<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.ethereum.org/2025/07/10/realtime-proving"><u>Shipping an L1 zkEVM #1: Realtime Proving</u></a>》 正式提出 L1 zkEVM 路线。L1 zkEVM 把以太坊从 <strong>N-of-N 重执行</strong> 升级为 <strong>1-of-N 证明 + 全网快速验证</strong>：由少数 prover 对整块 EVM 状态转移生成短证明，所有验证者仅做常数时间验证。该方案在不牺牲去中心化的前提下，实现 <strong>L1 级实时证明（Realtime Proving）</strong>，安全提升主网 <strong>Gas 上限与吞吐</strong>，并显著降低节点硬件门槛。其落地计划是以 <strong>zk 客户端</strong> 替代传统执行客户端，先行并行运行，待性能、安全与激励机制成熟后，逐步成为协议层的新常态。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/32d1efa6e948700939277653fca0e1432585419cd373d93c79632bd458984bee.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAB30lEQVR4nJ1T227bMAz1///E9lSse9hL0nUPuXRFUSTAgAFbGnfOtRckbrzKtWzLkihSg+LMcZMMa3ugB4IgeUjq0KM9IKJB3PFYa4nI8zz7GhiDBxKUhjQv6h4iKo3XEhDZAwlVy/8lICKsvRKWCIwppKK1cYDgb/QbJ0AiqSET0q0I1yuq10o0jTiMU3Mv8CUEK2luhL1X9kagNg6A23KOwCAmqSh/1RJFQL0ZO/fDSYbW7cpFp1mxQ3A9i+bL2CDOcnMxYa1heMVh+RgvIobWajDbCYhIaah6fJB4lcCAqVFqqka0hh2C+SJe/uYGcZpBEMtRLIcJJEKkogByBKJQ2xXVoZEYIAPiQC9ZEQNcKhtqGxabIEBnlCkHCA6C9ggQbU1nrmBlIZHQuhT6ZkXPFOPuQGfi2R2UMMbUCMov2+ht54HBQsI/ZQrgVLzvt287NERUChJepJliSf7ERZbLNJM8k4+JEELluc6FQkQAqAhClj6w/C7iv27Z3YovonTF8lUiFxGfLePbFQ+fRDW0FwRBtz9tXQTN7vDj5++N9s/25bh9OW52/WZ3eHruN75ex/GmfW89x5dv80ZvctTy35/8OD7zjzt+sz/pDBYn/cmHjv/udPCpN3WlPe8PA9aH42ywYQIAAAAASUVORK5CYII=" nextheight="455" nextwidth="1114" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>N of N 旧范式</strong>：所有验证者<strong>重复执行</strong>整块交易来校验，安全但吞吐受限、峰值费高。</p></li><li><p><strong>1 of N 新范式</strong>：由少数 <strong>prover</strong> 执行整块并产出<strong>短证明</strong>；全网只做<strong>常数时间验证</strong>。验证成本远低于重执行，可<strong>安全提高 L1 gas 上限</strong>，并减少硬件要求。</p></li></ul><p><strong>L1 zkEVM 路线图三大主线</strong></p><ol><li><p><strong>实时证明（Realtime Proving）</strong>：在 12 秒槽时间内完成整块证明，通过并行化与硬件加速压缩延迟；</p></li><li><p><strong>客户端与协议集成</strong>：标准化证明验证接口，先可选、后默认；</p></li><li><p><strong>激励与安全：</strong>建立 Prover 市场与费用模型，强化抗审查与网络活性。</p></li></ol><p><strong>以太坊 L1 实时证明（RTP）</strong> 是用 zkVM 在链下重执行整块交易并生成加密证明，让验证者无需重算、只需在 10 秒内验证一个小型证明，从而实现“以验代执”，大幅提升以太坊的可扩展性与去信任验证效率。根据以太坊基金会官方<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://zkevm.ethereum.foundation/zkvm-tracker"><u> </u><strong><u>zkEVM Tracker</u></strong><u> </u></a>页面，目前参与 <strong>L1 zkEVM 实时证明</strong>路线的主要团队包括 SP1 Turbo（Succinct 
Labs), Pico (Brevis), Risc Zero, ZisK, Airbender (zkSync), OpenVM (Axiom), and Jolt (a16z).<br></p><h2 id="h-zkvmzkcoprocessor" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Beyond Ethereum: General-Purpose zkVMs and zkCoprocessors</strong></h2><p>Outside the Ethereum ecosystem, zero-knowledge proof (ZKP) technology has also extended into the broader field of <strong>general-purpose verifiable computing</strong>, forming two core technology families: the <strong>zkVM</strong> and the <strong>zkCoprocessor</strong>.</p><h3 id="h-zkvm" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>zkVM: A General-Purpose Verifiable Compute Layer</strong></h3><p>A zkVM is a verifiable execution engine for arbitrary programs; common instruction-set architectures include <strong>RISC-V, MIPS, and WASM</strong>. Developers compile business logic to the zkVM, and a prover executes it off-chain and generates a zero-knowledge proof (ZKP) verifiable on-chain. This serves both <strong>Ethereum L1 block proofs</strong> and scenarios such as <strong>cross-chain verification, AI inference, encrypted computation, and complex algorithms</strong>. Its strength is generality and broad applicability; its cost is complex circuits and expensive proving, which demand multi-GPU parallelism and heavy engineering optimization. Representative projects include <strong>Risc Zero, Succinct SP1, and Brevis Pico / Prism</strong>.</p><h3 id="h-zkcoprocessor" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>zkCoprocessor: Scenario-Specific Verifiable Modules</strong></h3><p>A zkCoprocessor offers plug-and-play computation and proving services for concrete business scenarios. The platform ships prebuilt data access and circuit logic (historical on-chain data reads, TVL, yield settlement, identity verification, and so on), and applications call an <strong>SDK / API</strong> to obtain results and proofs for on-chain consumption. The model is quick to adopt, performant, and cheap, but limited in generality. Typical projects include <strong>Brevis zkCoprocessor and Axiom</strong>.</p><p>Overall, both <strong>zkVMs</strong> and <strong>zkCoprocessors</strong> follow the trusted-computing paradigm of "<strong>off-chain computation + on-chain verification</strong>", using zero-knowledge proofs to verify off-chain results on-chain. Their economics rest on one premise: <strong>the cost of executing directly on-chain far exceeds the combined cost of off-chain proof generation and on-chain verification</strong>.</p><p>On <strong>generality and engineering complexity</strong>, the key differences are:</p><ul><li><p>the zkVM is <strong>general-purpose compute infrastructure</strong>, suited to complex, cross-domain, or AI scenarios, with maximum flexibility;</p></li><li><p>the zkCoprocessor is a <strong>modular verification service</strong>, providing low-cost, directly callable verification interfaces for high-frequency, reusable scenarios (DeFi, RWA, risk control, etc.).</p></li></ul><p>On <strong>commercial paths</strong>, the two differ as follows:</p><ul><li><p>zkVMs adopt a <em>Proving-as-a-Service</em> model, billing per proof (ZKP) and mainly serving infrastructure customers such as L2 rollups, with large contracts, long cycles, and stable gross margins;</p></li><li><p>zkCoprocessors mainly run a <em>Proof API-as-a-Service</em> model, billing per task via API calls or SDK integration, closer to SaaS, targeting application-layer protocols such as DeFi, with fast integration and strong scalability.</p></li></ul><p>In short, <strong>the zkVM is the underlying engine of verifiable computing and the zkCoprocessor is its application-layer verification module</strong>: the former builds the technical moat, the latter drives commercialization, and together they form a <strong>general-purpose trusted-computing network</strong>.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Where it runs</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Primary users</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Representatives</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>zkRollup</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Moves execution to L2 and submits validity proofs, cutting fees and raising speed</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L2</strong></p><p style="text-align: center">settles on L1</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Applications and users</p></td><td colspan="1" rowspan="1"><p style="text-align: center">zkSync, Scroll, Starknet</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>zkEVM</strong></p><p style="text-align: center"><strong>(L1 realtime proving)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Replaces L1 re-execution with proofs, safely raising the <strong>gas limit</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>L1</strong></p><p style="text-align: center">verification-paradigm upgrade</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Ethereum clients / protocol</p></td><td colspan="1" rowspan="1"><p style="text-align: center">EF RTP program, various zkVM L1 provers</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>General-purpose zkVM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Generates ZK proofs for <strong>arbitrary programs</strong> (block proofs or other computation)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Off-chain / any environment → on-chain verification</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Infrastructure</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Brevis <strong>Pico</strong>, Succinct <strong>SP1</strong>, Risc Zero <strong>R0VM</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>zkCoprocessor</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Proves <strong>historical on-chain data + business computation</strong> with ZK</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Off-chain service + contract verification</p></td><td colspan="1" rowspan="1"><p style="text-align: center">dApp teams</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Brevis</strong>, <strong>Axiom</strong>, Herodotus</p></td></tr></tbody></table><p><br></p><h2 id="h-brevis" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Brevis's Product Map and Technical Path</strong></h2><p>Starting from Ethereum's <strong>L1 realtime proving</strong>, ZK technology is stepping into an era of <strong>verifiable computing</strong> centered on <strong>general-purpose zkVM</strong> and <strong>zkCoprocessor</strong> architectures. <strong>Brevis Network</strong> fuses the two, building <strong>general-purpose verifiable-compute infrastructure</strong> that is both high-performance and programmable, with zero-knowledge computation at its core: <em>The Infinite Compute Layer for Everything.</em></p><h3 id="h-31-pico-zkvm" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.1 Pico zkVM: A Modular Proving Architecture for General-Purpose Verifiable Computing</strong></h3><p>In 2024, Vitalik proposed the <strong>"glue and coprocessor"</strong> architecture in "<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://vitalik.eth.limo/general/2024/09/02/gluecp.html"><u>Glue and Coprocessor Architectures</u></a>": complex computation splits into general business logic and structured, compute-intensive kernels, the former optimized for flexibility (EVM, Python, RISC-V), the latter for efficiency (GPU, ASIC, hash modules). The pattern is becoming a shared trend across blockchains, AI, and cryptographic computing: the EVM accelerates via precompiles, AI leans on GPU parallelism, and ZK proving pairs general VMs with specialized circuits. The key ahead is letting the "glue layer" optimize for security and developer experience while the "coprocessor layer" focuses on efficient execution, balancing performance, security, and openness.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bc5ee7ab9af667b197aef7ec3844d90ccbd86d12a1b330853dc11b467f233dea.png"
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAADwElEQVR4nKWUf0wTZxzG383NGdMFmbqtlTp14I+xYWM0ndmP6BJjWDLMGlNDM1gC0uKW6WLCcBsLdOugFUq8ji5bQWE0XZHWMZfQiMBJjzbXa+mNO/u7cC2t0KBH4Da8zcOMpla4RV39Y598/7w8z/s+z/c9sPw/+JPjhkbQi7bhUWx8jAxjuN+FB71kdBQb7x9E+geR2Rs0yHzKcVxWqb/uG5b943xPD18gKD5yXHu2r+j1QwA899iazU+uLwCrhdvEbysh07FT0IjTc9cgC9zDvFEUNRgMyq9Usg/qa5vO7X/36BPPFqzLFz+/bS9PWLTngPSTxo6y4yrE5U0bpFKpgYEBhmH+w+BWLOINEfYQiUT9Tp8XXphLeDwetbppKpGITCYwnAzHkvHp64npWera7LXZuUA05iH8HsL/2+Ji2sBsNkMQhCBIxoxhGI7jWJal6RsLzOLlX4xd6kpL24kfNFXdzfIftcfO608auzvUag1FUQ/EeD+AYRiVSiUSiYxGYzKZNJvNKpVKIpHIZLILVkt9w9ebN6z8ub1mqKcVtRlQmwG2ntHWHinIF5jNvclkAsN9w3bUTQR9IYoITJDBSTJIuYngsB29fMU5v8AAlmVTqZRer6coKhM3iqJisbi8vDxzIq+jr1tT+VohHwCQC0BLTdm5JnlB/gaNpuWpVauKSz863f7T7n2HV6x7kZdXmPvCztWC7TvfPKSETPJarQP79W7JEARl1P9d6e2lpeXlZdx5sauxUrL/ZXCHlpqyjsaqZ3JX6nRtTWpN2YcNSsi096AMPM7n5RWu3bIL5GzavU9SrzXKT7WOuvC0AQzDCoWCYRir1SoSidRqdUVFhUKh0Ovbei19utM1lrYT9j79GNyF2tph6xnos9Id2zf1Wi6kUjMY7hsaQfGrEX906mok5o9ORWIzOBmCHW7Y4U5HFIlEOjs7aZo2GAw0TTMMU1dXJxQKc3JyPB53+gYu27dfSLubq79rKG//8n2TtvqsRl7yTrFO9004HHp0yfQdMvvDsuyD689xt6jIWNTvjAbQiYAr7Bv9fWHm3pr6Q5MODA9OxKPx6czEp68HJ+IYTrq8xOLNm+BhimmyP28EQVq12jyhsOS9j5WQ6dWDpWs2FvG3ivlbxTzhK3sOSD9v7qo4qfmn5Oxyt5eW7k0mCo7jLJbeKnm1VP6p5nvrrrekAPBW5G7hCV4CT28seqNECZmqaltHnO5H/yqywHHcsN3VP4g4MGKMDLuJkJsIjQcop5e8BDsuwY65+fm/AZXS7PNgqITEAAAAAElFTkSuQmCC" nextheight="181" nextwidth="411" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Pico zkVM</strong>, developed by <strong>Brevis</strong>, is a representative implementation of this idea. Through a <strong>"general-purpose zkVM + coprocessor acceleration"</strong> architecture, it combines flexible programmability with the high-performance computation of specialized circuits. Its modular design supports multiple proving backends (KoalaBear, BabyBear, Mersenne31), and execution, recursion, and compression components can be freely composed into a <strong>ProverChain</strong>.</p><p>Pico's <strong>modular system</strong> not only lets core components be freely recombined but also admits new proving backends and application-level coprocessors (on-chain data, zkML, cross-chain verification), enabling continuously evolving extensibility. Developers can write business logic directly with the Rust toolchain and automatically generate cryptographic proofs without any zero-knowledge background, sharply lowering the barrier to entry.</p><p>相较于 <strong>Succinct SP1</strong> 的相对单体化
RISC-V zkVM 架构和 <strong>RISC Zero R0VM</strong> 的通用 RISC-V 执行模型，<strong>Pico</strong> 通过 <strong>Modular zkVM + Coprocessor System</strong> 实现执行、递归与压缩阶段的解耦与扩展，支持多后端切换及协处理器集成，在性能与可扩展性上形成差异化优势。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico zkVM (Brevis)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Succinct SP1</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Risc Zero R0VM</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Architecture philosophy</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Modular zkVM + coprocessor (zkCoprocessor) architecture</p></td><td colspan="1" rowspan="1"><p style="text-align: center">General-purpose RISC-V zkVM</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Pure zk-RISC-V architecture</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Modularity</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Execution, recursion, and compression decoupled; supports multiple backends, coprocessor integration, and precompile extensions.</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Supports precompile extensions but overall relatively monolithic</p></td><td colspan="1" rowspan="1"><p style="text-align: center">General-purpose VM</p><p style="text-align: center">weaker modularity</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Performance mechanisms</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-GPU parallelism (Pico Prism RTP); KoalaBear / BabyBear / M31 multi-backend; zkData Coprocessor</p></td><td colspan="1" rowspan="1"><p style="text-align: center">CPU-optimized recursive STARKs; on-chain verifier contracts</p></td><td colspan="1" rowspan="1"><p style="text-align: center">CPU-dominated zk-STARK</p></td></tr></tbody></table><h3 id="h-32-pico-prism-gpu" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.2 Pico Prism: A Performance Breakthrough on Multi-GPU Clusters</strong></h3><p>Pico Prism is Brevis's major breakthrough on multi-server GPU architecture, setting a new record under the Ethereum Foundation's Real-Time Proving (RTP) framework: on a 64×5090 GPU cluster it achieves a <strong>6.9-second average proving time</strong> and <strong>96.8% RTP coverage</strong>, leading its zkVM class. The system is optimized across architecture, engineering, hardware, and systems, marking the zkVM's transition from research prototype to production-grade infrastructure.</p><ol><li><p><strong>Architecture:</strong> traditional zkVMs (e.g., SP1, R0VM) rely mainly on single-machine GPU optimization. Pico Prism is the first to achieve cluster-level zkProving across multiple servers and GPUs, using multithreading and sharded scheduling to turn ZK proving into a distributed computing system and greatly increasing parallelism and scalability.</p></li><li><p><strong>Engineering:</strong> it builds a multi-stage asynchronous pipeline (Execution / Recursion / Compression) with cross-layer data reuse (proof-chunk caching and embedding reuse), and supports backend switching (KoalaBear, BabyBear, M31), markedly improving throughput.</p></li><li><p><strong>Hardware strategy:</strong> on a 64×RTX 5090 GPU setup (~$128K), Pico Prism achieves a 6.0–6.9 s average proving time and 96.8% RTP coverage, roughly a 3.4× gain in performance per cost, outperforming SP1 Hypercube (160×4090 GPUs, 10.3 s).</p></li><li><p><strong>System evolution:</strong> as the first zkVM to meet the Ethereum Foundation's RTP targets (&gt;96% sub-10s, &lt;$100K cost), Pico Prism marks the move of ZK proving systems from research prototypes to mainnet-grade production infrastructure, offering a more economical zero-knowledge computing option for rollups, DeFi, AI, and cross-chain verification.</p></li></ol><h3 id="h-33-zk-data-coprocessor" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.3 ZK Data Coprocessor: A Zero-Knowledge Coprocessing Layer for Blockchain Data Intelligence</strong></h3><p>Smart contracts are natively "memoryless": they cannot access historical data, recognize long-term behavior, or run cross-chain analysis. <strong>Brevis</strong> provides a high-performance zero-knowledge coprocessor (ZK Coprocessor) that gives smart contracts trusted access to cross-chain historical data and verifiable computation over a blockchain's full history of state, transactions, and events, applied to <strong>data-driven DeFi, active liquidity management, user incentives, and cross-chain identity</strong>.</p><p>Brevis's workflow has three steps:</p><ol><li><p><strong>Data access</strong>: smart contracts read historical data trustlessly via API;</p></li><li><p><strong>Computation</strong>: developers define business logic with the SDK; Brevis computes off-chain and generates a ZK proof;</p></li><li><p><strong>Verification</strong>: the proof is returned on-chain, where the contract verifies it and triggers follow-up logic.</p></li></ol><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Mode</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical characteristics</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Advantages</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Typical scenarios</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Pure-ZK mode</strong></p></td><td colspan="1" rowspan="1"><p>All results are submitted with ZK proofs and verified on-chain</p></td><td colspan="1" rowspan="1"><p>Full trust minimization</p></td><td colspan="1" rowspan="1"><p>High-security scenarios: identity verification, cross-chain asset proofs, points systems</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>coChain</strong></p><p><strong>&nbsp;(OP mode)</strong></p></td><td colspan="1" rowspan="1"><p>Results are produced under PoS validation first; a ZK proof is submitted only if challenged</p></td><td colspan="1" rowspan="1"><p>Low cost, low latency</p></td><td colspan="1" rowspan="1"><p>Real-time applications: GameFi</p></td></tr></tbody></table><p>Brevis supports both the <strong>Pure-ZK</strong> and <strong>CoChain (OP) models</strong>: the former achieves full trust minimization at higher cost; the latter combines PoS validation with a ZK challenge mechanism to deliver verifiable computation more cheaply. Validators stake on Ethereum and are slashed if a result is successfully challenged by a ZK proof, balancing security and efficiency. By fusing <strong>ZK + PoS + SDK</strong>, Brevis builds a scalable trusted data-computation layer. Today Brevis serves protocols such as <strong>PancakeSwap, Euler, Usual, and Linea</strong>, and all <strong>zkCoprocessor integrations</strong> run in <strong>Pure-ZK mode</strong>, supplying trusted data for DeFi, reward distribution, and on-chain identity systems and giving smart contracts real "memory and intelligence."</p><h3 id="h-34-incentra-zk" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.4 Incentra: A ZK-Based "Verifiable Incentive-Distribution Layer"</strong></h3><p><strong>Incentra</strong> is a trusted incentive-distribution platform powered by the <strong>Brevis zkCoprocessor</strong>, giving DeFi protocols secure, transparent, verifiable reward computation and payout. It verifies incentive results directly on-chain with zero-knowledge proofs, achieving trustless, low-cost, cross-chain incentive execution. Reward computation and verification happen inside ZK circuits, so any user can independently verify the results; cross-chain operation and access control are also supported, enabling compliant, secure, automated distribution.</p><p>Incentra supports three main incentive models:</p><ul><li><p><strong>Token Holding</strong>: long-term holding rewards computed from time-weighted average (TWA) ERC-20 balances;</p></li><li><p><strong>Concentrated Liquidity</strong>: liquidity rewards allocated by share of AMM DEX fees, compatible with ALM protocols such as Gamma and Beefy;</p></li><li><p><strong>Lend &amp; Borrow</strong>: lending rewards computed from average balances and debt.</p></li></ul><p>The system is already used by <strong>PancakeSwap, Euler, Usual, and Linea</strong>, closing a fully on-chain trusted loop from incentive computation to distribution and giving DeFi protocols <strong>ZK-grade verifiable incentive infrastructure</strong>.</p><h3 id="h-35-brevis" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3.5 Overview of the Brevis Product and Technology Stack</strong></h3><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical highlights</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>Base layer: compute and proving kernel</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico zkVM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">General-purpose zero-knowledge VM producing verifiable proofs for arbitrary computation</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Rust support; modular recursion/compression architecture; function- and application-level coprocessors</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico Prism</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Multi-GPU cluster ZK proving system providing high-performance compute for the zkVM</p></td><td colspan="1" rowspan="1"><p style="text-align: center">6.9 s average proving time (45M-gas blocks), 96.8% RTP coverage, roughly 50% lower cost</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Middle layer: data coprocessing</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Brevis zkCoprocessor</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reads multi-chain historical data, runs business circuits, and produces composite proofs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Go/TS SDKs and contract interfaces; supports Pure-ZK and coChain modes; outputs verifiable results upward</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Top layer: incentive protocol</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Incentra Protocol</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Manages reward pools, computes incentives, and automates payouts</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Builds a "verifiable incentive-distribution layer" on Brevis proofs; supports cross-chain claiming and compliance controls</p></td></tr></tbody></table><br><h2 id="h-brevis-zkvm" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>四、Brevis zkVM
技术指标与性能突破</strong></h2><p>以太坊基金会（EF）提出的 <strong>L1 zkEVM 实时证明标准（Realtime Proving, RTP）</strong>，已成为 zkVM 能否进入以太坊主网验证路线的行业共识与准入门槛，其核心评估指标包括：</p><ul><li><p><strong>延迟要求：</strong> P99 ≤ 10 秒（匹配以太坊 12 秒出块周期）；</p></li><li><p><strong>硬件约束：</strong> CAPEX ≤ $100K、功耗 ≤ 10kW（适配家用/小型机房）；</p></li><li><p><strong>安全等级：</strong> ≥128-bit（过渡期 ≥100-bit）；</p></li><li><p><strong>证明尺寸：</strong> ≤300 KiB；</p></li><li><p><strong>系统要求：</strong> 不得依赖可信设置、核心代码需完全开源。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d8c11e249823c8d66032f9df5d9968af43e283d63b5bd406431eda58068cf4ae.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAsTAAALEwEAmpwYAAADnklEQVR4nIVUz08bRxhdqVL7R7THHhL1Vqm3XnKpqvRQqWoPUdWWNkqkqglN0yppDykcUipFVkEFtQlugcZKaYhpAIEJWmyMYe2N12sMOMbBhtgbG2ywh13vD8be3ZlqPMayINCn1ezM6H3f23nfN8vgOlB9tG1kWhbG2LQs27YPJohy7AYQfdFNGkUfksdGqJ7NtBpRDMb47kz8vMPTORToci1ccEzdGg7+NMxf7vN1u8Vb94Xu0eg150LfWPTr3+a+uePvGFq85vR/d8d/08XfGOT6J1cu9sxe6vP9OLjouB+6MRA47/D0PAhfcHimuFRDoAi0J5ulp1JZKsqbWyCVA0JszT3pn2Q5b4AfmwlMeHnPLPfvlHfax037Qo/8/LQ/7HJ7Jrz8jD88NhO495CdFxJSsbIulZdShWf58mYeFIDWEGg9L0U+l1uJicHFhcd8SAjz8XgsthQNBTlREOKxWCK+mlp/GhOFqPA4Kgory0t8cFHKZg4laVhEs1MHIayaptk0vRWoPmb2dthkbDohHiVQ033Lz0f9a6n0Rmw1Ka4kVN1gstms1+udm5tjWXZ+fr5QKLTUswGMkaLqJaDmdkE0k1zbythHQLvDK2b+Zpedf97tvT3w+x+uMtAYtQ4IoWEYEMIXRJomxii/K5/6amicW9dqMqxVj9JalWSlUlG1fVg1LYsxDONoDVphU/dqJsO0dQ4FEVJV3TiBfMhhIlD3/cXYh9VI8nmuCIjAK1/c/CuEUEXb30cHt+FkEAFVVY8ToOYwZx3tvSyhvtTW5eJVKPNPMkWgYkxu5f8LHHcC07Jo3ZjXLr/VPkwmzKe/jEQ2tvPM6avcsoQxrtZI4AkyJIoWljYizUiXQNFzuzJhvP7tme9HqcCvbjEuSQzzoU+UMEaaAZtFahaS6qFWgeYJKI/K9LrDL7f11wWunrk+Uhf4rHskUhc4F00Wb49Fvux5hDHaLskQVjFGCJO0zS5vFLkKoWZAoBigokeSOaDoV/rYYkl+v2OCYT4hjFcvvXmFWvSxY1hYJQIfPMvvMe/1MG/8ABSdeefndanEhtMTC2u6Ace5ZH5H1jV1u1CwbZvcZCAr2fwOUHQfv1qtWec6HyQzu15h46MO8uGfdz38ZzaOMX73+r1wYqtUUd5uH1RU2DvCdw74i0A7fdG5p0LnuOBwBTDG/aOBItBAuZROpxv/ImIcaRiMMTHnYDx
u2brfSji0JFZhjP8D9tJAOhgpElgAAAAASUVORK5CYII=" nextheight="723" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In October 2025, <strong>Brevis</strong> published the report "<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://blog.brevis.network/2025/10/15/pico-prism-99-6-real-time-proving-for-45m-gas-ethereum-blocks-on-consumer-hardware/"><em><u>Pico Prism — 99.6% Real-Time Proving for 45M Gas Ethereum Blocks on Consumer Hardware</u></em></a>", announcing that <strong>Pico Prism</strong> had become the first zkVM to fully pass the Ethereum Foundation's (EF) real-time block-proving (RTP) standard.</p><p>On a <strong>64×RTX 5090 GPU setup (~$128K)</strong>, Pico Prism delivers an <strong>average latency of 6.9 seconds on 45M-gas blocks, with 96.8% under 10 s and 99.6% under 12 s</strong>, significantly outperforming <strong>Succinct SP1 Hypercube</strong> (36M gas, 10.3 s average, 40.9% under 10 s). With latency reduced 71% and hardware cost halved, overall performance per cost improves by about <strong>3.4×</strong>. The result has been publicly recognized by the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/ethereum/status/1978497335115051056"><u>Ethereum Foundation</u></a>, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/VitalikButerin/status/1978432581298204951"><u>Vitalik Buterin</u></a>, and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/drakefjustin/status/1978435449489158312"><u>Justin Drake</u></a>.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Metric</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>EF standard</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pico Prism</strong></p><p style="text-align: center"><strong>(Brevis)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>SP1 Hypercube</strong></p><p style="text-align: center"><strong>(Succinct)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>RTP coverage (&lt;10s)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">99% of blocks ≤10 s</p></td><td colspan="1" rowspan="1"><p style="text-align: center">45M gas: 96.8% (99.6% ≤12 s)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">36M gas: 40.9%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Average proving time</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">—</p></td><td colspan="1" rowspan="1"><p style="text-align: center">6.0–6.9 s</p></td><td colspan="1" rowspan="1"><p style="text-align: center">10.3 s</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU setup / cost</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">≤$100K</p><p style="text-align: center">≤10 kW</p></td><td colspan="1" rowspan="1"><p style="text-align: center">64×5090, ~$128K</p></td><td colspan="1" rowspan="1"><p style="text-align: center">160×4090, ~$256K</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Performance/cost efficiency</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">—</p></td><td colspan="1" rowspan="1"><p style="text-align: center">~3.4× improvement</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Baseline</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Open source &amp; reproducibility</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Core open source</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Reproducible experiment repo (pico-ethproofs)</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Official results public, limited detail</p></td></tr></tbody></table><p><br></p><h2 id="h-brevis" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Brevis's Ecosystem Expansion and Application Adoption</strong></h2><p>Brevis的<strong>ZK 数据协处理器(zkCoprocessor)</strong>，负责处理 dApp 无法高效完成的复杂计算（如历史行为、跨链数据、聚合分析），并生成可验证的
<strong>零知识证明（ZKP）</strong>。链上仅需验证这份小证明即可安全调用结果，大幅降低 Gas、延迟与信任成本。Compared with traditional oracles, Brevis delivers not just "results" but "a mathematical guarantee that the results are correct." Its main application scenarios fall into the following categories:</p><ul><li><p><strong>Intelligent DeFi</strong>: smart incentives and differentiated experiences based on historical behavior and market state (PancakeSwap, Uniswap, MetaMask, etc.)</p></li><li><p><strong>RWA &amp; Stable Token Growth</strong>: automated distribution of stablecoin and RWA yields via ZK verification (OpenEden, Usual Money, MetaMask USD)</p></li><li><p><strong>DEX with Dark Pools</strong>: a private trading model with off-chain matching and on-chain verification, launching soon</p></li><li><p><strong>Cross-chain Interoperability</strong>: cross-chain restaking and Rollup–L1 interoperability, building a shared security layer (Kernel, Celer, 0G)</p></li><li><p><strong>Blockchain Bootstrap</strong>: ZK-based incentive mechanisms to bootstrap and grow new chain ecosystems (Linea, TAC)</p></li><li><p><strong>100× Faster L1s</strong>: realtime proving (RTP) to raise the performance of Ethereum and other chains (Ethereum, BNB Chain)</p></li><li><p><strong>Verifiable AI</strong>: privacy-preserving, verifiable inference supplying trusted compute for AgentFi and the data economy (Kaito, Trusta)</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d4a3fb8d8eb9c7146760a361fae97a124565ee1686c044b305cca80604fe6a8.png"
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEMUlEQVR4nEWS709bZRTH74iVODpo6e6v59e997m/Smmhd22lFDphZLjAfuCgcxthWzqbFbJCKNAWbPk1MO4P8LWJr8jMojFGs5iRZQ7iC53bIiNhvpqbxsxFNJmL+kJz26FPPi+ePCfne87zPYdxlE91dTXDMJYVLBSyqdS5wcE3U6lz2ezoZHa0VCoUS7mB/jeSyaFMZjidPp9On7+QSk5mR4eGTmcyFzKZ4ax9PzkykioUppqa/AzDVFdXV5QZh+Olqqoqh8Oxi9klCEJX14FIOOxrbAyHQ+FwyLKsaGu0PR4zTSPaGu080LF/f7xCd/fBoBX0NnhbopGWaMSygpZlxdpiEEKG2eVwvFyBEUQVEj8vKrxARCDxPKxxuvfUumucrhqn2+l0QZFAUSKEsqywp7beWX6scbpe2V1X43S53bwoyh6PKAK5zsXuqa13udm9e4UK9R6OaX/tyMPHP19fvfHDo4dr62tr67cikTYRKEQyiGSwHLh65cOtzft/PPu9UCiyHJSptxIikiECKRCIbGxsPvj+wfXVVar7EdGJZECsIawhootAYnz+0Nh4bio3nctN5wvFk4NnIFYhomVUnoeHD/dPvz07lZtJptKJxCnDbBKBDCAFkApAxpJ2Npmem790MTPe09uHsLaTayMAieFEUlfPu9xcbR3L8wgiCqACUQUqAIljxd27XQBICKsVIFIAfAEvoHqPUOdiWQ4QSd9JfBEVAGFajx9dev+9i9P5wqV5f6iFEwnEajkmC0DheZA5O/TxygdWoBkBgjBFOxIQKTwgZoNvYmpsYmq898hhADGWFIgphLKNXUZmDiUStzfurn+1vrZ+q+dInwBkiKgIFAAUhDWWA5fffef58+1YLE6pKVMvgDsOYFUAsm42fvrZJ/e+u10sFXkBlf/3v0UQKYwRCMU6D7bGu+IdBxOJU2ZDkBfIfyoikIOByKuR9oH+k0PnkqFwzO4AqyK2C4iYYklra493H+rt6T0qU7MyHhHY9toiWGO8erMu+wjSIaS8gO3pYw1Luj0urGLFpKEwafCLRPPwwMOBvTwS7EwVluep6U0+X5SqfoR0iHWIVHuRFLOySEQymJHTw//8+dfW/Y0vb9549uy3b25/PVMq1bo5HpY7JerKzc+f/P3rva2NKx9dvfbFtZUrK09+edzXf4LlgAjkWPvrd+9sbm9vb21tPn365M7db3/86VF2IlfvESqNMrFox8LCUr4wPZWbnl9cHhkbH0q+dex4IhrvrPhw4szZucWlyXx+cXm5MFM81jcQbeuUVa8AZBHIqt48ns2XZhfyhWJpdq6ic6xvAEsagNQuQKlP0wOG2ayaAUX3q2bAbNzXvC+qNzQhSZNVHzUC1Gikup+qPpl6eYGwHCy7rCKsqZodMswmqto6mubX9IDXZxHJsC1SzH8BB/IpxEl0/tAAAAAASUVORK5CYII=" nextheight="597" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>根据<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://explorer.brevis.network/"> <strong><u>Brevis Explorer</u></strong></a> 数据，截至 2025 年 10 月，<strong>Brevis 网络</strong> 已累计生成超 <strong>1.25 亿条 ZK 证明</strong>，覆盖 <strong>近 9.5 
万个地址</strong>、<strong>9.6 万次应用请求</strong>，广泛服务于奖励分发、交易验证与质押证明等场景。生态层面，平台累计分发激励约 <strong>2.23 亿美元</strong>，支撑的 <strong>TVL 超 28 亿美元</strong>，相关交易量累计突破 <strong>10 亿美元</strong>。</p><p>当前 Brevis 的生态业务主要聚焦 <strong>DeFi 激励分发</strong> 与 <strong>流动性优化</strong> 两大方向，算力核心消耗由 <strong>Usual Money、PancakeSwap、Linea Ignition、Incentra</strong> 四个项目贡献，合计占比超 <strong>85%</strong>。其中</p><ul><li><p><strong>Usual Money（46.6M proofs）</strong>：展现其在大规模激励分发中的长期稳定性；</p></li><li><p><strong>PancakeSwap（20.6M）</strong>：体现 Brevis 在实时费率与折扣计算中的高性能；</p></li><li><p><strong>Linea Ignition（20.4M）</strong>：验证其在 L2 生态活动中的高并发处理能力；</p></li><li><p><strong>Incentra（15.2%）</strong>：标志着 Brevis 从 SDK 工具向标准化激励平台的演进。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/72ed34ff223a272f67803c4d34564ffd66b9f73b9ed1367a78209c35631c0163.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFtklEQVR4nF2V228cdxXHhwceEH8AEo9ICAmQuEk0gT5FiAde0oJKVS5CBZEnWqmVUAqCCBKgqHbUNjixU5Q4QUpdx3HsXa8d39aXzbp2vV6vd70zO57rb+Y3v/tlZtdeO2moitaJVEA6j0fno/OVzucYWmAtCOPJHkOVALZsQGKoJe7uq24mu/vqsKuP9nU741KJVPNOig87qruvDjryoCMPO72Gg47SkhyP+v8ypCA+dxZZ6xqCkzY0nVAKAkInl5ssl1ZnZ2bGxm7nC4Xtynpmvx/UNnZMkstPTRemVpaXlpYWZ2fv3b07vrgwn8ShlkRQJFnyPwCb3MqTW2+Scj8CKwmRWqaaUgSWl4rF4sLK8tL0dCGXzzUb1cxeMyuV+TVSLpeLxbnND9bX18tr5dLc7L2FhXngO23N2inPNBMcfQLYws+9R96+RFZui3jTMtfWSknst1N+HII67Grfs1tm3W6ZWqt2xjsadfdVJxM7tSrD0WFXP44r1ZThaHtr07J2U00/ATTo6ZGkfziYr8R776+Vc7ncbqPGMEQQEBRHwC0WF3L5fLG4QBMgKJSCKEmi0L0zPr5Tq6aa8uNMGI6cPbNQKKytlRiGjMIngCY9VQB9ZbNEUYhjL4l94DRh0EpiP4n90LOS2MfQ5xgg5AsKGQIcRyQJGAaCQ8UShgCOfc/eDT2LIUCSAAGX4+gJwKGn3WQ6tM2NSkVRCFvVFvAxhirxNAlEEmY8yThiKERmwJIA2HUKHIkAT8LW9g5HEbBriWdFrpV4LQJ8gSKBIoJijuMegKLlo6P20N/OnTKMpZkJ/vDolSvnLy7eFSQbmzXTNr2Y3+yfWGnbtPrzpn3dOnqk/nVtsHgvN7eQN75ojIwOf/zocHj4euA7kxPjhmHs1Dbe/sfEG2+MtDOuWGJISrv7auLW8E+f/lp14z4k+Ow7/f+cm4gDeC1X0ppeHJ7oG3rvALGlM/edvNk54CsL04
3tja2ttW//4JurpcVM87m5ORC65fLSs888s7tbG758ffDNQUp7l2EoiTkKYtfkCaQxkCjspDoJnNBp4MAOHbsjYFcTDAOWxVRA6Fopg4rEqZQHnQ8VIxTasW+55k5b8ofdThJ6KQgECGMQaEkMLYhCfsN3c07MGFkvFd+50seSsFldHxoYAK5553b+xo1RLcjq5Iq/XQvdxp/+8ufND9bh9nJ1tB/DYK+FAo/wxLnUd35q/N1uKq47lalgN+NY8MTgHH2o4FUTG+9a8YOPX/vNj75gGPV6fXRkxDCMcrn4+a+8YHz2O0zyU595duLCwFazahjG2d//8f7Q7147+Wlob18aQHduU4FdwzCe+saXP/rogVG6cbI2k3GuhTC0wCmL3SicqDbabVVaXrg88BaCkbdn3rxxk+MoX5gfGctrSe9NLfqWhVF4behKy6xvV9bzd8ckg6srrdq2m2k8OHh5Zjr3sJsVmptzZk1xW5BVQ0nCExe7dQUcFLi+2ewInFHIE0+LmOGQhm5kNzy77rt1p7kTu42OgAIDRBJEMAE+R66kHon9Rwe6kymaAE0wgV6Ifqtto7eBJgCJBB4oQZEm/vdO/+zFMy8/2NfFvyfVUSJlwDE49HTHp5zHklKlhIj83W89F589j7X+w7ng6lXQbidPPX3i3KuvdA6yF3754oW3LmD1K209BtCYCBRo3pMUCT73pRPf/f7pfx+277wUlIcSncY9cIMrh6UZbVktAKMMhhvG1/d++Guk2784Y/319bCTAcP41I+ff1529VdPnDzz6ks4ffkJQLEYx6Flmhj1BOLZDeDZSnHPcWAcMQw5jqxW3fdbiiEQuggCIZnZbLiep1UPCeNQSW47XjWKyyIaqcwVrfGA/kS3Hm/wuBSVrGfzVAstjxWmepciWc+9qaT//VIkQ8dtvQfw2J0EQxORmzF6HVvDcnaFXQ5Jn0aT/wF013NVbRu6PwAAAABJRU5ErkJggg==" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>在 <strong>DeFi 激励领域</strong>，Brevis 依托 Incentra 平台支撑多个协议实现透明、持续的奖励分配：</p><ul><li><p><strong>Usual Money</strong> 年激励规模超 <strong>$300M</strong>，为稳定币用户与 LP 提供持续收益；</p></li><li><p><strong>OpenEden</strong> 与 <strong>Bedrock</strong> 基于 CPI 模型实现美债与 Restaking 收益分配；</p></li><li><p><strong>Euler、Aave、BeraBorrow</strong> 等协议通过 ZK 验证借贷仓位与奖励计算。</p></li></ul><p>在 <strong>流动性优化</strong> 方面，<strong>PancakeSwap、QuickSwap、THENA、Beefy</strong> 等采用 Brevis 的动态费率与 ALM 激励插件，实现交易折扣与跨链收益聚合；<strong>Jojo Exchange</strong> 与 <strong>Uniswap Foundation</strong> 则利用 ZK 验证机制构建更安全的交易激励体系。</p><p>在 <strong>跨链与基础设施层</strong>，Brevis 已从以太坊扩展至 <strong>BNB Chain、Linea、Kernel DAO、TAC 与 0G</strong>，为多链生态提供可信计算与跨链验证能力。与此同时，<strong>Trusta AI、Kaito AI、MetaMask</strong> 等项目正利用 <strong>ZK Data Coprocessor</strong> 构建隐私保护型积分、影响力评分与奖励系统，推动 Web3 数据智能化发展。在系统底层，Brevis 依托 
<strong>EigenLayer AVS 网络</strong> 提供再质押安全保障，并结合 <strong>NEBRA 聚合证明（UPA）</strong> 技术，将多份 ZK 证明压缩为单次提交，显著降低链上验证成本与时延。</p><p>Overall, Brevis now covers the full application lifecycle, from <strong>long-term incentives and campaign rewards to transaction verification and platform services</strong>. Its high-frequency verification workloads and reusable circuit templates give Pico/Prism real performance pressure and optimization feedback, which can feed back into the L1 zkVM realtime-proving stack at both the engineering and ecosystem levels, forming a two-way flywheel between technology and applications.</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Team Background and Fundraising</strong></h2><p><strong>Mo Dong | Co-founder, Brevis Network</strong></p><p>Dr. <strong>Mo Dong</strong> is a co-founder of <strong>Brevis Network</strong>. He holds a PhD in Computer Science from the University of Illinois Urbana-Champaign (<strong>UIUC</strong>); his research has been published at top international conferences, adopted by technology companies including Google, and cited thousands of times. An expert in algorithmic game theory and protocol mechanism design, he focuses on combining <strong>zero-knowledge (ZK) computation</strong> with <strong>decentralized incentive mechanisms</strong> to build a trusted <em>Verifiable Compute Economy</em>. He is also a venture partner at <strong>IOSG Ventures</strong>, with a long-standing focus on early-stage Web3 infrastructure investment.</p><p>The Brevis team was founded by cryptography and computer-science PhDs from <strong>UIUC, MIT, and UC Berkeley</strong>; core members have years of research experience in zero-knowledge proof (ZKP) systems and distributed systems and have published multiple peer-reviewed papers. Brevis has received technical recognition from the <strong>Ethereum Foundation</strong>, and its core modules are regarded as key on-chain scalability infrastructure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/a6b8d6682acd79b944cb696145f06fbd368ee415df847609731b8b949cd37606.jpg" alt=""
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAClElEQVR4nJVUz0sUYRj+kPQYdUqhTPQSHjwEZj8QFIoOHTwoKLEdAzsEQSRalHQJKy07BpW7sG44ti2siz9YzHFwNvbH7DrfzO77rTPjjrPu1m4xHvwHYvvacZg1LHgYZt55533e9/med1AoGFgNL8pYSCWjR0LGQiy64Zv1SDjxL/mpZBTxPBuLbshYsGDoiqYChapk7K9oH+ts2BGUcMKC4xUigO3PEZ7zuN8zjI+Z8zJz3oWg35EgY8EekXACIA2QAcgQkiUk6+BA9tYI4OWlUFvr2ZMnjjfU1zUcQyMP7hm6QsWpxe/qsOh/Mf/h7sLH8aBvbCUwIUuineOAgHJoKvi87qHB/mtXewb6+zQVDi19AEmM8SGe/RTdWODC3gjrl6XNwyewz6GpUCzomgp/6/0wif4IdcQZ/BdI9VuqCb06CqIIz0V4zuFFjv1S606uJkgd6PClIw0FPs8vL4WsPZCxIKbiMzPvUsmomIzRIAG8Gl5kGB9lsnTzzXp4nrVqERB5bs3PzBHAohg/OANDV0ql3WJBLxZ0a/b8rp7f1Y2ioVRXgQDe3zdNs2SapWpE1FT4US7smWUCmBLTGyVL6GkjTYU305PDrpt3hm8/HBuhSYqSmXz29OX4o4nR+ytBfyYjEsDrbPhy5/nrvd0DfTfoblYsN+u55RpyuQYD/krjVABFSTOhGUH4SgCjPbPc030FIdTUeKqttYXn2bSUVLe3Lpxu7Gpu6u049/bV8+3cVkVxbq29+UxjfV17S7OEE5gSeN2XLnYihKZfT9KNIZCempro6ul44ni0QkCHMvIaAWz9GMRUPJdTdvK5XD6X3UpXLb9Z/lksfNv5XipYJtFU2DPLhq7YzZOMRzaFOBXjFwLOtM7Lz6oLAAAAAElFTkSuQmCC" nextheight="529" nextwidth="1309" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In November 2024, Brevis closed a <strong>$7.5M seed round</strong> co-led by <strong>Polychain Capital</strong> and <strong>Binance Labs</strong>, with participation from <strong>IOSG Ventures, Nomad Capital, HashKey, and Bankless Ventures</strong>, plus strategic angels from <strong>Kyber, Babylon, Uniswap, Arbitrum, and AltLayer</strong>.</p><h2 id="h-zkvmzk-coprocessor" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. Competitive Landscape of zkVMs and ZK Coprocessors</strong></h2><p>Today, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://ETHProofs.org"><strong><u>ETHProofs.org</u></strong></a>, supported by the Ethereum Foundation, has become the core tracking platform for the L1 zkEVM realtime-proving (RTP) track, publicly showing each zkVM's performance, security, and mainnet-readiness progress.</p><figure float="none"
data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/d2b9ae676f438bb64c68c3d7311329e765c3a7729031b149c2d014c692aa5a74.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEPklEQVR4nF2Tb0wbZRzHH9/5QrclJkbdMhdndNH4YjHM6DQ6mIlxuvjCf0yQf3FkIHGDDMKEVhkDIdYNmIxR/qV1tj0oRw+40h4H7e2uV67Xcne59o5ycOVaKQXCn7jM7J1pzhFi8n3xe54Xv+/v93y+DwAAgBPHP7l6pdhgrO3qNDrvnK7LzzMU5RmKcg0lHzWXHfn87WfP53x8o/yssTTPUHTWWHq67sIVyy+EKs4u8b5lQVfjHVNhbRVET9N/yYQq+pcFTGKplAzA0wfA4efzmxqKDcaKtlZ6ddG3LFApWRe7mcAWI7jCjbC+WyOWTqfVQWO3Riz3CA9E4b0IBIf81hnU5pvC4xwc8lOaBFE4KtDyPzuLjx5E1hPAHiRvw8MDnslOZLgLcVKa1I3Y7k5Ag9OuIQ/S3N8dTCncRpLbSNJJmdIkbiNJrsQi6wkmrRBqlM2oHpFBwiSbUSEKpzTJQXr5nfWCi5+CA2BUpEE7ZLs9PtbtRm6ODdeY2jwiMzQ7cdc9bPY4W4Z6LhrrCDXKpBW9nW9ZYNIKrnBMWqFX475lkVlTUYGGQ34mrfShMLkSsxFuUlts6W96I/cVGzMLSq4bqjpM1wZ6e3zTx3JOQn7PjXs9nU5r7W+tyqMdcXdN786kFf2t2Yw6FQtRmkSo0azTmmojvdaZSUqTTLZ+fCECUbj8cOvFVw+Dg8BKe0B52/Xiph8NVrOND778To4r6HPNk3DIPzjtIldi5EqM0iRde0dCjeo3eoHHOUwOZ4uFCKFG4ZCf21r9urzwtQ/fROUQeK/gqy9qLw8GZmx88PUz71q8yIX6ik6ndSJCsRl1vwg1SqhRYSdtxyfopMykldklXtxdc7HEn7NTbEZts5gpTcpuk1yqaSk5+sEhs28SfHb5UkHD1d+9yACFWzF0LhUfCeKoELST2N7s1GP9N/JC5H8beESG0iQXS/gWeTuJhTKJysYfTp1/CxYC4P3CL3NL8s0+dIDCYtuZuVS8oq3R7HEWVH9PadIegH0MlnTIWQZLPLOmTvABHbIZHSXUaD/q5P/eOPhU9oP9EZkBVTdb63o6ECni4OmAGp9Lxc0ep53EWoZ6IApn0sqejf5EutPjFGULTA6jQpBJK3pMUSHIba2WVn939NRxJ0cCJ3O/yw11jNncsfD9RAyPc79CA4PTrtL66l4EEnfX5Ieb+xlE1hN6lnQnbiOJhEkXS7AZ1YyO6gzEB5tHjj0HngSwSAE6KX9rrDl57kybxax36ffCDhpzzZPjHGWdmWwZ6tEHxKJhLJpNCzof8CnifgaoEKQ0SSeBCjS/nalqLH/hxKGxaBCgAl3587XK5safervYjIrJ4XZHX0Ovqayptt3RZ7IPniv7Bg75+e3UeJh0c4H5rSQcxMmVGL0axxWO307BIb+N9DJp5ZZjiFCjFhwhNeVSfdEzLz1hCXj/BUeyaJGkR9IJAAAAAElFTkSuQmCC" nextheight="611" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>综合来看，RTP 赛道竞争正聚焦四个核心维度：</p><ul><li><p><strong>成熟度</strong>：SP1 
生产化部署最成熟；Pico 性能领先且接近主网标准；RISC Zero 稳定但 RTP 数据未公开。</p></li><li><p><strong>性能表现</strong>：Pico 证明体积约 990 kB，较 SP1（1.48 MB）缩小约 33%，成本更低；</p></li><li><p><strong>安全与审计</strong>：RISC Zero 与 SP1 均已通过独立安全审计；Pico 正在审计流程中；</p></li><li><p><strong>开发生态</strong>：主流 zkVM 均采用 RISC-V 指令集，SP1 依托 Succinct Rollup SDK 形成广泛集成生态；Pico 支持 Rust 自动生成证明，SDK 完善度快速提升。</p></li></ul><p>从最新数据看，目前RTP 赛道已形成“两强格局</p><ul><li><p>第一梯队<strong>Brevis Pico（含 Prism）</strong> 与 <strong>Succinct SP1 Hypercube</strong> 均直指 EF 设定的 <em>P99 ≤ 10s</em> 标准。前者以分布式多 GPU 架构实现性能与成本突破；后者以单体化系统保持工程成熟与生态稳健。Pico 代表性能与架构创新，SP1 代表实用化与生态领先。</p></li><li><p>第二梯队<strong>RISC Zero、ZisK、ZKM</strong> 在生态兼容与轻量化方面持续探索，但尚未公开完整 RTP 指标（延迟、功耗、CAPEX、安全位、证明体积、可复现性）。<strong>Scroll（Ceno）</strong> 与 <strong>Matter Labs（Airbender）</strong> 则尝试将 Rollup 技术延伸至 L1 验证层，体现出从 L2 扩容向 L1 可验证计算的演进趋势。</p></li></ul><p>2025 年，zkVM 赛道已形成以 <strong>RISC-V 统一、模块化演进、递归标准化、硬件加速并行</strong> 的技术格局。zkVM的通用可验证计算层（<strong>Verifiable Compute Layer</strong>）可分为三个类别：</p><ul><li><p><strong>性能导向型</strong>：Brevis Pico、SP1、Jolt、ZisK 聚焦低延迟与实时证明，通过递归 STARK 与 GPU 加速提升计算吞吐。</p></li><li><p><strong>模块化与可扩展型</strong>：OpenVM、Pico、SP1强调模块化可插拔，支持协处理器接入。</p></li><li><p><strong>生态与通用开发型</strong>：RISC Zero、SP1、ZisK 聚焦 SDK 与语言兼容，推动普适化。</p></li></ul><p><strong>zkVM 竞品项目对比（截至 2025 年 10 月）</strong></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>一句话定位</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>技术路线</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>架构与亮点</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>当前阶段</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Brevis Pico</strong></p></td><td colspan="1" rowspan="1"><p>模块化 zkVM + 数据协处理器（zkCoprocessor）</p></td><td colspan="1" 
rowspan="1"><p>RISC-V 兼容 · Turbo Plonk · 递归 RTP</p></td><td colspan="1" rowspan="1"><p>“Glue + Coprocessor” 可插拔架构，支持多 GPU 并行与历史数据证明加速；已通过 EF RTP 标准（P99 ≤10s）</p></td><td colspan="1" rowspan="1"><p>v1.1.4 已发布 · 开源活跃 · EF RTP 榜单入选</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Succinct SP1</strong></p></td><td colspan="1" rowspan="1"><p>通用 RISC-V zkVM / Rollup SDK</p></td><td colspan="1" rowspan="1"><p>STARK + 递归 + SNARK（FFLONK）</p></td><td colspan="1" rowspan="1"><p>预编译加速（哈希/EC），链上验证合约；Rollup SDK 广泛集成</p></td><td colspan="1" rowspan="1"><p>主网可用 · 开源活跃 · 多链合作中</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>RISC Zero (R0VM)</strong></p></td><td colspan="1" rowspan="1"><p>通用 zkVM 与云验证平台</p></td><td colspan="1" rowspan="1"><p>zk-STARK + 递归 + Groth16 封装</p></td><td colspan="1" rowspan="1"><p>Bonsai API、Receipt 模型、成熟 Rust SDK；生态兼容度高</p></td><td colspan="1" rowspan="1"><p>稳定开源 · 未公开 RTP 性能数据</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>OpenVM</strong></p><p><strong>（Axiom）</strong></p></td><td colspan="1" rowspan="1"><p>模块化、可扩展 zkVM 框架</p></td><td colspan="1" rowspan="1"><p>无 CPU 核心 · 多 ISA 扩展</p></td><td colspan="1" rowspan="1"><p>支持 EC/Pairing/Int256 指令与 GPU 加速；模块化电路设计</p></td><td colspan="1" rowspan="1"><p>v1.2 测试版 · 开源活跃 · 尚未上线</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ziren (ZKM)</strong></p></td><td colspan="1" rowspan="1"><p>MIPS 系通用 zkVM</p></td><td colspan="1" rowspan="1"><p>MIPS32r2 + STARK 路线</p></td><td colspan="1" rowspan="1"><p>强调稳定性与通用性，支持跨平台验证</p></td><td colspan="1" rowspan="1"><p>开源开发中 · 可本地编译运行</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Airbender (Matter Labs)</strong></p></td><td colspan="1" rowspan="1"><p>zkSync RISC-V 证明系统</p></td><td colspan="1" rowspan="1"><p>STARK/FRI 优化 + SNARK 封装</p></td><td colspan="1" rowspan="1"><p>六阶段流水线、DEEP FRI 优化、兼容消费级 GPU</p></td><td colspan="1" rowspan="1"><p>Alpha 阶段 · GitHub 开源 · 未主网上线</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ceno 
(Scroll)</strong></p></td><td colspan="1" rowspan="1"><p>Scroll 加速型 zkVM / 证明栈</p></td><td colspan="1" rowspan="1"><p>Rust + RISC-V + Recursive Pipeline</p></td><td colspan="1" rowspan="1"><p>多线程数据并行，Scroll Rollup 内部验证框架</p></td><td colspan="1" rowspan="1"><p>内测阶段 · 开发中</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Jolt</strong></p><p><strong>&nbsp;(a16z)</strong></p></td><td colspan="1" rowspan="1"><p>高性能 RISC-V zkVM（Lookup/Sum-Check）</p></td><td colspan="1" rowspan="1"><p>Lookup + Sum-Check + Lasso 框架</p></td><td colspan="1" rowspan="1"><p>极简实现（&lt;25K LoC），性能导向研究型架构</p></td><td colspan="1" rowspan="1"><p>开源研究项目 · Rust 实现</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>ZisK (ex-Polygon Hermez)</strong></p></td><td colspan="1" rowspan="1"><p>低延迟 zkVM 工具栈</p></td><td colspan="1" rowspan="1"><p>RISC-V 64 + STARK</p></td><td colspan="1" rowspan="1"><p>模块化接口（JSON-RPC/gRPC），多语言 SDK 支持</p></td><td colspan="1" rowspan="1"><p>开源开发中&nbsp;</p></td></tr></tbody></table><p>当前 zk-Coprocessor 赛道已形成以 <strong>Brevis、Axiom、Herodotus、Lagrange</strong> 为代表的格局。 其中 <strong>Brevis</strong> 以「ZK 数据协处理器 + 通用 zkVM」融合架构领先，兼具历史数据读取、可编程计算与 L1 RTP 能力；<strong>Axiom</strong> 聚焦可验证查询与电路回调；<strong>Herodotus</strong> 专注历史状态访问；<strong>Lagrange</strong> 以 ZK+Optimistic 混合架构优化跨链计算性能。 整体来看，zk-Coprocessor 正以“可验证服务层”的方式成为连接 <strong>DeFi、RWA、AI、身份</strong> 等应用的可信计算接口。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>类型</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>一句话定位</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表能力 / 形态</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Brevis</strong></p></td><td colspan="1" rowspan="1"><p>ZK Data Coprocessor + 通用 zkVM（Pico/Prism）</p></td><td colspan="1" 
rowspan="1"><p>跨链历史数据读取 + 可编程计算引擎，并兼具 L1 实时块证明能力</p></td><td colspan="1" rowspan="1"><p>提供 TS/Go SDK、电路抽象；Pico Prism 已通过 EF RTP 基准；生态产品包括 Incentra 等</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Axiom</strong></p></td><td colspan="1" rowspan="1"><p>ZK Data Coprocessor</p></td><td colspan="1" rowspan="1"><p>针对账户、存储与日志的可验证区间查询，支持自定义电路与链上回调</p></td><td colspan="1" rowspan="1"><p>V2 已上线主网；TypeScript SDK 支持开发者构建历史数据查询电路</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Herodotus</strong></p></td><td colspan="1" rowspan="1"><p>ZK Data Coprocessor</p></td><td colspan="1" rowspan="1"><p>专注历史状态与事件的可验证访问</p></td><td colspan="1" rowspan="1"><p>“历史数据可用性层 + 可组合查询接口”定位，支持多链数据访问与可证明计算</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Lagrange</strong></p></td><td colspan="1" rowspan="1"><p>State Coprocessor（ZK + Optimistic 混合）</p></td><td colspan="1" rowspan="1"><p>提供跨链状态证明与聚合计算服务</p></td><td colspan="1" rowspan="1"><p>集成 “State Committees” 模型 + ZK Coprocessor，支持 EigenLayer 质押与轻客户端验证</p></td></tr></tbody></table><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>八、总结：商业逻辑、工程实现及潜在风险</strong></h2><p><strong>商业逻辑：性能驱动与双层飞轮<br></strong> Brevis 以「通用 zkVM（Pico/Prism）」与「数据协处理器（zkCoprocessor）」构建多链可信计算层：前者解决任意计算可验证问题，后者实现历史与跨链数据的业务落地。<br> 其增长逻辑形成“性能—生态—成本”正循环：Pico Prism 的 RTP 性能吸引头部协议集成，带来证明规模增长与单次成本下降，形成持续强化的双层飞轮。竞争优势主要在三点：</p><ol><li><p><strong>性能可复现</strong> —— 已纳入以太坊基金会 ETHProofs RTP 体系；</p></li><li><p><strong>架构壁垒</strong> —— 模块化设计与多 GPU 并行实现高扩展性；</p></li><li><p><strong>商业验证</strong> —— 已在激励分发、动态费率与跨链验证中规模化落地。</p></li></ol><p><strong>工程实现：从“重执行”到“以验代执”</strong></p><p>Brevis 通过 Pico zkVM 与 Prism 并行框架，在 45M gas 区块中实现平均 6.9 秒、P99 &lt; 10 秒（64×5090 GPU，&lt;$130 K CAPEX），性能与成本均处领先。 zkCoprocessor 模块支持历史数据读取、电路生成与回链验证，并可在 Pure-ZK 与 Hybrid 模式间灵活切换，整体性能已基本对齐以太坊 RTP 硬标准。</p><p><strong>潜在风险与关注要点</strong></p><ul><li><p><strong>技术与合规门槛：</strong>Brevis 仍需完成功耗、安全位、证明大小及可信设置依赖等硬指标的公开与第三方验证。长尾性能优化仍为关键，EIP 
调整可能改变性能瓶颈。</p></li><li><p><strong>竞争与替代风险：</strong> Succinct（SP1/Hypercube）在工具链与生态整合上依然领先，Risc Zero、Axiom、OpenVM、Scroll、zkSync 等团队竞争力依然不容忽视。</p></li><li><p><strong>收入集中与业务结构：</strong> 当前证明量高度集中（前四大应用占比约 80%），需通过多行业、多公链、多用例拓展降低依赖。GPU 成本或将影响单位毛利。</p></li></ul><p>综合来看，<strong>Brevis 已在“性能可复现”与“业务可落地”两端构筑了初步护城河</strong>：Pico/Prism 已稳居 L1 RTP 赛道第一梯队，zkCoprocessor 则打开高频、可复用的商业化场景。未来建议以达成以太坊基金会 RTP 全量硬指标为阶段性目标，持续强化协处理器产品标准化与生态拓展，同时推进第三方复现、安全审计与成本透明。通过在基础设施与 SaaS 收入间实现结构平衡，形成可持续的商业增长闭环。</p><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5 的 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>zk</category>
            <category>zkvm</category>
            <category>zkevm</category>
            <category>coprocessor</category>
            <category>brevis</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/5f12380d85c85d7e77f150857cbea049f19619ce9001fd95548df7f773d7753b.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[Cysic Research Report: The ComputeFi Path of ZK Hardware Acceleration]]></title>
            <link>https://paragraph.com/@0xjacobzhao/cysic-research-report-the-computefi-path-of-zk-hardware-acceleration</link>
            <guid>ANqLSPlCg6iccstex47U</guid>
            <pubDate>Wed, 15 Oct 2025 16:56:40 GMT</pubDate>
            <description><![CDATA[Zero-knowledge proofs (ZK), as a new generation of cryptography and scaling infrastructure, are showing strong potential across scaling, privacy computing, zkML, and cross-chain verification. Yet the heavy computation and latency of proof generation remain the key bottlenecks to industrial adoption. Hence ZK hardware acceleration is critical: GPUs lead in generality and iteration speed, ASICs point to the endgame in efficiency, and FPGAs strike a balance between programmability and efficiency—together forming the hardware foundation for ZK's real-world adoption.]]></description>
            <content:encoded><![CDATA[<p>Zero-Knowledge Proofs (ZK) — as a next-generation cryptographic and scalability infrastructure — are demonstrating immense potential across blockchain scaling, privacy computation, zkML, and cross-chain verification. However, the proof generation process is extremely compute-intensive and latency-heavy, forming the biggest bottleneck for industrial adoption. <strong>ZK hardware acceleration</strong> has therefore emerged as a core enabler. Within this landscape, <strong>GPUs</strong> excel in versatility and iteration speed, <strong>ASICs</strong> pursue ultimate efficiency and large-scale performance, while <strong>FPGAs</strong> serve as a flexible middle ground combining programmability with energy efficiency. Together, they form the hardware foundation powering ZK’s real-world adoption.</p><h3 id="h-i-the-industry-landscape-of-zk-hardware-acceleration" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>I. The Industry Landscape of ZK Hardware Acceleration</strong></h3><p><strong>GPU, FPGA, and ASIC</strong> represent the three mainstream paths of hardware acceleration:</p><ul><li><p><strong>GPU (Graphics Processing Unit):</strong> A general-purpose parallel processor, originally designed for graphics rendering but now widely used in AI, ZK, and scientific computing.</p></li><li><p><strong>FPGA (Field Programmable Gate Array):</strong> A reconfigurable hardware circuit that can be repeatedly configured at the logic-gate level “like LEGO blocks,” bridging between general-purpose processors and specialized circuits.</p></li><li><p><strong>ASIC (Application-Specific Integrated Circuit):</strong> A dedicated chip customized for a specific task. 
Once fabricated, its function is fixed — offering the highest performance and efficiency but the least flexibility.</p></li></ul><h3 id="h-gpu-dominance" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>GPU Dominance:</strong></h3><p>GPUs have become the backbone of both AI and ZK computation. <br> In AI, GPUs’ parallel architecture and mature software ecosystem (CUDA, PyTorch, TensorFlow) make them nearly irreplaceable — the long-term mainstream choice for both training and inference.<br> In ZK, GPUs currently offer the best trade-off between <strong>cost and availability</strong>, but their performance in <strong>big integer modular arithmetic, MSM, and FFT/NTT</strong> operations is limited by memory and bandwidth constraints. Their energy efficiency and scalability economics remain insufficient, suggesting the eventual need for more specialized hardware.</p><h3 id="h-fpga-flexibility" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>FPGA Flexibility:</strong></h3><p>Paradigm’s 2022 investment thesis highlighted FPGA as the “sweet spot” balancing flexibility, efficiency, and cost. Indeed, FPGAs are <strong>programmable, reusable, and quick to prototype</strong>, suitable for <strong>rapid algorithm iteration</strong>, <strong>low-latency environments</strong> (e.g., high-frequency trading, 5G base stations), <strong>edge computing under power constraints</strong>, and <strong>secure cryptographic tasks</strong>.<br>However, FPGAs lag behind GPUs and ASICs in raw performance and scale economics. Strategically, they are best suited as <strong>development and iteration platforms before algorithm standardization</strong>, or for niche verticals requiring long-term customization.</p><h3 id="h-asic-as-the-endgame" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>ASIC as the Endgame:</strong></h3><p>ASICs are already dominant in crypto mining (e.g., Bitcoin’s SHA-256, Litecoin/Dogecoin’s Scrypt). 
By hardwiring algorithms directly into silicon, ASICs achieve <strong>orders of magnitude</strong> better performance and energy efficiency — becoming the exclusive infrastructure for mining.<br> In ZK proving (e.g., <strong>Cysic</strong>) and AI inference (e.g., <strong>Google TPU</strong>, <strong>Cambricon</strong>), ASICs show similar potential. Yet, in ZK, algorithmic diversity and operator variability have delayed standardization and large-scale demand. Once standards solidify, ASICs could <strong>redefine ZK compute infrastructure</strong> — delivering <strong>10–100×</strong> improvements in performance and efficiency with minimal marginal cost post-production.<br> In AI, where training workloads evolve rapidly and rely on dynamic matrix operations, GPUs will remain the mainstream for training. Still, ASICs will hold <strong>irreplaceable value</strong> in <strong>fixed-task, large-scale inference scenarios</strong>.</p><p><strong>Dimension Comparison: GPU vs FPGA vs ASIC</strong><br></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>FPGA</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ASIC</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Performance / Cost (Perf/$)</strong></p></td><td colspan="1" rowspan="1"><p><strong>Strong:</strong> boosted by AI and gaming economies of scale; consumer and enterprise GPUs (RTX / A / H series) offer high cost-performance ratio.</p></td><td colspan="1" rowspan="1"><p><strong>Average:</strong> typically lower throughput than GPUs at the same price level.</p></td><td colspan="1" rowspan="1"><p><strong>Best:</strong> lowest amortized cost after mass production; dominates in long-term cost 
efficiency.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Performance / Power (Perf/W)</strong></p></td><td colspan="1" rowspan="1"><p><strong>Moderate:</strong> relatively high power consumption under ZK workloads.</p></td><td colspan="1" rowspan="1"><p><strong>Moderate–Good:</strong> certain designs outperform GPUs.</p></td><td colspan="1" rowspan="1"><p><strong>Best:</strong> custom-designed for MSM/FFT/hash operations with leading energy efficiency.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Flexibility</strong></p></td><td colspan="1" rowspan="1"><p><strong>Highest:</strong> rapidly adaptable to Plonky2, Halo2, HyperPlonk, etc.</p></td><td colspan="1" rowspan="1"><p><strong>High:</strong> reconfigurable but requires RTL/HDL expertise.</p></td><td colspan="1" rowspan="1"><p><strong>Lowest:</strong> logic hardcoded; needs abstract ISA layers to support multiple proving systems.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Deployment Cycle</strong></p></td><td colspan="1" rowspan="1"><p><strong>Fastest:</strong> available off-the-shelf with mature CUDA ecosystem.</p></td><td colspan="1" rowspan="1"><p><strong>Medium:</strong> weeks to months from board design to stable deployment.</p></td><td colspan="1" rowspan="1"><p><strong>Slowest:</strong> 12–18 months fabrication cycle.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Scalability</strong></p></td><td colspan="1" rowspan="1"><p><strong>Limited:</strong> constrained by PCIe interface and chassis form factor.</p></td><td colspan="1" rowspan="1"><p><strong>Strong:</strong> supports custom interconnects and pipelining.</p></td><td colspan="1" rowspan="1"><p><strong>Excellent:</strong> can be fully customized for workload and topology.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ecosystem &amp; Tools</strong></p></td><td colspan="1" rowspan="1"><p><strong>Most mature:</strong> rich CUDA, cuFFT, MSM libraries, and strong developer community.</p></td><td 
colspan="1" rowspan="1"><p><strong>Niche:</strong> limited toolchain maturity and talent availability.</p></td><td colspan="1" rowspan="1"><p><strong>Early-stage:</strong> requires in-house software stack; highly stable once mature.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Best Use Cases</strong></p></td><td colspan="1" rowspan="1"><p>Production-grade ZK provers, rapid iteration, decentralized GPU networks.</p></td><td colspan="1" rowspan="1"><p>Algorithm validation, prototyping, ultra-low-latency or custom interconnect scenarios.</p></td><td colspan="1" rowspan="1"><p>Large-scale zkML, recursive proving, and long-term infrastructure.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Key Risks</strong></p></td><td colspan="1" rowspan="1"><p>Rising energy and rack-space costs.</p></td><td colspan="1" rowspan="1"><p>Talent scarcity, high per-board cost, weak economies of scale.</p></td><td colspan="1" rowspan="1"><p>Algorithm changes, high upfront capital, and long payback cycles.</p></td></tr></tbody></table><p><br>In the evolution of <strong>ZK hardware acceleration</strong>, <strong>GPUs</strong> are currently the optimal solution — balancing cost, accessibility, and development efficiency, making them ideal for rapid deployment and iteration. 
<strong>FPGAs</strong> serve more as <strong>specialized tools</strong>, valuable in ultra-low-latency, small-scale interconnect, and prototyping scenarios, but unable to compete with GPUs in economic efficiency.<br> In the <strong>long term</strong>, as ZK standards stabilize, <strong>ASICs</strong> will emerge as the industry’s core infrastructure, leveraging unmatched performance-per-cost and energy efficiency.</p><p><strong>Overall trajectory:<br></strong> <strong>Short term –</strong> rely on GPUs to capture market share and generate revenue;<br> <strong>Mid term –</strong> use FPGAs for verification and interconnect optimization;<br> <strong>Long term –</strong> bet on ASICs to build a sustainable compute moat.</p><h3 id="h-ii-hardware-perspective-the-underlying-technical-barriers-of-zk-acceleration" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Hardware Perspective: The Underlying Technical Barriers of ZK Acceleration</strong></h3><p>Cysic’s core strength lies in <strong>hardware acceleration for zero-knowledge proofs (ZK)</strong>.</p><p>In the representative paper <em>“</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@Cysic/BJQcpVbXn"><em><u>ZK Hardware Acceleration: The Past, the Present and the Future</u></em></a><em>,”</em> the team highlights that <strong>GPUs</strong> offer flexibility and cost efficiency, while <strong>ASICs</strong> outperform in energy efficiency and peak performance—but require trade-offs between development cost and programmability.</p><p>Cysic adopts a <strong>dual-track strategy</strong> — combining <strong>ASIC innovation</strong> with <strong>GPU acceleration</strong> — driving ZK from “verifiable” to “real-time usable” through a full-stack approach from custom chips to general SDKs.</p><h3 id="h-1-the-asic-path-cysic-c1-chip-and-dedicated-devices" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>1. 
The ASIC Path: Cysic C1 Chip and Dedicated Devices</strong></h3><p>Cysic’s self-developed <strong>C1 chip</strong> is built on a <strong>zkVM-based architecture</strong>, featuring high bandwidth and flexible programmability.<br> Based on this, Cysic plans to launch two hardware products:</p><ul><li><p><strong>ZK Air:</strong> a portable accelerator roughly the size of an iPad charger, plug-and-play, designed for lightweight verification and developer use;</p></li><li><p><strong>ZK Pro:</strong> a high-performance system integrating the C1 chip with front-end acceleration modules, targeting large-scale <strong>zkRollup</strong> and <strong>zkML</strong> workloads.</p></li></ul><p>Cysic’s research directly supports its ASIC roadmap.<br>The team introduced <strong>Hypercube IR</strong>, a ZK-specific intermediate representation that abstracts proof circuits into standardized parallel patterns—reducing the difficulty of cross-hardware migration. It explicitly preserves modular arithmetic and memory access patterns in circuit logic, enabling better hardware recognition and optimization.</p><p>In <strong>Million Keccak/s</strong> experiments, a single C1 chip achieved <strong>~1.31M Keccak proofs per second (~13× acceleration)</strong>, demonstrating the throughput and energy-efficiency potential of specialized hardware.<br> In <strong>HyperPlonk hardware analysis</strong>, the team showed that <strong>MSM/MLE</strong> operations parallelize well, while <strong>Sumcheck</strong> remains a bottleneck.</p><p>Overall, Cysic is developing a holistic methodology across <strong>compiler abstraction, hardware verification, and protocol adaptation</strong>, laying a strong foundation for productization.</p><h3 id="h-2-the-gpu-path-general-sdk-zkpog-end-to-end-stack" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>2. 
The GPU Path: General SDK + ZKPoG End-to-End Stack</strong></h3><p>On the GPU side, Cysic is advancing both a <strong>general-purpose acceleration SDK</strong> and a full <strong>ZKPoG (Zero-Knowledge Proof on GPU)</strong> stack:</p><ul><li><p><strong>General GPU SDK:</strong> built on Cysic’s custom CUDA framework, compatible with <strong>Plonky2, Halo2, Gnark, Rapidsnark</strong>, and other backends. It surpasses existing open-source frameworks in performance, supports multiple GPU models, and emphasizes <strong>compatibility and ease of use</strong>.</p></li><li><p><strong>ZKPoG:</strong> developed in collaboration with <strong>Tsinghua University</strong>, it is the first end-to-end GPU stack covering the entire proof flow—from <strong>witness generation</strong> to <strong>polynomial computation</strong>. On consumer-grade GPUs, it achieves <strong>up to 52× speedup (average 22.8×)</strong> and expands circuit scale by <strong>1.6×</strong>, verified across <strong>SHA256, ECDSA, and MVM</strong> applications.</p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ASIC Path (Cysic C1 / ZK Air / ZK Pro)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU Path (General SDK + ZKPoG)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p>Customized extreme performance for large-scale ZKP workloads</p></td><td colspan="1" rowspan="1"><p>General-purpose acceleration compatible with mainstream proving systems</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Features</strong></p></td><td colspan="1" rowspan="1"><p>- C1 chip built on zkVM architecture</p><p>- Hypercube IR optimizes circuit logic</p><p>- 13× acceleration per chip, supporting real-time 
proofs</p></td><td colspan="1" rowspan="1"><p>- Custom CUDA SDK supporting Plonky2 / Halo2 backends</p><p>- ZKPoG enables full GPU pipeline (witness → polynomial computation)</p><p>- 22.8× average CPU uplift (up to 52×)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Product Form</strong></p></td><td colspan="1" rowspan="1"><p>- ZK Air (portable accelerator)</p><p>- ZK Pro (high-performance system)</p></td><td colspan="1" rowspan="1"><p>- General GPU SDK</p><p>- ZKPoG end-to-end stack</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Advantages</strong></p></td><td colspan="1" rowspan="1"><p>Ultimate efficiency, hardware-friendly, specialized optimization</p></td><td colspan="1" rowspan="1"><p>High flexibility, rapid iteration, low development barrier</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Limitations</strong></p></td><td colspan="1" rowspan="1"><p>High cost and long R&amp;D cycle; limited flexibility; ecosystem dependency; products still in roadmap phase</p></td><td colspan="1" rowspan="1"><p>Lower energy efficiency than ASICs; VRAM bottlenecks limit scalability; performance varies by GPU; more competitive landscape</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Use Cases</strong></p></td><td colspan="1" rowspan="1"><p>Long-term stable, high-throughput workloads: zkRollup mainnets, large-scale zkML, recursive proofs</p></td><td colspan="1" rowspan="1"><p>R&amp;D and flexibility-driven use: new ZK systems testing, cross-chain verification, small-scale zkML inference, identity authentication</p></td></tr></tbody></table><p><br>Cysic’s key differentiator lies in its <strong>hardware–software co-design</strong> philosophy.<br> Its in-house <strong>ZK ASICs, GPU clusters, and portable mining devices</strong> together form a <strong>full-stack compute infrastructure</strong>, enabling deep integration from the <strong>chip layer to the protocol layer</strong>. 
By leveraging the <strong>complementarity between ASICs’ extreme energy efficiency and scalability</strong> and <strong>GPUs’ flexibility and rapid iteration</strong>, Cysic has positioned itself as a <strong>leading ZKP hardware provider</strong> for high-intensity proof workloads — and is now extending this foundation toward the <strong>financialization of ZK hardware (ComputeFi)</strong> as its next industrial phase.</p><h3 id="h-iii-protocol-perspective-cysic-network-a-universal-proof-layer-under-poc-consensus" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Protocol Perspective: Cysic Network — A Universal Proof Layer under PoC Consensus</strong></h3><p>On <strong>September 24, 2025</strong>, the Cysic team released the <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://hackmd.io/@Cysic/H1XlGr0jle"><em><u>Cysic Network Whitepaper</u></em></a>.<br> The project centers on <strong>ComputeFi</strong>, financializing <strong>GPU, ASIC, and mining hardware</strong> into programmable, verifiable, and tradable computational assets. 
Built with <strong>Cosmos CDK</strong>, <strong>Proof-of-Compute (PoC)</strong> consensus, and an <strong>EVM execution layer</strong>, Cysic Network establishes a decentralized “task-matching + multi-verification” marketplace supporting <strong>ZK proving, AI inference, mining, and HPC</strong> workloads.</p><p>By vertically integrating <strong>self-developed ZK ASICs, GPU clusters, and portable miners</strong>, and powered by a <strong>dual-token model ($CYS / $CGT)</strong>, Cysic aims to unlock real-world compute liquidity — filling a key gap in Web3 infrastructure: <strong>verifiable compute power</strong>.</p><h3 id="h-modular-architecture-four-layers-of-computefi-infrastructure" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Modular Architecture: Four Layers of ComputeFi Infrastructure</strong></h3><p>Cysic Network adopts a <strong>bottom-up four-layer modular architecture</strong>, enabling cross-domain expansion and verifiable collaboration:</p><ol><li><p><strong>Hardware Layer:<br></strong> Comprising CPUs, GPUs, FPGAs, ASIC miners, and portable devices — forming the network’s computational foundation.</p></li><li><p><strong>Consensus Layer:<br></strong> Built on <strong>Cosmos CDK</strong>, using a modified <strong>CometBFT + Proof-of-Compute (PoC)</strong> mechanism that integrates <strong>token staking</strong> and <strong>compute staking</strong> into validator weighting, ensuring both computational and economic security.</p></li><li><p><strong>Execution Layer:<br></strong> Handles <strong>task scheduling, workload routing, bridging, and voting</strong>, with <strong>EVM-compatible smart contracts</strong> enabling programmable, multi-domain computation.</p></li><li><p><strong>Product Layer:<br></strong> Serves as the application interface — integrating <strong>ZK proof markets, AI inference frameworks, crypto mining</strong>, and <strong>HPC modules</strong>, while supporting new task types and verification 
methods.</p></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/48f3ef53710bdf2e0d91c0ca3c90881400c14254c2e08908f15122e5fb671ec4.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEFElEQVR4nG1UYUwbZRj+/i5L9IeJ+7M/aAxGTZaQ6HSakCVGExoMJm6JkV+ixCiJRMNiFxCYGaOWdTKoVjpJ8UbbMevQbqUM2124Vuo+OA4qtwuXHscoyY5eD44dZ1tKX9Mea8r0+fHl++6+93ve5/2e70VQgXw+n8vlAMDlcplMptbW1sV4nOeXV8S1y46f2RI0TSNJMhaLcRzHMEw8Hpdl2Yg1RpqmeZ4XRZHneVVVUSVBeV8gEGhsbLRarSzLCoLwQEp5fcGlpSWWZXVdp2maYZhkMimKoiAImqaVAvfKBIKQEEWRZVlFUQ4Q+P3+aDRapilPMMYzf0YNceWPFBXx+cYxnr1569Z0+Hbmnx3jr6Jszs3R8wwzzyxspFIHCERRlCQJAHKPkC+dpaqqJEmVrKV5tlDIKIqibsqt3c6frpMAoGk7iURimpwetNspiuJ5HkmStLa2dn91tTJBQzUACMIKRVHzDHMXY4qKzM7NZbJZAHio7Zz80PJU3VfEb0XFCL2MXmkGgC11e3X1PgAQo25N05LJdTQ2Nmax9v0RCpUT7OjoaGpqMubhcLi5ubmzs9NsNrd81mK1Wo08ttTtJ2s/Rs+fOtfvAdhFh2vRyc8N4oWFBZIkr7o9qqomk0mE8WwwGKRpWtf13Xxe1/W2trbh4WGDwOl0IoRqampePX4cIWQymQyCTDbbZvG81zo4GYkDZNHRt1883QUASxwfCpPRaNTtuaYoSlHBRHDye4cDYxwiZ852WmOxv8p21DTN4/ZUVVWZTKb6+vrq6urGxg82Uild1xPCKjr2Pjry1tm+UYA99Mw7R+u+ANi9GZy6Q1KRSOTqqGefgGVZjLEsp6Kx+V9uBAyDC4Igy2lZTo35/G3tF3ptP/ZcdJxpv3CuZ0AURVmWE8JK14Dvoy7X7chiZieNnjiBjp0WlhftDudMDGOMiVH3PsHk1JTLNcKy9wzzAIDFYmloaCiUSjTmC3T32M9bf+i1DZ23DnV8c0nX9aIXt1T0wil0uLbd5i1e8rMNz73bDgAxTIdCYYqaJkbdaUVZT64jiqL8fj9FUWXnEARhNpuN5Y3fJ7t77N9+d8Viu9JrG+rquWwQ5HK7l1zBLy2e6CxXKOjo0IkjdWeMy+e45Q1J8nivy3K6eMml9/LrPY57rGEYCianyL5+56CD6LeP9NtHLg44DWJF2UJPv4nQS5987SzZ9DX0+qeVNvV4rynKZrFEj7WKyrdaflP5g6jYWMhki1VFh95AtS2GTTmOeyBJgYkJQRA4bhn9J+YANG1HkjbkfaRVdbsiib3CoxYUuft3nBMBYDeff6hpW6qq67paGv9fQVmEKIper9fn8wUCAb/fjzGuaJx7xumVGB8fJwgiHA57vV6apgHgX8gxhdO+MmULAAAAAElFTkSuQmCC" nextheight="682" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-zk-proof-layer-decentralization-meets-hardware-acceleration" 
class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>ZK Proof Layer: Decentralization Meets Hardware Acceleration</strong></h3><p>Zero-knowledge proofs allow computation to be verified without revealing underlying data — but generating these proofs is time- and cost-intensive.&nbsp; Cysic Network enhances efficiency through <strong>decentralized Provers + GPU/ASIC acceleration</strong>, while <strong>off-chain verification and on-chain aggregation</strong> reduce latency and verification costs on Ethereum.</p><p><strong>Workflow: </strong>&nbsp;ZK projects publish proof tasks via smart contracts → decentralized Provers compete to generate proofs → Verifiers perform multi-party validation → results are settled via on-chain contracts.</p><p>By combining <strong>hardware acceleration</strong> with <strong>decentralized orchestration</strong>, Cysic builds a scalable <strong>Proof Layer</strong> that underpins <strong>ZK Rollups, zkML</strong>, and <strong>cross-chain applications</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2c942d2ddc377b5b50f956b46c4d82f009c6c4ef5d5536843c50ba09cbf85073.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEOklEQVR4nLWUX0xbVRzH++Sz8dUXEt2DxJAoZMJkKEs2TCBmzbJCHAm+rA4tCW203ZQ7kyLcQXKNclmwCNxEDmy9FXoH5UK5LTSlrmVKBS1YLv8albbQP/y5bHJK5Gdub1aJ8WEz8Zv78Du/e875/L7n/HJU8D9L9a9ZjDEAuFwuhBD3WAihvr5ehPoRQizL5vIsa41Go/8FoFara6qrTSaTTqczmUwFBQWqx9JqtXq9XqfTEQRx9mxZB307t+oJARkAoDp6uizM8vJSMBj8YW5OFMP9AzaDyRw8oVDop2/tDvOtrqdzAAChxZUBlldIOS0tb3BjHgD48/j4ZN4f+NFq458CsLcvdTO2ZGoH4BhnBSDvuBhetdknslOOjzIZ5ZdyMqxdYBD3pAAGcXPBxZOV4qyVpfAa55gGgEeHh7nJ62u/eaZmneO+trYeYcI3NuJxjvt83jlhwueZms0B/rbM2gW7Yyo3xDizGY0nUzsYZ0zXm14tKo1Efk2ndzej8b19CQAeBBY+b+u9PxNkB8YK86sK86v09ea7/SNDd/j2lm4VxhmlOkk6CAYXRHGlf/Be9nBAkh4CQDK1o3SO3FfvNLxU9JZsPKu9/YN0ehcADI0fGRqN8XgycP/7c2+eF8PrK+GN0tfL79l5VTq9G4vFJwWXJB0QxE2Npto3M/Po8NBmG4rFtjDOnAQ0GlvyC88DQF5enkqlSqZ2FEA73X7bInfq2nrEYLyBMT7KHJLNxPz8z7IDhFBFRQXG2O/3a7VaAOA47tSpF2OxOMaZRDJlNjer1epAIEBR1BtlZaFQiKbpMyUlfxxiSXqYSKYu6Wt883758u6MOiZ9MumXufD89PZWSq7L6/W53W4ACC+LgiAHweBCc3NrLLZ1lDmKxbbM5maN5jLDMDRNazQamqYpitJoNHv7BwDg9Ew3EPpRJ59O73ZaBq02OQj6vV7X+OryugxACHm93mzhIxwnt9p2IkFRlCTJ62OxrY+bmsxmMwCQJKnX6wHAYrEoAKVHKYoSRVF2wDDKVmur652dnfJtIYROv3ZaFMVAIFBcXOxyuSKRiFp9kSRblS7ajMYJ4qYypGnaZDJhjBmGqa29kgOQJKnVXgUAnud1ug8ikYjy0vD8uOJgcNguF+52uztoGbu7t/dJE5FM7UjSgbiycaW2zmAwMAxTWFhIEARJkkVFReXl536PbieSKQBoJW9NCi4A6O1jOujORDIlSQf11+ojkYjK7Qmcqajp/HpwhPdY+qz0V98McS5kHaU6eli7M/sJpeWV1bXXGoxtzzz3wrPPv9xgJC9UXr5QqbEOO+/axjjHtLn1S2SVr7eN6mL6hwTPA+KzL85frFta3lDxgu+VkqruPhaxDsQ6WLvwjwCxDl74btgxVXv1xrvvf3qp7sMGI+mcnuUc07kJJ1Y5laHe1PK25r35kPgXrz7qDkl3IXAAAAAASUVORK5CYII=" nextheight="776" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-node-roles-cysic-prover-mechanism" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Node Roles: Cysic Prover Mechanism</strong></h3><p>Within the network, <strong>Prover nodes</strong> are responsible for heavy-duty 
computation.<br> Users can contribute their own compute resources or purchase <strong>Digital Harvester</strong> devices to perform proof tasks and earn <strong>$CYS / $CGT rewards</strong>.&nbsp; A <strong>Multiplier</strong> factor boosts task acquisition speed. Each node must stake <strong>10 CYS</strong> as collateral, which may be slashed for misconduct.</p><p>Currently, the main task is <strong>ETHProof Prover</strong> — generating ZK proofs for Ethereum mainnet blocks, advancing the base layer’s ZK scalability.<br> Provers thus form the <strong>computational and security backbone</strong> of the Cysic Network, also providing trusted compute power for future <strong>AI inference and AgentFi</strong> applications.</p><h3 id="h-node-roles-cysic-verifier-mechanism" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Node Roles: Cysic Verifier Mechanism</strong></h3><p>Complementing Provers, <strong>Verifier nodes</strong> handle lightweight proof verification to enhance network <strong>security and scalability</strong>.<br> Users can run Verifiers on a <strong>PC, server, or official Android app</strong>, with the <strong>Multiplier</strong> also boosting task efficiency and rewards.</p><p>The participation barrier is much lower — requiring only <strong>0.5 CYS</strong> as collateral. 
Verifiers can join or exit freely, making participation accessible and flexible.<br> This <strong>low-cost, light-participation</strong> model expands Cysic’s reach to mobile and general users, strengthening decentralization and trustworthy verification across the network.</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prover Node</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Verifier Node</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Role</strong></p></td><td colspan="1" rowspan="1"><p>High-intensity computation; generates Ethereum block proofs; forms the network’s execution and security layer</p></td><td colspan="1" rowspan="1"><p>Lightweight validation of Prover outputs; enhances network scalability and reliability</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Hardware Requirements</strong></p></td><td colspan="1" rowspan="1"><p>High-performance GPU / ASIC servers</p></td><td colspan="1" rowspan="1"><p>PC, server, or Android device</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Staking Requirement</strong></p></td><td colspan="1" rowspan="1"><p>10 CYS</p></td><td colspan="1" rowspan="1"><p>0.5 CYS</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Incentive Model</strong></p></td><td colspan="1" rowspan="1"><p>CYS/CGT rewards + Multiplier speed boost; higher-capacity nodes earn more</p></td><td colspan="1" rowspan="1"><p>CYS/CGT rewards + Multiplier; lower returns, designed for mass participation</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Network Scale (as of Oct 2025)</strong></p></td><td colspan="1" rowspan="1"><p>~42,000 nodes</p></td><td colspan="1" rowspan="1"><p>100,000+ nodes</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Participation Traits</strong></p></td><td 
colspan="1" rowspan="1"><p>High barrier; limited to stable, long-term compute contributors</p></td><td colspan="1" rowspan="1"><p>Low barrier; broad participation; lightweight verification tasks</p></td></tr></tbody></table><p><br><strong>Network Status and Outlook</strong></p><p>As of <strong>October 15, 2025</strong>, the <strong>Cysic Network</strong> has reached a significant early milestone:</p><ul><li><p><strong>≈42,000 Prover nodes</strong> and <strong>100,000+ Verifier nodes</strong></p></li><li><p><strong>≈91,000 total tasks completed</strong></p></li><li><p><strong>≈700,000 $CYS/$CGT</strong> distributed as rewards</p></li></ul><p>However, despite the impressive node count, activity and compute contribution remain <strong>uneven</strong>, reflecting differences in entry barriers and hardware capability.&nbsp; Currently, the network is integrated with <strong>three external projects</strong> — a first step in building out its ecosystem. Whether Cysic can evolve into a <strong>stable compute marketplace and core ComputeFi infrastructure</strong> will depend on <strong>further real-world integrations and partnerships</strong> in the coming phases.</p><h3 id="h-iv-ai-perspective-cysic-ai-cloud-services-agentfi-and-verifiable-inference" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
AI Perspective: Cysic AI — Cloud Services, AgentFi, and Verifiable Inference</strong></h3><p>Cysic AI’s business framework follows a <strong>three-tier structure — Product, Application, and Strategy</strong>: at the base, <strong>Serverless Inference</strong> offers standardized APIs to lower the barrier for AI model access; at the middle, the <strong>Agent Marketplace</strong> explores on-chain applications of AI Agents and autonomous collaboration; at the top, <strong>Verifiable AI</strong> integrates <strong>ZKP + GPU acceleration</strong> to enable trusted inference, representing the long-term vision of ComputeFi.</p><h3 id="h-1-standard-product-layer-cloud-inference-service-serverless-inference" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>1. Standard Product Layer: Cloud Inference Service (Serverless Inference)</strong></h3><p>Cysic AI provides <strong>instant-access, pay-as-you-go inference services</strong>, allowing users to call large language models via APIs without managing or maintaining compute clusters.<br> This <strong>serverless design</strong> enables low-cost, flexible AI integration for both developers and enterprises.</p><p>Currently supported models include:</p><ul><li><p><strong>Meta-Llama-3-8B-Instruct</strong> (task &amp; dialogue optimization)</p></li><li><p><strong>QwQ-32B</strong> (reasoning-enhanced)</p></li><li><p><strong>Phi-4</strong> (lightweight instruction model)</p></li><li><p><strong>Llama-Guard-3-8B</strong> (content safety review)</p></li></ul><p>These cover diverse needs — from general conversation and logical reasoning to compliance auditing and edge deployment.<br> The service balances <strong>cost and efficiency</strong>, supporting both <strong>rapid prototyping for developers</strong> and <strong>large-scale inference for enterprises</strong>, forming a foundational layer in Cysic’s <strong>trusted AI infrastructure</strong>.</p><figure float="none" data-type="figure" 
class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9c0f9f635bf74185642609fdef4860bb504b01401a6e45e368cbb35eb7d4b2ea.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAJCAIAAADcu7ldAAAACXBIWXMAAAsTAAALEwEAmpwYAAACkklEQVR4nIWR70/aQByHmzhSuF57VFqktBtDhjgFNrSIPyDlDNbGUm261KOjsyZujBlxCW+IiZKY4Iv5Tv+B7R9dtkD3Yi/243lzd59LnvvkvhT3JxBCNE1rmnZ9fU0IGQwGjuP0+/3TIHAcezAYDIfDfr/v+z4hhPsn1N8uAACa1jw//2RZbc/zXNf1PM9xHNu2HccJ1YQQy7LifPw/D7CzzmFxNNuER0VRMplMNvs8k8nIspzP5xuaViquSlJKVhRZ+UUikQhdbGj47R84hKg4QvMcAyGkZ0AIGQBi0SjPxxHiYtHo3NzcLGenYTwOWJaiaAAYwDBhNwbEAABTD4IQQgDAdJl5GBCjFpdK4tKObhi93nvf93cxVneaNdySUul5IYsx7vV6GGOOY7eaeqFaJ7Xytwe7ub+rrqmCIMQRqmqtWqOZfl6QVhoYY0KIaZpVVa1qrVe1HUpZ3lBWO5t1/Max6/VGKpVC/PyzxUWeT1FUslJZs6y2IAgURYkLkigKzofR9x9fy8tpQViITHmykJZzuWxyaSv90t1t7VVVFUIWIU5SMmnlKVUulwwd27ZtGIau77mua5qGKIqVSsWy2oZhWFbbNA86XkfX9VQyWVKrB0dO2wxDLwiCrc0aQmh9/fXBPj48OrQsS9f3Wq0Wz8dj0Sil1jZOToOu3/V9/2Jwcfn5sl6vQwgxxre3tzfjm+Fw+O7k5Ozs7PjYTSZF27YfHx/u779033YJIR97vWJxBULWNM3J3eTq6mpyNxmPx6PRaHt7OyEIlCzLEELEcRBCjmPD+fA8n8+/KBQKxWIxm83ORsewszyXyymKIssyAwBN05FIBEKYSCQ0TTsNAtc97ngdzyOu65bLZUmSfgLz2Zi4c/9YBwAAAABJRU5ErkJggg==" nextheight="430" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h3 id="h-2-application-layer-decentralized-intelligent-agent-marketplace-agent-marketplace" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>2. 
Application Layer: Decentralized Intelligent Agent Marketplace (Agent Marketplace)</strong></h3><p>The <strong>Cysic Agent Marketplace</strong> functions as a <strong>decentralized platform for AI Agent applications</strong>.&nbsp; Users can simply connect their <strong>Phantom wallet</strong>, complete verification, and interact with various Agents — payments are handled automatically through <strong>Solana USDC</strong>.</p><p>Currently, the platform integrates three core agents:</p><ul><li><p><strong>X Trends Agent</strong> — analyzes real-time X (Twitter) trends and generates creative MEME coin concepts.</p></li><li><p><strong>Logo Generator Agent</strong> — instantly creates custom project logos from user descriptions.</p></li><li><p><strong>Publisher Agent</strong> — deploys MEME coins on the Solana network (e.g., via <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Pump.fun">Pump.fun</a>) with one click.<br></p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3952674007ee7e0ce2ff942056516a0c6e7568d501ba2f8fd21adfe84d9dca55.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAADtUlEQVR4nK2S/UtbVxjH84M01LNMsfPm3nvOvTnnvsS8mTqTWPOiMbkkLlezmGCjCXkx0b5EoaA/rDqK60oZ6+zLakNXpq60pkLWFek6NsbewK3SFSx7o2OU7e/YD4MNc+l+WwfFL4fDw+F5vp/znPPoWv9bBoPBaDQuvXVhfW2jXrt9Y3W9XruzvrZxdfn96sX3Lr9dXZw/A1loMBieYaL7X8DFc+/WVuq//fz7w+2dJ4//+GHnl8X5M5v1j//68++Ts6/vAeDSO5dXrqx+du+rX3968sWn3/z46PHW1/fvfnRvfW1jbzpYOnv+2vLKzbVbD7cfbW99/+D+zoNvd+q1O0tnL5w6eXoPANVL127f2ly9+sFm/e7Wl999svn5xvUPV66s3lypnXvzPGQhAOB5AAaDgaIommZCwfD44ezhZDoRTyXiqVwm/0o0Nqy+qg4Oh4JhloVtbW3PAwAAECKwLNTr9zU3729u3q/X76MoihDS1NQEGtLr9QzDIISe8Uo6w1O1vNii5Wk7AAAhRNM0AEBLAABoAABAW0Nal+0U9QIAT8tb/nXQAp0kSZgQjucYloUIGmkjRJCm6QMvHWAYhqKMhGBMMMfvCjdiiFA71d7a2ko1JMsyRAgixDZEURREsAFuN9K07pDf2+3rUePDhYmiEo2UpspEFELRyHK1SkQBY5PN6U7nCpWZypHjR0tT5cJEkcd8OjO2sLjIQogJ9h1SRkfHE6OpbD4XUsJHjh/lsenU6TeOTVcQz+l6A/5AKJhIjmTzuXRmLKSEiSg4nM6govDYxPG8ZLFncvkTc7PdHrfL4yaiQBmNLo8nkRxhIeR4LjgQUoeGEskRbfV4e3lsOtj9ssPZyWOTLjgQPDZdUeOxQLB/UI2FlLASjYSUsMvjIqIAEfIF/Nl8To0PpzNjhYmiL+BXohGr3UYzDOI5iOCgGkukUulMRolGBtWYy+PevRk28diECdH1DfSXpsrpzJg6NMQ3ThHPdXW7kmPZDocNIuTyuAsTxUE1ZrZ0mC0dVrvNareZLR2CJKHGz2XzucrMdCI54nB2shBqJtni5Hi+LMlmXUgJl6bKpanyxGRZkmUN4A30FSYrtq5uiGDfQP9rC/Mn5mZ7A16aYXhsohkmPpqemVsQZStE0OVx+wJ+rVBz57HJ4exyOJ1YEnUOhwPtzgfP8ZwgCpiQxpzsjhPTmIrOg5293l5PT4/VbhNEQcuRZNlssbAsZFnW4ewkoiCbzZIsiaKICdZgEEGI0D/AJRlRNbSzmQAAAABJRU5ErkJggg==" nextheight="690" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Technically, the marketplace leverages the <strong>Agent Swarm Framework</strong> to coordinate multiple autonomous agents into <strong>collaborative task groups (Swarms)</strong>, enabling division of labor, parallelism, and fault tolerance.<br> Economically, it employs the <strong>Agent-to-Agent Protocol</strong>, achieving <strong>on-chain payments and automated incentives</strong> where users pay only for successful 
actions.</p><p>Together, these features form a <strong>complete on-chain loop — trend analysis → content generation → deployment</strong>, demonstrating how AI Agents can be <strong>financialized and integrated within the ComputeFi ecosystem</strong>.</p><h3 id="h-3-strategic-layer-hardware-accelerated-verifiable-inference-verifiable-ai" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>3. Strategic Layer: Hardware-Accelerated Verifiable Inference (Verifiable AI)</strong></h3><p>A core challenge in AI inference is <strong>trust</strong> — how to mathematically guarantee that an inference result is correct without exposing inputs or model weights.<br> <strong>Verifiable AI</strong> addresses this through <strong>zero-knowledge proofs (ZKPs)</strong>, ensuring cryptographic assurance over model outputs.<br> However, traditional <strong>ZKML proof generation</strong> is too slow for real-time use.<br> Cysic solves this via <strong>GPU hardware acceleration</strong>, introducing three key technical innovations:</p><ol><li><p><strong>Parallelized Sumcheck Protocol:<br></strong> Breaks large polynomial computations into tens of thousands of CUDA threads running in parallel, achieving near-linear speedup relative to GPU core count.</p></li><li><p><strong>Custom Finite Field Arithmetic Kernels:<br></strong> Deeply optimized across register allocation, shared memory, and warp-level parallelism to overcome modular arithmetic memory bottlenecks — keeping GPUs consistently saturated and efficient.</p></li><li><p><strong>End-to-End ZKPoG Acceleration Stack:<br></strong> Covers the full chain — from <strong>witness generation to proof creation and verification</strong>, compatible with <strong>Plonky2 and Halo2</strong> backends.<br> Benchmarking shows up to <strong>52× speedup</strong> over CPUs and <strong>~10× acceleration</strong> on CNN-4M models.</p></li></ol><p>Through this optimization suite, Cysic advances verifiable inference from being 
<strong>“theoretically possible but impractically slow”</strong> to <strong>“real-time deployable.”<br></strong> This dramatically reduces latency and cost, making <strong>Verifiable AI</strong> viable for the first time in real-world, latency-sensitive applications.</p><p>The platform supports <strong>PyTorch</strong> and <strong>TensorFlow</strong> — developers can simply wrap their model in a <strong>VerifiableModule</strong> to receive both inference results and corresponding cryptographic proofs <strong>without changing existing code</strong>.<br> On its roadmap, Cysic plans to extend support to <strong>CNN, Transformer, Llama, and DeepSeek</strong> models, release real-time demos for <strong>facial recognition and object detection</strong>, and open-source code, documentation, and case studies to foster community collaboration.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Function</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Engineering Difficulty</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Value</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Standard Product</strong></p></td><td colspan="1" rowspan="1"><p><strong>Serverless Inference</strong></p></td><td colspan="1" rowspan="1"><p>Standardized cloud inference APIs integrating mainstream open models; lowers developer entry barriers</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> (Moderate — compute scheduling cost)</p></td><td colspan="1" rowspan="1"><p>Foundational access point; meets rapid scaling 
needs; limited differentiation</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Experimental Application</strong></p></td><td colspan="1" rowspan="1"><p><strong>Agent Marketplace</strong></p></td><td colspan="1" rowspan="1"><p>Decentralized AI agent market; connects trend analysis → logo generation → on-chain publishing</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> (Low–moderate — model &amp; payment integration)</p></td><td colspan="1" rowspan="1"><p>Application experiment; showcases AgentFi and on-chain payment fusion</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Strategic Capability</strong></p></td><td colspan="1" rowspan="1"><p><strong>Verifiable AI</strong></p></td><td colspan="1" rowspan="1"><p>ZKP + GPU acceleration enabling real-time verifiable inference</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> (Very High — cryptography &amp; system-level optimization)</p></td><td colspan="1" rowspan="1"><p>Strategic pillar; provides trusted compute power and long-term moat</p></td></tr></tbody></table><p>Cysic AI’s three-layer roadmap forms a <strong>bottom-up evolution logic</strong>:</p><ul><li><p><strong>Serverless Inference</strong> solves <strong>“can it be used”</strong>,</p></li><li><p><strong>Agent Marketplace</strong> answers <strong>“can it be applied”</strong>,</p></li><li><p><strong>Verifiable AI</strong> ensures <strong>“can it be trusted.”</strong></p></li></ul><p>The first two serve as transitional and experimental stages, while the <strong>true strategic differentiation</strong> lies in <strong>Verifiable 
AI</strong> — where Cysic integrates <strong>ZK hardware acceleration</strong> and <strong>decentralized compute networks</strong> to establish its <strong>long-term competitive edge within the ComputeFi ecosystem</strong>.</p><h3 id="h-v-financialization-perspective-nft-based-compute-access-and-computefi-nodes" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Financialization Perspective: NFT-Based Compute Access and ComputeFi Nodes</strong></h3><p>Cysic Network introduces the <strong>“Digital Compute Cube” Node NFT</strong>, which tokenizes high-performance compute assets such as <strong>GPUs and ASICs</strong>, creating a <strong>ComputeFi gateway</strong> accessible to mainstream users.&nbsp; Each NFT functions as a <strong>verifiable node license</strong>, simultaneously representing <strong>yield rights, governance rights, and participation rights</strong>.<br> Users can delegate or proxy participation in <strong>ZK proving, AI inference, and mining tasks</strong> — without owning physical hardware — and earn <strong>$CYS rewards</strong> directly.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Name</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Price (USDC)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supply (Units)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>$CYS Allocation per NFT</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Tier 1</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tesseract</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">69</p></td><td colspan="1" rowspan="1"><p style="text-align: center">5,000</p></td><td colspan="1" 
rowspan="1"><p style="text-align: center">350 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Tier 2</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Monolith</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">99</p></td><td colspan="1" rowspan="1"><p style="text-align: center">7,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">450 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Tier 3</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Allspark</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">139</p></td><td colspan="1" rowspan="1"><p style="text-align: center">8,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">600 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">Tier 4</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>MotherBox</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">189</p></td><td colspan="1" rowspan="1"><p style="text-align: center">9,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">750 CYS / NFT</p></td></tr></tbody></table><p>The total NFT supply is <strong>29,000 units</strong>, with approximately <strong>16.45 million CYS</strong> distributed (1.65% of total supply, within the community allocation cap of 9%).<br> <strong>Vesting:</strong> 50% unlocked at TGE + 50% linearly over six months.<br> Beyond fixed token allocations, holders enjoy <strong>Multiplier boosts (up to 1.2×)</strong>, <strong>priority access to compute tasks</strong>, and <strong>governance weight</strong>.<br> Public sales have ended, and the NFTs are now <strong>tradable on OKX NFT Marketplace</strong>.</p><p>Unlike traditional cloud-compute rentals, the <strong>Compute Cube</strong> model represents <strong>on-chain ownership of physical compute 
infrastructure</strong>, combining:</p><ul><li><p><strong>Fixed token yield:</strong> Each NFT secures a guaranteed allocation of $CYS.</p></li><li><p><strong>Real-time compute rewards:</strong> Node-connected workloads (ZK proving, AI inference, crypto mining) distribute earnings directly to holders’ wallets.</p></li><li><p><strong>Governance and priority rights:</strong> Holders gain voting power in compute scheduling and protocol upgrades, along with early access privileges.</p></li><li><p><strong>Positive feedback loop:</strong> More workloads → more rewards → greater staking → stronger governance influence.</p></li></ul><p>In essence, <strong>Node NFTs</strong> convert fragmented GPU/ASIC resources into <strong>liquid on-chain assets</strong>, opening a <strong>new investment market for compute power</strong> in the era of surging AI and ZK demand.&nbsp; This <strong>ComputeFi flywheel</strong> — <em>more tasks → more rewards → stronger governance</em> — serves as a key bridge for expanding Cysic’s compute network to retail participants.</p><h3 id="h-vi-consumer-use-case-home-asic-miners-dogecoin-and-cysic" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. 
Consumer Use Case: Home ASIC Miners (Dogecoin &amp; Cysic)</strong></h3><p><strong>Dogecoin</strong>, launched in 2013, uses <strong>Scrypt PoW</strong> and has been <strong>merge-mined with Litecoin (AuxPoW)</strong> since 2014, sharing hashpower for stronger network security.&nbsp; Its tokenomics feature <strong>infinite supply</strong> with a <strong>fixed annual issuance of 5 billion DOGE</strong>, emphasizing <strong>community and payment utility</strong>.&nbsp; Among all ASIC-based PoW coins, Dogecoin remains the most popular after Bitcoin — its <strong>meme culture and loyal community</strong> sustain long-term ecosystem stickiness.</p><p>On the hardware side, <strong>Scrypt ASICs</strong> have fully replaced GPU/CPU mining, with industrial miners like <strong>Bitmain Antminer L7/L9</strong> dominating. However, unlike Bitcoin’s industrial-scale mining, <strong>Dogecoin still supports home mining</strong>, with devices such as <strong>Goldshell MiniDoge, Fluminer L1, and ElphaPex DG Home 1</strong> catering to retail miners, combining <strong>cash flow</strong> and <strong>community engagement</strong>.</p><p>For <strong>Cysic</strong>, entering the Dogecoin ASIC sector holds <strong>three strategic advantages</strong>:</p><ol><li><p><strong>Lower technical threshold:</strong> Scrypt ASICs are simpler than ZK ASICs, allowing faster validation of mass production and delivery capabilities.</p></li><li><p><strong>Mature cash flow:</strong> Mining generates immediate and stable revenue streams.</p></li><li><p><strong>Supply chain &amp; brand building:</strong> Dogecoin ASIC production strengthens Cysic’s manufacturing and market expertise, paving the way for future <strong>ZK/AI ASICs</strong>.</p></li></ol><p>Thus, <strong>home ASIC miners</strong> represent a <strong>pragmatic revenue base</strong> and a <strong>strategic stepping stone</strong> for Cysic’s long-term ZK/AI hardware roadmap.</p><h3 id="h-cysic-portable-dogecoin-miner-a-home-scale-innovation" 
class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Cysic Portable Dogecoin Miner: A Home-Scale Innovation</strong></h3><p>During <strong>Token2049</strong>, Cysic unveiled the <strong>DogeBox 1</strong>, a <strong>portable Scrypt ASIC miner</strong> for home and community users — designed as a <strong>verifiable consumer-grade compute terminal</strong>:</p><ul><li><p><strong>Portable &amp; energy-efficient:</strong> pocket-sized, 55 W power, suitable for households and small setups.</p></li><li><p><strong>Plug-and-play:</strong> managed via mobile app, built for global retail users.</p></li><li><p><strong>Dual functionality:</strong> mines <strong>DOGE</strong> and verifies <strong>DogeOS ZK proofs</strong>, achieving <strong>L1 + L2 security</strong>.</p></li><li><p><strong>Circular incentive:</strong> integrates <strong>DOGE mining + CYS rewards</strong>, forming a <strong>DOGE → CYS → DogeOS</strong> economic loop.</p></li></ul><p>This product synergizes with <strong>DogeOS</strong> (a ZK-based Layer-2 Rollup developed by the <strong>MyDoge team</strong>, backed by <strong>Polychain Capital</strong>) and <strong>MyDoge Wallet</strong>, enabling DogeBox users to mine DOGE <strong>and</strong> participate in ZK validation — combining <strong>DOGE rewards + CYS subsidies</strong> to reinforce engagement and integrate directly into the <strong>DogeOS ecosystem</strong>.</p><p>The <strong>Cysic Dogecoin home miner</strong> thus serves as both a <strong>practical cashflow device</strong> and a <strong>strategic bridge to ZK/AI ASIC deployment</strong>.<br> By merging <strong>mining + ZK verification</strong>, Cysic gains hands-on experience in market distribution and hardware scaling — while bringing a <strong>scalable, verifiable, community-driven L1 + L2 narrative</strong> to the Dogecoin ecosystem.</p><h3 id="h-vii-ecosystem-expansion-and-core-progress" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. 
Ecosystem Expansion and Core Progress</strong></h3><ol><li><p><strong>Collaboration with Succinct &amp; Boundless Prover Networks: </strong>Cysic operates as a <strong>multi-node Prover</strong> within <strong>Succinct Network</strong>, leveraging its GPU clusters to handle <strong>SP1 zkVM real-time proofs</strong> and co-develop GPU optimization layers. It has also joined the <strong>Boundless Mainnet Beta</strong>, providing <strong>hardware acceleration</strong> for its Proof Marketplace.<br></p></li><li><p><strong>Early Partnership with Scroll: </strong>In its early stages, Cysic provided <strong>high-performance ZK computation</strong> for <strong>Scroll</strong>, executing large-scale proving tasks on GPU clusters with low latency and cost, generating <strong>over 10 million proofs</strong>. This validated Cysic’s engineering capability and laid the foundation for its future compute-network development.<br></p></li><li><p><strong>Home Miner Debut at Token2049: </strong>Cysic’s <strong>DogeBox 1</strong> portable ASIC miner officially entered the <strong>Dogecoin/Scrypt compute market</strong>. Specs: 55 W power, 125 MH/s hashrate, <strong>100 × 100 × 35 mm</strong>, Wi-Fi + Bluetooth support, noise &lt; 35 dB — ideal for home or community use. Beyond DOGE/LTC mining, it supports <strong>DogeOS ZK verification</strong>, achieving <strong>dual-layer (L1 + L2) security</strong> and forming a <strong>DOGE → CYS → DogeOS</strong> incentive loop.<br></p></li><li><p><strong>Testnet Completion &amp; Mainnet Readiness: </strong>On <strong>Sept 18, 2025</strong>, Cysic completed <strong>Phase III: Ignition</strong>, marking the end of its testnet and its transition toward mainnet launch.</p></li></ol><p>The testnet onboarded <strong>Succinct, Aleo, Scroll, and Boundless</strong>, attracting 55,000+ wallets, 8 million transactions, and 100,000+ reserved high-end GPU devices. 
By completion, the testnet counted 1.36 million registered users, 13 million transactions, and 260,000+ total nodes (~223,000 Verifiers plus ~41,800 Provers).&nbsp; Roughly 733,000 $CYS, 733,000 $CGT, and 4.6 million FIRE were distributed as rewards, and 48,000+ users staked, validating both incentive sustainability and network scalability.</p><ol start="5"><li><p><strong>Ecosystem Integration Overview: </strong>&nbsp;According to Cysic’s official ecosystem map, the network is now <strong>interconnected with leading ZK and AI projects</strong>, underscoring its <strong>hardware compatibility and openness</strong> across the decentralized compute stack.<br> These integrations strengthen Cysic’s position as a <strong>foundational compute and hardware acceleration provider</strong>, supporting future expansion across <strong>ZK, AI, and ComputeFi</strong> ecosystems. <strong>Partner Categories:</strong></p><ul><li><p><strong>zkEVM / L2:</strong> zkSync, Scroll, Manta, Nil, Kakarot</p></li><li><p><strong>zkVM / Prover Networks:</strong> Succinct, Risc0, Nexus, Axiom</p></li><li><p><strong>zk Coprocessors:</strong> Herodotus, Axiom</p></li><li><p><strong>Infra / Cross-chain:</strong> zkCloud, ZKM, Polyhedra, Brevis</p></li><li><p><strong>Identity &amp; Privacy:</strong> zkPass, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Human.tech">Human.tech</a></p></li><li><p><strong>Oracles:</strong> Chainlink, Blocksense</p></li><li><p><strong>AI Ecosystem:</strong> Talus, Modulus Labs, Gensyn, Aspecta, Inference Labs</p></li></ul></li></ol><h3 id="h-viii-token-economics-design" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. 
Token Economics Design</strong></h3><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/572d78ffca93f04338bd546d0eda379c39065c47485d2e315ae0c533748fd2f5.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAKCAIAAABaL8vzAAAACXBIWXMAAAsTAAALEwEAmpwYAAABgklEQVR4nJ1SIY8qMRAuDodBIBDIFTUVmBWINVgMArP/Yc0adM2aPpIqzF5SVVNTVVNTg6qpqUJh1lQ14bLmLkffbe6Rx4O8T02mM/P1m2/Ax/+iv+FpGXhlVtd1jDEhhFJKSmmMGUYnmn8wvUTgnNtut+CGyWRS13WMEWNMKX0qAjz6yM/YOae1LopiPB5jjL33Xdc1TYMQWq1WEEJjzCM1Lynov9sYY03TVFV1Op3SUwjh8Ovw1r5d369/VQP6vrfWpuUqpbz3qc05572PMQ4ecM6rqprNZnmeU0qPxyMhhHOutZZSCiHquiaEMMbSkN8EMUbO+WKxAACs12uttXOubVsI4Wg04pz3fU8pxRhPp1MAQFEUSilCyHK5zPO8LEsIIUJICJFlGQBgs9k45wY1XwqklLvdLsuysiyNMZxzCOF8Pkc3AAD2+30IoW1bhFDyI4Rwt0NrrZQy0Vtr//AgxpgIB5fCNy6Xy/l8jjekmiG+Q8r/nPbkiu4O6VHmacEnl7gbsTR7C+oAAAAASUVORK5CYII=" nextheight="367" nextwidth="1150" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Cysic Network adopts a <strong>dual-token system</strong>: the network token <strong>$CYS</strong> and the governance token <strong>$CGT</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bc716e2fa6f7b0fa9cbb019d596dfcd5fcf680219fb9fad0d170e7d0d8a6c0a6.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAACqUlEQVR4nLWTsUscQRTG5x+Ixi4iB+LBtVscGBYOBOstrrKzu+LgCouIYClYWBxYnrCygoIB60HYwMAkDDjFFFMcbDGRiUxkiy2WOMUUW0xw3+lpMHgG8uv2zb753nvzPuT/M8h7Px6PKaXiCZxzQkhZlrNcIaVMa4QQWmshBCGEUuqcmwikaVrWaK1vf94aY7z3nHOt9SwCGGPnHMY4DMOV5srh4aH3njGmlJoKeO8ppYsfFufn5rvdLrQFSq/COVdKbX/abtf0ej2tdVYzFTDGFEUBUaWUtZYxNmMHlNI0Td/VLC0tIYT29/efCWitGWMwu9FoJKVkNTDEV8myjHOe5zmlNEkSa22WZVLKqqomAo9Ya6WU3ns4eyvWWoyxUirPc601TPheoHrAOUcIieMYY0wImUWpqrOcc9Za7/3BwQFCqNFoIIQ2Njb+7MA5xzmPogghFIbhW1txzmVZ5pzL89xaO13Tx1ustbBbT4NVzYu1Q8lhGO7u7vb7/cFgAA8LR9M3qKqqKAprrdb68vISSnjVZVWdf3p6GgRBs9lst9tRFCVJUlUVdAC/3QsIISiljLGLi4vhcEgI4ZyD+1zddVmW9gGwZFmWv+7u4Ap4VWMMpXQ0GqVp+pg+EaCUeu+LooiiqNFo7OzsQHA8HjPGUM3CwsLq6sdut/t+bg4iCCFo9Pz8HKpstVpBEMRxDJ9THzDGvPd7e3udTmdtbW1zc9MYI4Q4jo9vftwwxjDGX9L06urq29evn09OzuL45OjoLEmUUoyx4XDovd/a2lpfXw+CoN/vl2UJhp0IEEKstYPBoNlstlqtTqdDCJFSXn+/nmV5YBq9Xq/dbi8vL4dhKKUUQkwFwIQwSgD+eFyGvwECWmtIN8ZAuhACDPvMB0936994caF/A58kY7sPaR3MAAAAAElFTkSuQmCC" nextheight="375" nextwidth="700" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>$CYS (Network Token):<br></strong> A native, transferable asset used for paying transaction fees, node staking, block rewards, and network incentives—ensuring network activity and economic security. $CYS is also the primary incentive for compute providers and verifiers. Users can stake $CYS to obtain governance weight and participate in resource allocation and governance decisions of the <strong>Computing Pool</strong>.</p><p><strong>$CGT (Governance Token):<br></strong> A non-transferable asset minted <strong>1:1 by locking $CYS</strong>, with a longer unbonding period to participate in <strong>Computing Governance (CG)</strong>. $CGT reflects compute contribution and long-term participation. 
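</p><p>The lock-and-mint mechanic described above can be sketched as a minimal state machine. This is a hypothetical illustration under stated assumptions (the class and method names such as <code>lock_cys</code> and the 21-day unbonding delay are invented for the sketch), not Cysic’s actual contract logic:</p>

```python
class ComputeGovernance:
    """Minimal sketch of the $CYS -> $CGT lock-and-mint mechanic.

    Hypothetical illustration only: method names, the unbonding delay,
    and the bookkeeping are assumptions, not Cysic's implementation.
    """

    UNBONDING_PERIOD = 21 * 24 * 3600  # assumed unbonding delay, in seconds

    def __init__(self):
        self.cys = {}        # transferable network-token balances
        self.cgt = {}        # non-transferable governance balances
        self.unbonding = {}  # account -> (amount, release_timestamp)

    def lock_cys(self, account: str, amount: int) -> None:
        """Lock $CYS and mint $CGT 1:1 against it."""
        if self.cys.get(account, 0) < amount:
            raise ValueError("insufficient $CYS")
        self.cys[account] -= amount
        self.cgt[account] = self.cgt.get(account, 0) + amount

    def begin_unbond(self, account: str, amount: int, now: int) -> None:
        """Burn $CGT and schedule the locked $CYS for later release."""
        if self.cgt.get(account, 0) < amount:
            raise ValueError("insufficient $CGT")
        self.cgt[account] -= amount
        self.unbonding[account] = (amount, now + self.UNBONDING_PERIOD)

    def claim(self, account: str, now: int) -> None:
        """Release $CYS once the unbonding period has elapsed."""
        amount, release_at = self.unbonding.get(account, (0, 0))
        if amount and now >= release_at:
            self.cys[account] = self.cys.get(account, 0) + amount
            del self.unbonding[account]
```

<p>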
Compute providers must maintain a reserve of $CGT as an admission bond to deter malicious behavior.</p><p>During network operation, compute providers connect their resources to Cysic Network to serve ZK, AI, and crypto-mining workloads. Revenue sources include block rewards, external project incentives, and compute governance distributions. <strong>Scheduling and reward allocation</strong> are dynamically adjusted by multiple factors, with <strong>external project incentives</strong> (e.g., ZK, AI, Mining rewards) as a key weight.</p><h3 id="h-ix-team-background-and-fundraising" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IX. Team Background &amp; Fundraising</strong></h3><p><strong>Co-founder &amp; CEO: Xiong (Leo) Fan.<br></strong> Previously an Assistant Professor of Computer Science at Rutgers University (USA); former researcher at Algorand and Postdoctoral Researcher at the University of Maryland; Ph.D. from Cornell University. Leo’s research focuses on cryptography and its intersections with formal verification and hardware acceleration, with publications at top venues such as <strong>IEEE S&amp;P, ACM CCS, POPL, Eurocrypt, and Asiacrypt</strong>, spanning homomorphic encryption, lattice cryptography, functional encryption, and protocol verification. He has contributed to multiple academic and industry projects, combining theoretical depth with systems implementation, and has served on program committees of international cryptography conferences.</p><p>According to public information on LinkedIn, the Cysic team blends backgrounds in <strong>hardware acceleration, cryptographic research, and blockchain applications</strong>. Core members have industry experience in chip design and systems optimization and academic training from leading institutions across the US, Europe, and Asia. 
The team’s strengths are complementary across <strong>hardware R&amp;D, ZK optimization, and business operations</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/1b6f00d17562402a5d2b65b008e008582147536abd0ebe6adaeaab5ddc953c70.png" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAKCAIAAABaL8vzAAAACXBIWXMAAAsTAAALEwEAmpwYAAACn0lEQVR4nJWTwUsbQRTGFyEsDNtxhmE2ZE1Tu4QlBLYIexHpYAmshUgMCHYPQZDAwrpgtTdhYYunGsUGUyslbLv1sIRaUg9KIIdaD0LwlKOXBg8evQj+ARYz2ov10N9hGN7Me9/jfTMCeQCMsSiK8A5JkiCEAIC/EXCHJEkPFSGECPdDlFIIoWEYtdrmVGGqVCpNvJyo1+uO46ysrFQqleXlZdd1362uep63tr7+yrIghJTS/xCIxWJl2766uvp5eNjpdHab33tnZ+12+/z8/FO9/nF7OwiC6+vrbrd7cXHxY29PkqQHBei/IIQofZLJpKqqqVTKMAxd1w3DSCQS8XjcNM1kMinL8sjIiKZpGGPe2f06AkKIz5HPV7oDAIAx5hsAAEJIFEVBELgBgiDwLG4GF+CeiaIY6wMAuBGwLGt/fz8Iglpts1qtRlHked5OuON5nizLvu+3Wq0oihBClUql1+tls1nTNLvdbi6X0/Vnsiw7jsMYI4R0Oh3f96vVarPZ9H3fsiyMsTA+Pu44juu6S0tLpVLJdV3bthdfL+ZyOQhhOp02TXNubi4WixWLxSiKMpmMqqpbWx8KhcLw8BOMMWNM0zRCSBRFZp9yuey67vT09I0AAEDXddu2NU1LJBL5fJ4xZppmsVgcGxsTRfEFY1OTk4yxdDqdzWZHR0cRQqnUsKIMDSUfP1XVdB9KaSaTmZ2dnZmZURRlYGDgdkQAANu2e73fhUIhn8+32+0wDE9PT1utVhiGCKGNtbX2wcG8O//r6Oj4+Pjb7i6lNPjcWK3UFt+83Xi/GYZfg+CLpmmMPT85Obm8vGw0GgsLC7cmE0K4UQgh/r8wxpRSvlJKEUKDg4P8iDvcT3l0cw/dwk2GEPIi8XhcURT+TP8A2uXfaNM3d8oAAAAASUVORK5CYII=" nextheight="438" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Fundraising:<br></strong> In <strong>May 2024</strong>, Cysic announced a <strong>$12M Pre-A round</strong> co-led by <strong>HashKey Capital</strong> and <strong>OKX Ventures</strong>, with participation from <strong>Polychain, IDG, Matrix Partners, SNZ, ABCDE, Bit Digital, Coinswitch, </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Web3.com"><strong>Web3.com</strong></a><strong> Ventures</strong>, as well 
as notable angels including <strong>George Lambeth</strong> (early investor in Celestia/Arbitrum/Avax) and <strong>Ken Li</strong> (Co-founder of Eternis).</p><h3 id="h-x-competitive-landscape-in-zk-hardware-acceleration" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>X. Competitive Landscape in ZK Hardware Acceleration</strong></h3><h4 id="h-1-direct-competitors-hardware-accelerated" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>1) Direct Competitors (Hardware-Accelerated)</strong></h4><p>In the <strong>hardware-accelerated prover</strong> and <strong>ComputeFi</strong> track, Cysic’s core peers include <strong>Ingonyama, Irreducible (formerly Ulvetanna), Fabric Cryptography, and Supranational</strong>—all focusing on “hardware + networks that accelerate ZK proving.”</p><ul><li><p><strong>Cysic:</strong> Full-stack (GPU + ASIC + network) with a <strong>ComputeFi</strong> narrative. Strengths lie in the tokenization/financialization of compute; challenges include market education and hardware mass-production.</p></li><li><p><strong>Irreducible:</strong> Strong theory + engineering; exploring new algebraic structures (<strong>Binius</strong>) and zkASIC. 
High theoretical innovation; commercialization pace may be constrained by FPGA economics.</p></li><li><p><strong>Ingonyama:</strong> Open-source friendly; <strong>ICICLE</strong> SDK is a de-facto GPU ZK acceleration standard with high ecosystem adoption, but <strong>no in-house hardware</strong>.</p></li><li><p><strong>Fabric:</strong> “Hardware–software co-design” path; building a <strong>VPU (Verifiable Processing Unit)</strong> general crypto-compute chip—business model akin to “CUDA + NVIDIA,” targeting a broader cryptographic compute market.</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical Path</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hardware Direction</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning / Model</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Cysic</strong></p></td><td colspan="1" rowspan="1"><p>GPU → ASIC; tokenizes compute via ComputeFi</p></td><td colspan="1" rowspan="1"><p>In-house ASIC (C1 + ZK Air + ZK Pro) plus large-scale GPU clusters</p></td><td colspan="1" rowspan="1"><p>ComputeFi: assetized compute + real-time ZK proving network</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Irreducible (ex-Ulvetanna)</strong></p></td><td colspan="1" rowspan="1"><p>Math-driven: <strong>Binius</strong> (binary polynomial commitments) → hardware-aware</p></td><td colspan="1" rowspan="1"><p>Early FPGA; now Binius + HW/SW co-design</p></td><td colspan="1" rowspan="1"><p>Algorithm-first; hardware as “experimental validation platform”; research-infra flavor</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ingonyama</strong></p></td><td colspan="1" rowspan="1"><p>Software-first: <strong>ICICLE</strong> CUDA libs for MSM/FFT on 
GPUs</p></td><td colspan="1" rowspan="1"><p>No in-house hardware (leverages existing GPUs)</p></td><td colspan="1" rowspan="1"><p>Open GPU acceleration toolchain for developers; not building chips</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Fabric Cryptography</strong></p></td><td colspan="1" rowspan="1"><p>HW/SW co-design: <strong>VPU</strong> between GPU flexibility and ASIC performance</p></td><td colspan="1" rowspan="1"><p>In-house VPU + boards (FC1000 / VPU8060 / Byte Smasher)</p></td><td colspan="1" rowspan="1"><p>Platform play: chips + compiler + libs + cloud services</p></td></tr></tbody></table><h4 id="h-2-indirect-competitors-zk-marketplace-prover-network-zk-coprocessor" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>2) Indirect Competitors (ZK Marketplace / Prover Network / zk Coprocessor)</strong></h4><p>In <strong>ZK Marketplaces, Prover Networks, and zk Coprocessors</strong>, Cysic currently acts more as an <strong>upstream compute supplier</strong>, while <strong>Succinct, Boundless, Risc0, Axiom</strong> target the same end customers (L2s, zkRollups, zkML) via zkVMs, task routing, and open markets.</p><ul><li><p><strong>Short term:</strong> Cooperation dominates. Succinct routes tasks; Cysic supplies high-performance provers. zk Coprocessors may offload tasks to Cysic.</p></li><li><p><strong>Long term:</strong> If <strong>Boundless</strong> and <strong>Succinct</strong> scale their marketplace models (auction vs. routing) while <strong>Cysic</strong> also builds a marketplace, direct competition at the <strong>customer access layer</strong> is likely. 
Similarly, a mature zk Coprocessor loop could disintermediate direct hardware access, risking Cysic’s marginalization as an “upstream contractor.”</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Model / Product</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relationship to Cysic</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Cysic</strong></p></td><td colspan="1" rowspan="1"><p>ZK hardware acceleration + Prover/Verifier network</p></td><td colspan="1" rowspan="1"><p>High-performance ZK proof generation on GPU/ASIC; operates prover/verifier node network</p></td><td colspan="1" rowspan="1"><p>With <strong>Succinct</strong>: upstream prover; with <strong>Boundless</strong>: potential partner/competitor</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Succinct</strong></p></td><td colspan="1" rowspan="1"><p>General zkVM (<strong>SP1</strong>) + Prover Network</p></td><td colspan="1" rowspan="1"><p>Open zkVM + decentralized Prover Marketplace; auto-routes optimal paths</p></td><td colspan="1" rowspan="1"><p>Cysic is one prover among many, supplying high-perf compute</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Boundless</strong></p></td><td colspan="1" rowspan="1"><p>Open <strong>Proof Marketplace</strong></p></td><td colspan="1" rowspan="1"><p>Reverse Dutch Auction matching provers with tasks</p></td><td colspan="1" rowspan="1"><p>Cysic’s provers can connect; competition emerges if Cysic builds its own market</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>zk Coprocessors (Axiom, etc.)</strong></p></td><td colspan="1" rowspan="1"><p>Outsourced ZK compute module</p></td><td 
colspan="1" rowspan="1"><p>Off-chain compute + on-chain verification APIs; devs avoid hardware complexity</p></td><td colspan="1" rowspan="1"><p>Short term: task source; long term: possible disintermediation</p></td></tr></tbody></table><p><br></p><h3 id="h-xi-conclusion-business-logic-engineering-execution-and-potential-risks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>XI. Conclusion: Business Logic, Engineering Execution, and Potential Risks</strong></h3><p><strong>Business Logic<br></strong> Cysic centers on the <strong>ComputeFi</strong> narrative—connecting compute from <strong>hardware production</strong> and <strong>network scheduling</strong> to <strong>financialized assets</strong>.</p><ul><li><p><strong>Short term:</strong> Leverage GPU clusters to meet current ZK prover demand and generate revenue.</p></li><li><p><strong>Mid term:</strong> Enter a mature cash-flow market with <strong>Dogecoin home ASIC miners</strong> to validate mass production and tap community-driven retail hardware.</p></li><li><p><strong>Long term:</strong> Develop dedicated <strong>ZK/AI ASICs</strong>, combined with <strong>Node NFTs / Compute Cubes</strong> to assetize and marketize compute—building an infrastructure-level moat.</p></li></ul><p><strong>Engineering Execution</strong></p><ul><li><p><strong>Hardware:</strong> Completed GPU-accelerated prover/verifier optimizations (MSM/FFT parallelization); disclosed ASIC R&amp;D (1.3M Keccak/s prototype).</p></li><li><p><strong>Network:</strong> Built a <strong>Cosmos SDK-based</strong> validation chain for prover accounting and task distribution; tokenized compute via <strong>Compute Cube / Node NFTs</strong>.</p></li><li><p><strong>AI:</strong> Released the <strong>Verifiable AI</strong> framework; accelerated Sumcheck and finite-field arithmetic via GPU parallelism for trusted inference—though differentiation from peers remains limited.</p></li></ul><p><strong>Potential 
Risks</strong></p><ul><li><p><strong>Market education &amp; demand uncertainty:</strong> ComputeFi is new; it’s unclear whether customers will invest in compute via NFTs/tokens.</p></li><li><p><strong>Insufficient ZK demand:</strong> The prover market is early; current GPU capacity may satisfy most needs, limiting ASIC shipment scale and revenue.</p></li><li><p><strong>ASIC engineering &amp; mass-production risk:</strong> Proving systems aren’t fully standardized; ASIC R&amp;D takes <strong>12–18 months</strong> with high tape-out costs and uncertain yields—impacting commercialization timelines.</p></li><li><p><strong>Home-miner capacity constraints:</strong> The household market is limited; electricity costs and community-driven behavior skew toward “enthusiast consumption,” hindering stable scale revenue.</p></li><li><p><strong>Limited AI differentiation:</strong> Despite GPU parallel optimizations, cloud inference services are commoditized and the Agent Marketplace has low barriers—overall defensibility remains modest.</p></li><li><p><strong>Competitive dynamics:</strong> Long-term clashes at the <strong>customer access layer</strong> with <strong>Succinct/Boundless</strong> (marketplaces) or mature <strong>zk Coprocessors</strong> could push Cysic into an upstream “contract manufacturer” role.</p></li></ul><p><strong>Disclaimer:<br></strong> This article was produced with assistance from <strong>ChatGPT-5</strong> as an AI tool. The author has endeavored to proofread and ensure the accuracy of all information, yet errors may remain. Note that in crypto markets, a project’s fundamentals often diverge from secondary-market price performance. The content herein is for <strong>information aggregation and academic/research exchange only</strong>; it does <strong>not</strong> constitute investment advice nor a recommendation to buy or sell any token.</p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>cysic</category>
            <category>zk</category>
            <category>ai</category>
            <category>gpu</category>
            <category>asic</category>
            <category>doge</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/f08fb626749b6c772ab93a1dc872f7dd2d9c35d32b0be409e5db601be1c9004b.jpg" length="0" type="image/jpg"/>
        </item>
        <item>
            <title><![CDATA[Cysic Research Report: The ComputeFi Path of ZK Hardware Acceleration]]></title>
            <link>https://paragraph.com/@0xjacobzhao/cysic研报：zk-硬件加速的computefi之路</link>
            <guid>5dIvpyvsUE1H6yrzTwRm</guid>
            <pubDate>Wed, 15 Oct 2025 15:47:47 GMT</pubDate>
            <description><![CDATA[Zero-knowledge proofs (ZK), a new generation of cryptographic and scaling infrastructure, have shown potential in scaling, privacy computing, zkML, and cross-chain verification. Yet the heavy computation and high latency of proof generation remain the bottleneck to industrialization. ZK hardware acceleration is therefore critical: GPUs lead on generality and iteration speed, ASICs point to the endgame on energy efficiency, and FPGAs sit in between on programmability and efficiency, together forming the foundation for deployment. This article deconstructs Cysic Network: building a general Proof Layer on PoC consensus with Prover/Verifier roles, advancing the financialization of compute via Compute Cube node NFTs, and expanding into Verifiable AI and serverless inference, while validating mass production and cash flow with the DogeBox 1 home-grade Scrypt ASIC. The report also assesses ecosystem partnerships, the token model, competitors, and risks, and offers a feasibility judgment on the ComputeFi path.]]></description>
            <content:encoded><![CDATA[<p>Zero-knowledge proofs (ZK), as a new generation of cryptographic and scaling infrastructure, have shown broad potential in blockchain scaling, privacy computing, and emerging applications such as zkML and cross-chain verification. However, proof generation is computationally enormous and high-latency, which has become the biggest bottleneck to industrial adoption. ZK hardware acceleration has emerged as the critical link in this context. Along this path, GPUs stand out for generality and iteration speed, ASICs pursue extreme energy efficiency and performance at scale, and FPGAs, as an intermediate form, combine flexible programmability with relatively high efficiency; together the three form the hardware foundation that pushes zero-knowledge proofs into production.</p><h2 id="h-zk" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>I. The Industry Landscape of ZK Hardware Acceleration</strong></h2><p>GPUs, FPGAs, and ASICs are the three mainstream approaches to hardware acceleration: GPUs are widely deployed in AI, ZK, and beyond thanks to a general-purpose parallel architecture and a mature ecosystem; FPGAs suit rapid algorithm iteration and low-latency scenarios thanks to reconfigurability; ASICs achieve extreme performance and energy efficiency through dedicated circuits and are the final form for scale and long-term infrastructure.</p><ul><li><p><strong>GPU (Graphics Processing Unit):</strong> a general-purpose parallel processor, originally optimized for graphics rendering, now widely used in AI, ZK, and scientific computing.</p></li><li><p><strong>FPGA (Field Programmable Gate Array):</strong> a programmable hardware circuit, reconfigurable again and again at the logic-gate level "like Lego," sitting between general-purpose processors and dedicated circuits.</p></li><li><p><strong>ASIC (Application-Specific Integrated Circuit):</strong> a chip customized for one specific task, burned once with fixed functionality; the highest performance and energy efficiency, but the least flexibility.</p></li></ul><p><strong>GPUs, the market mainstream</strong>: GPUs have become the core compute resource for AI and ZK alike. In AI, backed by their parallel architecture and mature ecosystem (CUDA, PyTorch, TensorFlow), GPUs are nearly irreplaceable and will remain the long-term mainstream for training and inference. In ZK, cost and availability make GPUs the best option at this stage, but on tasks such as large-integer modular arithmetic, MSM, and FFT/NTT they are constrained by memory and bandwidth; their energy efficiency and economics at scale fall short, so more specialized hardware is still needed in the long run.</p><p><strong>FPGAs, the flexible option:</strong> Paradigm bet on FPGAs in 2022, arguing they occupied a "sweet spot" between flexibility, efficiency, and cost. FPGAs do offer flexible programmability, short development cycles, and reusable hardware, suiting ZK proof-algorithm iteration, prototype validation, low-latency scenarios (high-frequency trading, 5G base stations), power-constrained edge computing, and high-security encryption. On performance and economics at scale, however, FPGAs struggle to compete with GPUs and ASICs. Their strategic position is closer to a "validation and iteration platform while algorithms are still unsettled," plus a long-term necessity in a few niche industries.</p><p><strong>ASICs, the endgame:</strong> ASICs are already highly mature in cryptocurrency mining (Bitcoin's SHA-256, Litecoin/Dogecoin's Scrypt). By hardening algorithms into circuits, ASICs deliver order-of-magnitude advantages in performance and energy efficiency and have become the sole dominant force in mining. They show equally large potential in ZK proving (e.g., Cysic) and AI inference (e.g., Google TPU, Cambricon). In ZK proving, though, algorithms and operators are not yet fully standardized and large-scale demand is still taking shape. Once standards solidify, ASICs could reshape ZK compute infrastructure just as mining ASICs did, on the strength of 10–100x performance and efficiency advantages plus low marginal cost after mass production. In AI, since algorithms iterate quickly and training relies heavily on matrix parallelism, GPUs will keep dominating training, while ASICs will hold irreplaceable value in fixed tasks and inference at scale.<br></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>FPGA</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ASIC</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p>Performance / Cost</p><p>(Perf/$)</p></td><td colspan="1" rowspan="1"><p><strong>Strong</strong>: boosted by AI/gaming economies of scale; general-purpose cards (RTX/A/H series) offer high value</p></td><td colspan="1" rowspan="1"><p><strong>Fair</strong>: throughput at the same price usually trails GPUs</p></td><td colspan="1" rowspan="1"><p><strong>Best</strong>: low amortized cost after mass production; dominant long term</p></td></tr><tr><td colspan="1" rowspan="1"><p>Performance / Power</p><p>(Perf/W)</p></td><td colspan="1" rowspan="1"><p><strong>Moderate</strong>: high power draw under ZK workloads</p></td><td colspan="1" rowspan="1"><p><strong>Moderate to good</strong>: some designs beat GPUs</p></td><td colspan="1" rowspan="1"><p><strong>Best</strong>: customized for MSM/FFT/hashing, leading energy efficiency</p></td></tr><tr><td colspan="1" rowspan="1"><p>Flexibility</p></td><td colspan="1" rowspan="1"><p><strong>Highest</strong>: quickly adapts to Plonky2/Halo2/HyperPlonk, etc.</p></td><td colspan="1" rowspan="1"><p><strong>High</strong>: reconfigurable, but requires RTL/HDL skills</p></td><td colspan="1" rowspan="1"><p><strong>Lowest</strong>: logic is hardened; needs an abstract ISA to support multiple proof systems</p></td></tr><tr><td colspan="1" rowspan="1"><p>Time to market</p></td><td colspan="1" rowspan="1"><p><strong>Fastest</strong>: off-the-shelf purchase + the CUDA ecosystem</p></td><td colspan="1" rowspan="1"><p><strong>Moderate</strong>: weeks to months from board to stable deployment</p></td><td colspan="1" rowspan="1"><p><strong>Slowest</strong>: 12–18 months to tape-out</p></td></tr><tr><td colspan="1" rowspan="1"><p>Scalability</p></td><td colspan="1" rowspan="1"><p><strong>Limited</strong>: constrained by PCIe/chassis form factors</p></td><td colspan="1" rowspan="1"><p><strong>Strong</strong>: supports custom interconnects/pipelines</p></td><td colspan="1" rowspan="1"><p><strong>Strongest</strong>: interconnects/form factors customizable per workload</p></td></tr><tr><td colspan="1" rowspan="1"><p>Ecosystem &amp; tooling</p></td><td colspan="1" rowspan="1"><p><strong>Most mature</strong>: CUDA, cuFFT, MSM libraries, rich community experience</p></td><td colspan="1" rowspan="1"><p><strong>Niche</strong>: less mature toolchain, lower talent density</p></td><td colspan="1" rowspan="1"><p><strong>Scarce early on</strong>: must build its own software stack; stable once formed</p></td></tr><tr><td colspan="1" rowspan="1"><p>Best uses</p></td><td colspan="1" rowspan="1"><p><strong>Production-grade ZK provers, rapid iteration, decentralized GPU networks</strong></p></td><td 
colspan="1" rowspan="1"><p><strong>Algorithm validation/prototyping; low-latency/custom-interconnect scenarios</strong></p></td><td colspan="1" rowspan="1"><p><strong>zkML at scale, recursive proofs, long-term infrastructure</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p>Key risks</p></td><td colspan="1" rowspan="1"><p>Ever-rising power and rack-space costs</p></td><td colspan="1" rowspan="1"><p>Scarce talent, high per-unit price, poor economies of scale</p></td><td colspan="1" rowspan="1"><p>Algorithm-change risk; capital and timeline pressure</p></td></tr></tbody></table><p>Along the evolution of ZK hardware acceleration, GPUs are currently the optimal choice, balancing cost, availability, and development efficiency and suiting fast launches and iteration; FPGAs are more of a "specialized tool," valuable for ultra-low latency, small-batch custom interconnects, and prototype validation, but hard-pressed to match GPU economics; in the long run, as ZK standards stabilize, ASICs will become the industry's main force thanks to superior performance/cost and energy efficiency. The overall path: rely on GPUs in the short term to capture market and revenue, use FPGAs in the mid term for validation and interconnect optimization, and bet on ASICs in the long term to build a compute moat.</p><h2 id="h-zk" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>II. A Hardware View: The Underlying Technical Barriers of ZK Acceleration</strong></h2><p>Cysic's core advantage lies in <strong>hardware acceleration of zero-knowledge proofs (ZK)</strong>. In the representative paper <em>ZK Hardware Acceleration: The Past, the Present and the Future</em>, the team observes that <strong>GPUs</strong> offer flexibility and cost efficiency while <strong>ASICs</strong> win on energy efficiency and peak performance, at the price of development cost and programmability. Cysic pursues a dual-track route of <strong>ASIC innovation + GPU acceleration</strong>, from custom chips to a general-purpose SDK, pushing ZK from "verifiable" to "usable in real time."</p><p><strong>1. The ASIC Route: the Cysic C1 Chip and Dedicated Devices</strong></p><p>Cysic's in-house <strong>C1 chip</strong> is built on a zkVM architecture with high bandwidth and flexible programmability. On this basis, Cysic plans two hardware products: ZK Air (portable) and ZK Pro (high performance).</p><ul><li><p><strong>ZK Air</strong>: a portable accelerator roughly the size of an iPad charger, plug-and-play, aimed at lightweight verification and development;</p></li><li><p><strong>ZK Pro</strong>: a high-performance system combining the C1 chip with front-end acceleration modules, positioned for large-scale zkRollup, zkML, and similar scenarios.</p></li></ul><p>Cysic's research directly underpins its ASIC route. The team proposed <strong>Hypercube IR</strong>, a ZK-specific intermediate representation that abstracts proof circuits into regular parallel patterns, lowering the barrier to cross-hardware porting while explicitly preserving modular arithmetic and memory-access patterns in circuit logic for hardware recognition and optimization. In the <strong>Million Keccak/s</strong> experiment, a single in-house C1 chip achieved about <strong>1.31M Keccak proofs per second (roughly a 13x speedup)</strong>, demonstrating the potential of dedicated hardware in energy efficiency and throughput. Its <strong>Hyperplonk hardware analysis</strong> found that MSM/MLE parallelize more readily while Sumcheck remains the bottleneck. Overall, Cysic is forming a complete methodology across compiler abstraction, hardware validation, and protocol adaptation, laying the groundwork for productization.</p><p><strong>2. 
The GPU Route: a General SDK + the ZKPoG End-to-End Stack</strong></p><p>On the GPU side, Cysic advances both a <strong>general acceleration SDK</strong> and the <strong>ZKPoG full-pipeline optimization stack</strong>:</p><ul><li><p><strong>General GPU SDK</strong>: built on an in-house CUDA framework, compatible with backends such as Plonky2, Halo2, Gnark, and Rapidsnark; it outperforms open-source alternatives, supports multiple GPU models, and emphasizes <strong>compatibility and ease of use</strong>.</p></li><li><p><strong>ZKPoG (Zero-Knowledge Proof on GPU)</strong>: an end-to-end GPU stack co-developed with Tsinghua University, the first to optimize the full pipeline from witness generation to polynomial computation. On consumer GPUs it reaches up to <strong>52x</strong> speedup (<strong>22.8x</strong> on average) and extends circuit scale by 1.6x; it has been validated on applications including SHA256, ECDSA, and MVM.</p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ASIC Route</strong></p><p style="text-align: center"><strong>(Cysic C1 / ZK Air / ZK Pro)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU Route</strong></p><p style="text-align: center"><strong>(General SDK + ZKPoG)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Custom-built peak performance for large-scale ZKP workloads</p></td><td colspan="1" rowspan="1"><p style="text-align: center">General acceleration compatible with mainstream proof systems</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Features</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">- C1 chip built on a zkVM architecture&nbsp;</p><p style="text-align: center">- Hypercube IR optimizes circuit logic&nbsp;</p><p style="text-align: center">- 13x speedup per chip, supporting real-time proving</p></td><td colspan="1" rowspan="1"><p style="text-align: center">- In-house CUDA SDK supporting Plonky2/Halo2 and other backends - ZKPoG runs witness → polynomial computation end-to-end on GPU - 22.8x over CPU (up to 52x)</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Product form</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">- ZK Air (portable accelerator)</p><p style="text-align: center">&nbsp;- ZK Pro (high-performance system)</p></td><td colspan="1" rowspan="1"><p style="text-align: 
center">- General GPU SDK&nbsp;</p><p style="text-align: center">- ZKPoG end-to-end stack</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strengths</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Extreme energy efficiency, hardware-friendly, purpose-built optimization</p></td><td colspan="1" rowspan="1"><p style="text-align: center">High flexibility, fast iteration, low development barrier</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Weaknesses</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">High cost and long R&amp;D cycles; limited flexibility; heavy ecosystem dependence; products still at the planning stage</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Energy efficiency below ASICs; VRAM limits cap scale;</p><p style="text-align: center">performance varies widely across GPUs; many competing solutions</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Best-fit scenarios</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Long-term, stable, high-throughput needs: zkRollup mainnets, large zkML models, recursive proofs</p></td><td colspan="1" rowspan="1"><p style="text-align: center">R&amp;D and flexibility first: testing new ZK systems, cross-chain verification, small-scale zkML inference, identity authentication</p></td></tr></tbody></table><p>Cysic's core competitiveness lies in <strong>hardware–software co-design</strong>. Its in-house <strong>ZK ASICs, GPU clusters, and portable miners</strong> together form a full-stack compute-supply system with deep coordination from the chip layer to the protocol layer. Through the complementary pairing of "<strong>extreme ASIC energy efficiency and scale</strong>" with "<strong>GPU flexibility and rapid iteration</strong>," Cysic has established a leading position as a ZKP hardware supplier for high-intensity zero-knowledge proving, and on that basis continues to advance the industrial path of <strong>ZK hardware financialization (ComputeFi)</strong>.</p><h2 id="h-cysic-networkpoc-proof-layer" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>III. A Protocol View of Cysic Network: a General Proof Layer under PoC Consensus</strong></h2><p>The Cysic team published the Cysic Network Whitepaper on September 24, 2025. Centered on <strong>ComputeFi</strong>, the project financializes GPUs, ASICs, and miners into programmable, verifiable, and tradable compute assets, building a decentralized "task matching + multi-party verification" market on <strong>Cosmos CDK + Proof-of-Compute (PoC)</strong> with an EVM execution layer, uniformly supporting <strong>ZK proving, AI inference, mining, and HPC</strong>. Backed by the vertical integration of in-house <strong>ZK ASICs, GPU clusters, and portable miners</strong>, plus the <strong>CYS/CGT dual-token mechanism</strong>, Cysic aims to unlock the liquidity of real compute and fill in "compute," a key missing pillar of Web3 infrastructure.</p><p>Cysic Network adopts a <strong>bottom-up, four-layer modular architecture</strong> for flexible cross-domain scaling and verifiable collaboration:</p><ul><li><p><strong>Hardware Layer</strong>: composed of 
CPUs, GPUs, FPGAs, ASIC miners, and portable devices, forming the compute foundation of the network.</p></li><li><p><strong>Consensus Layer</strong>: built on <strong>Cosmos CDK</strong> with a modified <strong>CometBFT + Proof-of-Compute (PoC)</strong> consensus that folds both token staking and compute staking into validation weight, unifying computational and economic security.</p></li><li><p><strong>Execution Layer</strong>: handles core logic such as task scheduling, load routing, bridging, and voting, enabling multi-domain programmable computation via <strong>EVM-compatible smart contracts</strong>.</p></li><li><p><strong>Product Layer</strong>: faces end applications, integrating the <strong>ZK proof market, AI inference framework, crypto mining, and HPC modules</strong>, with flexible onboarding of new task types and verification methods.</p></li></ul><p>As an industry-wide <strong>ZK Proof Layer</strong>, Cysic provides high-performance, low-cost proof generation and verification. The network raises efficiency through a <strong>decentralized Prover network</strong> and an <strong>off-chain verification + on-chain aggregation mechanism</strong>, and uses the <strong>PoC model</strong> to couple compute contribution with staking weight, building a compute-governance system that is both secure and incentive-aligned.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/48f3ef53710bdf2e0d91c0ca3c90881400c14254c2e08908f15122e5fb671ec4.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEFElEQVR4nG1UYUwbZRj+/i5L9IeJ+7M/aAxGTZaQ6HSakCVGExoMJm6JkV+ixCiJRMNiFxCYGaOWdTKoVjpJ8UbbMevQbqUM2124Vuo+OA4qtwuXHocoyY5eD44dZ1tKX9Mma8r0+fHl++6+93ve5/2e70VQgXw+n8vlAMDlcplMptbW1sV4nOeXV8S1y46f2RI0TSNJMhaLcRzHMEw8Hpdl2Yg1RpqmeZ4XRZHneVVVUSVBeV8gEGhsbLRarSzLCoLwQEp5fcGlpSWWZXVdp2maYZhkMimKoiAImqaVAvfKBIKQEEWRZVlFUQ4Q+P3+aDRapilPMMYzf0YNceWPFBXx+cYxnr1569Z0+Hbmnx3jr6Jszs3R8wwzzyxspFIHCERRlCQJAHKPkC+dpaqqJEmVrKV5tlDIKIqibsqt3c6frpMAoGk7iURimpwetNspiuJ5HkmStLa2dn91tTJBQzUACMIKRVHzDHMXY4qKzM7NZbJZAHio7Zz80PJU3VfEb0XFCL2MXmkGgC11e3X1PgAQo25N05LJdTQ2Nmax9v0RCpUT7OjoaGpqMubhcLi5ubmzs9NsNrd81mK1Wo08ttTtJ2s/Rs+fOtfvAdhFh2vRyc8N4oWFBZIkr7o9qqomk0mE8WwwGKRpWtf13Xxe1/W2trbh4WGDwOl0IoRqampePX4cIWQymQyCTDbbZvG81zo4GYkDZNHRt1883QUASxwfCpPRaNTtuaYoSlHBRHDye4cDYxwiZ852WmOxv8p21DTN4/ZUVVWZTKb6+vrq6urGxg82Uild1xPCKjr2Pjry1tm+UYA99Mw7R+u+ANi9GZy6Q1KRSOTqqGefgGVZjLEsp6Kx+V9uBAyDC4Igy2lZTo35/G3tF3ptP/ZcdJxpv3CuZ0AURVmWE8JK14Dvoy7X7chiZieNnjiBjp0WlhftDudMDGOMiVH3PsHk1JTLNcKy9wzzAIDFYmloaCiUSjTmC3T32M9bf+i1DZ13DnV8c0nX9aIXt1T0wil0uLbd5i1e8rMNz73bDgAxTIdCYYqaJkbdaUVZT64jiqL8fj9FUWXnEARhNpuN5Y3fJ7t77N9+d8Viu9JrG+rquWwQ5HK7l1zBLy2e6CxXKOjo0IkjdWeMy+e45Q1J8nivy3K6eMml9/LrPY57rGEYCianyL5+56CD6LeP9NtHLg44DWJF2UJPv4nQS5987SzZ9DX0+qeVNvV4rynKZrFEj7WKyrdaflP5g6jYWMhki1VFh95AtS2GTTmOeyBJgYkJQRA4bhn9J+YANG1HkjbkfaRVdbsiib3CoxYUuft3nBMBYDeff6hpW6qq67qulsb/V1AWIori9Xq9Pp8vEAj4/X6McUXj3DNOr8T4+DhBEOFw2Ov10jQNAP8CjCmc9pUpWwAAAABJRU5ErkJggg==" nextheight="682" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>ZK Proof Layer: Decentralization and Hardware Acceleration</strong></p><p>Zero-knowledge proofs can verify computation without revealing information, but generating them is slow and costly. Cysic Network improves efficiency through <strong>Prover decentralization + GPU/ASIC acceleration</strong>, and cuts the latency and cost of Ethereum verification with an <strong>off-chain verification + on-chain aggregation</strong> model. The flow: a ZK project publishes a task via contract → Provers compete in a decentralized market to generate the proof → multiple Verifiers validate it → an on-chain contract settles. Overall, Cysic combines hardware acceleration with decentralized scheduling to build a scalable Proof Layer that underpins ZK Rollups, ZKML, and cross-chain applications.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2c942d2ddc377b5b50f956b46c4d82f009c6c4ef5d5536843c50ba09cbf85073.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEOklEQVR4nLWUX0xbVRzH++Sz8dUXEt2DxJAoZMJkKEs2TCBmzbJCHAm+rA4tCW203ZQ7kyLcQXKNclmwCNxEDmy9FXoH5UK5LTSlrmVKBS1YLv8albbQP/y5bHJK5Gdub1aJ8WEz8Zv78Du/e875/L7n/HJU8D9L9a9ZjDEAuFwuhBD3WAihvr5ehPoRQizL5vIsa41Go/8FoFar58qqTSaTTqczmUwFBQWqx9JqtXq9XqfTEQRx9mxZB307t+oJARkAoDp6uizM8vJSMBj8YW5OFMP9AzaDyRw8oVDop2/tDvOtrqdzAAChxZUBlldIOS0tb3BjHgD48/j4ZN4f+NFq458CsLcvdTO2ZGoH4BhnBSDvuBhetdknslOOjzIZ5ZdyMqxdYBD3pAAGcXPBxZOV4qyVpfAa55gGgEeHh7nJ62u/eaZmneO+trYeYcI3NuJxjvt83jlhwueZms0B/rbM2gW7Yyo3xDizGY0nUzsYZ0zXm14tKo1Efk2ndzej8b19CQAeBBY+b+u9PxNkB8YK86sK86v09ea7/SNDd/j2lm4VxhmlOkk6CAYXRHGlf/Be9nBAkh4CQDK1o3SO3FfvNLxU9JZsPKu9/YN0ehcADI0fGRqN8XgycP/7c2+eF8PrK+GN0tfL79l5VTq9G4vFJwWXJB0QxE2Npto3M/Po8NBmG4rFtjDOnAQ0GlvyC88DQF5enkqlSqZ2FEA73X7bInfq2nrEYLyBMT7KHJLNxPz8z7IDhFBFRQXG2O/3a7VaAOA47tSpF2OxOMaZRDJlNjer1epAIEBR1BtlZaFQiKbpMyUlfxxiSXqYSKYu6Wt883758u6MOiZ9MumXufD89PZWSq7L6/W53W4ACC+LgiAHweBCc3NrLLZ1lDmKxbbM5maN5jLDMDRNazQamqYpitJoNHv7BwDg9Ew3EPpRJ59O73ZaBq02OQj6vV7X+OryugxACHm93mzhIxwnt9p2IkFRlCTJ62OxrY+bmsxmMwCQJKnX6wHAYrEoAKVHKYoSRVF2wDDKVmur652dnfJtIYROv3ZaFMVAIFBcXOxyuSKRiFp9kSRblS7ajMYJ4qYypGnaZDJhjBmGqa29kgOQJKnVXgUAnud1ug8ikYjy0vD8uOJgcNguF+52uztoGbu7t/dJE5FM7UjSgbiycaW2zmAwMAxTWFhIEARJkkVFReXl536PbieSKQBoJW9NCi4A6O1jOujORDIlSQf11+ojkYjK7Qmcqajp/HpwhPdY+qz0V98McS5kHaU6eli7M/sJpeWV1bXXGoxtzzz3wrPPv9xgJC9UXr5QqbEOO+/axjjHtLn1S2SVr7eN6mL6hwTPA+KzL85frFta3lDxgu+VkqruPhaxDsQ6WLvwjwCxDl74btgxVXv1xrvvf3qp7sMGI+mcnuUc07kJJ1Y5laHe1PK25r35kPgXrz7qDkl3IXAAAAAASUVORK5CYII=" nextheight="776" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><br><p><strong>Node Role: the Cysic Prover Mechanism</strong></p><p>Cysic introduces <strong>Prover nodes</strong> into its ZK network. Users can contribute compute directly or purchase a Digital Harvester to run proving tasks, earning rewards in <strong>CYS and CGT</strong>. Raising the <strong>Multiplier</strong> speed factor accelerates task acquisition. Nodes must stake <strong>10 CYS</strong>
as a security deposit, which can be forfeited for misconduct.</p><p>The Prover's core task at present is the <strong>ETHProof Prover</strong>, which focuses on block proofs for the Ethereum mainnet and aims to advance base-layer ZK adoption and scalability. Overall, the Prover shoulders compute-intensive workloads, serves as the core execution layer for the Cysic network's performance and security, and supplies the compute guarantees for future verifiable-inference and AgentFi applications.</p><p><strong>Node Roles: The Cysic Verifier Mechanism</strong></p><p>Complementing the Prover, <strong>Verifier nodes</strong> perform lightweight verification of proof results, strengthening network security and scalability. Users can run a Verifier on a <strong>PC, a server</strong>, or the <strong>official Android app</strong>, and raise task throughput and reward efficiency via the <strong>Multiplier</strong> factor.</p><p>The Verifier's barrier to entry is lower: only <strong>0.5 CYS</strong> must be staked as a deposit, operation is simple, and nodes can join or exit at any time. Overall, the Verifier's <strong>low-cost, light-participation</strong> model attracts a broader user base, extends Cysic's coverage to mobile and mainstream users, and strengthens the network's decentralization and verification capacity.</p><br><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Prover Node</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Verifier Node</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Compute-intensive; generates Ethereum block proofs; the execution layer for network performance and security</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Lightweight verification of Prover output; strengthens network security and scalability</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hardware Requirements</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">High-performance GPU/ASIC servers</p></td><td colspan="1" rowspan="1"><p style="text-align: center">PC, server, or Android phone</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Staking Requirement</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">10 CYS</p></td><td colspan="1" rowspan="1"><p style="text-align: center">0.5 CYS</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Incentives</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">CYS/CGT rewards + Multiplier boost; higher compute and entry cost yield higher returns</p></td><td colspan="1" rowspan="1"><p style="text-align: center">CYS/CGT rewards + Multiplier boost; lower returns, mainly drives broad participation</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Network Scale</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">~42,000 nodes <strong>(2025/10)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">100,000+ nodes <strong>(2025/10)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Participation Profile</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">High barrier; long-term stable compute contribution is limited</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Low barrier; broad participation; lightweight verification tasks</p></td></tr></tbody></table><br><p>As of October 15, 2025, the Cysic network has reached meaningful scale: roughly <strong>42,000 Prover nodes</strong> and <strong>100,000+ Verifier nodes</strong> are running, more than <strong>91,000 tasks</strong> have been processed, and about <strong>700,000 $CYS/$CGT</strong> in rewards have been distributed. Note that although node counts are large, <strong>activity and compute contribution are unevenly distributed</strong> due to differences in access and hardware. The network has onboarded <strong>3 projects</strong> so far; the ecosystem is still early, and whether it can evolve further into a <strong>stable compute network and ComputeFi infrastructure</strong> depends on more real-world applications and partnerships landing.</p><h2 id="h-ai-cysic-aiagentfi" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. The AI Perspective: Cysic AI's Cloud Services, AgentFi, and Verifiable Inference</strong></h2><p>Cysic AI's business stack spans three tiers of "product, application, strategy": at the base, <strong>Serverless Inference</strong> provides standardized inference APIs that lower the barrier to calling models; in the middle, the <strong>Agent Marketplace</strong> explores closed-loop on-chain applications for AI Agents; at the top, <strong>Verifiable AI</strong> supports verifiable inference with ZKP plus GPU acceleration, carrying the long-term ComputeFi vision.</p><p><strong>Standard Product Layer: Cloud Inference (Serverless Inference)</strong></p><p>Cysic AI offers a ready-to-use, pay-as-you-go inference service: without building or maintaining compute clusters, users can quickly call a range of mainstream large models through an API for low-friction adoption. Currently supported models include <strong>Meta-Llama-3-8B-Instruct</strong> (task- and dialogue-optimized), <strong>QwQ-32B</strong> (reasoning-enhanced), <strong>Phi-4</strong> (lightweight instruction model), and <strong>Llama-Guard-3-8B</strong> (content-safety moderation), covering general dialogue, logical reasoning, lightweight deployment, and compliance review. The service balances cost and efficiency, supporting both rapid developer prototyping and enterprise-scale inference, and is a key piece of Cysic's trusted AI infrastructure.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9c0f9f635bf74185642609fdef4860bb504b01401a6e45e368cbb35eb7d4b2ea.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAJCAIAAADcu7ldAAAACXBIWXMAAAsTAAALEwEAmpwYAAACkklEQVR4nIWR70/aQByHmzhSuF57VFqktBtDhjgFNrSIPyDlDNbGUm261KOjsyZujBlxCW+IiZKY4Iv5Tv+B7R9dtkD3Yi/243lzd59LnvvkvhT3JxBCNE1rmnZ9fU0IGQwGjuP0+/3TIHAcezAYDIfDfr/v+z4hhPsn1N8uAACa1jw//2RZbc/zXNf1PM9xHNu2HccJ1YQQy7LifPw/D7CzzmFxNNuER0VRMplMNvs8k8nIspzP5xuaViquSlJKVhRZ+UUikQhdbGj47R84hKg4QvMcAyGkZ0AIGQBi0SjPxxHiYtHo3NzcLGenYTwOWJaiaAAYwDBhNwbEAABTD4IQQgDAdJl5GBCjFpdK4tKObhi93nvf93cxVneaNdySUul5IYsx7vV6GGOOY7eaeqFaJ7Xytwe7ub+rrqmCIMQRqmqtWqOZfl6QVhoYY0KIaZpVVa1qrVe1HUpZ3lBWO5t1/Max6/VGKpVC/PyzxUWeT1FUslJZs6y2IAgURYkLkigKzofR9x9fy8tpQViITHmykJZzuWxyaSv90t1t7VVVFUIWIU5SMmnlKVUulwwd27ZtGIau77mua5qGKIqVSsWy2oZhWFbbNA86XkfX9VQyWVKrB0dO2wxDLwiCrc0aQmh9/fXBPj48OrQsS9f3Wq0Wz8dj0Sil1jZOToOu3/V9/2Jwcfn5sl6vQwgxxre3tzfjm+Fw+O7k5Ozs7PjYTSZF27YfHx/u779033YJIR97vWJxBULWNM3J3eTq6mpyNxmPx6PRaHt7OyEIlCzLEELEcRBCjmPD+fA8n8+/KBQKxWIxm83ORsewszyXyymKIssyAwBN05FIBEKYSCQ0TTsNAtc97ngdzyOu65bLZUmSfgLz2Zi4c/9YBwAAAABJRU5ErkJggg==" nextheight="430" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>应用实验层：去中心化智能体市场(Agent Marketplace)</strong></p><p>Cysic AI推出的 <strong>Agent Marketplace</strong> 提供一个去中心化的智能体应用平台，用户只需连接 Phantom 钱包并完成认证，即可调用不同的 AI Agent 并通过 <strong>Solana USDC</strong> 实现自动支付。平台目前已集成三类核心智能体：</p><ul><li><p><strong>X Trends Agent</strong>：实时解析 X 平台趋势，生成可转化为 MEME Coin 的创意概念；</p></li><li><p><strong>Logo Generator Agent</strong>：根据描述快速生成专属项目标识；</p></li><li><p><strong>Publisher Agent</strong>：一键将 MEME Coin 部署到 Solana 网络（如 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Pump.fun">Pump.fun</a>）。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/73cbf15df448864efa49cbabdd2d4278264561fbde607ef3e2f401e4d7a0cdab.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAPCAIAAAAK4lpAAAAACXBIWXMAAAsTAAALEwEAmpwYAAADrUlEQVR4nK2S208bRxTGF5dNAqFr1uzOzM7O7trEYOMrtrPBC0sNOFjYhiWmgF0uAmIIgrRPVCqVkraioqEJRCEyoYGkVkQqJNRclIc8VEJqo6ZteqGtlP45VSp7+1C1aiol/TSahzk653fON4eq+nfRNF1rrb2wdGlrs1j89LPrhe2b2ztbm8Urq4WLy5dXllcXF94DANI0/Zwi1H8CVpbXbly7ffDjb1/uP/rlp6cHT3499+77t4q7z35/9tbcwv8AuLh8eWP9kzt7D77a/+b+3Yfmfau4e61wY3Hh/EsBjhw5zLLshaVLqytXbm7vfPf44PHXPzz59ueD75/u7d778IOP3lk4z3PgpSawWq1X1zZ3d/Y21q9/vntv/4tHD+4+LG7dvrq2sVHYXjr3MQCwkq58EQBN0wzDAAA6Y92ZU0MZYzCV7B/IZAczuWSiN5XsTyZ64109LGtjGOZFAJV0pSiKGGPqLwIAQAQpiqqwVFRYKiiKYhhGkqTnuET9M3b40CFzAlSWxfJKTVk0TRORiKJosVjMl6qqqpqaGgj//s9mhT8B9rIIIQghAQs84CGCHMdVH63mAc+yrN2uYIwhQggLCCGIIESQYZjqo9VsWc4GZzldgKUoMiswDPMqw9jqbJQ/FGwK+9NG73Au26q3jU9OSIrcomnrhYJARITgMZevf2h0ZnZmZGw0P50fGx9DWEgbxuz8PMfzAkaRZj2Z7EufSr8+NBhPdE/lT3M89/bi4lQ+X/JTPaGG1ePxRCJt9PakUhE1grBwzOmMahrCAkSw0eMbyo2emT/jCwSC4ZCkyNZaa4sWTRuG2WxbW3uso7MnlYwnEj2ppMvtAgj6An63p0nAmIpq0eE3RvRYrFVvi3V1aroei8f19vZQJEIkiQe8ekKNdXRouh5PJFLpdDAcatE0SZGZWgYiCADQdT1+sjttGGZuMBxCWAAIAgRNgDYxOZk2emNdnZiImIgIoyav72Syz9HotNXZ9Pb2ubNvDueygWDQ5XZ7vF5vwOcLBOxOJ8KY47jMwEB+ejozkHF7mgCC5QqCMZjty2QVu4Nq1duGc9mRsVHTfSgIAIKwGs1NzDibvCVA7LXZ+bmp/Gl/0M/W2QCCbJ2tI5Ecz59V7A0A8C1a1GyO4/nSIpRPc1htVlXiUCiP1yMrir3eQQiRFZlIEiaiUJ7RXCd/0B8KhyLq8QZXo6zIkiJjItrrSwKgZJHL7cJEdNQ7ZEWRFJkQgokIECz/EPoDgx73U/XH2VUAAAAASUVORK5CYII=" nextheight="690" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Agent Marketplace 在应用上依托 <strong>Agent Swarm Framework</strong> 提升协作效率，将多个自治智能体组合为任务协作群体（Swarm），实现分工、并行与容错；在经济上通过 <strong>Agent-to-Agent Protocol</strong> 实现链上支付与自动激励，确保安全、透明的链上结算，用户仅为成功操作付费。通过这一组合，Cysic 打造了一个涵盖 <strong>趋势分析 → 内容生成 → 链上发布</strong> 的完整闭环，展示了 AI Agent 在 <strong>链上金融化与 ComputeFi 生态</strong> 中的落地路径。</p><p><strong>战略支柱层：可信推理的硬件加速(Verifiable AI)</strong></p><p>“<strong>推理结果是否可信</strong>”是 AI 推理领域的核心挑战。Verifiable AI 
provides mathematical-grade assurance for inference results via zero-knowledge proofs (ZKP) without exposing inputs or the model. Traditional ZKML proof generation, however, is too slow and costly for real-time needs; Cysic breaks this bottleneck with GPU hardware acceleration and proposes three hardware-acceleration innovations for Verifiable AI:</p><ul><li><p>First, for <strong>Sumcheck protocol parallelization</strong>, the massive polynomial computation is split across tens of thousands of CUDA threads executing simultaneously, so proof-generation speed scales nearly linearly with GPU core count.</p></li><li><p>Second, <strong>custom finite-field arithmetic kernels</strong> are deeply optimized at the register, shared-memory, and warp-level parallelism layers, greatly relieving the memory bottleneck conventional GPUs face in modular arithmetic and keeping the GPU running at high utilization.</p></li><li><p>Finally, Cysic's <strong>end-to-end acceleration stack ZKPoG</strong> optimizes the full pipeline from witness generation through proof generation to verification, is compatible with mainstream backends such as Plonky2 and Halo2, measures up to 52× CPU performance, and achieves roughly a 10× speedup on a CNN-4M model.</p></li></ul><p>With this full set of optimizations, Cysic moves verifiable inference from "theoretically feasible but too slow" to genuinely real-time deployable, significantly cutting latency and cost and giving Verifiable AI its first real path into real-time application scenarios.</p><p>The Cysic platform is compatible with PyTorch and TensorFlow: developers simply wrap a model in a VerifiableModule to obtain inference results plus the corresponding cryptographic proof without rewriting code. On the roadmap, support will gradually extend to CNN, Transformer, Llama, and DeepSeek models, with real-time demos such as face recognition and object detection to validate usability; code, documentation, and case studies will be opened up over the coming months to drive community co-development.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Engineering Difficulty</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Value</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Standard Product</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Serverless Inference</p></td><td colspan="1" rowspan="1"><p>Standardized cloud inference API integrating mainstream open-source models; lowers the barrier for developers</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> Medium</p><p>(compute-scheduling cost)</p></td><td colspan="1" rowspan="1"><p>Basic entry point; meets rapid, large-scale inference demand; limited differentiation</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Experimental Application</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Agent Marketplace</p></td><td colspan="1" rowspan="1"><p>Decentralized agent marketplace exploring the closed loop of trend analysis → logo generation → on-chain launch</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> Medium-low</p><p>(relies on existing models and on-chain payment integration)</p></td><td colspan="1" rowspan="1"><p>Application experiment; demonstrates the potential of pairing AgentFi with on-chain payments</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Strategic Capability</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Verifiable AI</p></td><td colspan="1" rowspan="1"><p>ZKP + GPU acceleration; pushes verifiable inference to real-time usability</p></td><td colspan="1" rowspan="1"><p><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span><span data-name="star" class="emoji" data-type="emoji">⭐</span> Extremely high (involves cryptography and low-level systems optimization)</p></td><td colspan="1" rowspan="1"><p>Strategic pillar; provides trusted compute and builds a long-term moat</p></td></tr></tbody></table><p>Taken together, Cysic AI's three-tier path forms a bottom-up progression: Serverless Inference answers "can it be used," the Agent Marketplace shows "can it be applied," and Verifiable AI carries "trustworthiness and the moat." The first two are largely transitional and experimental; the real value and differentiation will come from landing Verifiable AI, whose combination with ZK hardware and the decentralized compute network is the key to Cysic building a durable advantage in the ComputeFi ecosystem.</p><h2 id="h-nft-computefi" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>V. The Financialization Perspective: NFT-ized Compute Access and ComputeFi Nodes</strong></h2><p>Cysic Network tokenizes high-performance compute assets such as GPUs and ASICs through the <strong>"Digital Compute Cube" Node NFT</strong>, creating a <strong>ComputeFi entry point</strong> for mainstream users. Each NFT is a verifiable network node license that bundles <strong>revenue rights + governance rights + participation rights</strong>: without building their own hardware, users can participate in ZK proving, AI inference, and mining tasks by proxy or delegation and earn $CYS incentives directly.</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Name</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Price (USDC)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supply (units)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>$CYS Allocation</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier 1</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Tesseract</p></td><td colspan="1" rowspan="1"><p style="text-align: center">69</p></td><td colspan="1" rowspan="1"><p style="text-align: center">5,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">350 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier 2</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Monolith</p></td><td colspan="1" rowspan="1"><p style="text-align: center">99</p></td><td colspan="1" rowspan="1"><p style="text-align: center">7,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">450 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier 3</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Allspark</p></td><td colspan="1" rowspan="1"><p style="text-align: center">139</p></td><td colspan="1" rowspan="1"><p style="text-align: center">8,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">600 CYS / NFT</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier 4</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">MotherBox</p></td><td colspan="1" rowspan="1"><p style="text-align: center">189</p></td><td colspan="1" rowspan="1"><p style="text-align: center">9,000</p></td><td colspan="1" rowspan="1"><p style="text-align: center">750 CYS / NFT</p></td></tr></tbody></table><p>Total NFT supply is <strong>29,000</strong>, with a cumulative allocation of about <strong>16.45 million CYS (1.65% of total supply, within the 9% community allocation cap)</strong>. Unlocking is <strong>50% at TGE plus 50% released linearly over six months</strong>. Beyond the fixed allocation, NFT holders also receive extra benefits such as <strong>Multiplier boosts (up to 1.2x), priority access to compute tasks, and governance weight</strong>. The public sale has ended; the NFTs now trade on the <strong>OKX NFT Marketplace</strong>.</p><p>Unlike traditional cloud compute leasing, the Compute Cube is essentially <strong>on-chain ownership attestation</strong> of the underlying hardware infrastructure:</p><ul><li><p><strong>Fixed token yield</strong>: each NFT locks in a fixed share of the $CYS allocation;</p></li><li><p><strong>Real-time compute revenue</strong>: nodes take on actual workloads (ZK proving, AI inference, crypto mining), with earnings paid directly to holders' wallets;</p></li><li><p><strong>Governance and priority</strong>: holders carry governance weight and priority usage rights in compute scheduling and protocol upgrades;</p></li><li><p><strong>Positive feedback loop</strong>: more tasks → more rewards → more staking → stronger governance influence.</p></li></ul><p>Overall, the Node NFT turns scattered GPUs/ASICs into tradable on-chain assets for the first time, opening a new <strong>compute investment market</strong> amid the parallel boom in AI and ZK demand. The <strong>ComputeFi flywheel</strong> (more tasks → more rewards → stronger governance) is the key bridge by which Cysic extends its compute network to mainstream users.</p><h2 id="h-asic-doge-and-cysic" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. Consumer Scenarios: Home ASIC Miners (Doge &amp; Cysic)</strong></h2><p>Dogecoin launched in 2013 on Scrypt PoW and has been merge-mined with Litecoin (AuxPoW) since 2014, sharing hashpower to strengthen network security. Its token model is unlimited supply plus a fixed annual issuance of 5 billion DOGE, leaning toward community culture and payments. Among fully ASIC-dominated PoW coins, Dogecoin is the highest-profile representative besides Bitcoin, and its meme culture and community effects create durable ecosystem stickiness.</p><p>On the hardware side, Scrypt ASICs have fully displaced GPUs/CPUs, with industrial machines like Bitmain's Antminer L7/L9 dominating. But unlike Bitcoin, which has been fully industrialized into mining farms, Dogecoin retains room for home mining: lightweight products such as the Goldshell MiniDoge, Fluminer L1, and ElphaPex DG Home 1 give it both cash-flow and community-driven characteristics.</p><p>For Cysic, entering Dogecoin ASICs carries threefold significance: first, Scrypt ASICs are less difficult than ZK ASICs, enabling rapid validation of mass-production and delivery capability; second, the mining market's cash flows are mature, providing stable revenue; third, Doge ASICs help accumulate supply-chain and brand experience, laying groundwork for future ZK/AI-specific chips. Overall, home ASIC miners are a pragmatic landing point for Cysic, and a transitional support for its long-term ZK/AI ASIC ambitions.</p><p><strong>Cysic Portable Dogecoin Miner: An Innovation Path for the Home</strong></p><p>During Token2049, Cysic officially unveiled <strong>DogeBox 1</strong>, a <strong>portable Scrypt ASIC miner</strong> for home and community users, positioned as a "verifiable home-grade compute terminal":</p><ul><li><p><strong>Portable and energy-efficient</strong>: pocket-sized, suitable for home and community users, lowering the barrier to participation;</p></li><li><p><strong>Plug and play</strong>: managed via a mobile app, targeting the global retail market;</p></li><li><p><strong>Dual function</strong>: mines DOGE while also verifying DogeOS ZK proofs, delivering L1+L2 security;</p></li><li><p><strong>Incentive loop</strong>: DOGE mining + CYS subsidies form a DOGE→CYS→DogeOS economic closed loop.</p></li></ul><p>Through synergy with <strong>DogeOS</strong> (a zero-knowledge-proof-based Layer-2 rollup built by the MyDoge team, with Polychain Capital leading its round) and the <strong>MyDoge wallet</strong>, the Cysic miner not only mines DOGE but also participates in ZK verification, and the <strong>DOGE rewards + CYS subsidies</strong> incentive loop deepens user stickiness and embeds the device in the DogeOS ecosystem.</p><p>Cysic's Dogecoin home miner is both a pragmatic cash-flow landing point and strategic groundwork for long-term ZK/AI ASICs; its hybrid "mining + ZK verification" model not only builds market and supply-chain experience but also brings Dogecoin a new scalable, verifiable, community-driven L1+L2 narrative.</p><h2 id="h-cysic" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. Cysic's Ecosystem Footprint and Key Progress</strong></h2><p><strong>1. Partnerships with the Succinct / Boundless Prover Networks<br></strong>Cysic runs multiple Prover nodes on the Succinct Network, taking on real-time proving tasks for the SP1 zkVM with its high-performance GPU clusters and collaborating closely with the team on GPU code optimization. Cysic has also joined the <strong>Boundless Mainnet Beta</strong>, supplying hardware acceleration to its Proof Marketplace.</p><p><strong>2. Early Partner Project (Scroll)<br></strong>In its early days, Cysic provided high-performance ZK compute for <strong>Scroll</strong>, taking on large-scale proving workloads with its GPU clusters to ensure low-latency, low-cost operation and generating over ten million proofs in total. The partnership not only validated Cysic's engineering capability but also laid the groundwork for its later push into hardware acceleration and compute networks.</p><p><strong>3. Home Miner Debut at Token2049<br></strong>At Token2049, Cysic launched DogeBox 1, its first portable home ASIC miner, formally entering the Dogecoin/Scrypt compute market. Positioned as a "palm-sized compute terminal," DogeBox 1 is lightweight, low-power, and plug-and-play: just 55 W of power draw, 125 MH/s of hashrate, a 100×100×35 mm chassis, Wi-Fi and Bluetooth connectivity, and under 35 dB of noise, suiting home and community users.</p><p>Beyond DOGE/LTC mining, the device also supports DogeOS ZK verification for dual L1+L2 security, and its DOGE mining + CYS subsidy combination builds the triple incentive loop of DOGE → CYS → DogeOS.</p><p><strong>4. Testnet Concluded, Mainnet Imminent<br></strong>Cysic completed Phase III: Ignition on September 18, 2025, marking the formal end of the testnet and the start of mainnet preparation. After Phase I validated the hardware and token model and Phase II scaled up the Genesis Nodes, this phase fully tested the compute network's user participation, incentive mechanisms, and assetization logic.</p><p>During the testnet, Cysic onboarded zero-knowledge projects including <strong>Succinct, Aleo, Scroll, and Boundless</strong>. Official site figures show the testnet gathered 55,000+ wallet addresses, 8 million transactions, and 100,000+ reserved high-end GPUs. The Phase III: Ignition testnet attracted 1.36 million registered users and processed about 13 million transactions, forming a 260,000+ node network of roughly 223,000 Verifiers and 41,800 Provers. On incentives, about 1.46 million tokens (733K $CYS + 733K $CGT) and 4.6 million FIRE were distributed, with 48,000+ users participating in staking, validating the sustainability of its incentive design and compute network.</p><p>In addition, the ecosystem map on its website shows Cysic broadly connected to core projects across the ZK and AI space, demonstrating wide compatibility and openness as a provider of underlying compute and hardware acceleration. These ecosystem links give its future expansion along the ZK, AI, and ComputeFi tracks strong external interfaces and a base for collaboration.</p><ul><li><p><strong>zkEVM and L2</strong>: zkSync, Scroll, Manta, Nil, Kakarot</p></li><li><p><strong>zkVM / Prover Network</strong>: Succinct, Risc0, Nexus, Axiom</p></li><li><p><strong>zk Coprocessor</strong>: Herodotus, Axiom</p></li><li><p><strong>Infrastructure / Cross-chain</strong>: zkCloud, ZKM, Polyhedra, Brevis</p></li><li><p><strong>Identity and Privacy</strong>: zkPass, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Human.tech">Human.tech</a></p></li><li><p><strong>Oracles</strong>: Chainlink, Blocksense</p></li><li><p><strong>AI Ecosystem</strong>: Talus, Modulus Labs, Gensyn, Aspecta, Inference Labs</p></li></ul><h2 id="h-cysic" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. Cysic's Token Economic Model</strong></h2><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e7082b47009fcc243c431d7ef09acd96db312bdff59c4a1065d6efa955bb8ae8.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAKCAIAAABaL8vzAAAACXBIWXMAAAsTAAALEwEAmpwYAAABcUlEQVR4nK2SEZQrMRSGUxsbKRQLgUAlUCkMVKrVSrlQGa5UhiuhkcJQZCR0JTISGomEQpFIKBSZPWfy3u7C7p6+Pe+jnJz/3j/3/kHTb0kpvSJDr4hCCEIIAFAzWuvP3X92esnAWns+nxFCZVkul8vb7Wat7fu+67r/M4HWehzH4/GIEKrr2nvvnGOMrVar/X6/2+2klN+N8pLB9Lf4+XwyxpqmUUrle+dcXddN06SZLwxSSvmB1lqttbU2t4sxvteEOQNjTNu2hBCM8ePxAADOuVLKe6+1BgDGmJQyh/RhEEIAgPV6jRA6HA7DMDjnhmHYbreLxYJzPk2TEOJ+vxdFgRCilAJA27ZVVRFCTqfTZrPBGHPOq6rKgnEcQwh/DGKMQoisu1wufd9zzjHGRVEQQiilZVler1fvfdd1hJD3j/R5dSklpZSUklLa970xJsb4kUHeRtZlnHPGGOecUgoAwkyuyecvQ8r3ceafQ/41b4JyIpmOkQ6GAAAAAElFTkSuQmCC" nextheight="367" nextwidth="1150" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>Cysic Network 采用 <strong>双代币体系</strong>：网络代币 $CYS 与治理代币 $CGT。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bc716e2fa6f7b0fa9cbb019d596dfcd5fcf680219fb9fad0d170e7d0d8a6c0a6.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAARCAIAAAAzPjmrAAAACXBIWXMAAAsTAAALEwEAmpwYAAACqUlEQVR4nLWTsUscQRTG5x+Ixi4iB+LBtVscGBYOBOstrrKzu+LgCouIYClYWBxYnrCygoIB60HYwMAkDDjFFFMcbDGRiUxkiy2WOMUUW0xw3+lpMHgG8uv2zb753nvzPuT/M8h7Px6PKaXiCZxzQkhZlrNcIaVMa4QQWmshBCGEUuqcmwikaVrWaK1vf94aY7z3nHOt9SwCGGPnHMY4DMOV5srh4aH3njGmlJoKeO8ppYsfFufn5rvdLrQFSq/COVdKbX/abtf0ej2tdVYzFTDGFEUBUaWUtZYxNmMHlNI0Td/VLC0tIYT29/efCWitGWMwu9FoJKVkNTDEV8myjHOe5zmlNEkSa22WZVLKqqomAo9Ya6WU3ns4eyvWWoyxUirPc601TPheoHrAOUcIieMYY0wImUWpqrOcc9Za7/3BwQFCqNFoIIQ2Njb+7MA5xzmPogghFIbhW1txzmVZ5pzL89xaO13Tx1ustbBbT4NVzYu1Q8lhGO7u7vb7/cFgAA8LR9M3qKqqKAprrdb68vISSnjVZVWdf3p6GgRBs9lst9tRFCVJUlUVdAC/3QsIISiljLGLi4vhcEgI4ZyD+1zddVmW9gGwZFmWv+7u4Ap4VWMMpXQ0GqVp+pg+EaCUeu+LooiiqNFo7OzsQHA8HjPGUM3CwsLq6sdut/t+bg4iCCFo9Pz8HKpstVpBEMRxDJ9THzDGvPd7e3udTmdtbW1zc9MYI4Q4jo9vftwwxjDGX9L06urq29evn09OzuL45OjoLEmUUoyx4XDovd/a2lpfXw+CoN/vl2UJhp0IEEKstYPBoNlstlqtTqdDCJFSXn+/nmV5YBq9Xq/dbi8vL4dhKKUUQkwFwIQwSgD+eFyGvwECWmtIN8ZAuhACDPvMB0936994caF/A58kY7sPaR3MAAAAAElFTkSuQmCC" nextheight="375" nextwidth="700" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>$CYS（网络代币）</strong>：为原生可转让资产，用于支付交易费用、节点抵押、区块奖励及网络激励，确保网络活跃度与经济安全。$CYS 也是计算提供者与验证者的主要激励来源。用户可通过质押 $CYS 获取治理权重，并参与算力池（Computing Pool）的资源分配与治理决策。</p></li><li><p><strong>$CGT（治理代币）</strong>：为不可转让资产，仅能通过抵押 $CYS 以 1:1 比例获得，并在解押周期更长的机制下参与 <strong>Computing Governance (CG)</strong>。$CGT 反映算力贡献与长期参与度，计算提供者需预留一定数量的 $CGT 作为准入保证金，以防止恶意行为。</p></li></ul><p>在网络运行中，计算提供者将算力接入 Cysic Network，为 ZK、AI 与加密挖矿等任务提供服务。其收益来源包括区块奖励、外部项目激励及算力治理分配。算力的调度与奖励分布将根据多维因素动态调整，其中 <strong>外部项目激励（如 ZK、AI、Mining 奖励）</strong> 是关键权重。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>九、团队背景及项目融资</strong></h2><p>Cysic 联合创始人兼首席执行官为Xiong (Leo) Fan，他曾任美国罗格斯大学计算机科学系助理教授。在此之前，他先后担任 Algorand 研究员、马里兰大学博士后研究员，并在康奈尔大学获得博士学位。Leo Fan 的研究长期聚焦于密码学及其在形式化验证与硬件加速中的交叉方向，已在 IEEE S&amp;P、ACM 
CCS、POPL、Eurocrypt、Asiacrypt 等国际顶级会议和期刊发表多篇论文，涵盖同态加密、格密码、功能加密、协议验证等领域。他曾参与多个学术与行业项目，兼具理论研究与系统实现经验，并在国际密码学学术会议中担任程序委员会成员。</p><p>根据LinkedIn的公开信息，Cysic 团队由硬件加速、加密研究与区块链应用背景的成员组成，核心成员具备芯片设计与系统优化的产业经验，同时拥有欧美及亚洲顶尖高校的学术训练。团队在 <strong>硬件研发、零知识证明优化及运营拓展</strong> 等方向形成互补。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/0a739c5bd91034ec29f3c653d3c3d0b5cb748ba992b3594947f9a77f37539aa1.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAJCAIAAADcu7ldAAAACXBIWXMAAAsTAAALEwEAmpwYAAACZUlEQVR4nJWSQUsbQRTHd1nXYRlnOtMyu6xsF7qEsogI1qYH01RXS1hDyKFpWgLGLqkVVllsjIjIYreHNGgw5LB4sZBrSEntRfBuv4U3v4aXkp3iyR76O743/H8z742A/gHGWFVVjDEhBGM8PT1tmmYqlbJt+4llTU1NaZpmmo9N09R1XVGUf+UI90aLori46Fxd/T46Ot5pNA7C8OzsexRFx+12s9lst9ue5w2HP8/Pf/X7A8/zAAAY4/8QSJKUTr/Y2gqCIKjVPq6tfdjf319drR6EoeM4S8vLGxsbu3t7vu8HQeD7Phfc6xAghAAAmKAkIIR4UZZlURSlBEIIQogfAAAQQu7qiqJACO+6HB44EjDGTNNkjKmqqmmaruuUUk3T+AKeJliWhRCanJy0LItSyhgzDIMQwhhDCDHGKKUIIb4PwzB0XeeBI0GpVLq8vBwOh71er9vtXlxcxHE8GAwajQaldHNz8/T0tN/vAwA6nc7t7W02m52bm7u+vn5bLs/MzBBC6vW64zgQwpubmzAM4zju9XqtVmt9fV1RFIFSyu+iquqjBO7HGP8d0dgYwhgAIIoinw+E8AHGExNQFAVBECRJ4q1UKoUQkmUZQsi/3+gFkiTl8/nh8Ec6odvt+r5/eHjY6XS2t7fl8fFv0df45MT3/S9R1GyNgBDu7kX5whs3X373vlKrffpc36H04cLCq36C53lcNhJwc6VSMQzDtu1SqZTJZCqVSqFQKBaLGONns7Mv5+dt23bdFdddyWazCCHHyaXTmefp7NLS61wu57orjI0GUCwWq9VquVzO5XJc8AcTyaixdhi0TgAAAABJRU5ErkJggg==" nextheight="438" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>在融资方面，2024 年 5 月，Cysic 宣布完成 <strong>1200 万美元 Pre-A 轮融资</strong>，由 <strong>HashKey Capital 与 OKX Ventures</strong> 联合领投，参投方包括 Polychain、IDG、Matrix Partners、SNZ、ABCDE、Bit Digital、Coinswitch、<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Web3.com">Web3.com</a> Ventures，以及 Celestia/Arbitrum/Avax 早期投资人 George Lambeth 与 Eternis 联合创始人 Ken Li 等知名天使。</p><h2 
id="h-zk" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>X. Competitive Landscape: The ZK Hardware Acceleration Market</strong></h2><p><strong>1. Direct Competitors (Hardware-Accelerated)</strong></p><p>In the hardware-accelerated Prover and ComputeFi arena, Cysic's core rivals include <strong>Ingonyama, Irreducible (formerly Ulvetanna), Fabric Cryptography, and Supernational</strong>, all centered on "hardware and networks that accelerate ZK proving."</p><ul><li><p><strong>Cysic</strong>: full-stack (GPU+ASIC+network) with a <em>ComputeFi</em> narrative; its edge is compute assetization and financialization, but the ComputeFi model still requires market education, and hardware mass production carries real challenges.</p></li><li><p><strong>Irreducible</strong>: combines academia and engineering, exploring new algebraic structures (Binius) and zkASICs; strong theoretical innovation, but its commercialization pace may be constrained by the economics of scaling FPGAs.</p></li><li><p><strong>Ingonyama</strong>: open-source friendly; its ICICLE SDK has become the de facto standard for GPU ZK acceleration with high ecosystem adoption, but it lacks in-house hardware.</p></li><li><p><strong>Fabric</strong>: positioned on a hardware-software co-design path, aiming at a general-purpose cryptographic compute chip (VPU) with a business model akin to "CUDA + NVIDIA," pursuing the broader cryptographic compute market.</p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical Path</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hardware Direction</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Cysic</strong></p></td><td colspan="1" rowspan="1"><p><strong>From GPU → ASIC</strong>; tokenizes compute via <strong>ComputeFi</strong></p></td><td colspan="1" rowspan="1"><p>In-house ASICs (C1 chip + ZK Air + ZK Pro) alongside large-scale GPU clusters</p></td><td colspan="1" rowspan="1"><p><strong>ComputeFi</strong> model: compute financialization + real-time ZK proving network</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Irreducible</strong> (formerly Ulvetanna)</p></td><td colspan="1" rowspan="1"><p><strong>Math-driven</strong>: proposed Binius (binary-field polynomial commitments) → adapted to hardware</p></td><td colspan="1" rowspan="1"><p>Early FPGA; now focused on Binius + hardware/software co-design</p></td><td colspan="1" rowspan="1"><p>Algorithms first, hardware as an "experimental validation platform"; closer to <strong>research-oriented infrastructure</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ingonyama</strong></p></td><td colspan="1" rowspan="1"><p><strong>Software first</strong>: the ICICLE CUDA library accelerates MSM/FFT on GPUs</p></td><td colspan="1" rowspan="1"><p>No in-house hardware (uses commodity GPUs)</p></td><td colspan="1" rowspan="1"><p>Provides an <strong>open-source GPU acceleration toolchain</strong> that empowers developers → does not build hardware itself</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Fabric Cryptography</strong></p></td><td colspan="1" rowspan="1"><p><strong>Hardware-software co-design</strong>: the VPU (Verifiable Processing Unit), between GPU flexibility and ASIC performance</p></td><td colspan="1" rowspan="1"><p>In-house VPU chips + dev boards (FC1000 / VPU8060 / Byte Smasher)</p></td><td colspan="1" rowspan="1"><p><strong>Platform positioning</strong>: builds chips while also shipping compilers, libraries, and cloud services</p></td></tr></tbody></table><p><br><strong>2. Indirect Competitors (ZK Marketplace / Prover Network / zk Coprocessor)</strong></p><p>In the ZK Marketplace, Prover Network, and zk Coprocessor arena, Cysic today mostly plays the role of an <strong>upstream compute supplier</strong>, while projects such as Succinct, Boundless, Risc0, and Axiom reach the same customer base (L2s, zkRollups, ZKML) through zkVMs, task scheduling, and open marketplaces.</p><p>Near term, the relationship is mostly collaborative: Succinct handles task routing while Cysic supplies high-performance Prover nodes, and zk Coprocessors may route some tasks to Cysic. Longer term, if the Boundless and Succinct marketplace models (auction vs. routing) keep growing while Cysic builds its own marketplace, the three will inevitably clash at the <strong>customer entry layer</strong>. Likewise, if zk Coprocessors close their loop, they could become the customer entry point in place of direct hardware access, leaving Cysic at risk of being marginalized into a "contract manufacturer."</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Business Model / Products</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Relationship to Cysic</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Cysic</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK hardware acceleration + Prover/Verifier network</p></td><td colspan="1" rowspan="1"><p>Provides GPU/ASIC-based high-performance ZK proof generation; runs a Prover/Verifier node network</p></td><td colspan="1" rowspan="1"><p>With Succinct: serves as its Prover in an upstream-downstream partnership; with Boundless: potential partner/competitor</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Succinct</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">General-purpose zkVM (SP1) + Prover Network</p></td><td colspan="1" rowspan="1"><p>Open zkVM (SP1); building a decentralized Prover Marketplace that auto-routes to the optimal path</p></td><td colspan="1" rowspan="1"><p>Cysic is one of Succinct's Prover nodes, supplying high-performance compute; upstream-downstream collaboration</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Boundless</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">Open Proof Marketplace</p></td><td colspan="1" rowspan="1"><p>A proof-auction market using a reverse Dutch auction to match provers with demand</p></td><td colspan="1" rowspan="1"><p>Cysic's Prover nodes can plug into its marketplace; competition arises if Cysic builds its own market</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>zk Coprocessor (Axiom, etc.)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">ZK outsourced-compute module (co-processor)</p></td><td colspan="1" rowspan="1"><p>Off-chain compute + on-chain verification via a ZK Coprocessor API; developers invoke complex proving tasks without touching the underlying hardware</p></td><td colspan="1" rowspan="1"><p>Near term: can route tasks to Cysic; long term: a closed loop could replace direct hardware access</p></td></tr></tbody></table><p><br></p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>XI. Conclusion: Business Logic, Engineering Execution, and Potential Risks</strong></h2><p><strong>Business Logic</strong></p><p>Cysic's core narrative is <strong><em>ComputeFi</em></strong>: connecting compute end to end, from hardware production through network scheduling to financialized assets. Near term, it leans on GPU clusters to serve existing ZK Prover demand and generate revenue; mid term, it enters the cash-flow-mature mining market via Dogecoin home ASIC miners, validating mass-production capability and using community culture to open a consumer hardware channel; long term, it aims to build its own ZK/AI-specific ASICs and, layered with Node NFTs and the Compute Cube, assetize and marketize compute to build an infrastructure-grade moat.</p><p><strong>Engineering Execution<br></strong>On hardware, Cysic has completed GPU-accelerated Prover/Verifier optimization (parallelized MSM and FFT) and published ASIC R&amp;D results (a 1.3M Keccak/s prototype experiment). On the network side, it has built a Cosmos SDK-based verification chain supporting Prover bookkeeping and task distribution, and tokenizes compute via the Compute Cube/Node NFT. On AI, it has launched the Verifiable AI framework, using GPU parallelism to optimize Sumcheck and finite-field arithmetic for verifiable inference, though differentiation versus comparable industry products remains limited.</p><p><strong>Potential Risks</strong></p><ol><li><p><strong>Market education and demand uncertainty</strong>: ComputeFi is still a new concept; whether customers are willing to invest in compute via NFTs/tokens remains to be market-tested.</p></li><li><p><strong>Insufficient ZK demand</strong>: the ZK Prover industry is still early, and GPUs already cover most current needs, which is unlikely to support large-scale ASIC shipments, limiting revenue contribution.</p></li><li><p><strong>ASIC engineering and production risk</strong>: proof systems are not yet fully standardized; ASIC development takes 12–18 months, and high tape-out costs plus yield uncertainty could derail commercialization timelines.</p></li><li><p><strong>Doge home-miner capacity ceiling</strong>: the overall home market is limited in size; electricity prices and community-driven demand make it largely "hobbyist" consumption, hard to scale into stable revenue.</p></li><li><p><strong>Limited AI differentiation</strong>: while Cysic's Verifiable AI demonstrates GPU parallel optimization, its cloud inference service is weakly differentiated and the Agent Marketplace has a low barrier, so the overall moat is not yet pronounced.</p></li><li><p><strong>Competitive dynamics</strong>: longer term it may clash with zkMarketplace or zkCoprocessor projects such as Succinct and Boundless at the customer entry layer and be pushed back into an "upstream foundry" role.</p></li></ol><p><br><strong><em>Disclaimer:</em></strong><em> This article was drafted with the assistance of the AI tool ChatGPT-5. The author has made every effort to verify facts and ensure accuracy, but omissions may remain. Note in particular that crypto-asset markets commonly exhibit divergence between project fundamentals and secondary-market price performance. This content is for information aggregation and academic/research exchange only; it does not constitute investment advice, nor should it be read as a recommendation to buy or sell any token.</em></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>cysic</category>
            <category>gpu</category>
            <category>asic</category>
            <category>zk</category>
            <category>ai</category>
            <category>doge</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/f08fb626749b6c772ab93a1dc872f7dd2d9c35d32b0be409e5db601be1c9004b.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[GAIB Research Report: The On-Chain Financialization of AI Infrastructure — RWAiFi]]></title>
            <link>https://paragraph.com/@0xjacobzhao/gaib-research-report-the-on-chain-financialization-of-ai-infrastructure-—-rwaifi</link>
            <guid>oylD7oCwkI6YjLNFyH6Y</guid>
            <pubDate>Wed, 08 Oct 2025 09:10:20 GMT</pubDate>
            <description><![CDATA[As AI rises as the world’s most powerful technological wave, computing power is becoming the new “currency,” and RWAization is opening a bridge for GPU and other AI infrastructure assets to enter the crypto financial system. Against this backdrop, GAIB introduces the innovative concept of RWAiFi (RWA + AI + DeFi): by bringing off-chain GPU and robotics financing agreements on-chain through an SPC structure, GAIB establishes an economic layer built on AID (AI Synthetic Dollar) and sAID (yield-bea]]></description>
            <content:encoded><![CDATA[<p>As AI becomes the fastest-growing tech wave, computing power is seen as a new “currency,” with GPUs turning into strategic assets. Yet financing and liquidity remain limited, while crypto finance needs real cash flow–backed assets. RWA tokenization is emerging as the bridge. AI infrastructure, combining <strong>high-value hardware + predictable cash flows</strong>, is viewed as the best entry point for non-standard RWAs — GPUs offer near-term practicality, while robotics represent the longer frontier. GAIB’s <strong>RWAiFi (RWA + AI + DeFi)</strong> introduces a new path to on-chain financialization, powering the flywheel of <strong>AI Infra (GPU &amp; Robotics) × RWA × DeFi</strong>.</p><h3 id="h-i-outlook-for-ai-asset-rwaization" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Outlook for AI Asset RWAization</strong></h3><p>In discussions around RWA (Real-World Asset) tokenization, the market generally believes that <strong>standard assets such as U.S. Treasuries, U.S. equities, and gold</strong> will remain at the core in the long term. These assets are highly liquid, have transparent valuations, and follow well-defined compliance pathways — making them the natural carriers of the on-chain “risk-free rate.”</p><p>By contrast, the RWAization of <strong>non-standard assets</strong> faces greater uncertainty. Segments such as carbon credits, private credit, supply chain finance, real estate, and infrastructure all represent massive markets. However, they often suffer from opaque valuation, high execution complexity, long cycles, and strong policy dependence. 
The real challenge lies not in tokenization itself, but in <strong>enforcing off-chain asset execution</strong> — especially post-default recovery and liquidation, which still depend on due diligence, post-loan management, and traditional legal processes.</p><p>Despite these challenges, RWAization remains significant for several reasons:</p><ol><li><p><strong>On-chain transparency</strong> — contracts and asset pool data are publicly visible, avoiding the “black box” problem of traditional funds.</p></li><li><p><strong>Diversified yield structures</strong> — beyond interest income, investors can earn additional returns through mechanisms like Pendle PT/YT, token incentives, and secondary market liquidity.</p></li><li><p><strong>Bankruptcy protection</strong> — investors usually hold securitized shares via SPC structures rather than direct claims, providing a degree of insolvency isolation.</p></li></ol><p>Within AI assets, <strong>GPU hardware </strong>is widely regarded as the first entry point for RWAization due to its clear residual value, high degree of standardization, and strong demand. Beyond hardware, <strong>compute lease contracts</strong> offer an additional layer — their contractual and predictable cash flow models make them particularly suitable for securitization.</p><p>Looking further, <strong>robotics hardware and service contracts</strong> also carry RWA potential. Humanoid and specialized robots, as high-value equipment, could be mapped on-chain via financing lease agreements. However, robotics is far more operationally intensive, making execution significantly harder than GPU-backed assets.</p><p>In addition, <strong>data centers and energy contracts</strong> are worth attention. Data centers — including rack leasing, electricity, and bandwidth agreements — represent relatively stable infrastructure cash flows. 
Energy contracts, exemplified by green energy PPAs, provide not only long-term revenue but also ESG attributes, aligning well with institutional investor mandates.</p><p>Overall, AI asset RWAization can be understood across different horizons:</p><ul><li><p><strong>Short term</strong>: centered on GPU and related compute lease contracts.</p></li><li><p><strong>Mid term</strong>: expansion to data center and energy agreements.</p></li><li><p><strong>Long term</strong>: breakthrough opportunities in robotics hardware and service contracts.</p></li></ul><p>The common logic across all layers is <strong>high-value hardware + predictable cash flow</strong>, though the execution pathways vary.</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Category</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Potential Assets</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Logic Foundation</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Features / Advantages</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Compute Hardware</strong></p></td><td colspan="1" rowspan="1"><p>GPU / TPU / ASIC</p></td><td colspan="1" rowspan="1"><p>High residual value, standardized, strong demand</p></td><td colspan="1" rowspan="1"><p>Most practical near-term entry point for RWAization</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Compute Contracts</strong></p></td><td colspan="1" rowspan="1"><p>Compute lease agreements, edge compute units</p></td><td colspan="1" rowspan="1"><p>Long-term contractual model</p></td><td colspan="1" rowspan="1"><p>Predictable income, high contractual enforceability</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Robotics Assets</strong></p></td><td colspan="1" rowspan="1"><p>Hardware leasing contracts</p></td><td colspan="1" 
rowspan="1"><p>High-value hardware + predictable cash flow</p></td><td colspan="1" rowspan="1"><p>Scenario-driven, but heavy operations and higher friction</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Data Centers</strong></p></td><td colspan="1" rowspan="1"><p>Rack leasing, electricity &amp; bandwidth contracts</p></td><td colspan="1" rowspan="1"><p>Stable operational income</p></td><td colspan="1" rowspan="1"><p>Infrastructure cash flows, suitable for long-term securitization</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Energy Contracts</strong></p></td><td colspan="1" rowspan="1"><p>Green energy PPAs</p></td><td colspan="1" rowspan="1"><p>Long-term power supply agreements</p></td><td colspan="1" rowspan="1"><p>Strong ESG profile, stable yields</p></td></tr></tbody></table><p><br></p><h3 id="h-ii-the-priority-value-of-gpu-asset-rwaization" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>II. The Priority Value of GPU Asset RWAization</strong></h3><p>Among the many non-standard AI assets, <strong>GPUs may represent one of the most practical directions for exploration</strong>:</p><ul><li><p><strong>Standardization &amp; Clear Residual Value</strong>: Mainstream GPU models have transparent market pricing and well-defined residual value.</p></li><li><p><strong>Active Secondary Market</strong>: Strong resale liquidity ensures partial recovery in case of default.</p></li><li><p><strong>Real Productivity Attributes</strong>: GPU demand is directly tied to AI industry growth, providing real cash flow generation capacity.</p></li><li><p><strong>High Narrative Fit</strong>: Positioned at the intersection of AI and DeFi — two of the hottest narratives — GPUs naturally attract investor attention.</p></li></ul><p>As AI compute data centers remain a highly nascent industry, traditional banks often struggle to understand their operating models and are therefore unable to provide loan support. 
Only large enterprises such as <strong>CoreWeave</strong> and <strong>Crusoe</strong> can secure financing from major private credit institutions like <strong>Apollo</strong>, while small and mid-sized operators are largely excluded — highlighting the urgent need for financing channels that serve the mid-to-small enterprise segment.</p><p>It should be noted, however, that <strong>GPU RWAization does not eliminate credit risk</strong>. Enterprises with strong credit profiles can typically obtain cheaper financing from banks, and may have little need for on-chain financing. Tokenized financing often appeals more to small and medium-sized enterprises, which inherently face higher default risk. This creates a <strong>structural paradox in RWA</strong>: high-quality borrowers do not need tokenization, while higher-risk borrowers are more inclined to adopt it.</p><p>Nevertheless, compared to traditional equipment leasing, GPUs’ <strong>high demand, recoverability, and clear residual value</strong> make their risk-return profile more attractive. The significance of RWAization lies not in eliminating risk, but in making risk more <strong>transparent, priceable, and tradable</strong>. As the flagship of non-standard asset RWAs, GPUs embody both <strong>industrial value and experimental potential</strong> — though their success ultimately depends on <strong>off-chain due diligence and enforcement</strong>, rather than purely on-chain design.</p><h3 id="h-iii-frontier-exploration-of-robotics-asset-rwaization" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. Frontier Exploration of Robotics Asset RWAization</strong></h3><p>Beyond computing hardware, the robotics industry is also entering the RWAization landscape, with the market projected to exceed <strong>$185 billion by 2030</strong>, signaling immense potential. The rise of <strong>Industry 4.0</strong> is ushering in an era of intelligent automation and human–machine collaboration. 
In the coming years, robots will become ubiquitous—across factories, logistics, retail, and even homes. Structured, on-chain financing can enable the adoption and deployment of intelligent robots while creating an investable product that allows users to participate in this global shift. Feasible pathways include:</p><ol><li><p><strong>Robotics Hardware Financing</strong></p><ul><li><p>Provides capital for production and deployment.</p></li><li><p>Returns come from leasing, direct sales, or Robot-as-a-Service (RaaS) models.</p></li><li><p>Cash flows can be mapped on-chain through SPC structures with insurance coverage, reducing default and disposal risks.</p></li></ul></li><li><p><strong>Data Stream Financialization</strong></p><ul><li><p>Embodied AI requires large-scale real-world data.</p></li><li><p>Financing can support sensor deployment and distributed data collection networks.</p></li><li><p>Data usage rights or licensing revenues can be tokenized, giving investors exposure to the future value of data.</p></li></ul></li><li><p><strong>Production &amp; Supply Chain Financing</strong></p><ul><li><p>Robotics involves long value chains, including components, manufacturing capacity, and logistics.</p></li><li><p>Unlocks working capital through trade finance, while mapping future shipments and cash flows on-chain.</p></li></ul></li></ol><p>Compared with GPU assets, <strong>robotics assets are far more dependent on operations and real-world deployment</strong>. Cash flows are more vulnerable to fluctuations in utilization, maintenance costs, and regulatory constraints. Therefore, it is recommended to adopt a shorter-term structure with higher overcollateralization and reserve ratios to ensure stable returns and liquidity safety.</p><h3 id="h-iv-gaib-protocol-an-economic-layer-linking-off-chain-ai-assets-and-on-chain-defi" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. 
GAIB Protocol: An Economic Layer Linking Off-Chain AI Assets and On-Chain DeFi</strong></h3><p>The RWAization of AI assets is moving from concept to implementation. GPUs have emerged as the most practical on-chain asset class, while robotics financing represents a longer-term growth frontier. To give these assets true financial attributes, it is essential to build an <strong>economic layer</strong> that can bridge off-chain financing, generate yield-bearing instruments, and connect seamlessly with DeFi liquidity.</p><p>GAIB was born in this context. Rather than directly tokenizing AI hardware, it brings <strong>on-chain the financing contracts collateralized by enterprise-grade GPUs or robots</strong>, thereby building an <strong>economic bridge between off-chain cash flows and on-chain capital markets</strong>.</p><p>Off-chain, enterprise-grade GPU clusters or robotic assets purchased and used by cloud service providers and data centers serve as collateral; On-chain, <strong>AID</strong> is used for <strong>price stability and liquidity management</strong> (non-yield-bearing, fully backed by T-Bills), while <strong>sAID</strong> provides <strong>yield exposure and automatic compounding</strong> (underpinned by a financing portfolio plus T-Bills).</p><p><br></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4c51665a2753f5c1ba3785c31c74a925eb6fdbd0c60f643ebd21d0538df2ee7f.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAADIElEQVR4nG2UP2wcRRTGt3Pp6go316SwKc7FNdu48QXpmq2u2Wqg2IYtcgXWSOEkkhWgQUJbsQYNNFM4E/6sLLOO4ilAqxRbMUJokcI4CE0CaHCxCIkBoW3ih7QvOQcpv2pm33vfp3nzZgMYsNZOJhNKKQAURRFFUZIkxhgA0FqnaUoIkVJiclEUs9mMUqq1ds61bbtYLOI45pz3fV8UxWQyWa1WXdcBQIA1GCjLEgA459vbO3EcW2sBoG3b+Xy+s/OKlNIPoEQcx1rrruuMMYSQMAyFEN77NE2DIIiiyDn3zKDvewBwzqEiADDG1msAuHPn6PXXiDE/fnx4+Okn/Mnjx69ev845Xyc455bLJQB0XUcp3djYIIRcGSBmANec86ZplFKMMWPMTw+/v3/y5cWvP//15x///vN3d/Hb/dPjhz98d37+qCgKTGaMoYHWmjGmlHrWImOMlLLve2stY6wsy67rqqoihIwHwjDM3joAgKeXl98+uPfNiXh6eQkANw9u7O7ujgZms1lVVd57O6C1ttYKIZxzQV3XcRw3TVOWJbZyPp9TSuVAWZaMsTdvvIEG3cUvvz8xaHDr5gEd2NraGo1GQRCMx+NkQAjRti0hpGmaoO/7ruu890qpuq6TJEnTlFLKBqSUp6enZ/dOzr4qj784eu+dtz94/93jz48efH320eGHt27f5pyjehAE29s7VVXleS6EwMns+/7qDpbL5WKx2NzcxILRaHTt2rW9vb00TaWUSqmiKKbT6f7+flEUVVWVZSmEOD9/JITIBjjn1trlcpnn+Xp2rgyiKGqaZr3FDK11VVWf3b2rlKKU4rxnWVbX9fp6UWitmKZplmUvMZjNZkopAPDeY8x7zxibTqdxHK9WK0IIdi9JkizLwjAcj8eMMczv+957DwAYfbkBPrQX8d4bY7CbbdtKKTnnWmvvffecF0+MJ8CRvTLA4iiKsizDcZJS1nXdDlhrnXP4YrFjxhiUds9p21YNaK3jOA7DUGv9PwMpZZ7nOANZlqVpituyLKuqqutaD6AuGuMXrTWq5wPY0iAI8LcGAP8B9uhf8BQnINUAAAAASUVORK5CYII=" nextheight="595" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><br></p><h4 id="h-gaibs-off-chain-financing-model" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GAIB’s Off-Chain Financing Model</strong></h4><p>GAIB partners with global cloud providers and data centers, using GPU clusters as collateral to design <strong>three types of financing agreements</strong>:</p><ul><li><p><strong>Debt Model</strong>: Fixed interest payments (annualized ~10–20%).</p></li><li><p><strong>Equity Model</strong>: Revenue-sharing from GPU &amp; Robotics income (annualized ~60–80%+).</p></li><li><p><strong>Hybrid Model</strong>: Combination of fixed interest and 
revenue-sharing.</p></li></ul><p>Risk management relies on <strong>over-collateralization of physical GPUs</strong> and <strong>bankruptcy-isolated legal structures</strong>, ensuring that in case of default, assets can be liquidated or reassigned to partnered data centers to continue generating cash flow. With enterprise-grade GPUs featuring short payback cycles, financing tenors are significantly shorter than traditional debt products, typically <strong>3–36 months</strong>.</p><p>To enhance security, GAIB works with <strong>third-party underwriters, auditors, and custodians</strong> to enforce strict due diligence and post-loan management. In addition, <strong>Treasury reserves</strong> serve as supplementary liquidity protection.</p><h4 id="h-on-chain-mechanisms" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>On-Chain Mechanisms</strong></h4><ul><li><p><strong>Minting &amp; Redemption</strong>: Qualified users (Whitelist + KYC) can mint AID with stablecoins or redeem AID back into stablecoins via smart contracts. In addition, non-KYC users can also obtain AID through secondary market trading.</p></li><li><p><strong>Staking &amp; Yield</strong>: Users can stake AID to obtain sAID, which automatically accrues yield and appreciates over time.</p></li><li><p><strong>Liquidity Pools</strong>: GAIB will deploy AID liquidity pools on mainstream AMMs, enabling users to swap between AID and stablecoins.</p></li></ul><h4 id="h-defi-use-cases" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>DeFi Use Cases</strong></h4><ul><li><p><strong>Lending</strong>: AID can be integrated into lending protocols to improve capital efficiency.</p></li><li><p><strong>Yield Trading</strong>: sAID can be split into PT/YT (Principal / Yield Tokens), supporting diverse risk-return strategies.</p></li><li><p><strong>Derivatives</strong>: 
AID and sAID can serve as yield-bearing primitives for derivatives such as options and futures.</p></li><li><p><strong>Custom Strategies</strong>: Vaults and yield optimizers can incorporate AID/sAID, allowing for personalized portfolio allocation.</p></li></ul><p><strong>In essence, GAIB’s core logic</strong> is to convert <strong>off-chain real cash flows</strong> — backed by GPUs, Robotic Assets, and Treasuries — into <strong>on-chain composable assets</strong>. Through the design of <strong>AID/sAID</strong> and integration with DeFi protocols, GAIB enables the creation of markets for yield, liquidity, and derivatives. This dual foundation of <strong>real-world collateral + on-chain financial innovation</strong> builds a scalable bridge between the <strong>AI economy</strong> and <strong>crypto finance</strong>.</p><h3 id="h-v-off-chain-gpu-asset-tokenization-standards-and-risk-management-mechanisms" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. Off-Chain: GPU Asset Tokenization Standards and Risk Management Mechanisms</strong></h3><p>GAIB uses an <strong>SPC (Segregated Portfolio Company) </strong>structure to convert off-chain GPU financing into on-chain yield certificates. Investors deposit stablecoins to mint <strong>AI Synthetic Dollars (AID)</strong>, which can be staked for sAID to earn returns from GAIB’s GPU and robotics financing. As repayments flow into the protocol, sAID appreciates in value, and holders can burn it to redeem principal and yield — creating a one-to-one link between on-chain assets and real cash flows.</p><h4 id="h-tokenization-standards-and-operational-workflow" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Tokenization Standards and Operational Workflow</strong></h4><p>GAIB requires assets to be backed by robust <strong>collateral and guarantee mechanisms</strong>. 
Financing agreements must include <strong>monthly monitoring, delinquency thresholds, over-collateralization compliance</strong>, and require underwriters to have at least <strong>two years of lending experience with full data disclosure</strong>.</p><p><strong>Process flow:</strong> Investor deposits stablecoins → Smart contract mints AID (non-yield-bearing, backed by T-Bills) → Holder stakes and receives sAID (yield-bearing) → the staked funds are used for <strong>GPU/robotics financing agreements</strong> → <strong>SPC repayments</strong> flow back into <strong>GAIB</strong> → the value of sAID appreciates over time → investors burn sAID to redeem their principal and yield.</p><h4 id="h-risk-management-mechanisms" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Risk Management Mechanisms</strong></h4><ul><li><p><strong>Over-Collateralization</strong> — Financing pools maintain ~30% over-collateralization.</p></li><li><p><strong>Cash Reserves</strong> — ~5–7% of funds are allocated to independent reserve accounts for interest payments and default buffering.</p></li><li><p><strong>Credit Insurance</strong> — Cooperation with regulated insurers to partially transfer GPU provider default risk.</p></li><li><p><strong>Default Handling</strong> — In case of default, GAIB and underwriters may liquidate GPUs, transfer them to alternative operators, or place them under custodial management to continue generating cash flows. SPC’s bankruptcy isolation ensures each asset pool remains independent and unaffected by others.</p></li></ul><p>In addition, the <strong>GAIB Credit Committee</strong> is responsible for setting tokenization standards, credit evaluation frameworks, and underwriter admission criteria. 
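</p><p>As an illustration, quantitative covenants of the kind cited in this section (e.g. D/E &lt; 0.65, current ratio &gt; 1.2, DSCR &gt; 1.35x, LTV &lt; 75%) could be screened mechanically. The function below is our own didactic sketch under those illustrative thresholds, not GAIB’s actual credit model:</p>

```python
# Didactic covenant screen using the illustrative borrower thresholds
# cited in the text (D/E < 0.65, CR > 1.2, DSCR > 1.35x, LTV < 75%).
# This is a sketch only, not GAIB's actual credit model.

COVENANTS = {
    "debt_to_equity": lambda v: v < 0.65,
    "current_ratio":  lambda v: v > 1.2,
    "dscr":           lambda v: v > 1.35,
    "ltv":            lambda v: v < 0.75,
}

def screen_borrower(metrics):
    """Return (passes, list of breached or missing covenant names)."""
    issues = [name for name, check in COVENANTS.items()
              if name not in metrics or not check(metrics[name])]
    return (not issues, issues)

ok, issues = screen_borrower(
    {"debt_to_equity": 0.50, "current_ratio": 1.4, "dscr": 1.6, "ltv": 0.70})
# ok is True here; a breach of any threshold would be listed in `issues`
```

<p>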
Using a <strong>structured risk analysis framework</strong> — covering borrower fundamentals, external environment, transaction structure, and recovery rates — it enforces due diligence and post-loan monitoring to ensure <strong>security, transparency, and sustainability</strong> of transactions.</p><p><strong>Structured Risk Evaluation Framework (Illustrative reference only)</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Core Metrics / Methods</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Evaluation Focus</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p><strong>Borrower Fundamentals</strong></p></td><td colspan="1" rowspan="1"><p>Financial Stability</p></td><td colspan="1" rowspan="1"><p>D/E &lt; 0.65; CR &gt; 1.2; DSCR &gt; 1.35x; LTV &lt; 75%</p></td><td colspan="1" rowspan="1"><p>Debt servicing capacity &amp; capital structure</p></td></tr><tr><td colspan="1" rowspan="1"><p>Credit Record</p></td><td colspan="1" rowspan="1"><p>Loan history, repayment timeliness</p></td><td colspan="1" rowspan="1"><p>Willingness &amp; credibility of repayment</p></td></tr><tr><td colspan="1" rowspan="1"><p>Cash Flow Capacity</p></td><td colspan="1" rowspan="1"><p>Free cash flow, revenue forecast</p></td><td colspan="1" rowspan="1"><p>Sustainability of debt service</p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>External Environment</strong></p></td><td colspan="1" rowspan="1"><p>Macro Risks</p></td><td colspan="1" rowspan="1"><p>Country/sovereign risk, regulatory policy changes</p></td><td colspan="1" rowspan="1"><p>Political, economic, and regulatory stability</p></td></tr><tr><td colspan="1" rowspan="1"><p>Market Conditions</p></td><td 
colspan="1" rowspan="1"><p>AI demand trends, GPU supply/demand &amp; price volatility</p></td><td colspan="1" rowspan="1"><p>Industry growth &amp; cyclical risks</p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>Transaction Structure</strong></p></td><td colspan="1" rowspan="1"><p>Credit Enhancement</p></td><td colspan="1" rowspan="1"><p>Over-collateral (~33%), cash reserve (~6.6%), credit insurance</p></td><td colspan="1" rowspan="1"><p>Mitigation of default risk, principal protection</p></td></tr><tr><td colspan="1" rowspan="1"><p>Cash Flow Design</p></td><td colspan="1" rowspan="1"><p>Payment priority, delinquency triggers</p></td><td colspan="1" rowspan="1"><p>Ensuring stable &amp; predictable cash flows</p></td></tr><tr><td colspan="1" rowspan="4"><p><strong>Risk Mitigation &amp; Recovery</strong></p></td><td colspan="1" rowspan="1"><p>Operations &amp; Team</p></td><td colspan="1" rowspan="1"><p>≥10 years mgmt. experience; PUE &lt; 1.5; COGS/Revenue &lt; 25%</p></td><td colspan="1" rowspan="1"><p>Execution capability &amp; operational resilience</p></td></tr><tr><td colspan="1" rowspan="1"><p>Recovery Rate Analysis</p></td><td colspan="1" rowspan="1"><p>GPU residual value, secondary market liquidity, depreciation cycle</p></td><td colspan="1" rowspan="1"><p>Asset realization capacity post-default</p></td></tr><tr><td colspan="1" rowspan="1"><p>Stress Testing</p></td><td colspan="1" rowspan="1"><p>GPU price decline, delayed repayments, default scenarios</p></td><td colspan="1" rowspan="1"><p>Resilience under adverse conditions</p></td></tr><tr><td colspan="1" rowspan="1"><p>Continuous Monitoring</p></td><td colspan="1" rowspan="1"><p>FCF &gt; 1.0; Gross margin &gt; 80%; collateral valuation</p></td><td colspan="1" rowspan="1"><p>Dynamic risk alerts &amp; adjustments</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Internal Rating</strong></p></td><td colspan="1" rowspan="1"><p>Composite Scoring</p></td><td colspan="1" rowspan="1"><p>Country / 
Industry / Company / Management / Financials / Structure</p></td><td colspan="1" rowspan="1"><p>Internal credit decision &amp; admission thresholds</p></td></tr></tbody></table><p><br></p><h3 id="h-vi-on-chain-aid-synthetic-dollar-said-yield-mechanism-and-the-early-deposit-program" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. On-Chain: AID Synthetic Dollar , sAID Yield Mechanism, and the Early Deposit Program</strong></h3><h4 id="h-gaib-dual-token-model-aid-synthetic-stablecoin-and-said-yield-bearing-certificate" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GAIB Dual-Token Model: AID Synthetic Stablecoin and sAID Yield-Bearing Certificate</strong></h4><p>GAIB introduces <strong>AID (AI Synthetic Dollar)</strong> — a synthetic asset backed by U.S. Treasury reserves. Its supply is dynamically linked to protocol capital:</p><ul><li><p>AID is <strong>minted</strong> when funds flow into the protocol.</p></li><li><p>AID is <strong>burned</strong> when profits are distributed or redeemed.</p></li></ul><p>This ensures that AID’s scale always reflects the underlying asset value. AID itself only serves as a <strong>stable unit of account and medium of exchange</strong>, without directly generating yield.</p><p>To capture yield, users stake AID to receive <strong>sAID</strong>. As a <strong>yield-bearing, transferable certificate</strong>, sAID appreciates over time in line with protocol revenues (GPU/robotics financing repayments, U.S. Treasury interest, etc.). Returns are reflected through the <strong>exchange ratio between sAID and AID</strong>. Holders automatically accumulate yield without any additional actions. 
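</p><p>The exchange-ratio mechanism can be sketched as a minimal share-accounting model. This is an illustrative sketch only; the class and method names below (<code>AidVault</code>, <code>stake</code>, <code>accrue</code>, <code>redeem</code>) are ours, not GAIB’s actual contracts:</p>

```python
# Minimal sketch of sAID-style yield accrual via an exchange ratio.
# Illustrative only; names and numbers are not GAIB's actual contracts.

class AidVault:
    def __init__(self):
        self.total_aid = 0.0    # AID backing the vault (grows with repayments)
        self.total_said = 0.0   # sAID shares outstanding

    def rate(self):
        """AID redeemable per 1 sAID; starts at 1.0."""
        return self.total_aid / self.total_said if self.total_said else 1.0

    def stake(self, aid):
        """Stake AID and mint sAID at the current exchange ratio."""
        shares = aid / self.rate()
        self.total_aid += aid
        self.total_said += shares
        return shares

    def accrue(self, repayment):
        """Financing repayments raise total assets, so the ratio rises
        and every sAID holder accrues yield with no further action."""
        self.total_aid += repayment

    def redeem(self, shares):
        """Burn sAID for principal plus accrued yield."""
        aid = shares * self.rate()
        self.total_aid -= aid
        self.total_said -= shares
        return aid

vault = AidVault()
shares = vault.stake(1000.0)   # 1,000 AID -> 1,000 sAID at ratio 1.0
vault.accrue(150.0)            # repayments flow back into the pool
payout = vault.redeem(shares)  # ~1,150 AID (principal + yield)
```

<p>Note that yield is expressed purely through the sAID/AID exchange ratio, so sAID remains transferable and composable while it accrues.</p><p>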
At redemption, users can withdraw their initial principal and accrued rewards after a short cooldown period.</p><ul><li><p><strong>AID</strong> provides <strong>stability and composability</strong>, making it suitable for trading, lending, and liquidity provision.</p></li><li><p><strong>sAID</strong> carries the <strong>yield property</strong>, both appreciating in value directly and supporting further composability in DeFi (e.g., splitting into PT/YT for risk-return customization).</p></li></ul><p>In summary, <strong>AID + sAID form GAIB’s dual-token economic layer</strong>: <strong>AID</strong> ensures stable circulation and <strong>sAID</strong> captures real yield tied to AI infrastructure. This design preserves the usability of a synthetic asset while giving users a yield gateway linked to the <strong>AI infrastructure economy</strong>.<br></p><h4 id="h-gaib-aid-said-vs-ethena-usde-susde-vs-lido-steth" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GAIB AID / sAID vs. Ethena USDe / sUSDe vs. Lido stETH</strong></h4><p>The relationship between AID and sAID is comparable to <strong>Ethena’s USDe / sUSDe</strong> and <strong>Lido’s ETH / stETH</strong>:</p><ul><li><p>The base asset (USDe, AID, ETH) itself is non-yield-bearing.</p></li><li><p>Only after conversion to the yield-bearing version (sUSDe, sAID, stETH) does it automatically accrue yield.</p></li></ul><p>The key difference lies in the <strong>yield source</strong>: <strong>sAID</strong> derives yield from <strong>GPU financing agreements + U.S. Treasuries</strong>; <strong>sUSDe</strong> yield comes from <strong>derivatives hedging/arbitrage</strong>; 
and <strong>stETH</strong> yield comes from <strong>ETH staking</strong>.</p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Project</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>AID (GAIB)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>sAID (GAIB)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>USDe (Ethena)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>sUSDe (Ethena)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>stETH (Lido)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Asset Type</strong></p></td><td colspan="1" rowspan="1"><p>AI synthetic dollar</p></td><td colspan="1" rowspan="1"><p>Yield-bearing certificate of AID</p></td><td colspan="1" rowspan="1"><p>Synthetic dollar</p></td><td colspan="1" rowspan="1"><p>Yield-bearing certificate of USDe</p></td><td colspan="1" rowspan="1"><p>ETH staking liquidity token</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Collateral / Yield Source</strong></p></td><td colspan="1" rowspan="1"><p><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Non-yield-bearing</p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Auto-accruing (GPU financing + Treasuries)</p></td><td colspan="1" rowspan="1"><p><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> Non-yield-bearing</p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Auto-accruing (hedging arbitrage)</p></td><td colspan="1" rowspan="1"><p><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> Auto-accruing (ETH staking)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Yield 
Form</strong></p></td><td colspan="1" rowspan="1"><p>Must be staked to sAID</p></td><td colspan="1" rowspan="1"><p>Appreciates over time (10–20% debt-mode GPU, 60–80%+ revenue-share GPU)</p></td><td colspan="1" rowspan="1"><p>Must be staked to sUSDe</p></td><td colspan="1" rowspan="1"><p>Appreciates over time (~8–15%)</p></td><td colspan="1" rowspan="1"><p>Appreciates over time (~3–4%)</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Long-Term Vision</strong></p></td><td colspan="1" rowspan="1"><p>AI-Dollar, base currency of AI economy</p></td><td colspan="1" rowspan="1"><p>Yield benchmark asset for AI</p></td><td colspan="1" rowspan="1"><p>Decentralized “borderless dollar”</p></td><td colspan="1" rowspan="1"><p>Yield benchmark asset in DeFi</p></td><td colspan="1" rowspan="1"><p>Global liquidity standard for ETH staking</p></td></tr></tbody></table><p><br></p><h4 id="h-aid-alpha-gaibs-liquidity-bootstrapping-and-incentive-program-pre-mainnet" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>AID Alpha: GAIB’s Liquidity Bootstrapping and Incentive Program (Pre-Mainnet)</strong></h4><p>Launched on <strong>May 12, 2025</strong>, <strong>AID Alpha</strong> serves as GAIB’s <strong>early deposit program</strong> ahead of the AID mainnet, designed to bootstrap liquidity while rewarding early participants through extra incentives and gamified mechanics. <strong>Initial deposits</strong> are allocated to <strong>U.S. Treasuries</strong> for safety, then gradually shifted into <strong>GPU financing transactions</strong>, creating a transition from <strong>low-risk → high-yield</strong>.</p><p>On the technical side, AID Alpha contracts follow the <strong>ERC-4626 standard</strong>, issuing AIDα receipt tokens (e.g., AIDaUSDC, AIDaUSDT) to represent deposits and ensure cross-chain composability.</p><p>During the <strong>Final Spice</strong> stage, GAIB expanded deposit options to multiple stablecoins (USDC, USDT, USR, CUSDO, USD1). 
Each deposit generates a corresponding <strong>AIDα token</strong>, which serves as proof of deposit, automatically tracks yield and counts toward the <strong>Spice points system</strong>, which enhances rewards and governance allocation.</p><p><strong>Current AIDα Pools (TVL capped at $80M):</strong></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pool</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>TVL (Approx.)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supported Asset / Source</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supported Chains</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Incentives</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p>AIDaUSDC</p></td><td colspan="1" rowspan="1"><p>~$43.1M</p></td><td colspan="1" rowspan="1"><p>USDC (Circle)</p></td><td colspan="1" rowspan="1"><p>Ethereum, Arbitrum, Base, Sei, Story</p></td><td colspan="1" rowspan="1"><p>10x Spice</p></td></tr><tr><td colspan="1" rowspan="1"><p>AIDaUSDT</p></td><td colspan="1" rowspan="1"><p>~$21.1M</p></td><td colspan="1" rowspan="1"><p>USDT (Tether)</p></td><td colspan="1" rowspan="1"><p>Ethereum, Arbitrum, Sei, BSC</p></td><td colspan="1" rowspan="1"><p>10x Spice</p></td></tr><tr><td colspan="1" rowspan="1"><p>AIDaUSR</p></td><td colspan="1" rowspan="1"><p>~$0.09M</p></td><td colspan="1" rowspan="1"><p>USR (Resolv, delta-neutral stablecoin)</p></td><td colspan="1" rowspan="1"><p>Ethereum</p></td><td colspan="1" rowspan="1"><p>10x Spice + 30x Resolv</p></td></tr><tr><td colspan="1" rowspan="1"><p>AIDaCUSDO</p></td><td colspan="1" rowspan="1"><p>~$0.07M</p></td><td colspan="1" rowspan="1"><p>CUSDO (OpenEden yield-bearing wrapper)</p></td><td colspan="1" rowspan="1"><p>Ethereum</p></td><td colspan="1" rowspan="1"><p>10x 
Spice + 3x OpenEden</p></td></tr><tr><td colspan="1" rowspan="1"><p>AIDaUSD1</p></td><td colspan="1" rowspan="1"><p>~$1.69M</p></td><td colspan="1" rowspan="1"><p>USD1 (WLFI, Treasury-backed institutional stablecoin)</p></td><td colspan="1" rowspan="1"><p>BNB Chain</p></td><td colspan="1" rowspan="1"><p>10x Spice + WLFI Points</p></td></tr></tbody></table><p><br>All AIDα deposits have a lock-up period of up to two months. After the campaign ends, users can choose to either convert their AIDα into mainnet AID and stake it as sAID to earn ongoing yields, or redeem their original assets while retaining the accumulated <strong>Spice</strong> points.</p><p><strong>Spice</strong> is GAIB’s incentive point system launched during the AID Alpha phase, designed to measure early participation and allocate future governance rights. The rule is <strong>“1 USD = 1 Spice per day”</strong>, with additional multipliers from various channels (e.g., 10× for deposits, 20× for Pendle YT, 30× for Resolv USR), up to a maximum of <strong>30×</strong>, creating a dual incentive model of <strong>“yield + points.”</strong></p><p>In addition, a referral mechanism further amplifies rewards (Level 1: 20%, Level 2: 10%). After the <strong>Final Spice</strong> event concludes, all points will be locked and used for governance and reward distribution upon mainnet launch.</p><p><strong>Fremen Essence NFT:</strong> GAIB also issued <strong>3,000 limited Fremen Essence NFTs</strong> as early supporter badges: the top 200 depositors automatically qualify, while the remaining NFTs are distributed via whitelist with a minimum $1,500 deposit requirement. Minting is free (gas only). NFT holders gain <strong>exclusive mainnet rewards, priority product testing rights, and core community status</strong>.
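</p><p>As a concrete illustration, the Spice accrual and referral rules above (1 USD = 1 Spice per day, channel multipliers capped at 30×, and 20%/10% referral bonuses) can be sketched in a few lines. This is a minimal sketch of the stated rules only; the function and variable names are our own, not GAIB’s implementation, and lock-up and rounding details are ignored:</p>

```python
# Minimal, unofficial sketch of the Spice point rules described above.
SPICE_PER_USD_PER_DAY = 1.0   # "1 USD = 1 Spice per day"
MULTIPLIER_CAP = 30.0         # channel multipliers cap out at 30x

def spice_earned(usd: float, days: int, multiplier: float = 10.0) -> float:
    """Base Spice for a position, applying the capped channel multiplier."""
    return usd * SPICE_PER_USD_PER_DAY * days * min(multiplier, MULTIPLIER_CAP)

def referral_bonus(referee_spice: float, level: int) -> float:
    """Referrers earn 20% (level 1) or 10% (level 2) of a referee's Spice."""
    percent = {1: 20, 2: 10}.get(level, 0)
    return referee_spice * percent / 100

# Example: $1,000 in a 10x deposit pool held for 30 days.
points = spice_earned(1_000, 30, multiplier=10)  # 1,000 * 1 * 30 * 10 = 300,000 Spice
bonus = referral_bonus(points, level=1)          # 60,000 Spice to the direct referrer
```

<p>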
Currently, the NFTs are trading at around <strong>0.1 ETH</strong> on secondary markets, with a total trading volume of <strong>98 ETH</strong>.<br></p><h3 id="h-vii-gaib-transparency-on-chain-funds-and-off-chain-assets" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. GAIB Transparency: On-Chain Funds and Off-Chain Assets</strong></h3><p>GAIB maintains a <strong>high standard of transparency</strong> across both assets and protocols.</p><ul><li><p>On-chain, users can track asset categories (USDC, USDT, USR, CUSDO, USD1), cross-chain distribution (Ethereum, Sei, Arbitrum, Base, etc.), TVL trends, and detailed breakdowns in real time via the <strong>official website, DefiLlama, and Dune dashboards</strong>.</p></li><li><p>Off-chain, the official site discloses portfolio allocation ratios, active deal amounts, expected returns, and selected pipeline projects.<br></p></li><li><p>GAIB Official Transparency Portal:<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aid.gaib.ai/transparency"> <u>https://aid.gaib.ai/transparency</u></a></p></li><li><p>DefiLlama:<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://defillama.com/protocol/tvl/gaib"> <u>https://defillama.com/protocol/tvl/gaib</u></a></p></li><li><p>Dune:<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dune.com/gaibofficial"> <u>https://dune.com/gaibofficial</u></a><br></p></li></ul><p><strong>Asset Allocation Snapshot</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Data / Composition</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Share</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Expected Yield</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Total AUM</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">$175.29M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">100%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">–</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>T-Bills</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">$124.9M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">71%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">4%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>GPU + Robotics</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">$50.4M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">29%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">15% – 30%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Chain Distribution</strong></p></td><td colspan="3" rowspan="1"><p style="text-align: center">Ethereum 83.2%, Sei 13.0%, Base + Arbitrum &lt;4%</p><p style="text-align: center">Supported: Story Protocol, BNB Chain; planned: Plume Network</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Stablecoin Mix</strong></p></td><td colspan="3" rowspan="1"><p style="text-align: center">USDC (52.4%), USDT (33.4%), USDT0 (14.0%), USD1 (~2%), USR (0.1%), CUSDO (0.09%)</p></td></tr></tbody></table><br><p>As of <strong>October 7, 2025</strong>, GAIB manages a total of <strong>$175.29 million</strong> in assets.
This <strong>“dual-layer allocation”</strong> balances stability with excess returns from AI infrastructure financing.</p><ul><li><p><strong>Reserves</strong> account for <strong>71% ($124.9M)</strong>, mainly U.S. Treasuries, around <strong>4% APY</strong></p></li><li><p><strong>Deployed assets</strong> account for <strong>29% ($50.4M)</strong>, allocated to off-chain GPU and robotics financing with an average <strong>15% APY</strong>.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3541f271e2ce86ed816981a2c747f326d021f04971f394fd5c80bc7460dce3cb.png" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAVCAIAAACor3u9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAFb0lEQVR4nI2Uf2gTZxjHX4T9Nxj414agTEH3i+2/sT8mSNFZ66+O/cC5yqoyRaRMhnab4nSWKLVdrRNRW22qVqpZ+oPY302vaZM0sTHxcrkmMfcjl7vrJZdc3rukaZuohXF3JRRtYfDh4eHe957v87zv87xAICcZFIkFbGxgPBawxTAkhiECOSkJU/+fJIcV/XQiVAQmIoBwWewtF93tjb7uW662ekdrraO1NuLsTidCS0Oktd2KGIZiOJ0IviagpBaXRMrDoAjjH9WJolYgp6Ozs2mN5Oxscj4vz+flrDIN44QiRXWgSEp8iCVQmsAYCktw4Sxki0uySLIhD0NhUQLF7H2Ec4Dxjuj4e+8BKBIwTk4HPQnCJ/HhFBtMscGsIsznZVmkFSmmSDE5zXD+McRsHOgxIQOdAY9NSdJQZDQoOR5BjA0W872R/g5z099TNguHOaK+MRYdD/S3AS0FmvFaWdT24MS3lavANwBUrQVP2v+ZnU3rGqoA7kRunLc11Thb6/su/zrY8EcO8nOZuCyScppFbpw3nz5gvXam/eS+Bye+j05aox61Al93k1qBLNL5V3MD9dU7APjzi/dM1T9VrQU7AfB2Ns/NQVmkFIlmCXS0z+R1DslxIsVOKQkqSqCu8UEokopEuxEL0vOQDXmoyWFyog9HuizGq76hDtRyRxXIQg7GiQoADCUbCguFwkIhNyNWrQW/f/bOfF6RRVoWyQQXxv1uhsKTApEUKJEnOCrwfOqpLsBQOEPhMBXjcTfxZBh39I6a7vhHOrydN4EkRHIz4nTQswuA4atn8y9zHD5RWCg0ln++F4DcjKgekUhCkaYJjCYwIuijI+jzqadk+JkaVKRkkcpBPgf5jMQmCNQ/0uUdMnuHzDjS7WytUyvIQF4WqQoALm3dqFcwNwe/BmAfAPkXGVmk1UYQiRnIKCkqK6kFZSVaRxLC+ipUkyBJV294rJN09oTHOoNWk7v96uIdzOcVS83xUvUO3r19uHQ/AFsBcLTU5V/NymqOhJQg2Wgoo0zDVEyGnAx5zXIZyGudttjQHD7BeBEWtUW9CONF/L1GVUA7BDo3Iw7Un9JDlwFgPn0wxQY5zK79T/Okf3K0h8OdKSaQIHx6W2v2qd7KOgkCZdFxDrOzqK3YpkRGYnWN+bySgbw+Chw+wT6zcfiE+qfMMV6rpebYQP2pgfpTPZdOPKr+0XrtzGNDla
OlTjvhxYnhMIcm4NA19C4is5DLQjYLWe00KG1KY+oXhc9CThYpSQjLIpmOqzthPKJvg3F1gLTMqCIcPhFVO3WQnOgjHL2uB1dAnPYJEbdAeoSIO8lhMEXCFJlORtLJiO7DImnVymm6aF9DTtPUZK/HfN3X3eTtvOlqaxhvNoCQzTzYcNLeYhhsOIlamvmQU39cV4INjK0EgyLu9iuO1kv2FsNw42+2W+fULjIe3bEHqM9DOQDbV+CrJc62lSkHYC8AuzX7nRbzUfV+MNZ8wVCyvrZ008WtGy5sXnNh85qaLesMJe+/Sc2WdbWlm24e2Hyj4stlMR7dUb/rEy3aB3VlH9eWbrJeOw3uHd+9B4Ajq8Hht1WOrFaViylvX8JWAA6+BQoLhfzLXP7V3JvYXTaLyTjYeb/v37vG63X3mxuDiAnYWwyGkvV1ZR9dLvuwtnRjw55P234pbz228w123fl5e/dfRyQhnOKDy8LTKOEawAbbseGH6LBq8cE2cPvwtm0A/ABAhfY2VK4Cxut1opJ6ubBMjvkX2aVjtZQc5B2jPZ2PjO7xflNbc0f7bVNbc5LHAdZ/t+NsZde5Q13nDnWcrXxsOCZEJiUhlOTxZYGJyEpIQlh9mjQnpfmSEPoPs5CsI2dcT2AAAAAASUVORK5CYII=" nextheight="679" nextwidth="1038" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>On-chain fund distribution</strong>: According to the latest <strong>Dune Analytics</strong> data, <strong>Ethereum holds 83.2%</strong> of TVL, <strong>Sei 13.0%</strong>, while <strong>Base and Arbitrum</strong> together make up less than 4%. 
By asset type, deposits are dominated by <strong>USDC (52.4%)</strong> and <strong>USDT (47.4%)</strong>, with smaller allocations to <strong>USD1 (~2%)</strong>, <strong>USR (0.1%)</strong>, and <strong>CUSDO (0.09%)</strong>.</p><p><strong>Off-chain asset deployment</strong>: GAIB’s active deals are aligned with its capital allocation, including:</p><ul><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a><strong> (Thailand)</strong>: $30M, 15% APY</p></li><li><p><strong>Two Robotics Financing deals</strong>: $15M combined, 15% APY</p></li><li><p><strong>US Neocloud Provider</strong>: $5.4M, 30% APY</p></li></ul><p>In addition, GAIB has established approximately <strong>$725M in pipeline reserves</strong>, with a broader pipeline outlook of <strong>over $2.5B within 1–2 years</strong>:</p><ul><li><p><strong>GMI Cloud and NVIDIA Cloud Partners</strong> across Asia ($200M and $300M), Europe ($60M), and the UAE ($80M).</p></li><li><p><strong>North America Neocloud Providers</strong> ($15M and $30M).</p></li><li><p><strong>Robotics asset providers</strong> ($20M).</p></li></ul><p>This pipeline lays a solid foundation for future expansion and scaling.</p><h3 id="h-viii-ecosystem-compute-robotics-and-defi" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII.
Ecosystem: Compute, Robotics, and DeFi</strong></h3><p>GAIB’s ecosystem consists of <strong>three pillars</strong> — <strong>GPU computing resources, robotics innovation enterprises, and DeFi protocol integrations</strong> — designed to form a closed-loop cycle of: <strong>Real Compute Assets → Financialization → DeFi Optimization</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d80e55fd687ca042f1ae5473eb0cdb7e6de2bc6487ac1332f3412206adaebee.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEaElEQVR4nG2TW2wUVRzGR4wPEqNRHohoTAgaLw3Cg6iVJRExDQjGKthLulrZBkyJBYs80LjQmPjQJ9G0NgYSUUCaLNc07ZKmbMGh3W22tuyty95nnZnO7szsnp3dmZ4zOzOdY2Y31hg9+R7OzPmf+b7f+Z8hmLTvv6Li90Tuvuu3HzY+92R7S1PTO9tsjQ2HHR+e+LJz3eOE49P3DzTv7O2xd3bs7T7UMjVxSeTuU/F7//spgknPrIqKk6sTcWlh/Oa5hpeesb35cvM+24svrH/iUaJ5n23TxnX21qatm576aOerts3PPr9+ze3x84APru79t3wEk5xlUwt0co6jAgIbYVMLqxLYCFSySplaLrNomauA7HKZrapiBWSVMlMS0+ViUirGmLifTs7VtbqXowIcFWCSPiJwZ3Tywo+fdX6y973dO2yN7a37oQJ0TYEKUCQxtTh7c+jbb44eatjc8PaOt7Y3vnHjyoiuQ6gAqEiApxe9k1e/O21v/9hma4xFg6ZZRbBcRRUoWwX8n4tExE9eu/yLZ9I9Td71TN6KhOazmcQ06WGppCIJTNx/13199NrI5MTYH/6Z0P05KhWjUrFp0qNIAshTieDM1cu/XnGNjI3emCY906SH/N2zMD8L5QKUgcg+IMSlzDJEHMt+f+bMwMDA8d7ezw8fDoeCKkKGrmajXqkEIISpZNLhOPjFkSN2e0dfX59UAoaul4u5RNA3Nj52/YqLYbJQkaRCTgKCVMjXES0DwNMYY47jDhzYf+LEV83NH7y7a1dBFDHGhq5lo14EKxjjTCa9Z8/uzs5Om217V1cXro0Sz+bSYV03MMZOp7Onp4ckSbfbnclk6gXiUsIy0DQ0cvlSQRRKoFguARXBs2d/CocCdYIqqhi6pqLlYDBQAkUIoYqWDV3F2CjxTC4d1lQVY3PW56UyaYxNRZZVBDUVaaoqsjGiXMwJgvDY2rVOp/PY0WOnT50aGhra8PSG4eFhQ9ctAkXGGCuyfLy3t72tfeuWLSRJYoxXjJU6wYppYozvTE11d3f39/c7nV+HQsF6QY0gT9XiYD7HFcU8VORaQA1jU0MwG/VBRdIQNHR1MRSYcLtvXHUthgKmqWkq+pugauh6Psc9iC6yDJ1MJCqSpKmqplYtAsDT9QgkSQ4PD7tcLr/fLwhCrQd6NurVENR1HWPc0dFB1Mbg4KAV0DTLxXytB9bqLbe77+TJttaWttaWc+fOYoxVhGoGeco6LwRzLM0yNISKVAIaghqCVSRno16oWI8YG6HAwtTkhGrRaEipGLoKeDqXDhuWgRkOBc+f/7nL4XA4Dl68eO
EfgxLP1K/Ba9teX/PwI7WID/n9cxhjXdeosM9y0jSMsd1uJwjilYaGaDS6SsClQ1UkQxlUkWyaGsZG/WzrP5plwKUDJZ4BPJ16EIhFFhKR+URkXlzKAJ4GeYqOeQtLSasgTymSwDEpOhWpr1oNyEaYuA/kMyCfKeRSIJ8W2HhNifpLNjX3F6gSFp70r6n4AAAAAElFTkSuQmCC" nextheight="645" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-gpu-compute-ecosystem-on-chain-tokenization-of-compute-assets" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>GPU Compute Ecosystem: On-Chain Tokenization of Compute Assets</strong></h4><p>Within the on-chain financing ecosystem for AI infrastructure, GAIB partners with a diverse set of compute providers, spanning both <strong>sovereign/enterprise-level clouds</strong> (GMI, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI">Siam.AI</a>) and <strong>decentralized networks</strong> (Aethir, <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://PaleBlueDot.AI">PaleBlueDot.AI</a>). This ensures both operational stability and an expanded RWA narrative.</p><ul><li><p><strong>GMI Cloud</strong>: One of NVIDIA’s six Global Reference Platform Partners, operating seven data centers across five countries, with ~$95M already financed. Known for low-latency, AI-native environments. With GAIB’s financing model, GMI’s GPU expansion gains enhanced cross-regional flexibility.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a>: Thailand’s first sovereign-level NVIDIA Cloud Partner. Achieves up to <strong>35x performance improvement</strong> and <strong>80% cost reduction</strong> in AI/ML and rendering workloads. 
Completed a <strong>$30M GPU tokenization deal with GAIB</strong>, marking GAIB’s first GPU RWA case and securing first-mover advantage in Southeast Asia.</p></li><li><p><strong>Aethir</strong>: A leading decentralized GPUaaS network with <strong>40,000+ GPUs (incl. 3,000+ H100s)</strong>. In early 2025, GAIB and Aethir jointly conducted the first GPU tokenization pilot on BNB Chain — raising <strong>$100K in 10 minutes</strong>. Future integrations aim to connect AID/sAID with Aethir staking, creating dual-yield opportunities.</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://PaleBlueDot.AI"><strong>PaleBlueDot.AI</strong></a>: An emerging decentralized GPU cloud provider, adding further strength to GAIB’s DePIN narrative.</p></li></ul><h4 id="h-robotics-ecosystem-on-chain-financing-of-embodied-intelligence" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Robotics Ecosystem: On-Chain Financing of Embodied Intelligence</strong></h4><p>GAIB has formally entered the <strong>Embodied AI (robotics)</strong> sector, extending the GPU tokenization model into robotics. The aim is to create a <strong>dual-engine ecosystem of Compute + Robotics</strong>, using SPV collateral structures and cash flow distribution. By packaging robotics and GPU returns into <strong>AID/sAID</strong>, GAIB enables the financialization of both hardware and operations.</p><p>To date, GAIB has allocated <strong>$15M to robotics financing deals</strong> targeting ~15% APY, together with partners including <strong>OpenMind, PrismaX, CAMP, Kite, and SiamAI Robotics</strong>, spanning hardware, data streams, and supply chain innovations.</p><ul><li><p><strong>PrismaX</strong>: Branded as <strong>“Robots as Miners”</strong>, PrismaX connects operators, robots, and data buyers through a teleoperation platform.
It produces high-value motion and vision data priced at <strong>$30–50/hour</strong>, and has validated early commercialization with a <strong>$99-per-session paid model</strong>. GAIB provides financing to scale robot fleets, while data sales revenues are funneled back to investors via AID/sAID — creating a data-centric financialization pathway.</p></li><li><p><strong>OpenMind</strong>: With its <strong>FABRIC network</strong> and <strong>OM1 operating system</strong>, OpenMind offers identity verification, trusted data sharing, and multimodal integration — effectively acting as the <strong>“TCP/IP” of robotics</strong>. GAIB tokenizes task and data contracts to provide capital support. Together, the two achieve a complementary model of <strong>technical trustworthiness + financial assetization</strong>, enabling robotics assets to move from lab experiments to scalable, financeable, and verifiable growth.</p></li></ul><p>Overall, through <strong>PrismaX’s data networks, OpenMind’s control systems, and CAMP’s infrastructure deployment</strong>, GAIB is building a full-stack ecosystem covering robotics hardware, operations, and data value chains — accelerating both the industrialization and financialization of embodied intelligence.</p><h4 id="h-defi-ecosystem-protocol-integrations-and-yield-optimization" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>DeFi Ecosystem: Protocol Integrations and Yield Optimization</strong></h4><p>During the <strong>AID Alpha</strong> stage, GAIB deeply integrated <strong>AID/aAID assets</strong> into a broad range of DeFi protocols. 
By leveraging yield splitting, liquidity mining, collateralized lending, and yield boosting, GAIB created a <strong>cross-chain, multi-layered yield optimization system</strong>, unified under the <strong>Spice points incentive framework</strong>.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bf9063d707caecad9a90c79efe3f865dd01cf0d4bf6a1ad39f4ecef314a568e7.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGD0lEQVR4nE1UT2zb5hX/tkN2221AdhgwoIdeCmQYsA0YBizbYQXaoimGpsm6FME6dGiCuZg9BNHSNHGT1nES2a5jO7Esy7RNy4wpU4wlmiItkaFIf+JnMhQZSbRUyxQlWnJk13LroociwJBB0mXAw8MHvPe+3/u9f8Bz0XaOdytouyBUSnKllHbLypfZVbesFFV6K8tWSvKOZ3g1zdkUao5aLcOSHquWoVfTPBfVnIznos5D7Wjouaht7eiao4KaAx9+9PrDj14L/PkV/IOT1OW36Stnyb5TY6+9RF89R10+Pf/+70Lnfr148dXFC38i+06Fzv0m9O6vxt94mb5ydscz/g8gU69bq599wA/3wPAgIkeCZ365QY2CHc8opAlzDdugRlXibk7ADea+wQY3IiMVWyjAyBNmYoMa3VSWyhZfSIdzAl5UaZObKapUN82uuGXFq2klPVZUKWdT7DgTzqYEqmUYvfpuOnQ91n9+/u+/l2dvdtNcvPCqPHvz8Pnu0Ytv9o+cet3c8Yz9r7ePXnxz+Lz57YujvdaXNSfTPCy3vq8ffOe2vq/X6+b+kXP0368OvvMOn+8ePm/seAbwXORsipVSuivtlliJ7YKAnf9tDwBXfgb6fgjEwJW9VnG/VZSCH186Bj575Qe+H4MMcaf1bWX89ZcvHQNXfw76ANCj9xY//GMPAL6ftqOGTx5veDrwalqtplXdjd16tivFvGiWtf4TPzoNwDkATgOA976l5x4/KchT7598E4CzAJwCYP7f72TL6MOOw3kA3gZgsf9v137RjnoPgDMA/AOAUlEGrf08HZ3+61/eUCEdX5nRMoyBmPjKTIwOYsFBLHBrOjBALk1QywFZopoNM4tiUCDQenSvYXIsvvk09UR9RMyNPNVXM5Bel6ORxXF6ORAhxrM6axlroNkwM5Amlx6okE5LlGkkLIPL6gmOW4hEAhQVxPExKUXK0jJUoq0DO58TEGI0jf36sMhz4ZItW1YKnx+1rJSOmHU5ioX8U5OD09N+Q2MLVgq0Dm0sdPfTa/8cuNH7ef+/hoev6ogxjYSQJELBQSx4Zzbkh0oUKlEmjnHsfFok0yKZ5MOP6Gk6Ol2yxUoZpkVyd0dfl6M6YkhiYiZ4e27Gr0I6nxNAvaptu9pPzrwzuxIWYKJWM4p5UYUxnl0g8HskMbFEjAs8ISQJSSRdJ5NZX+HZBajQzhbkWNwuCK6TEZJLnougQqswFiEfEOExIjyuwlgHwN3Y3s5cvNy7yj8U03R5C+Ys3jK4DKTx+VF8fgSfG1EhLUvLskS1DgoqpHkWR2r8q2aO58J2TvBclIG056J1ieoCYJh/DhtqM7CSwKtteC5iY0F6ORBbCVYdlLeS+ZzAMnNY8M7U5MD98RvxlZkuwF7DbDZMqEQbntFsmB0AseogI
Ul0GWRgbIkYn8OGZkP+jELbORHs1p84WxCfH7s1cOn2bV9WZ4t5EalxpMbjK1h0eYok7ksiqcKYLFH1qmbqLFLjlrHmuSjJL9o5oV7VSvbjelWTpaihsVRkcmryFrn0wDQSHQYuciuo6mx0/pqwc6KdE7oTNocNTQdukUsTlsF1qkQ1PJ0Ijw1+fokkJqoOSvJhuyCUSwrH4g1PlztDyDKzH/suMnGsYAm57BrYcTXXyZw48RKG+VceBbc25YKVUmFMEkkhSTBxLJUkDLSa5MPdPSgX06a+Wi4p7RKxCwUr1fB0Z6t9PuVOD6BC8xwuiWRWT7T3YP/ZU0kkjx8/durNP/Rf65nDhrJ6wjQSPBeeDflnQ3cxbAgq9GPhIVToYl58RE8LSYKNzTpb66kkUbCEqoN0xNSrGlRoWYry7IKQJGSJkiUqbwntU+G5yMwqSX5RkpbKJcUyOM9FDU/XEaMjxtDYZsO0DE6WoocHdrNhlGxx/9nTg/1cKkmU7MfNhmnnxL1nVjf9DtdlSSTtnNBm0AaoaZMPIv/x9V6/3pPVEyVbZJm5+xMDPt+FT/t7fb4LXwx/wrE4x+JZnZWkCBEegwptaCwdnU5LlCRFwgv3EGI4Fue5MBa6e/NGHxbyS2LEMtb+B85Gs0Wt5KMuAAAAAElFTkSuQmCC" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Pendle</strong>: Users split AIDaUSDC/USDT into <strong>PT (Principal Tokens)</strong> and <strong>YT (Yield Tokens)</strong>. PTs deliver ~15% fixed yield; YTs capture future yield and carry a <strong>30x Spice bonus</strong>. LP providers also earn <strong>20x Spice</strong>.</p></li><li><p><strong>Equilibria &amp; Penpie</strong>: Pendle yield enhancers. Equilibria adds ~5% extra yield, while Penpie boosts up to 88% APR. Both carry <strong>20x Spice multipliers</strong>.</p></li><li><p><strong>Morpho</strong>: Enables PT-AIDa to be used as collateral for borrowing USDC, giving users liquidity while retaining positions, and extending GAIB into Ethereum’s major lending markets.</p></li><li><p><strong>Curve</strong>: AIDaUSDC/USDC liquidity pool provides trading fee income plus a <strong>20x Spice boost</strong>, ideal for conservative strategies.</p></li><li><p><strong>CIAN &amp; Takara (Sei chain)</strong>: Users collateralize enzoBTC with Takara to borrow stablecoins, which CIAN auto-deploys into GAIB strategies. 
This combines <strong>BTCfi with AI yield</strong>, with a <strong>5x Spice multiplier</strong>.</p></li><li><p><strong>Wand (Story Protocol)</strong>: On Story chain, Wand provides a Pendle-like PT/YT split for AIDa assets, with YTs earning <strong>20x Spice</strong>, further enhancing cross-chain composability of AI yield.</p></li></ul><p>In summary, GAIB’s DeFi integration strategy spans <strong>Ethereum, Arbitrum, Base, Sei, Story Protocol, BNB Chain, and Plume Network</strong>. Through Pendle and its ecosystem enhancers (Equilibria, Penpie), lending markets (Morpho), stablecoin DEXs (Curve), BTCfi vaults (CIAN + Takara), and native AI-narrative protocols (Wand), GAIB delivers <strong>comprehensive yield opportunities</strong> — from fixed income to leveraged yield, and from cross-chain liquidity to AI-native strategies.</p><h3 id="h-ix-team-background-and-project-financing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IX. Team Background and Project Financing</strong></h3><p>The GAIB team unites experts from AI, cloud computing, and DeFi, with backgrounds spanning L2IV, Huobi, Goldman Sachs, Ava Labs, and Binance Labs. Core members hail from top institutions such as Cornell, UPenn, NTU, and UCLA, bringing deep experience in finance, engineering, and blockchain infrastructure. 
Together, they form a strong foundation for bridging real-world AI assets with on-chain financial innovation.</p><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/23bda5cfab90b36eb1c0b5248ff3de3787197b480abf9e7ab6b782e4fda8a05d.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFAUlEQVR4nG1TXYgbVRS+hYKiPkjxQVp8qQ+2iPRBCwq1IAX1QQSFgojFF1Gh+Pcigog/VERQES11a5Va1rbQn91uNs02OzuTkJ3JJJPMJJmdSWazmWR2Mj+Zyfxk06STTGZyJZtaWvHjcDiXe+75+e45AEL44w+nPv7wU9fdsmxbUVTDMGRZHg6HEMIwDCGEQRDAbUyPDVk7/tHnyRQNIVRVtdGQFEWRZdl1O0EQBnchDEPw1TcnwTYOHT6qa1o0Fo9GY5FItFAocixXKLB0vsiWWM/zJtHHYwjhkRffAADs2vVkzwvKPH89GmMYJkfleL6sKJN8kiRPpCF1uzfB84eO7tt5/wt7HgVgr66pOEFiWBJPEYlECsOSKysosoJyHF+riYqiQgh7tnryg9ePPf3YZ6880yiQetvCUzjDlFh2rVAopskMx/G+7087hhCCMkMf3LsXAPDT978OBh5BkFmK4jh+UobclCVZlmXTNA3D0HXd8wYQwhOfvLN75wNfHH8XQr+pqBSVz1I5KpsjyQyKYfk8LYp1RVGnJAMI4cIyfuLnPyGE3W7XNEzLshzHgf+HcPsPNN34+stvxfV1CKFhmLqum2bbsm3TNKcOw6HveYOpDU7/9sd9YAcAYGZu1TRaKysYhiVJMlMRhAkn4/9GhzA8uO8pAMDhJx63rXa5UkEQJJFITbXrup7n3V0TOLD/wLOP7Hj/yHNnz51v6TqGpdIkmUrhdL4gCDVdb1mWrestwzCnzIplfg8ABx7auf/hB1NI3DDN5fgKhiVj8WUqm2NZ3jDNexJEFhbfOrR/NwBXz57p93qJRHIVJzKZCaGZDIXjBEXlKhWh3pD+Hdbxq0deBgC899pLN7vdTqdTq4m1miiKoizLjYbkOE4YTgb0NkUQwgpbuHjhgu1MIDUmTlWh2pBkmmZyOZqmGY7nNmq1O9sgCNXFq1eaUqPX7zuO0+l0IISe57mu23Hdm93uPR1YVjuOoMlE6vLluWg0Lkmy74+md6USe+nS5YWFhQsXLmJY0rIs3/eDIEgkknEEnZu7tv0kOqG+s9XStbm5a/PzEQRBSyVWEKqm2e52u8D3fZZdIwmSIEiSyLQM406luq7jOEESJEXlcYIslkqWZUMIWXYtiSVpmhGEqiiKqqIGQdA2TQRBYrF4sVCUJFkU647jeJ4HPM9DsVQkEltcjMUR9PoSgmKpmigFQTgKQm8wGThv4A+Hfq9/q7+9B4XC2tISWpeabcvZlBXLnsy0putxBI3GlpEV7EYc5XnBMO3h0AcQjrbcVs9ttR3dtlt2SzH0TUPfHPkDGI5uyziYaDiCMBwH3jicSBgOxqEH4Wg8MQaWJqXQG3gyjiGxG0uRMssU82RN4EA6FkMW5s9fiTg03iRjifnvtGq1llkl//qFL+fo6Cw1d7aSn1c3MtJ6QSilLV2sVgVFFnWl3nebWoM3FKG9pbTcTb0tV6p1TW1odVbaYOqVnLTOgHpDLbECmi66hRKbSi0S6Ea5ubmxWS8z+TWBXcW4JFoiItIGS+WLWSrXkFsUJ1eltqg
4lu0IQs3t9iudDt915M4tVuk5fV/V9aqoSLKmKBrwu8zQIUe3yBq/9Oaxt8/M/P737Ln4Kj17MbpcVtpUvDl/2qaT63T21MyMqigcx9UqrJaO5LJZAscNXYthmasRjEgXkyTDePX8UCoMm+cxdRZVI2njH43/5He8qRg0AAAAAElFTkSuQmCC" nextheight="688" nextwidth="1393" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><br><strong>Kony Kwong — Co-Founder &amp; CEO<br></strong> Kony brings cross-disciplinary expertise in traditional finance and crypto venture capital. He previously worked as an investor at L2 Iterative Ventures and managed funds and M&amp;A at Huobi. Earlier in his career, he held roles at CMB International, Goldman Sachs, and CITIC Securities. He holds a <strong>First-Class Honors degree in International Business &amp; Finance from the University of Hong Kong</strong> and a <strong>Master’s in Computer Science from the University of Pennsylvania</strong>. Observing the lack of financialization (“-fi”) in AI infrastructure, Kony co-founded GAIB to transform real compute assets such as GPUs and robotics into investable on-chain products.</p><p><strong>Jun Liu — Co-Founder &amp; CTO<br></strong> Jun has a dual background in academic research and industry practice, focusing on blockchain security, crypto-economics, and DeFi infrastructure. He previously served as VP at Sora Ventures, Technical Manager at Ava Labs (supporting BD and smart contract auditing), and led technical due diligence for Blizzard Fund. He holds dual bachelor’s degrees in Computer Science and Electrical Engineering from National Taiwan University and pursued a PhD in Computer Science at Cornell University, contributing to IC3 blockchain research. 
His expertise lies in building <strong>secure and scalable decentralized financial architectures</strong>.</p><p><strong>Alex Yeh — Co-Founder &amp; Advisor<br></strong>Alex is also the founder and CEO of <strong>GMI Cloud</strong>, one of the world’s leading AI-native cloud service providers and one of NVIDIA’s six Reference Platform Partners. Alex has a background in semiconductors and AI cloud, manages the Realtek family office, and previously held positions at CDIB and IVC.&nbsp; At GAIB, Alex spearheads <strong>industry partnerships</strong>, bringing GMI’s GPU infrastructure and client networks into the protocol to drive the financialization of AI infra assets.</p><p><strong>Financing</strong></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/045e673ef8fd5a824cbcdcf0786f0b06171de6228b1a2478544cbcb861354e29.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGiElEQVR4nE2TeVDTZxrHv4AoCDQhSg4bAQ0Q1KJAlQIG5bDRWlA5hUYUD86CJQgVLEeBuugqCC1Ca6zBBBoIOSD5EXJACDEph1I5Wm3rdkst3dna1e3Usf1nZ7LzS2d2duYzz7zznXee7/M+832hKlilPuOmysdQkedwiYemxENb6kOUeBJlvvoKurGKYan1tzT4TzYH2ls4s1eC5q6FzHWE3u/cttC9faErjKzdWxc/eeWBOGpZzv9t8vQLe/GPqrRl+YEvRTu/6NwKZT5MNWFTbW8SQpahijNaGaCvCLDUhxnOsYkyX12Fn7GKOXaeZa5l/2ljuxg4dZkzfSV45mrQbGvwXdKPu9i15RvxzmemrOm2A6a6xKdjRbfS3AyVzPnucAyexMyHR5YNF82N0QsiweeX4+0tvG9kZdbG1zTF7rry9STC9foKuqmaYa5lT9S9PFG30dLAnmzwtzYF3Gnyt7ew77YFPJbHKIs5PCALMNYl3OtIXujce689DOp81+ESX6KCoyhcoy6mOBflrS2lONflrS310ZZSRs7SiHdoxDvUUSFdJ1ynd/oZq5imatZ4DX2ygW6/6LaiCDnrhxiAD7Tt2/CbrfEJcdraHAR1vvuo0E9R7KEqoyoLoDhNojwN0qDMV1tK/dNDW0rRlTOcb2LoyhlOxWe4ZPWIkGKowsJHQf9Q0SRZ4AG7gXMc/DiYvtC121zLxsAJPJIWOr4ddvw04fjB5Phu1PG94d+2zj/uS/5YlD6fvWFpiu7NwaO+4l8mWp9a2/5puvTM1vHM1ulYNv5saJLnwVLPcPysdvxL5Hgh+lpe9VVfy1Nr48JHm8zv+Y4IKeS8U5fiHohPPSZq/66qWBmueax+d1GU+0hS9Ddp0WL30VFhgDIPltpwc3X4fPvhuSv779Tvut+esnQ9Y/ZyojIP5vO0h6LYFXnsc2uSY4HvmE/9ScWz
1kMvhLbUHcpjMJQyNAUvDbwFeS4Gj2FQAHUuBlKgOAx1Ooi3QGRjOB2aTAylQpsKTQYJqWTBkLvadBKWIkxV4ItqzNfh7gXYK2EqxkgBtPmekCVhsWn/UiNfn+VlzmWZsl4ajMKD5uRvW3OW6vdbstcNxWCm7NUZYZT1TIhZwJh6O8JesEWzBxMCxr3q+PEsnwkBzZINyyGsdGX8IjtjE8B6EivK8w87c4h0YDAKlmy6+SjDlOplPLzWmLLalOI2nuatT4BdEGjNedkmYFsyqNpdMPAxL+RpdkETBR0fhjdddXEwJmEmjzN9hjudt+lO+lpblvfd/MCp48zl9mNf1STo+MBAMHl7LI1mzqSbM9ePp/lOZLL0+zC4EWNJsGSyrCdCiNfQ54WhaJhzAmVsDDBB7Id2HxSBIGIwnRc8c2rb7CmugQdjHJYqeTN5wbMF2+6k0wx7gf5NkDHxGQ3StWTtXU0ywIScg34/yChQBkL+CkZioGZD4QmVP+QbSV3uDQULSn8M0qHZgGE2zBEw7YQ+FPpwmMNhiMRoDNC/Ab2+pE3/JgxsgDoMyjBSVIVgMtl7PAHEsRBVNqOSiXrgPR80USChQsaFOBbKWMgiQMThJs+TSHL/YDNkkdC84aeLd+uIWaWLJz1wG5g6znXMqufKeE8+rXwma57J3/G7rmvp7B7Hl6OOF9/pP24V8sMSkByDiEhXHHSHALiyGVJhtrowMYOO5iRmb3XBwImE2iPxsxfftt/qMVbmGLvfn6/YN8wGegBbdpDjofV3omv52qkn4urnQ1d/VV79j1nksH/msIj7ktnXI9HphxsMNDPRwcD13RAFoTEE4mg0BEG0A117KbJX0ZvG1Se6zBXG24/4GcpSDAe9Ca7TYHArFs/xPxdwxpI9liqT5sp4Q9GYzgs1HvSZSnaLpKA0hnXtEONmaihvLQ6tQfbRpuqUnNa0XfWJwZmB1JZYLheIBjYD5aEuN2oK//pGJBXc40yWLQK47QLZFvRtg4QLVbynYjekwVDuWSXbCSVvjSwU4sMcMd/3wnb8ZTvKObjgjw9SD1zds1n8Ou3DcLRHeYliqe/H+TeFuRaH4HoULqVEtkd5ZHGol0LX6ENBhue2C6TrIV4HMR0SFnrWkUhYkGxED5UMldQVUheIATkd/c789AFSZ976vMiwETtIxmKh3QIDD+OvYyrTdzwRag7ICPU4F9UDsoX4/w7/41MnYuCGk4+Bm8AtZxU7YyIBOUGflzPfTPIPyZlkd4KL/wJwYsfjjHFQQAAAAABJRU5ErkJggg==" nextheight="660" nextwidth="1100" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p>In <strong>December 2024</strong>, GAIB closed a <strong>$5M Pre-Seed round</strong> led by Hack VC, Faction, and Hashed, with participation from The Spartan Group, L2IV, CMCC Global, Animoca Brands, IVC, MH Ventures, Presto Labs, J17, IDG Blockchain, 280 Capital, Aethir, NEAR Foundation, and other notable institutions, along with several industry and crypto angel investors.</p></li><li><p>In <strong>July 2025</strong>, GAIB raised an additional <strong>$10M in strategic investment</strong>, led by Amber Group with participation 
from multiple Asian investors. The funds will be used to accelerate <strong>GPU asset tokenization</strong>, expand infrastructure and financial products, and deepen strategic collaborations across the AI and crypto ecosystems, strengthening institutional participation in on-chain AI infrastructure.</p></li></ul><h3 id="h-x-conclusion-business-logic-and-potential-risks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>X. Conclusion: Business Logic and Potential Risks</strong></h3><p><strong>Business Logic<br></strong> GAIB’s core positioning is <strong>RWAiFi</strong> — transforming AI infrastructure assets (GPUs, robotics, etc.) into composable financial products through tokenization. The business logic is built on three layers:</p><ul><li><p><strong>Asset Layer</strong>: GPUs and robotics have the combined characteristics of <strong>high-value hardware + predictable cash flows</strong>, aligning with RWA requirements. GPUs, with standardization, clear residual value, and strong demand, are the most practical entry point. Robotics represent a longer-term direction, with monetization via teleoperation, data collection, and RaaS models.</p></li><li><p><strong>Capital Layer</strong>: Through a dual-token structure of <strong>AID</strong> (for stable settlement, non-yield-bearing, backed by T-Bills) and <strong>sAID</strong> (a yield-bearing fund token underpinned by a financing portfolio plus T-Bills), <strong>GAIB separates stable circulation from yield capture</strong>. 
It further unlocks yield and liquidity through <strong>DeFi integrations</strong> such as <strong>PT/YT (Principal/Yield Tokens), lending, and LP liquidity</strong>.</p></li><li><p><strong>Ecosystem Layer</strong>: Partnerships with <strong>GMI, </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a> (sovereign-level GPU clouds), <strong>Aethir</strong> (decentralized GPU networks), and <strong>PrismaX, OpenMind</strong> (robotics innovators) build a cross-industry network spanning hardware, data, and services, advancing the <strong>Compute + Robotics dual-engine model</strong>.</p></li></ul><p><strong>Core Mechanisms</strong></p><ul><li><p><strong>Financing Models</strong>: Debt (10–20% APY), revenue share (60–80%+), or hybrid, with short tenors (3–36 months) and rapid payback cycles.</p></li><li><p><strong>Credit &amp; Risk Management</strong>: Over-collateralization (~30%), cash reserves (5–7%), credit insurance, and default handling (GPU liquidation/custodial operations), alongside third-party underwriting and due diligence, supported by internal credit rating systems.</p></li><li><p><strong>On-Chain Mechanisms</strong>: AID minting/redemption and sAID yield accrual, integrated with Pendle, Morpho, Curve, CIAN, Wand, and other protocols for cross-chain, multi-dimensional yield optimization.</p></li><li><p><strong>Transparency</strong>: Real-time asset and cash flow tracking provided via the official site, DefiLlama, and Dune ensures clear correspondence between off-chain financing and on-chain assets.</p></li></ul><p><strong>Potential Risks<br></strong> Despite GAIB’s transparent design (AID, sAID, AID Alpha, GPU Tokenization, etc.), underlying risks remain, and investors must carefully assess their own risk tolerance:</p><ul><li><p><strong>Market &amp; Liquidity Risks</strong>: GPU financing returns and digital asset prices are subject to volatility, with no guaranteed
returns. Lockups may create liquidity challenges or discounted exits under adverse market conditions.</p></li><li><p><strong>Credit &amp; Execution Risks</strong>: Financing often involves SMEs, which face higher default risk. Recovery depends heavily on off-chain enforcement; weak execution may directly affect investor repayments.</p></li><li><p><strong>Technical &amp; Security Risks</strong>: Smart contract vulnerabilities, hacking, oracle manipulation, or key loss could cause asset losses. Deep integration with external DeFi protocols (e.g., Pendle, Curve) boosts TVL growth but also introduces external security and liquidity risks.</p></li><li><p><strong>Asset-Specific &amp; Operational Risks</strong>: GPUs benefit from standardization and established residual-value markets, but robotics assets are non-standard, highly operationally dependent, and vulnerable to regulatory differences across jurisdictions.</p></li><li><p><strong>Compliance &amp; Regulatory Risks</strong>: The compute assets GAIB finances represent a new market and asset class that falls outside the scope of traditional financial licensing. This could create regional regulatory challenges, including potential restrictions on business operations, asset issuance, and usage.</p></li></ul><p><strong>Disclaimer<br></strong>This report was produced with the assistance of <strong>ChatGPT-5 AI tools</strong>. The author has carefully proofread the content, but errors or omissions may remain. Importantly, crypto assets often exhibit divergence between project fundamentals and secondary-market token performance. This content is provided for <strong>informational and academic/research purposes only</strong>, and does <strong>not constitute investment advice</strong> or a recommendation to buy or sell any token.</p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>gpu</category>
            <category>robotics</category>
            <category>rwa</category>
            <category>defi</category>
            <category>gaib</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/fd019857a2b6ed04f519b8c8c89a1c2c4f9e497b0361332eef71452480d4fc8c.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[GAIB Research Report: The On-Chain Financialization of AI Infrastructure - RWAiFi]]></title>
            <link>https://paragraph.com/@0xjacobzhao/gaib研报：ai-基建的链上金融化之路-rwaifi</link>
            <guid>PDXGpcSsle6vpqzhh1om</guid>
            <pubDate>Wed, 08 Oct 2025 08:10:25 GMT</pubDate>
            <description><![CDATA[As AI rises as the strongest global technology wave, compute is becoming the new "currency," and RWA tokenization is opening a new channel for AI infrastructure assets such as GPUs into crypto finance. GAIB proposes an innovative "RWAiFi" (RWA + AI + DeFi) path: bringing off-chain GPU and robotics financing agreements on-chain through an SPC structure, and building an economic layer around AID (the AI Synthetic Dollar) and sAID (its yield certificate) to map off-chain cash flows into on-chain liquidity and recycle them. AID handles stable denomination while sAID captures real yield, forming a "Compute × Robotics × RWA × DeFi" flywheel. This article systematically analyzes how GAIB reshapes AI-infrastructure financing with structured-finance logic, building a scalable bridge to real yield between the AI Infra industry and crypto finance.]]></description>
            <content:encoded><![CDATA[<p>随着 AI 成为全球增长最快的技术浪潮，算力正被视为新的“货币”，GPU 等高性能硬件也逐渐演化为战略性资产。但长期以来这类资产的融资与流动性受限。与此同时，加密金融亟需接入具备真实现金流的优质资产，RWA（Real-World Assets）链上化正在成为连接传统金融与加密市场的关键桥梁。AI 基础设施资产凭借“高价值硬件 + 可预测现金流”的特性，被普遍视为非标资产 RWA 的最佳突破口，其中 GPU 具备最现实的落地潜力，而机器人则代表更长期的探索方向。在这一背景下，GAIB 提出的 RWAiFi（RWA + AI + DeFi）路径，为“AI 基建的链上金融化之路”提供了全新解法，推动“AI基建 (算力与机器人) x RWA x DeFi”的飞轮效应。</p><h2 id="h-ai-rwa" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>一、AI 资产RWA化的展望</strong></h2><p>在 RWA 化的讨论中，市场普遍认为 <strong>美债、美股、黄金等标准资产</strong> 将长期占据核心地位。这类资产流动性深、估值透明、合规路径明确，是链上“无风险利率”的天然载体。</p><p>相比之下，<strong>非标资产 RWA 化</strong> 面临更大不确定性。碳信用、私募信贷、供应链金融、房地产及基础设施虽具备庞大市场规模，但普遍存在估值不透明、执行难度大、周期过长和政策依赖性强等问题。其真正挑战不在于代币化本身，而在于如何有效约束链下资产的执行力，尤其是违约后的处置与回收，仍需依赖尽调、贷后管理和清算环节。</p><p>尽管如此，RWA 化依然具有积极意义：（1）链上合约与资产池数据公开透明，避免“资金池黑箱”；（2）收益结构更为多元，除利息外，还可通过 Pendle PT/YT、代币激励及二级市场流动性实现叠加收益；（3）投资人通常通过 SPC 结构持有证券化份额，而非直接债权，从而具备一定破产隔离效果。</p><p>在 AI 算力资产中，<strong>GPU等算力硬件</strong> 因具备残值明确、标准化程度高以及需求旺盛，被普遍视为 RWA 化的首要切入点。围绕算力层，还可以进一步延伸至 <strong>算力租赁合同（Compute Lease）</strong>，其现金流模式具备合同化与可预测性，适合证券化。</p><p>在算力资产之后，<strong>机器人硬件与服务合同</strong> 同样具备 RWA 化潜力。人形或专用机器人作为高价值设备，可通过融资租赁合同映射至链上；但机器人资产高度依赖运营与维护，其落地难度显著比GPU更高。</p><p>此外，<strong>数据中心与能源合同</strong> 也是值得关注的方向。前者包括机柜租赁、电力与带宽合同，属于相对稳定的基础设施现金流；后者则以绿色能源 PPA 为代表，不仅提供长期收益，还兼具 ESG 属性，符合机构投资者需求。</p><p>总体而言，AI 资产的 RWA 化可以分为几个层次：<strong>短期</strong>以内以 GPU 等算力硬件与算力合同为核心；<strong>中期</strong>则扩展至数据中心与能源合同；而<strong>长期</strong>来看，机器人硬件与服务合同有望在特定场景中实现突破。其共同逻辑均围绕 <strong>高价值硬件 + 可预测现金流</strong>，但落地路径存在差异。</p><p><strong>AI 资产 RWA 化的潜在方向</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>类别</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>潜在标的</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>逻辑基础</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>特点/优势</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>算力硬件</strong></p></td><td colspan="1" rowspan="1"><p>GPU / TPU / ASIC</p></td><td colspan="1" rowspan="1"><p>高残值、标准化程度高、需求强</p></td><td colspan="1" rowspan="1"><p>当前最现实的 RWA 化切入点</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>算力合同</strong></p></td><td colspan="1" rowspan="1"><p>算力租赁合同、边缘算力单元</p></td><td colspan="1" rowspan="1"><p>长期合同模式</p></td><td colspan="1" rowspan="1"><p>收益可预测、合同化程度高</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>机器人资产</strong></p></td><td colspan="1" rowspan="1"><p>硬件融资租赁</p></td><td colspan="1" rowspan="1"><p>高价值硬件 + 可预测现金流</p></td><td colspan="1" rowspan="1"><p>场景化明显，但重运营、落地难度高</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>数据中心</strong></p></td><td colspan="1" rowspan="1"><p>机柜租赁、电力与带宽合同</p></td><td colspan="1" rowspan="1"><p>稳定运营收入</p></td><td colspan="1" rowspan="1"><p>基建现金流，适合长期证券化</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>能源合同</strong></p></td><td colspan="1" rowspan="1"><p>绿色能源 PPA</p></td><td colspan="1" rowspan="1"><p>长期供电协议</p></td><td colspan="1" rowspan="1"><p>ESG 属性强收益稳定</p></td></tr></tbody></table><p><br></p><h2 id="h-gpurwa" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>二、GPU资产RWA化的优先价值</strong></h2><p>在众多非标AI资产当中，<strong>GPU 或许是相对更具探索价值的方向之一</strong>：</p><ul><li><p><strong>标准化与残值明确</strong>：主流 GPU 型号具备清晰的市场定价，且残值较为明确。</p></li><li><p><strong>二手市场活跃</strong>：具备再流通性，违约时仍可实现部分回收；</p></li><li><p><strong>真实生产力属性</strong>：GPU 与AI产业需求直接挂钩，具有现金流生成能力。</p></li><li><p><strong>叙事契合度高</strong>：结合 AI 与 DeFi 的双重市场热点，易于获得投资者关注。</p></li></ul><p>由于 <strong>AI 算力数据中心属于极为新兴的行业</strong>，传统银行往往难以理解其运营模式，因此无法提供贷款支持。只有像 <strong>CoreWeave、Crusoe</strong> 这类大型企业，才能从 <strong>Apollo 等大型私募信贷机构</strong>获得融资，而中小型企业则被排除在外，服务于中小企业的融资通道迫在眉睫。</p><p>需要指出的是，GPU RWA 并不能消除<strong>信用风险</strong>。资质优良的企业通常可通过银行以更低成本融资，不一定需要上链；而选择代币化融资的多为中小企业，违约风险更高。这也导致了 RWA 的结构性悖论：优质资产方不需要上链，而风险更高的借款人更倾向参与。</p><p>尽管如此，相较传统融资租赁，GPU 
的 <strong>高需求、可回收性和残值明确</strong> 使其风险收益特征更具优势。RWA 化的意义并非消灭风险，而是让风险更加透明、可定价与可流动化。GPU 作为非标资产 RWA 的代表，具备产业价值与探索潜力，但其成败最终仍取决于链下资质审查与执行能力，而非单纯的链上设计。</p><h2 id="h-rwa" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>三、机器人资产RWA化的前沿探索</strong></h2><p>在 AI 硬件之外，机器人产业也正逐步进入 RWA 化的视野。预计到 2030 年，市场规模将突破 <strong>1,850 亿美元</strong>，发展潜力巨大。随着 <strong>工业 4.0</strong> 的到来，智能自动化与人机协作的新时代正加速到来，未来几年内，机器人将在工厂、物流、零售乃至家庭等场景中广泛落地。通过<strong>结构化的链上融资机制</strong>，加速智能机器人的部署与普及，同时为普通用户创造可参与这一产业变革的投资入口。其可行路径主要包括：</p><ul><li><p><strong>机器人硬件融资</strong>：为生产与部署提供资金，回报来自租赁、销售或 <strong>Robot-as-a-Service（RaaS）</strong> 模式下的运营收入；现金流通过 <strong>SPC 结构与保险覆盖</strong>映射到链上，降低违约与处置风险。</p></li><li><p><strong>数据流金融化</strong>：Embodied AI 模型需要大规模真实世界数据，可为传感器部署和分布式采集网络提供资金，并将数据使用权或授权收入 <strong>Token 化</strong>，赋予投资人分享未来数据价值的渠道。</p></li><li><p><strong>生产与供应链融资</strong>：机器人产业链长，涉及零部件、产能与物流。通过贸易融资释放营运资金，并将未来的货物流与现金流映射到链上。</p></li></ul><p>相较于 GPU 资产，机器人资产 <strong>更依赖运营与场景落地</strong>，现金流波动也更受利用率、维护成本和法规约束的影响。因此，建议采取 <strong>期限更短、超额抵押与储备金更高</strong>的交易结构确保稳定收益与流动性安全。</p><h2 id="h-gaib-aidefi" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>四、GAIB 协议：链下AI资产与链上DeFi 的经济层</strong></h2><p>AI 资产的 RWA 化正从概念走向落地。GPU 已成为最具可行性的链上化资产，而机器人融资代表更长期的增长方向。要让这些资产真正具备金融属性，关键在于构建一个<strong>能承接链下融资、生成收益凭证并连接 DeFi 流动性</strong>的经济层。</p><p>GAIB 正是在此背景下诞生，它并非将AI硬件直接代币化，而是将企业级<strong>GPU或机器人作为抵押的融资合同上链</strong>，构建起连接链下现金流与链上资本市场的经济桥梁。在链下，由云服务商与数据中心购置并使用的企业级 GPU 集群或机器人资产作为抵押物；在链上，<strong>AID </strong>用于稳定计价与流动性管理（非生息，T-Bills 全额储备）；<strong>sAID</strong> 用于收益敞口与自动累计（底层为融资组合 + T-Bills）。</p><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/4c51665a2753f5c1ba3785c31c74a925eb6fdbd0c60f643ebd21d0538df2ee7f.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAADIElEQVR4nG2UP2wcRRTGt3Pp6go316SwKc7FNdu48QXpmq2u2Wqg2IYtcgXWSOEkkhWgQUJbsQYNNFM4E/6sLLOO4ilAqxRbMUJokcI4CE0CaHCxCIkBoW3ih7QvOQcpv2pm33vfp3nzZgMYsNZOJhNKKQAURRFFUZIkxhgA0FqnaUoIkVJiclEUs9mMUqq1ds61bbtYLOI45pz3fV8UxWQyWa1WXdcBQIA1GCjLEgA459vbO3EcW2sBoG3b+Xy+s/OKlNIPoEQcx1rrruuMMYSQMAyFEN77NE2DIIiiyDn3zKDvewBwzqEiADDG1msAuHPn6PXXiDE/fnx4+Okn/Mnjx69ev845Xyc455bLJQB0XUcp3djYIIRcGSBmANec86ZplFKMMWPMTw+/v3/y5cWvP//15x///vN3d/Hb/dPjhz98d37+qCgKTGaMoYHWmjGmlHrWImOMlLLve2stY6wsy67rqqoihIwHwjDM3joAgKeXl98+uPfNiXh6eQkANw9u7O7ujgZms1lVVd57O6C1ttYKIZxzQV3XcRw3TVOWJbZyPp9TSuVAWZaMsTdvvIEG3cUvvz8xaHDr5gEd2NraGo1GQRCMx+NkQAjRti0hpGmaoO/7ruu890qpuq6TJEnTlFLKBqSUp6enZ/dOzr4qj784eu+dtz94/93jz48efH320eGHt27f5pyjehAE29s7VVXleS6EwMns+/7qDpbL5WKx2NzcxILRaHTt2rW9vb00TaWUSqmiKKbT6f7+flEUVVWVZSmEOD9/JITIBjjn1trlcpnn+Xp2rgyiKGqaZr3FDK11VVWf3b2rlKKU4rxnWVbX9fp6UWitmKZplmUvMZjNZkopAPDeY8x7zxibTqdxHK9WK0IIdi9JkizLwjAcj8eMMczv+957DwAYfbkBPrQX8d4bY7CbbdtKKTnnWmvvffecF0+MJ8CRvTLA4iiKsizDcZJS1nXdDlhrnXP4YrFjxhiUds9p21YNaK3jOA7DUGv9PwMpZZ7nOANZlqVpituyLKuqqutaD6AuGuMXrTWq5wPY0iAI8LcGAP8B9uhf8BQnINUAAAAASUVORK5CYII=" nextheight="595" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>GAIB的链下融资模式<br></strong> GAIB 与全球云服务商及数据中心合作，以 GPU 集群为抵押，设计三类融资协议：</p><ul><li><p><strong>债务模式</strong>：支付固定利息（年化 ~10–20%）；</p></li><li><p><strong>股权模式</strong>：分享 GPU或机器人收入（年化 ~60–80%+）；</p></li><li><p><strong>混合模式</strong>：利息 + 收入分成。</p></li></ul><p>GAIB 的风险管理机制建立在 <strong>实体 GPU 的超额抵押与破产隔离法律结构</strong> 之上，确保在违约情况下能够通过清算 GPU 或托管至合作数据中心继续产生现金流。由于企业级 GPU 回本周期短，整体期限显著低于传统债务产品，融资期限通常为 3–36 个月。GAIB 与第三方信用承销机构、审计方和托管方合作，严格执行尽调与贷后管理，并以国债储备作为补充流动性保障。</p><p><strong>链上机制</strong></p><ul><li><p><strong>铸造与赎回</strong>：通过合约，合格用户（Whitelist + KYC）可用稳定币铸造 AID，或以 AID 
赎回稳定币。此外对于非KYC用户亦可通过二级市场交易获得。</p></li><li><p><strong>质押与收益</strong>：用户可将 AID 质押为 sAID，后者自动累积收益，价值随时间升值。</p></li><li><p><strong>流动性池</strong>：GAIB 将在主流 AMM 部署 AID 流动性池，用户可用稳定币兑换 AID。<br></p></li><li><p><strong>DeFi 场景</strong>：</p><ul><li><p>借贷：AID 可接入借贷协议，提升资本效率；</p></li><li><p>收益交易：sAID 可拆分为 PT/YT，支持多元风险收益策略；</p></li><li><p>衍生品：AID 与 sAID 作为底层收益资产，支持期权、期货等衍生品创新；</p></li><li><p>定制化策略：接入 Vault 与收益优化器，实现个性化资产配置。</p></li></ul></li></ul><p>总之， GAIB 的核心逻辑是通过 <strong>GPU+机器人资产+国债资产的融资与代币化</strong>，将链下真实现金流转化为链上可组合资产；再通过 <strong>AID/sAID 与 DeFi 协议</strong> 形成收益、流动性与衍生品市场。这一设计兼具实体资产支撑与链上金融创新，为 AI 经济与加密金融之间搭建了可扩展的桥梁。</p><h2 id="h-gpu" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>五、链下：GPU资产代币化标准及风险管理机制</strong></h2><p>GAIB 通过 <strong>SPC（Segregated Portfolio Company）</strong> 结构，将链下 GPU 融资协议转化为链上可流通的收益凭证。投资者投入稳定币后，将获得等值的 AI 合成美元（AID），可用于参与 GAIB 生态。当投资者质押并获得质押资产 sAID 后，即可分享来自 GAIB GPU 与机器人融资项目的收益。随着底层还款流入协议，sAID 的价值持续增长，投资者最终可通过销毁代币赎回本金与收益，从而实现链上资产与真实现金流的一对一映射。</p><p><strong>代币化标准与运作流程：</strong></p><p>GAIB 要求资产具备完善的抵押与担保机制，融资协议需包含 <strong>月度监控、逾期阈值、超额抵押合规</strong> 等条款，并限定承销方需有 ≥2 年放贷经验及完整数据披露。流程上，投资者存入稳定币 → 智能合约铸造 AID（非生息，T-Bills 储备） → 持有人质押并获得 sAID（收益型） → 质押资金用于 GPU/机器人融资协议 → SPC 还款流入 GAIB → sAID 价值随时间增长 → 投资者销毁 sAID 赎回本金与收益。</p><p><strong>风险管理机制</strong>：</p><ol><li><p><strong>超额抵押</strong> —— 融资池资产通常保持约 30% 的超额抵押率。</p></li><li><p><strong>现金储备</strong> —— 约 5–7% 的资金被划入独立储备账户，用于利息支付与违约缓冲。</p></li><li><p><strong>信用保险</strong> —— 通过与合规保险机构合作，部分转移 GPU Provider 的违约风险。</p></li><li><p><strong>违约处置</strong> —— 若违约发生，GAIB 与承销方可选择清算 GPU、转移至其他运营商或托管继续产生现金流。SPC 的破产隔离结构确保各资产池之间独立，不受连带影响。</p></li></ol><p>此外，GAIB 信用委员会负责制定 <strong>代币化标准、信用评估框架与承销准入门槛</strong>，并基于结构化风险分析框架（涵盖借款人基本面、外部环境、交易结构与回收率）实施尽调和贷后监控，确保交易的 <strong>安全性、透明度与可持续性</strong>。</p><p><strong>结构化风险评估框架（仅供参考示例）</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td 
colspan="1" rowspan="1"><p style="text-align: center"><strong>维度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>核心指标 / 方法</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>评估重点</strong></p></td></tr><tr><td colspan="1" rowspan="3"><p style="text-align: center">借款人基本面</p></td><td colspan="1" rowspan="1"><p style="text-align: center">财务稳健性</p></td><td colspan="1" rowspan="1"><p style="text-align: center">D/E &lt; 0.65；CR &gt; 1.2；DSCR &gt; 1.35x；LTV &lt; 75%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">偿债能力与资本结构稳健性</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">信用记录</p></td><td colspan="1" rowspan="1"><p style="text-align: center">历史贷款表现、还款及时性</p></td><td colspan="1" rowspan="1"><p style="text-align: center">履约意愿与信誉</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">现金流能力</p></td><td colspan="1" rowspan="1"><p style="text-align: center">自由现金流、收入预测</p></td><td colspan="1" rowspan="1"><p style="text-align: center">偿付能力的持续性</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">外部环境</p></td><td colspan="1" rowspan="1"><p style="text-align: center">宏观风险</p></td><td colspan="1" rowspan="1"><p style="text-align: center">国家 / 主权风险、政策监管变化</p></td><td colspan="1" rowspan="1"><p style="text-align: center">政治经济与监管环境</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">市场条件</p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI 需求趋势、GPU 供需与价格波动</p></td><td colspan="1" rowspan="1"><p style="text-align: center">行业发展与周期风险</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center">交易结构</p></td><td colspan="1" rowspan="1"><p style="text-align: center">信用增强</p></td><td colspan="1" rowspan="1"><p style="text-align: center">超额抵押 (~33%)；现金储备 (~6.6%)；信用保险</p></td><td colspan="1" rowspan="1"><p style="text-align: center">降低违约风险、保障本金</p></td></tr><tr><td colspan="1" rowspan="1"><p 
style="text-align: center">现金流设计</p></td><td colspan="1" rowspan="1"><p style="text-align: center">偿付优先级、逾期触发条款</p></td><td colspan="1" rowspan="1"><p style="text-align: center">确保现金流稳定与可预测性</p></td></tr><tr><td colspan="1" rowspan="4"><p style="text-align: center">风险缓释与回收</p></td><td colspan="1" rowspan="1"><p style="text-align: center">运营与团队</p></td><td colspan="1" rowspan="1"><p style="text-align: center">管理经验 ≥10 年；PUE &lt; 1.5；COGS/Revenue &lt; 25%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">执行力与运营韧性</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">回收率分析</p></td><td colspan="1" rowspan="1"><p style="text-align: center">GPU 残值、二手市场流动性、折旧周期</p></td><td colspan="1" rowspan="1"><p style="text-align: center">违约后的资产变现能力</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">压力测试</p></td><td colspan="1" rowspan="1"><p style="text-align: center">GPU 价格下跌、回款延迟、违约情景</p></td><td colspan="1" rowspan="1"><p style="text-align: center">抗风险能力</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">持续监控</p></td><td colspan="1" rowspan="1"><p style="text-align: center">自由现金流 &gt;1.0；毛利率 &gt;80%；抵押物估值</p></td><td colspan="1" rowspan="1"><p style="text-align: center">动态风险预警与调整</p></td></tr><tr><td colspan="1" rowspan="1"><p>内部评级</p></td><td colspan="1" rowspan="1"><p style="text-align: center">综合打分</p></td><td colspan="1" rowspan="1"><p style="text-align: center">国家 / 行业 / 公司 / 管理层 / 财务 / 结构化</p></td><td colspan="1" rowspan="1"><p style="text-align: center">内部信用决策与准入门槛</p></td></tr></tbody></table><p><br></p><h2 id="h-aidsaid-alpha" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>六、链上：AID合成美金、sAID 收益机制及Alpha存款计划</strong></h2><p><strong>GAIB 双币模型：AID 合成美金与 sAID 流动性收益凭证</strong></p><p>GAIB 推出的 <strong>AID（AI Synthetic Dollar）</strong> 是一种以美债储备为支撑的合成美金。其供应与协议资本动态挂钩：资金流入协议时铸造 AID，收益分配或赎回时销毁 AID，从而确保其规模与底层资产价值保持一致。AID 本身仅承担稳定计价与流通职能，并不直接产生收益。</p><p>为了获取收益，用户需要将 AID 质押转换为 
<strong>sAID</strong>。sAID 作为一种可流通的收益凭证，其价值会随协议层的真实收益（GPU/机器人融资回款、美债利息等）逐步升值。收益通过 <strong>sAID/AID 的兑换比率</strong> 体现，用户无需额外操作，只需持有 sAID 即可自动累积收益。在赎回时，用户可经过冷却期取回初始本金与累计奖励。</p><p>从功能上看，AID 提供 <strong>稳定性与可组合性</strong>，可被用于交易、借贷、流动性提供；而 sAID 承载 <strong>收益属性</strong>，既可直接增值，也可进一步进入 DeFi 协议拆分为 <strong>本金与收益代币（PT/YT）</strong>，满足不同风险偏好的投资者需求。</p><p>总体而言，AID 与 sAID 构成了 GAIB 经济层的核心双币结构：<strong>AID 保障稳定流通，sAID 捕捉真实收益</strong>。这种设计既保持了合成稳定币的可用性，又为用户提供了与 AI 基础设施经济挂钩的收益入口。</p><p><strong>GAIB AID / sAID vs Ethena USDe / sUSDe vs Lido stETH 收益模式对比</strong></p><p>AID 与 sAID 的关系，可类比 Ethena 的 USDe / sUSDe 以及 Lido 的 ETH / stETH：前者作为合成美元本身不产生收益，只有在转换为 sToken 后才能自动累积收益。不同点在于，sAID 的收益来源于 <strong>GPU 融资合同与美债</strong>，sUSDe 的收益来自 <strong>衍生品对冲</strong>，而 stETH 则依托于 <strong>ETH Staking</strong>。</p><table style="min-width: 150px"><colgroup><col><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>项目</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>AID (GAIB)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>sAID (GAIB)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>USDe (Ethena)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>sUSDe (Ethena)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>stETH (Lido)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>资产类型</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI 合成美元</p></td><td colspan="1" rowspan="1"><p style="text-align: center">AID 的流动性质押凭证</p></td><td colspan="1" rowspan="1"><p style="text-align: center">合成美元</p></td><td colspan="1" rowspan="1"><p style="text-align: center">USDe 的流动性质押凭证</p></td><td colspan="1" rowspan="1"><p style="text-align: center">ETH Staking 流动性凭证</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>抵押物 / 收益来源</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> 不带收益</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> 自动累积</p><p style="text-align: center">GPU融资现金流 + 国债储备</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="cross_mark" class="emoji" data-type="emoji">❌</span> 不带收益</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> 自动累积衍生品套利收益</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><span data-name="check_mark_button" class="emoji" data-type="emoji">✅</span> 自动累积</p><p style="text-align: center">ETH Staking 收益</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>收益表现形式</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">需质押成 sAID&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">sAID 随时间升值</p><p style="text-align: center">10–20%（债务型 GPU）</p><p style="text-align: center">60–80%+（收益分成型 GPU）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">需质押成 sUSDe&nbsp;</p></td><td colspan="1" rowspan="1"><p style="text-align: center">sUSDe 随时间升值，~8–15%（对冲套利）</p></td><td colspan="1" rowspan="1"><p style="text-align: center">stETH 本身随时间升值，~3–4%（ETH staking）</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>长期愿景</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI-Dollar，AI 经济基础货币</p></td><td colspan="1" rowspan="1"><p style="text-align: center">AI 收益型基准资产</p></td><td colspan="1" rowspan="1"><p style="text-align: center">去中心化“无国界美元”</p></td><td colspan="1" rowspan="1"><p style="text-align: center">收益型 DeFi 基准资产</p></td><td colspan="1" rowspan="1"><p style="text-align: center">ETH Staking 
的全球流动性标准</p></td></tr></tbody></table><br><p><strong>AID Alpha：GAIB 主网前的流动性启动与积分激励机制</strong></p><p>AID Alpha 于 2025 年 5 月 12 日正式上线，作为 AID 主网前的流动性启动阶段（Early Deposit Program），旨在通过早期存款引导协议资金，同时给予参与者额外奖励与游戏化激励。所有存款初期将进入美债（T-Bills）以确保安全性，随后逐步配置至 GPU 融资交易，形成从“低风险—高收益”的过渡路径。</p><p>技术层面，AID Alpha 智能合约遵循 ERC-4626 标准，用户每存入一美元稳定币或合成稳定币，都会获得对应链上的 AIDα 收据 Token（如 AIDaUSDC、AIDaUSDT），保证跨链一致性与可组合性。</p><p>在 <em>Final Spice</em> 阶段，GAIB 通过 AIDα 机制开放了多元化的稳定币入口，包括 <strong>USDC、USDT、USR、CUSDO 以及 USD1</strong>。用户存入稳定币后，会获得对应的 <strong>AIDα 收据 Token</strong>（如 AIDaUSDC、AIDaUSD1），该 Token 即代表存款凭证，并自动计入 Spice 积分体系，可进一步参与 Pendle、Curve 等 DeFi 组合玩法。</p><p>截至目前，AIDα 总存款规模已触及 <strong>$80M 上限，</strong>AIDα 资产池明细如下：</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>资产池</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>当前 TVL</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>支持资产 / 来源说明</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>支持链</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>激励机制</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>AIDaUSDC</strong></p></td><td colspan="1" rowspan="1"><p>~$43.1M</p></td><td colspan="1" rowspan="1"><p>USDC —— Circle 发行的主流稳定币</p></td><td colspan="1" rowspan="1"><p>Ethereum, Arbitrum, Base, Sei, Story</p></td><td colspan="1" rowspan="1"><p>10x Spice</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>AIDaUSDT</strong></p></td><td colspan="1" rowspan="1"><p>~$21.1M</p></td><td colspan="1" rowspan="1"><p>USDT —— 全球规模最大的稳定币</p></td><td colspan="1" rowspan="1"><p>Ethereum, Arbitrum, Sei, BSC</p></td><td colspan="1" rowspan="1"><p>10x Spice</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>AIDaUSR</strong></p></td><td colspan="1" rowspan="1"><p>~$0.09M</p></td><td colspan="1" rowspan="1"><p>USR —— 
Resolv 推出的 Delta-neutral 稳定币</p></td><td colspan="1" rowspan="1"><p>Ethereum</p></td><td colspan="1" rowspan="1"><p>10x Spice + 30x Resolv</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>AIDaCUSDO</strong></p></td><td colspan="1" rowspan="1"><p>~$0.07M</p></td><td colspan="1" rowspan="1"><p>CUSDO —— OpenEden 的收益型稳定币封装版本</p></td><td colspan="1" rowspan="1"><p>Ethereum</p></td><td colspan="1" rowspan="1"><p>10x Spice + 3x OpenEden</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>AIDaUSD1</strong></p></td><td colspan="1" rowspan="1"><p>~$1.69M</p></td><td colspan="1" rowspan="1"><p>USD1 —— WLFI 发行的美债支持机构级稳定币</p></td><td colspan="1" rowspan="1"><p>BNB Chain</p></td><td colspan="1" rowspan="1"><p>10x Spice + WLFI Points</p></td></tr></tbody></table><p>所有 AIDα 存款均设有不超过<strong>两个月</strong>的锁定期，活动结束后，用户可选择将 AIDα 兑换为主网 <strong>AID</strong> 并质押成 <strong>sAID享受持续收益</strong>，也可直接赎回原始资产，同时保留累积的 Spice 积分。Spice 是 GAIB 在 AID Alpha 阶段推出的积分体系，用于衡量早期参与度与分配未来治理权。其规则为“1 USD = 1 Spice/天”，并叠加多渠道倍数（如存款 10x、Pendle YT 20x、Resolv USR 30x），最高可达 30 倍，形成“收益 + 积分”的双重激励。此外，推荐机制进一步放大收益（一级 20%、二级 10%），Final Spice 结束后积分将被锁定，用于主网上线时的治理与奖励分配。</p><p>此外，GAIB 发行了 <strong>3,000 枚限量版 Fremen Essence NFT</strong>，作为早期支持者的专属凭证。前 200 名大额存款者享有保留名额，其余名额则通过白名单及 <strong>$1,500+ 存款资格</strong>分配。NFT 可 <strong>免费铸造（仅需支付 Gas 费）</strong>，持有者将获得主网上线时的专属奖励、产品优先测试权及核心社区身份。目前，该 NFT 在二级市场的价格约为 <strong>0.1 ETH</strong>，累计交易量已达 <strong>98 ETH</strong>。</p><h2 id="h-gaib" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>七、GAIB 链上资金与链下资产透明度</strong></h2><p>GAIB 在资产与协议透明度方面保持高标准，用户可通过官网、DefiLlama 与 Dune 实时追踪其链上资产类别（USDC、USDT、USR、CUSDO、USD1）、跨链分布（Ethereum、Sei、Arbitrum、Base等）、TVL趋势及明细；同时，官网还披露了链下底层资产的配置比例、在投项目(Active Deals)金额、预期收益及管道项目(Selected Pipeline)情况。</p><ul><li><p>GAIB官方网站：<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://aid.gaib.ai/transparency"><u>https://aid.gaib.ai/transparency</u></a></p></li><li><p>Defillama：<a target="_blank" 
rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://defillama.com/protocol/tvl/gaib"><u>https://defillama.com/protocol/tvl/gaib</u></a></p></li><li><p>Dune：<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://dune.com/gaibofficial"><u>https://dune.com/gaibofficial</u></a><br></p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>维度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>数据 / 构成</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>占比</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>预期收益</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">总规模</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>$175.29M</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">100%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">–</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">T-Bills</p></td><td colspan="1" rowspan="1"><p style="text-align: center">$124.9M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">71%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">~4%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">GPU+Robotics</p></td><td colspan="1" rowspan="1"><p style="text-align: center">$50.4M</p></td><td colspan="1" rowspan="1"><p style="text-align: center">29%</p></td><td colspan="1" rowspan="1"><p style="text-align: center">15%- 30%</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center">链分布</p></td><td colspan="3" rowspan="1"><p style="text-align: center">Ethereum 83.2%，Sei 13.0%，Base Arbitrum 合计不足 4%</p><p>已支持Story Protocol, BNB Chain，计划Plume Network</p></td></tr><tr><td colspan="1" rowspan="1"><p 
style="text-align: center">稳定币结构</p></td><td colspan="3" rowspan="1"><p style="text-align: center">USDC（52.4%）、USDT（33.4%） 与 USDƒ0（14.0%）</p><p style="text-align: center">USD1 ~2%、USR 0.1%、CUSDO 0.09%</p></td></tr></tbody></table><p><br>截至 2025 年 10 月，GAIB 管理资产总规模约 <strong>$175.29M</strong>，“双层配置”既兼顾稳健性，又带来 AI Infra 融资的超额回报。</p><ul><li><p><strong>储备资产（Reserves）占 71%</strong>，约 <strong>$124.9M</strong>，主要为美债，预期年化收益约 <strong>4%</strong>；</p></li><li><p><strong>已部署资产（Deployed）占 29%</strong>，约 <strong>$50.4M</strong>，用于链下 GPU 与机器人融资项目，平均年化收益约 <strong>15%</strong>。</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/3541f271e2ce86ed816981a2c747f326d021f04971f394fd5c80bc7460dce3cb.png" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAVCAIAAACor3u9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAFb0lEQVR4nI2Uf2gTZxjHX4T9Nxj414agTEH3i+2/sT8mSNFZ66+O/cC5yqoyRaRMhnab4nSWKLVdrRNRW22qVqpZ+oPY302vaZM0sTHxcrkmMfcjl7vrJZdc3rukaZuohXF3JRRtYfDh4eHe957v87zv87xAICcZFIkFbGxgPBawxTAkhiECOSkJU/+fJIcV/XQiVAQmIoBwWewtF93tjb7uW662ekdrraO1NuLsTidCS0Oktd2KGIZiOJ0IviagpBaXRMrDoAjjH9WJolYgp6Ozs2mN5Oxscj4vz+flrDIN44QiRXWgSEp8iCVQmsAYCktw4Sxki0uySLIhD0NhUQLF7H2Ec4Dxjuj4e+8BKBIwTk4HPQnCJ/HhFBtMscGsIsznZVmkFSmmSDE5zXD+McRsHOgxIQOdAY9NSdJQZDQoOR5BjA0W872R/g5z099TNguHOaK+MRYdD/S3AS0FmvFaWdT24MS3lavANwBUrQVP2v+ZnU3rGqoA7kRunLc11Thb6/su/zrY8EcO8nOZuCyScppFbpw3nz5gvXam/eS+Bye+j05aox61Al93k1qBLNL5V3MD9dU7APjzi/dM1T9VrQU7AfB2Ns/NQVmkFIlmCXS0z+R1DslxIsVOKQkqSqCu8UEokopEuxEL0vOQDXmoyWFyog9HuizGq76hDtRyRxXIQg7GiQoADCUbCguFwkIhNyNWrQW/f/bOfF6RRVoWyQQXxv1uhsKTApEUKJEnOCrwfOqpLsBQOEPhMBXjcTfxZBh39I6a7vhHOrydN4EkRHIz4nTQswuA4atn8y9zHD5RWCg0ln++F4DcjKgekUhCkaYJjCYwIuijI+jzqadk+JkaVKRkkcpBPgf5jMQmCNQ/0uUdMnuHzDjS7WytUyvIQF4WqQoALm3dqFcwNwe/BmAfAPkXGVmk1UYQiRnIKCkqK6kFZSVaRxLC+ipUkyBJV294rJN09oTHOoNWk7v96uIdzOcVS83xUvUO3r19uHQ/AFsBcLTU5V/NymqOhJQg2Wgoo0zDVEyGnAx5zXIZyGudttjQHD7BeBEWtUW9CONF/L1GVUA7BDo3Iw7Un9JD
lwFgPn0wxQY5zK79T/Okf3K0h8OdKSaQIHx6W2v2qd7KOgkCZdFxDrOzqK3YpkRGYnWN+bySgbw+Chw+wT6zcfiE+qfMMV6rpebYQP2pgfpTPZdOPKr+0XrtzGNDlaOlTjvhxYnhMIcm4NA19C4is5DLQjYLWe00KG1KY+oXhc9CThYpSQjLIpmOqzthPKJvg3F1gLTMqCIcPhFVO3WQnOgjHL2uB1dAnPYJEbdAeoSIO8lhMEXCFJlORtLJiO7DImnVymm6aF9DTtPUZK/HfN3X3eTtvOlqaxhvNoCQzTzYcNLeYhhsOIlamvmQU39cV4INjK0EgyLu9iuO1kv2FsNw42+2W+fULjIe3bEHqM9DOQDbV+CrJc62lSkHYC8AuzX7nRbzUfV+MNZ8wVCyvrZ008WtGy5sXnNh85qaLesMJe+/Sc2WdbWlm24e2Hyj4stlMR7dUb/rEy3aB3VlH9eWbrJeOw3uHd+9B4Ajq8Hht1WOrFaViylvX8JWAA6+BQoLhfzLXP7V3JvYXTaLyTjYeb/v37vG63X3mxuDiAnYWwyGkvV1ZR9dLvuwtnRjw55P234pbz228w123fl5e/dfRyQhnOKDy8LTKOEawAbbseGH6LBq8cE2cPvwtm0A/ABAhfY2VK4Cxut1opJ6ubBMjvkX2aVjtZQc5B2jPZ2PjO7xflNbc0f7bVNbc5LHAdZ/t+NsZde5Q13nDnWcrXxsOCZEJiUhlOTxZYGJyEpIQlh9mjQnpfmSEPoPs5CsI2dcT2AAAAAASUVORK5CYII=" nextheight="679" nextwidth="1038" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>链上资金分布方面，根据 Dune 最新数据，跨链分布上，<strong>Ethereum 占比 83.2%</strong>，<strong>Sei 占 13.0%</strong>，<strong>Base 与 Arbitrum 合计不足 4%</strong>。按资产结构计算，资金主要来自 <strong>USDC（52.4%）</strong>与<strong>USDT（47.4%）</strong>，其余为 USD1（~2%）、USR（0.1%）、CUSDO（0.09%）。</p><p>链下资产分布方面，GAIB 在投项目与资金部署保持一致，已包括<strong>泰国 </strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a>（$30M，15% APY）、两笔 <strong>Robotics Financing</strong>（合计 $15M，15% APY）以及美国 <strong>US Neocloud Provider</strong>（$5.4M，30% APY）。与此同时，GAIB 还建立了约 $725M 的项目储备，更广义的总项目储备展望为 $2.5B+ / 1–2 年，覆盖 GMI Cloud 及多地区的 Nvidia Cloud Partners（亚洲 $200M 与 $300M、欧洲 $60M、阿联酋 $80M）、北美 Neocloud Providers（$15M 与 $30M），以及机器人资产提供方（$20M），为后续扩张与放量奠定坚实基础。</p><h2 id="h-defi" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>八、生态体系：算力、机器人与 DeFi&nbsp;</strong></h2><p>GAIB 的生态体系由 <strong>GPU 计算资源、机器人创新企业以及 DeFi 协议集成</strong>三大部分构成，旨在形成“真实算力资产 → 金融化 → DeFi 优化”的完整闭环。</p><figure float="none" data-type="figure" 
class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/7d80e55fd687ca042f1ae5473eb0cdb7e6de2bc6487ac1332f3412206adaebee.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEaElEQVR4nG2TW2wUVRzGR4wPEqNRHohoTAgaLw3Cg6iVJRExDQjGKthLulrZBkyJBYs80LjQmPjQJ9G0NgYSUUCaLNc07ZKmbMGh3W22tuyty95nnZnO7szsnp3dmZ4zOzOdY2Y31hg9+R7OzPmf+b7f+Z8hmLTvv6Li90Tuvuu3HzY+92R7S1PTO9tsjQ2HHR+e+LJz3eOE49P3DzTv7O2xd3bs7T7UMjVxSeTuU/F7//spgknPrIqKk6sTcWlh/Oa5hpeesb35cvM+24svrH/iUaJ5n23TxnX21qatm576aOerts3PPr9+ze3x84APru79t3wEk5xlUwt0co6jAgIbYVMLqxLYCFSySplaLrNomauA7HKZrapiBWSVMlMS0+ViUirGmLifTs7VtbqXowIcFWCSPiJwZ3Tywo+fdX6y973dO2yN7a37oQJ0TYEKUCQxtTh7c+jbb44eatjc8PaOt7Y3vnHjyoiuQ6gAqEiApxe9k1e/O21v/9hma4xFg6ZZRbBcRRUoWwX8n4tExE9eu/yLZ9I9Td71TN6KhOazmcQ06WGppCIJTNx/13199NrI5MTYH/6Z0P05KhWjUrFp0qNIAshTieDM1cu/XnGNjI3emCY906SH/N2zMD8L5QKUgcg+IMSlzDJEHMt+f+bMwMDA8d7ezw8fDoeCKkKGrmajXqkEIISpZNLhOPjFkSN2e0dfX59UAoaul4u5RNA3Nj52/YqLYbJQkaRCTgKCVMjXES0DwNMYY47jDhzYf+LEV83NH7y7a1dBFDHGhq5lo14EKxjjTCa9Z8/uzs5Om217V1cXro0Sz+bSYV03MMZOp7Onp4ckSbfbnclk6gXiUsIy0DQ0cvlSQRRKoFguARXBs2d/CocCdYIqqhi6pqLlYDBQAkUIoYqWDV3F2CjxTC4d1lQVY3PW56UyaYxNRZZVBDUVaaoqsjGiXMwJgvDY2rVOp/PY0WOnT50aGhra8PSG4eFhQ9ctAkXGGCuyfLy3t72tfeuWLSRJYoxXjJU6wYppYozvTE11d3f39/c7nV+HQsF6QY0gT9XiYD7HFcU8VORaQA1jU0MwG/VBRdIQNHR1MRSYcLtvXHUthgKmqWkq+pugauh6Psc9iC6yDJ1MJCqSpKmqplYtAsDT9QgkSQ4PD7tcLr/fLwhCrQd6NurVENR1HWPc0dFB1Mbg4KAV0DTLxXytB9bqLbe77+TJttaWttaWc+fOYoxVhGoGeco6LwRzLM0yNISKVAIaghqCVSRno16oWI8YG6HAwtTkhGrRaEipGLoKeDqXDhuWgRkOBc+f/7nL4XA4Dl68eOEfgxLP1K/Ba9teX/PwI7WID/n9cxhjXdeosM9y0jSMsd1uJwjilYaGaDS6SsClQ1UkQxlUkWyaGsZG/WzrP5plwKUDJZ4BPJ16EIhFFhKR+URkXlzKAJ4GeYqOeQtLSasgTymSwDEpOhWpr1oNyEaYuA/kMyCfKeRSIJ8W2HhNifpLNjX3F6gSFp70r6n4AAAAAElFTkSuQmCC" nextheight="645" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>GPU 
计算生态资源：算力资产上链</strong></p><p>在 AI 基础设施的链上融资生态中，GAIB 已与多类算力服务商合作，覆盖<strong>主权级/企业级云（GMI、</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a><strong>）</strong> 与 <strong>去中心化网络（Aethir、</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://PaleBlueDot.AI"><strong>PaleBlueDot.AI</strong></a><strong>）</strong>，既保证算力稳定性，也拓展了 RWA 的叙事空间。</p><ul><li><p><strong>GMI Cloud</strong>：NVIDIA 全球 6 家 Reference Platform Partner 之一，运营 <strong>7 个数据中心、5 个国家</strong>，已融资约 <strong>$95M</strong>。以低延迟、AI 原生环境见长。通过 GAIB 的融资模式，其 GPU 扩张具备更强的跨区域弹性。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI"><strong>Siam.AI</strong></a>：泰国首家主权级 <strong>NVIDIA Cloud Partner</strong>，在 AI/ML 与渲染场景中性能最高提升 <strong>35x</strong>、成本下降 <strong>80%</strong>。已与 GAIB 完成 <strong>$30M GPU Tokenization</strong>，为 GAIB 首单 GPU RWA 案例，奠定其在东南亚市场的先发优势。</p></li><li><p><strong>Aethir</strong>：领先的去中心化 GPUaaS 网络，规模 <strong>40,000+ GPU（含 3,000+ H100）</strong>。2025 年初与 GAIB 在 BNB Chain 联合完成 <strong>首批 GPU Tokenization 试点</strong>，10 分钟完成 <strong>$100K</strong> 融资。未来将探索 <strong>AID/sAID 与 Aethir staking</strong> 打通，形成双重收益。</p></li><li><p><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://PaleBlueDot.AI"><strong>PaleBlueDot.AI</strong></a>：新兴去中心化 GPU 云，其参与强化了 GAIB 的 DePIN 叙事。</p></li></ul><p><strong>机器人生态：具身智能的链上融资</strong></p><p>GAIB 已正式切入具身智能（Embodied AI）赛道，正将 GPU Tokenization 模式延伸至机器人产业，构建“Compute + Robotics”双引擎生态，以 SPV 抵押结构和现金流分配为核心，并通过 AID/sAID 将机器人与 GPU 收益打包，实现硬件和运营的链上金融化。目前已部署合计 1,500 万美元的机器人融资，预期年化收益率约 15%，合作伙伴包括 OpenMind、PrismaX、CAMP、Kite 及 SiamAI Robotics，覆盖硬件、数据流和供应链的多维创新。</p><ul><li><p><strong>PrismaX：</strong>PrismaX 的定位是“机器人即矿机”，通过遥操作平台连接操作员、机器人与数据需求方，生成高价值的动作与视觉数据，单价约 30–50 美元/小时，并已通过 $99/次的付费模式验证早期商业化。GAIB 为其提供融资以扩展机器人规模，数据出售收益则通过 AID/sAID 
回流投资人，形成以数据采集为核心的金融化路径。</p></li><li><p><strong>OpenMind：</strong>OpenMind 则以 FABRIC 网络与 OM1 操作系统提供身份认证、可信数据共享和多模态集成，相当于行业“TCP/IP”。GAIB 将这些任务与数据合同资产化上链，为其提供资本支持。双方结合实现“技术可信性 + 金融资产化”的互补，使机器人资产从实验室阶段走向可融资、可迭代、可验证的规模化发展。</p></li></ul><p>整体而言，GAIB 通过与 PrismaX 的数据网络、OpenMind 的控制系统及 CAMP 的基础设施部署协作，逐步构建覆盖机器人硬件、运营与数据价值链的完整生态，加速具身智能的产业化与金融化。</p><p><strong>DeFi 生态：协议集成与收益优化</strong></p><p>在 AID Alpha 阶段，GAIB 将 AID/aAID 资产与多类 DeFi 协议深度集成，通过 <strong>收益拆分、流动性挖掘、抵押借贷与收益增强</strong> 等方式，形成了跨链、多元的收益优化体系，并以 <strong>Spice 积分</strong> 作为统一激励。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/bf9063d707caecad9a90c79efe3f865dd01cf0d4bf6a1ad39f4ecef314a568e7.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGD0lEQVR4nE1UT2zb5hX/tkN2221AdhgwoIdeCmQYsA0YBizbYQXaoimGpsm6FME6dGiCuZg9BNHSNHGT1nES2a5jO7Esy7RNy4wpU4wlmiItkaFIf+JnMhQZSbRUyxQlWnJk13LroociwJBB0mXAw8MHvPe+3/u9f8Bz0XaOdytouyBUSnKllHbLypfZVbesFFV6K8tWSvKOZ3g1zdkUao5aLcOSHquWoVfTPBfVnIznos5D7Wjouaht7eiao4KaAx9+9PrDj14L/PkV/IOT1OW36Stnyb5TY6+9RF89R10+Pf/+70Lnfr148dXFC38i+06Fzv0m9O6vxt94mb5ydscz/g8gU69bq599wA/3wPAgIkeCZ365QY2CHc8opAlzDdugRlXibk7ADea+wQY3IiMVWyjAyBNmYoMa3VSWyhZfSIdzAl5UaZObKapUN82uuGXFq2klPVZUKWdT7DgTzqYEqmUYvfpuOnQ91n9+/u+/l2dvdtNcvPCqPHvz8Pnu0Ytv9o+cet3c8Yz9r7ePXnxz+Lz57YujvdaXNSfTPCy3vq8ffOe2vq/X6+b+kXP0368OvvMOn+8ePm/seAbwXORsipVSuivtlliJ7YKAnf9tDwBXfgb6fgjEwJW9VnG/VZSCH186Bj575Qe+H4MMcaf1bWX89ZcvHQNXfw76ANCj9xY//GMPAL6ftqOGTx5veDrwalqtplXdjd16tivFvGiWtf4TPzoNwDkATgOA976l5x4/KchT7598E4CzAJwCYP7f72TL6MOOw3kA3gZgsf9v137RjnoPgDMA/AOAUlEGrf08HZ3+61/eUCEdX5nRMoyBmPjKTIwOYsFBLHBrOjBALk1QywFZopoNM4tiUCDQenSvYXIsvvk09UR9RMyNPNVXM5Bel6ORxXF6ORAhxrM6axlroNkwM5Amlx6okE5LlGkkLIPL6gmOW4hEAhQVxPExKUXK0jJUoq0DO58TEGI0jf36sMhz4ZItW1YKnx+1rJSOmHU5ioX8U5OD09N+Q2MLVgq0Dm0sdPfTa/8cuNH7ef+/hoev6ogxjYSQJELBQSx4Zzbkh0oUKlEmjnHsfFok0yKZ5MOP6Gk6Ol2yxUoZpkVyd0dfl6M6YkhiYiZ4e27Gr0I6
nxNAvaptu9pPzrwzuxIWYKJWM4p5UYUxnl0g8HskMbFEjAs8ISQJSSRdJ5NZX+HZBajQzhbkWNwuCK6TEZJLnougQqswFiEfEOExIjyuwlgHwN3Y3s5cvNy7yj8U03R5C+Ys3jK4DKTx+VF8fgSfG1EhLUvLskS1DgoqpHkWR2r8q2aO58J2TvBclIG056J1ieoCYJh/DhtqM7CSwKtteC5iY0F6ORBbCVYdlLeS+ZzAMnNY8M7U5MD98RvxlZkuwF7DbDZMqEQbntFsmB0AseogIUl0GWRgbIkYn8OGZkP+jELbORHs1p84WxCfH7s1cOn2bV9WZ4t5EalxpMbjK1h0eYok7ksiqcKYLFH1qmbqLFLjlrHmuSjJL9o5oV7VSvbjelWTpaihsVRkcmryFrn0wDQSHQYuciuo6mx0/pqwc6KdE7oTNocNTQdukUsTlsF1qkQ1PJ0Ijw1+fokkJqoOSvJhuyCUSwrH4g1PlztDyDKzH/suMnGsYAm57BrYcTXXyZw48RKG+VceBbc25YKVUmFMEkkhSTBxLJUkDLSa5MPdPSgX06a+Wi4p7RKxCwUr1fB0Z6t9PuVOD6BC8xwuiWRWT7T3YP/ZU0kkjx8/durNP/Rf65nDhrJ6wjQSPBeeDflnQ3cxbAgq9GPhIVToYl58RE8LSYKNzTpb66kkUbCEqoN0xNSrGlRoWYry7IKQJGSJkiUqbwntU+G5yMwqSX5RkpbKJcUyOM9FDU/XEaMjxtDYZsO0DE6WoocHdrNhlGxx/9nTg/1cKkmU7MfNhmnnxL1nVjf9DtdlSSTtnNBm0AaoaZMPIv/x9V6/3pPVEyVbZJm5+xMDPt+FT/t7fb4LXwx/wrE4x+JZnZWkCBEegwptaCwdnU5LlCRFwgv3EGI4Fue5MBa6e/NGHxbyS2LEMtb+B85Gs0Wt5KMuAAAAAElFTkSuQmCC" nextheight="819" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Pendle</strong>：用户可将 AIDaUSDC/USDT 分拆为 PT（本金 Token）与 YT（收益 Token）。PT 提供约 15% 固定收益，YT 则承载未来收益并享有 30 倍积分加成，LP 流动性提供者可获得 20 倍积分。</p></li><li><p><strong>Equilibria 与 Penpie</strong>：作为 Pendle 的收益增强器，前者可在原有收益上额外提升 ~5%，后者最高可达 88% APR，两者均叠加 20 倍积分放大。</p></li><li><p><strong>Morpho</strong>：支持将 PT-AIDa 作为抵押物借出 USDC，赋予用户在保持仓位的同时获取流动性的能力，并拓展至以太坊主流借贷市场。</p></li><li><p><strong>Curve</strong>：AIDaUSDC/USDC 流动性池可获取交易费收益，同时获得 20 倍积分，适合偏好稳健策略的参与者。</p></li><li><p><strong>CIAN &amp; Takara（Sei 链）</strong>：用户可将 enzoBTC 抵押于 Takara 借出稳定币，再经 CIAN 智能金库自动注入 GAIB 策略，实现 BTCfi 与 AI Yield 的结合，并享有 5 倍积分加成。</p></li><li><p><strong>Wand（Story Protocol）</strong>：在 Story 链上，Wand 为 AIDa 资产提供类似 Pendle 的 PT/YT 拆分结构，YT Token 可获得 20 倍积分，进一步强化了 AI Yield 的跨链组合性。</p></li></ul><p>整体来看，GAIB 的 DeFi 集成策略涵盖 <strong>Ethereum、Arbitrum、 Base、Sei 与 Story Protocol、 BNB Chain和Plume Network</strong>等公链，通过 Pendle 
及其生态增强器（Equilibria、Penpie）、借贷市场（Morpho）、稳定币 DEX（Curve）、BTCfi 金库（CIAN + Takara）、以及原生 AI 叙事的 Wand 协议，实现了从固定收益、杠杆收益到跨链流动性的全方位覆盖。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>九、团队背景及项目融资</strong></h2><p>GAIB 团队汇聚了来自 AI、云计算与 DeFi 领域的专家，核心成员曾任职于 L2IV、火币、 高盛、Ava Labs 与 Binance Labs 等机构。团队成员毕业于康奈尔大学、宾夕法尼亚大学、南洋理工大学与加州大学洛杉矶分校，具备深厚的金融、工程与区块链基础设施经验，共同构建起连接真实世界 AI 资产与链上金融创新的坚实基础。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/23bda5cfab90b36eb1c0b5248ff3de3787197b480abf9e7ab6b782e4fda8a05d.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAQCAIAAAD4YuoOAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFAUlEQVR4nG1TXYgbVRS+hYKiPkjxQVp8qQ+2iPRBCwq1IAX1QQSFgojFF1Gh+Pcigog/VERQES11a5Va1rbQn91uNs02OzuTkJ3JJJPMJJmdSWazmWR2Mj+Zyfxk06STTGZyJZtaWvHjcDiXe+75+e45AEL44w+nPv7wU9fdsmxbUVTDMGRZHg6HEMIwDCGEQRDAbUyPDVk7/tHnyRQNIVRVtdGQFEWRZdl1O0EQBnchDEPw1TcnwTYOHT6qa1o0Fo9GY5FItFAocixXKLB0vsiWWM/zJtHHYwjhkRffAADs2vVkzwvKPH89GmMYJkfleL6sKJN8kiRPpCF1uzfB84eO7tt5/wt7HgVgr66pOEFiWBJPEYlECsOSKysosoJyHF+riYqiQgh7tnryg9ePPf3YZ6880yiQetvCUzjDlFh2rVAopskMx/G+7087hhCCMkMf3LsXAPDT978OBh5BkFmK4jh+UobclCVZlmXTNA3D0HXd8wYQwhOfvLN75wNfHH8XQr+pqBSVz1I5KpsjyQyKYfk8LYp1RVGnJAMI4cIyfuLnPyGE3W7XNEzLshzHgf+HcPsPNN34+stvxfV1CKFhmLqum2bbsm3TNKcOw6HveYOpDU7/9sd9YAcAYGZu1TRaKysYhiVJMlMRhAkn4/9GhzA8uO8pAMDhJx63rXa5UkEQJJFITbXrup7n3V0TOLD/wLOP7Hj/yHNnz51v6TqGpdIkmUrhdL4gCDVdb1mWrestwzCnzIplfg8ABx7auf/hB1NI3DDN5fgKhiVj8WUqm2NZ3jDNexJEFhbfOrR/NwBXz57p93qJRHIVJzKZCaGZDIXjBEXlKhWh3pD+Hdbxq0deBgC899pLN7vdTqdTq4m1miiKoizLjYbkOE4YTgb0NkUQwgpbuHjhgu1MIDUmTlWh2pBkmmZyOZqmGY7nNmq1O9sgCNXFq1eaUqPX7zuO0+l0IISe57mu23Hdm93uPR1YVjuOoMlE6vLluWg0Lkmy74+md6USe+nS5YWFhQsXLmJY0rIs3/eDIEgkknEEnZu7tv0kOqG+s9XStbm5a/PzEQRBSyVWEKqm2e52u8D3fZZdIwmSIEiSyLQM406luq7jOEESJEXlcYIslkqWZUMIWXYtiSVpmhGEqiiKqqIGQdA2TQRBYrF4sVCUJFkU647jeJ4HPM9DsVQkEltcjMUR9PoSgmKpmigFQTgKQm8wGThv4A+Hfq9/q7+9B4XC2tISWpeabcvZlBXL
nsy0putxBI3GlpEV7EYc5XnBMO3h0AcQjrbcVs9ttR3dtlt2SzH0TUPfHPkDGI5uyziYaDiCMBwH3jicSBgOxqEH4Wg8MQaWJqXQG3gyjiGxG0uRMssU82RN4EA6FkMW5s9fiTg03iRjifnvtGq1llkl//qFL+fo6Cw1d7aSn1c3MtJ6QSilLV2sVgVFFnWl3nebWoM3FKG9pbTcTb0tV6p1TW1odVbaYOqVnLTOgHpDLbECmi66hRKbSi0S6Ea5ubmxWS8z+TWBXcW4JFoiItIGS+WLWSrXkFsUJ1eltqg4lu0IQs3t9iudDt915M4tVuk5fV/V9aqoSLKmKBrwu8zQIUe3yBq/9Oaxt8/M/P737Ln4Kj17MbpcVtpUvDl/2qaT63T21MyMqigcx9UqrJaO5LJZAscNXYthmasRjEgXkyTDePX8UCoMm+cxdRZVI2njH43/5He8qRg0AAAAAElFTkSuQmCC" nextheight="688" nextwidth="1393" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Kony Kwong</strong> 为 GAIB 联合创始人兼 CEO，具备传统金融与加密风投的跨界经验。曾任 L2 Iterative Ventures 投资人，并在 Huobi M&amp;A 负责基金管理与并购，早年就职于招银国际、高盛、中信证券等机构。毕业于香港大学国际商务与金融学（一等荣誉），并获宾夕法尼亚大学计算机科学硕士学位。他认为 AI 基础设施缺乏金融化（“-fi”）环节，因此创立 GAIB，将 GPU 与机器人等真实算力资产转化为链上可投资产品。</p><p><strong>Jun Liu </strong>为 GAIB 联合创始人兼 CTO，兼具学术研究与产业实践背景，专注于区块链安全、加密经济学与 DeFi 基础设施。曾任 Sora Ventures 副总裁，亦在 Ava Labs 担任技术经理，支持 BD 团队并负责智能合约审计，同时在 Blizzard Fund 主导技术尽调工作。本科毕业于台湾大学计算机科学与电机工程双学位，后于康奈尔大学攻读计算机科学博士并参与 IC3 区块链研究。他的专长在于构建安全可扩展的去中心化金融架构。</p><p><strong>Alex Yeh </strong>为 GAIB 联合创始人及顾问，同时担任 GMI Cloud 创始人兼 CEO。GMI Cloud 是全球领先的 AI 原生云计算服务商之一，并获选为 6 家 NVIDIA Reference Platform Partner 之一。Alex 拥有半导体与 AI Cloud 背景，管理Realtek 家族办公室，并曾在 CDIB与IVC 任职。在 GAIB，他主要负责产业合作，将 GMI 的 GPU 基础设施与客户网络引入协议，推动 AI Infra 资产的金融化落地。</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/045e673ef8fd5a824cbcdcf0786f0b06171de6228b1a2478544cbcb861354e29.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAGiElEQVR4nE2TeVDTZxrHv4AoCDQhSg4bAQ0Q1KJAlQIG5bDRWlA5hUYUD86CJQgVLEeBuugqCC1Ca6zBBBoIOSD5EXJACDEph1I5Wm3rdkst3dna1e3Usf1nZ7LzS2d2duYzz7zznXee7/M+832hKlilPuOmysdQkedwiYemxENb6kOUeBJlvvoKurGKYan1tzT4TzYH2ls4s1eC5q6FzHWE3u/cttC9faErjKzdWxc/eeWBOGpZzv9t8vQLe/GPqrRl+YEvRTu/6NwKZT5MNWFTbW8SQpahijNaGaCvCLDUhxnOsYkyX12Fn7GKOXaeZa5l/2ljuxg4dZkzfSV45mrQbGvwXdKPu9i15RvxzmemrOm2A6a6xKdjRbfS3AyVzPnucAyexMyHR5YNF82N0QsiweeX4+0tvG9kZdbG1zTF7rry9STC9foKuqmaYa5lT9S9PFG30dLAnmzwtzYF3Gnyt7ew77YFPJbHKIs5PCALMNYl3OtIXujce689DOp81+ESX6KCoyhcoy6mOBflrS2lONflrS310ZZSRs7SiHdoxDvUUSFdJ1ynd/oZq5imatZ4DX2ygW6/6LaiCDnrhxiAD7Tt2/CbrfEJcdraHAR1vvuo0E9R7KEqoyoLoDhNojwN0qDMV1tK/dNDW0rRlTOcb2LoyhlOxWe4ZPWIkGKowsJHQf9Q0SRZ4AG7gXMc/DiYvtC121zLxsAJPJIWOr4ddvw04fjB5Phu1PG94d+2zj/uS/5YlD6fvWFpiu7NwaO+4l8mWp9a2/5puvTM1vHM1ulYNv5saJLnwVLPcPysdvxL5Hgh+lpe9VVfy1Nr48JHm8zv+Y4IKeS8U5fiHohPPSZq/66qWBmueax+d1GU+0hS9Ddp0WL30VFhgDIPltpwc3X4fPvhuSv779Tvut+esnQ9Y/ZyojIP5vO0h6LYFXnsc2uSY4HvmE/9ScWz1kMvhLbUHcpjMJQyNAUvDbwFeS4Gj2FQAHUuBlKgOAx1Ooi3QGRjOB2aTAylQpsKTQYJqWTBkLvadBKWIkxV4ItqzNfh7gXYK2EqxkgBtPmekCVhsWn/UiNfn+VlzmWZsl4ajMKD5uRvW3OW6vdbstcNxWCm7NUZYZT1TIhZwJh6O8JesEWzBxMCxr3q+PEsnwkBzZINyyGsdGX8IjtjE8B6EivK8w87c4h0YDAKlmy6+SjDlOplPLzWmLLalOI2nuatT4BdEGjNedkmYFsyqNpdMPAxL+RpdkETBR0fhjdddXEwJmEmjzN9hjudt+lO+lpblvfd/MCp48zl9mNf1STo+MBAMHl7LI1mzqSbM9ePp/lOZLL0+zC4EWNJsGSyrCdCiNfQ54WhaJhzAmVsDDBB7Id2HxSBIGIwnRc8c2rb7CmugQdjHJYqeTN5wbMF2+6k0wx7gf5NkDHxGQ3StWTtXU0ywIScg34/yChQBkL+CkZioGZD4QmVP+QbSV3uDQULSn8M0qHZgGE2zBEw7YQ+FPpwmMNhiMRoDNC/Ab2+pE3/JgxsgDoMyjBSVIVgMtl7PAHEsRBVNqOSiXrgPR80USChQsaFOBbKWMgiQMThJs+TSHL/YDNkkdC84aeLd+uIWaWLJz1wG5g6znXMqufKeE8+rXwma57J3/G7rmvp7B7Hl6OOF9/pP24V8sMSkByDiEhXHHSHALiyGVJhtrowMYOO5iRmb3XBwImE2iPxsxfftt/qMVbmGLvfn6/YN8wGegBbdpDjofV3omv52qkn4urnQ1d/VV79j1nksH/msIj7ktnXI9HphxsMNDPRwcD13RAFoTEE4mg0BEG0A117KbJX0ZvG1Se6zBXG24/4GcpSDAe9Ca7TYHArFs/xPxdwxpI9liqT5sp4Q9GYzgs1HvSZSnaLp
KA0hnXtEONmaihvLQ6tQfbRpuqUnNa0XfWJwZmB1JZYLheIBjYD5aEuN2oK//pGJBXc40yWLQK47QLZFvRtg4QLVbynYjekwVDuWSXbCSVvjSwU4sMcMd/3wnb8ZTvKObjgjw9SD1zds1n8Ou3DcLRHeYliqe/H+TeFuRaH4HoULqVEtkd5ZHGol0LX6ENBhue2C6TrIV4HMR0SFnrWkUhYkGxED5UMldQVUheIATkd/c789AFSZ976vMiwETtIxmKh3QIDD+OvYyrTdzwRag7ICPU4F9UDsoX4/w7/41MnYuCGk4+Bm8AtZxU7YyIBOUGflzPfTPIPyZlkd4KL/wJwYsfjjHFQQAAAAABJRU5ErkJggg==" nextheight="660" nextwidth="1100" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>2024 年 12 月，GAIB 完成 <strong>500 万美元 Pre-Seed 融资</strong>，由 <strong>Hack VC、Faction、Hashed</strong> 领投，参投方包括 <strong>The Spartan Group、L2IV、CMCC Global、Animoca Brands、IVC、MH Ventures、Presto Labs、J17、IDG Blockchain、280 Capital、Aethir、NEAR Foundation</strong> 等知名机构，以及多位产业与加密领域的天使投资人。随后在 <strong>2025 年 7 月</strong>，GAIB 又获得 <strong>1,000 万美元战略投资</strong>，由 <strong>Amber Group</strong> 领投，多家亚洲投资者跟投。此次资金将重点用于 <strong>GPU 资产 Token 化</strong>，进一步推动 GAIB 基础设施完善、GPU 金融化产品扩展，并深化与 AI 和加密生态的战略合作，强化机构在链上 AI 基础设施中的参与度。</p><h2 id="h-" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>十、总结：商业逻辑及潜在风险</strong></h2><p><strong>商业逻辑：</strong>GAIB 的核心定位是 <strong>RWAiFi</strong>，即将 AI 基础设施资产（GPU、机器人等）通过链上化的方式转化为可组合的金融产品，形成 “真实资产 → 现金流证券化 → DeFi 优化” 的闭环。其商业逻辑建立在三点：</p><ol><li><p><strong>资产端</strong>：GPU 与机器人具备“高价值硬件 + 可预测现金流”的特性，符合 RWA 化的基本要求。GPU 因标准化、残值明确与需求旺盛，成为当前最现实的切入点；机器人则代表更长期的探索方向，依托遥操作、数据采集与 RaaS 模式逐步实现现金流上链。</p></li><li><p><strong>资金端</strong>：通过 <strong>AID（稳定结算、非生息、T-Bills 储备）</strong> 与 <strong>sAID（收益型基金代币，底层为融资组合 + T-Bills）</strong> 的双层结构，GAIB 实现稳定流通与收益捕获分离。并通过 PT/YT、借贷、LP 流动性等 DeFi 集成释放收益与流动性。</p></li><li><p><strong>生态端</strong>：与 GMI、<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Siam.AI">Siam.AI</a> 等主权级 GPU 云，Aethir等去中心化网络，以及 PrismaX、OpenMind 等机器人公司合作，建立跨硬件、数据与服务的产业网络，推动“Compute + Robotics”双引擎发展。</p></li></ol><p>此外GAIB 采用 <strong>SPC（Segregated Portfolio Company）结构</strong> 
将链下融资协议转化为链上收益凭证。核心机制包括：</p><ul><li><p><strong>融资模式</strong>：债务（10–20% APY）、收益分成（60–80%+）、混合结构，期限短（3–36 个月），回本周期快。</p></li><li><p><strong>信用与风控</strong>：通过超额抵押（约 30%）、现金储备（5–7%）、信用保险与违约处置（GPU 清算/托管运营）保障安全性；并配合第三方承销与尽调，建立内部信用评级体系。</p></li><li><p><strong>链上机制</strong>：AID 铸造/赎回与 sAID 收益累积，结合 Pendle、Morpho、Curve、CIAN、Wand 等协议，实现跨链、多维度的收益优化。</p></li><li><p><strong>透明度</strong>：官网、DefiLlama 与 Dune 提供实时资产与资金流追踪，确保链下融资与链上资产对应关系清晰。</p></li></ul><p><strong>潜在风险：</strong>GAIB 及其相关产品（AID、sAID、AID Alpha、GPU Tokenization 等）在设计上通过链上透明化提升了收益可见性，但其底层风险依然存在，投资者需充分评估自身风险承受能力谨慎参与：</p><ul><li><p><strong>市场与流动性风险：</strong>GPU 融资收益和数字资产价格均受市场波动影响，回报并无保证；产品存在锁定期，若市场环境恶化投资者可能面临流动性不足或折价退出的风险。</p></li><li><p><strong>信用与执行风险：</strong>GPU 与机器人融资多涉及中小企业，违约概率相对更高；资产回收高度依赖链下执行力，若处置不畅，将直接影响投资人回款。</p></li><li><p><strong>技术与安全风险：</strong>智能合约漏洞、黑客攻击、预言机操纵或私钥遗失，均可能造成资产损失；与第三方 DeFi 协议（如 Pendle、Curve 等）的深度绑定，虽能提升 TVL 增长，但也引入了外部协议的安全与流动性风险。</p></li><li><p><strong>资产特性与运营风险：</strong>GPU 具备标准化和残值市场，而机器人资产非标准化程度高，运营依赖利用率与维护；跨区域扩张中，机器人资产尤其容易受到法规差异和政策不确定性影响。</p></li><li><p><strong>合规与监管风险：</strong>GAIB 投资的算力资产属于新的市场与资产类别，而并不非传统金融牌照的覆盖范围内。这可能会引发地区性监管问题，包括对其业务运营、资产发行及使用的限制。</p></li></ul><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5 的 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>gpu</category>
            <category>robotics</category>
            <category>rwa</category>
            <category>defi</category>
            <category>gaib</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/99b66da5c04e5318d8491514656ef3cca7da46634336fa811e6c30e23a9aaa32.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[From Federated Learning to Decentralized Agent Networks: An Analysis on ChainOpera]]></title>
            <link>https://paragraph.com/@0xjacobzhao/from-federated-learning-to-decentralized-agent-networks-an-analysis-on-chainopera</link>
            <guid>RlbO4norAyoBF4rNLIxX</guid>
            <pubDate>Wed, 17 Sep 2025 12:24:28 GMT</pubDate>
            <description><![CDATA[This report outlines the evolution from FedML → TensorOpera → ChainOpera: from federated learning (“data stays local, contribution-based rewards”) to enterprise AI infrastructure, and finally to an on-chain decentralized agent network. ChainOpera uses the AI Terminal and Agent Social Network to shift users from consumers to co-creators, supported by a Developer Platform and Model & GPU Platform for multi-agent collaboration and privacy-preserving training. Its CoAI protocol and Proof-of-Intellig]]></description>
            <content:encoded><![CDATA[<p>In our June report <em>“</em><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xjacobzhao/status/1932662806492688744"><em><u>The Holy Grail of Crypto AI: Frontier Exploration of Decentralized Training”</u></em></a>, we discussed <strong>Federated Learning</strong>—a “controlled decentralization” paradigm positioned between distributed training and fully decentralized training. Its core principle is <em>keeping data local while aggregating parameters centrally</em>, a design particularly suited for privacy-sensitive and compliance-heavy industries such as healthcare and finance.</p><p>At the same time, our past research has consistently highlighted the rise of <strong>Agent Networks</strong>. Their value lies in enabling complex tasks to be completed through <strong>autonomous cooperation and division of labor across multiple agents</strong>, accelerating the shift from “large monolithic models” toward “multi-agent ecosystems.”</p><p>Federated Learning, with its foundations of <em>local data retention, contribution-based incentives, distributed design, transparent rewards, privacy protection, and regulatory compliance</em>, has laid important groundwork for multi-party collaboration. These same principles can be directly adapted to the development of Agent Networks. The <strong>FedML team</strong> has been following this trajectory: evolving from open-source roots to <strong>TensorOpera</strong> (an AI infrastructure layer for the industry), and further advancing to <strong>ChainOpera</strong> (a decentralized Agent Network).</p><p>That said, Agent Networks are not simply an inevitable extension of Federated Learning. 
Their essence lies in <strong>autonomous collaboration and task specialization among agents</strong>, and they can also be built directly on top of Multi-Agent Systems (MAS), Reinforcement Learning (RL), or blockchain-based incentive mechanisms.</p><h3 id="h-i-federated-learning-and-the-ai-agent-technology-stack" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>I. Federated Learning and the AI Agent Technology Stack</strong></h3><p><strong>Federated Learning (FL)</strong> is a framework for collaborative training without centralizing data. Its core principle is that each participant trains a model locally and uploads only parameters or gradients to a coordinating server for aggregation, thereby ensuring <em>“data stays within its domain”</em> and meeting privacy and compliance requirements.</p><p>Having been tested in sectors such as healthcare, finance, and mobile applications, FL has entered a relatively mature stage of commercialization. However, it still faces challenges such as high communication overhead, incomplete privacy guarantees, and efficiency bottlenecks caused by heterogeneous devices.</p><p>Compared with other training paradigms:</p><ul><li><p><strong>Distributed training</strong> emphasizes centralized compute clusters to maximize efficiency and scale.</p></li><li><p><strong>Decentralized training</strong> achieves fully distributed collaboration via open compute networks.</p></li><li><p><strong>Federated learning</strong> lies in between, functioning as a form of <em>“controlled decentralization”</em>: it satisfies industrial requirements for privacy and compliance while enabling cross-institution collaboration, making it more suitable as a transitional deployment architecture.<br></p></li></ul><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>Distributed Training</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Federated Learning (FL)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Decentralized Training</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Centralization</strong></p></td><td colspan="1" rowspan="1"><p>Highly centralized</p></td><td colspan="1" rowspan="1"><p>Controlled decentralization</p></td><td colspan="1" rowspan="1"><p>Fully decentralized</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Core Goal</strong></p></td><td colspan="1" rowspan="1"><p>Maximize efficiency &amp; scale</p></td><td colspan="1" rowspan="1"><p>Data stays local, privacy-compliant collaboration</p></td><td colspan="1" rowspan="1"><p>Open compute networks, free collaboration</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Typical Scenarios</strong></p></td><td colspan="1" rowspan="1"><p>GPT and other large-scale models</p></td><td colspan="1" rowspan="1"><p>Healthcare, finance, mobile input methods</p></td><td colspan="1" rowspan="1"><p>Crypto AI, DePIN networks</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Trust Structure</strong></p></td><td colspan="1" rowspan="1"><p>Single institution controls data &amp; compute</p></td><td colspan="1" rowspan="1"><p>Coordinator server + compliant multi-parties</p></td><td colspan="1" rowspan="1"><p>No central authority, cryptographic verification</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Communication</strong></p></td><td colspan="1" rowspan="1"><p>High-speed intra-cluster parallelism</p></td><td colspan="1" rowspan="1"><p>Parameter/gradient aggregation with frequent controlled updates</p></td><td colspan="1" rowspan="1"><p>Asynchronous, low-bandwidth, verification required</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Advantages</strong></p></td><td colspan="1" rowspan="1"><p>Industry maturity, highest efficiency</p></td><td 
colspan="1" rowspan="1"><p>Strong privacy protection, high industry acceptance</p></td><td colspan="1" rowspan="1"><p>Strong openness, censorship resistance</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Weaknesses</strong></p></td><td colspan="1" rowspan="1"><p>No privacy protection, data centralized</p></td><td colspan="1" rowspan="1"><p>High communication cost, limited generalizability</p></td><td colspan="1" rowspan="1"><p>Immature fault-tolerance and incentive mechanisms</p></td></tr></tbody></table><br><h3 id="h-ai-agent-protocol-stack" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>AI Agent Protocol Stack</strong></h3><p>In our previous research, we categorized the <strong>AI Agent protocol stack</strong> into three major layers:</p><h4 id="h-1-infrastructure-layer-agent-infrastructure-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>1. Infrastructure Layer (Agent Infrastructure Layer)</strong></h4><p>The foundational runtime support for agents, serving as the technical base of all Agent systems.</p><ul><li><p><strong>Core Modules</strong>:</p><ul><li><p><strong>Agent Framework</strong> – development and runtime environment for agents.</p></li><li><p><strong>Agent OS</strong> – deeper-level multitask scheduling and modular runtime, providing lifecycle management for agents.</p></li></ul></li><li><p><strong>Supporting Modules</strong>:</p><ul><li><p><strong>Agent DID</strong> (decentralized identity)</p></li><li><p><strong>Agent Wallet &amp; Abstraction</strong> (account abstraction &amp; transaction execution)</p></li><li><p><strong>Agent Payment/Settlement</strong> (payment and settlement capabilities)</p></li></ul></li></ul><h4 id="h-2-coordination-and-execution-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>2. 
Coordination &amp; Execution Layer</strong></h4><p>Focuses on agent collaboration, task scheduling, and incentive systems—key to building <em>collective intelligence</em> among agents.</p><ul><li><p><strong>Agent Orchestration</strong>: Centralized orchestration and lifecycle management, task allocation, and workflow execution—suited for controlled environments.</p></li><li><p><strong>Agent Swarm</strong>: Distributed collaboration structure emphasizing autonomy, division of labor, and resilient coordination—suited for complex, dynamic environments.</p></li><li><p><strong>Agent Incentive Layer</strong>: Economic layer of the agent network that incentivizes developers, executors, and validators, ensuring sustainable ecosystem growth.</p></li></ul><h4 id="h-3-application-and-distribution-layer" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>3. Application &amp; Distribution Layer</strong></h4><p>Covers distribution channels, end-user applications, and consumer-facing products.</p><ul><li><p><strong>Distribution Sub-layer</strong>: Agent Launchpads, Agent Marketplaces, Agent Plugin Networks</p></li><li><p><strong>Application Sub-layer</strong>: AgentFi, Agent-native DApps, Agent-as-a-Service</p></li><li><p><strong>Consumer Sub-layer</strong>: Social/consumer agents, focused on lightweight end-user scenarios</p></li><li><p><strong>Meme Sub-layer</strong>: Hype-driven “Agent” projects with little actual technology or application—primarily marketing-driven.</p></li></ul><h3 id="h-ii-federated-learning-benchmark-fedml-and-the-tensoropera-full-stack-platform" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>II. Federated Learning Benchmark: FedML and the TensorOpera Full-Stack Platform</strong></h3><p><strong>FedML</strong> is one of the earliest open-source frameworks for <strong>Federated Learning (FL)</strong> and distributed training. 
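The parameter-aggregation loop at the heart of such frameworks — clients train locally, only parameters leave the device, the server averages them — can be sketched minimally. This is an illustrative FedAvg round in plain Python with hypothetical helper names, not FedML's actual API:

```python
# Minimal federated-averaging (FedAvg) round: each client fits a tiny
# least-squares model on its private data, only weights are uploaded,
# and the server aggregates them weighted by local dataset size.

def local_train(weights, data, lr=0.1):
    """One local gradient-descent step on (x, y) pairs; data never leaves here."""
    grad = [0.0] * len(weights)
    for x, y in data:
        err = sum(w * xi for w, xi in zip(weights, x)) - y
        for i, xi in enumerate(x):
            grad[i] += 2 * err * xi / len(data)
    return [w - lr * g for w, g in zip(weights, grad)]

def fedavg_round(global_weights, client_datasets):
    """Server step: average client updates, weighted by sample count."""
    total = sum(len(d) for d in client_datasets)
    updates = [local_train(list(global_weights), d) for d in client_datasets]
    return [
        sum(len(d) * u[i] for u, d in zip(updates, client_datasets)) / total
        for i in range(len(global_weights))
    ]

# Two clients holding private samples of y = 2x; raw data never reaches the server.
clients = [[([1.0], 2.0), ([2.0], 4.0)], [([3.0], 6.0)]]
w = [0.0]
for _ in range(50):
    w = fedavg_round(w, clients)
print(round(w[0], 2))  # converges toward 2.0
```

Real deployments layer secure aggregation, client sampling, and gradient compression on top of this basic loop, which is where the communication-overhead and privacy challenges noted above arise.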
Originating from an academic team at USC, it gradually evolved into the core product of <strong>TensorOpera AI</strong> through commercialization.</p><p>For researchers and developers, FedML provides cross-institution and cross-device tools for collaborative data training. In academia, FedML has become a widely adopted experimental platform for FL research, frequently appearing at top conferences such as NeurIPS, ICML, and AAAI. In industry, it has earned a strong reputation in privacy-sensitive fields such as healthcare, finance, edge AI, and Web3 AI—positioning itself as the benchmark toolchain for federated learning.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/c8ab519ab61a21d079d0b552a618ce6343a82ebf7135b40f41530a9801096efd.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAgCAIAAACHPC9vAAAACXBIWXMAAAsTAAALEwEAmpwYAAAHyklEQVR4nIWVa4gb1xXHbwMFQ6GlUBoSu2lC6lBDCoXmk9t8KG3B0DbgPBxKm9puU+K0idvGj2JocGkgbEucOKntXcfep9arXWmlkWZHM5JG89BoRjM7oxnNzOoxq9VqtZJWj9VzH9LuWq8iuxAoDT38OVzu+Z8f554vF/Q/J/Z2m9V6vdftddrdTrvb6/ba7W632/s8P/iftwftTlTiWBaWxVW/T/X7VLdTFNn4ynK6sdX4/6Bet9ftdvv9fiFfCEmy3Wqbt9hgB+J0wNOmWdzrw70+w1h+aH7o/AzU6/fqe1vlZr2+t91s71eajUqz1jjY3um0/CEOJXGSp1EKh9wLtMgWt8q73Va5Wa80G7v3W9WBuVFu1vba+6DT7UYqK2ptGVbwj2aH1dpyqBQNFSLihh7dTjFrkomwfmi+SRpBYUNTyoZYWFLKcbVq3LDfZdYktWpIxUhxtwI6/V4ovRQuGve8tvNXLy1VVpmEGCpEQoUIbQjhojGJzv3p6mW9mqQNQcxqYlYPJCQ+pZ6/esmjMUJa8ycWi7tl0Ol3AqrscFEWCMMIDkYpD8Px6yppBOmgzMvxT26O3bhtojmFEyPEUtAbZvhQlBH0kdF7pllYj627aS5dzoLt1jYt8i43O2v3zFlc0+YFlGD4rIpHGEYIR4wsw6kOhCL8Ia9PoOOiVwkYyWI0usrxOuJhFyXDL6gDUDqz5lV5EwZBHqtbERYTmlcJ8Fk1XDY+uDN8/MSJs5fegSg3RGEQhQWSkpjTr9+9/Z2fv/jDV181ozY8xLpEaqNeBLlcVtiIv3D6J9948qu33JZPoTGPTsulWPag9NqVP4MHYeUWhu3jiELIpVjmoHTyrVPgy4+Brzz991vvz1B2Ypmrtmogm8sq+chfzp/80fHv0jEaVgh2TQ5mw+FyfIyyP//9H79w8hSdEjGd8sU5bl2JNlKXPn738PeOHf3B82YaEjd0Ia8Plp3OZIzy2spOYeN+TcnGXCJJqKyL93lCNK4wjCF5wwwqki6RXBB8MOtBgvgCh6MCi
Qo4E5XitbRWNKrNBshksolIOSYVRX+KxqOiP8W4Yziiif6UGxZNoy6HhYUsLOoUA3g0oZcT2qYPlyEEdyIkigYULhOTi9VyC6TT64ZS0oIbftSwTbEjn1jH71hMY7aRj83DH9lxp66yOYnMiGQmRGcjQj4mFR0W7s6I5fawxWzCBXI1KuQrxd3BRAK14kFFitBQiC/ki6IoUhRlJIwlNblIpnFU9TMay0Z8uByisxaL1+VmLVbMYnMjKGsyO3lytb7ZAvt795H5oHnONWfFTGNQvrDBcUGCIBOJ5Xgsbp2kISuNERyGc14fv2DhRz41W51us81lgz1OjB6bmmO8iXq5BbrdXipWVdhcZLEgMWuER1CVqCLrQlB1w5IhlzQ+L7M5kcl4HSrrTehiQeXzLG4E8JjG5xU+q3Ib1VITdNqdhLb58PEKm6OQGGTmJkYQYiGyqpUT4VIiXFrRNnU+v0iu8eQqixukV4WstNXsw1GFxPTIwx11uz1DKakPJgp4VqxT/qkR99w4NTdO3bpmcczwjhkeMgu4U0csokRnCFS/N41MTcGTE/DEBGQxE1pwY7PwAFTIbOfXGvn0VjG9tZnbqRabm7mdUnZ7zoTwdISl9XuTC3YLMTFqJ9CQHEyG2CSNa9FwtpQdNGZTjZ3tA9Drdva3Inv10F5D2auHWnVpoKq8Vw+1D4zNDLcaQQ3FUVqjDna1QakSalUGnv1tda8h7zWUVl3qHJRBv9+OKSiF24SAK0DBIocG/QhLwUIA9aGzvHcEgU3W2VGbZZz0zvMBTOVmFd5Beu2kx8IQEOd3KTy0U1kG/f59TQ4sqmtjJgjG2GSmYUOo8WlozORAPJzgn05v1KLJIiNEh0dnaT4uBFyKIot6bsIMz9k90WRN17VKfgn0uvsxnePluJ9TMJxdz9XIgET4RRijJSWqsLPpTAnxBNwENw9hopqUONeSLquxDCtoTFAOR9LRSLiaXxpMJAvEvAN3IjiE4G6C91ICjJDTs7AoR8LcXDq3SQUUDOcmZ2yyvibzmCiwTtSPYDSC0U7UL8t8rRgdgBSJ1oyChxI9lETzOs3rMMYKyqoeywQJEx+K8nKCFSJzdg9KyAwBYRiKEQrDayQXdlMKw5APQe0V3a0twgPxzvl7NzzwWExBE5o3pqC5OBSTFzjfzPidf5DuKV1CDMWa0BZkbsE0em1m8roYcEYW55u1lcEH2es1up3NTrvSOdhMxOQVQ93Mr6ZTsXQqnkzGU6l4PKqiKCKKXC6TyGVWysW1Um45qvHrSe2gVez1qv3+Abjfbv/0zRHw3Dvg2T+Ao2+AZ34Pjp4DT/wGfPO34Imz4MhpcOTsQE+fA0/+Dhw5Myg99Tr41hvg2Nvg2T+C5y6A45dNEA+2d5rgmXMA/Ax88aX/6NDL4NArA/exN8Gxtwbobz+gP/U6+NKpgQ69MtAjL4EvnBxkcOLt9yxgt9kaUB95GTx+Bhw+Ax77NXj0tf/Oj58eHL72C/Dor8DXf/lZ6fCZQQYv/vVD52BH0cSGk1AnIf7WtP/aqO/9Yc/fbriufOC8/E/o4pD94pD9wpDtwpDt4pD9yjX43evwezfRoduef02Rdy2sBZFIPl7f2v03OfSb5cQLseUAAAAASUVORK5CYII=" nextheight="1591" nextwidth="1200" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>TensorOpera</strong> represents the commercialized evolution of FedML, upgraded into a full-stack AI infrastructure platform for enterprises and developers. 
While retaining its federated learning capabilities, it extends into <strong>GPU marketplaces, model services, and MLOps</strong>, thereby expanding into the broader market of the LLM and Agent era.</p><p>Its overall architecture is structured into three layers: <strong>Compute Layer (foundation), Scheduler Layer (coordination), and MLOps Layer (application).</strong></p><ol><li><p><strong>Compute Layer (Foundation)<br></strong> The Compute layer forms the technical backbone of TensorOpera, continuing the open-source DNA of FedML.</p><ul><li><p><strong>Core Functions</strong>: Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server.</p></li><li><p><strong>Value Proposition</strong>: Provides distributed training, privacy-preserving federated learning, and a scalable inference engine. Together, these support the three core capabilities of <em>Train / Deploy / Federate</em>, covering the full pipeline from model training to deployment and cross-institution collaboration.</p></li></ul></li><li><p><strong>Scheduler Layer (Coordination)<br></strong> The Scheduler layer acts as the compute marketplace and scheduling hub, composed of GPU Marketplace, Provision, Master Agent, and Schedule &amp; Orchestrate modules.</p><ul><li><p><strong>Capabilities</strong>: Enables resource allocation across public clouds, GPU providers, and independent contributors.</p></li><li><p><strong>Significance</strong>: This marks the pivotal step from FedML to TensorOpera—supporting large-scale AI training and inference through intelligent scheduling and orchestration, covering LLM and generative AI workloads.</p></li><li><p><strong>Tokenization Potential</strong>: The “Share &amp; Earn” model leaves an incentive mechanism interface open, showing compatibility with DePIN or broader Web3 models.</p></li></ul></li><li><p><strong>MLOps Layer (Application)<br></strong> The MLOps layer provides direct-facing services for developers and enterprises, including Model Serving, 
AI Agents, and Studio modules.</p><ul><li><p><strong>Applications</strong>: LLM chatbots, multimodal generative AI, and developer copilot tools.</p></li><li><p><strong>Value Proposition</strong>: Abstracts low-level compute and training capabilities into high-level APIs and products, lowering the barrier to use. It offers ready-to-use agents, low-code environments, and scalable deployment solutions.</p></li><li><p><strong>Positioning</strong>: Comparable to new-generation AI infrastructure platforms such as Anyscale, Together, and Modal—serving as the bridge from infrastructure to applications.</p></li></ul></li></ol><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9b86ec9ba3d1765965cde1bc5ab762e2daf8a50a25b28f8f8bd083cd2933f77a.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF0UlEQVR4nH1Uf0wTZxi+xD9m9s80jGy6bCPGgTMqzh9hEnET8Se6DiuysYJFKxa1Rq1toZNeFfSwFEV+FFPEEqs5BC/a2p1itWXkq4jKJ5zAKdym1O0EDrVbvG6WyS30FM1i9uT7477Lm+953+d93hcR3oZQaFgQhBWxcTHzZi9bvRRBkLlz58bGxsbFxSEIMn369JiYGIlEkpiYOGHCBOF/gQiCEAgEIIQMw/A8H349JAgjgiCU7t5UcCDdXl+/9tsUqVSakJCg1qhXrloZHR0tlUoVCoVer1epVG+kFRIEgWXZ1jD8fv8oAU3TOI5DCN1uN47jLMuKoaHQ8PPnwSD/h1hNd1ePIde0c+tezQ6jIdf0k9MzVmgoNCweQRAYhiEIgqIoCCFJkgAABMdxjuNycnJQ1BgMBq1WK0VRr3IaeTHyQsyLaLiQtDBt4rhPEOS9pIVpBt0hQRj5Z3hYrFUEwzBGFPV4PArFZoVCASHEcRxxOBw0TW9RKu32UxzH1dWdIUmy8dJV0uVu8viaPD6i4UL77Y7fHv7ucl4uLanQ56Eu5+WbN+H9X/uc5y9dufyzGOZubALAR5KkXq+32Ww4jldWVpIkifj9foIgHK/A8zxBEKrNBpl0Z1JCalJCanaGPl9XNDDYLwiC3+9nGGa0bX8+LS05lrJq84rF3yclpC6M+yY7M6+i3BIIPPX7H7a2tgIAIIQ8z482+ZXoo1IIggAhzNttytmwdxzywYeR01Bt+Z4dBopqv+a7xoRx504nhG1GtOijyBkR46MixkchyDuJ8VLXBfItLhKbzjDM/Qf3aZqmKIphmDJzdXlJ7UHj0eIDVaWHauy19YIgNDc10zTNMAyEt3n+mbvRq9u5H80tPmAsO1Ro0asP9NzteQsBRVG1tlqZLEMiSdm1azdBEDiO9/Teo6j29o52AJrbO9p7enswDLPb7TiOE2FgGCb6jaa76xvOtFz3tVz3UVQHhNDr9b5hkzCBqdiEYVi+IV+v15uKTWKEKCJJXgQAUBQllUolkpQ9Gs3ixCVarU4
my3C73RzH2e2nKIoKBAJarU6l2vHoUb8oydhYIHfudKKGgqKDhy0VNSXF5T/mGbs6u1tbWyGEYrI4jrvd7rS0tKlTP1MoNi9dtlwqXSeRfAsAYFnW4XBgGHaktBRFjRhWhGHY0aNlXq/3NUGTB+Sqio3aik8nxe5SFqLacktFdVd35927d0XDsGEp7Ha7zWYzm82HS0rs9pNms5kPY6yFAACCIBiGEa+vJWps9ChkuRvTNQgycWO6JmfD3qNHLOLUiLPOcRzP8xBCjuNE9TmOE6+BQGDsIZZl35T+NcFA/2CTB9xoaSManDda2po8wO9/GAo9p8MIvEJzU3OVpUouz9JqtVZr9aWLl/r6HgRH8dczPhgKDfcyv9xqg6HQ8DN+9Off4c3xetn19T2ob6jv63vAMIzD4QAAyGQZGIYB4LNarSzLDgz2d3V3ulxOl8vZ03uvp/fek6dPKivwov3HS7DawyZ7CVZbtP94WfEp8QMrrB7kHr8kAADYbLaCgsLTp05fvXKVpumBwYETJ2pcLtfjoSEI245VVa+TZKpydN+t25SyRrZdqZOuybCdOK3KyX//3Smzp30VNXnW5IiYqMmxUZNnTYqMmRQRI1khH+CevCSAEBYUFMjlWWaz2ev1ZGVu+2L6wvj5S+fNXDTl45lWS01FuSU6akHyEjmCRCIIsihu7ZezV+vU+05Un4ufkzwjetGCOavmzVw6+/PE+TOXz5r2dWL8ujx1KfuIe0ng9/sBAOICYRjGfrJOp0aNBmwfWqTaqvZcbR4aenwGdzjPN1otNSbsSEOd8xxB3rrZ3tJCea7caPF1+EBHi6/jxvVO8fhAh8dzMxgMviT4D2iaTv8hXS6X5xvy1Rq1XJ6VnJys1Wq3bd+mVOagKKrWqDdu2pS6fn22MlutUSuVWzI3ZMpksrNnG8bsP+aifwGq1O3OjSjhPAAAAABJRU5ErkJggg==" nextheight="873" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In <strong>March 2025</strong>, TensorOpera upgraded into a <strong>full-stack platform oriented toward AI Agents</strong>, with its core products covering <strong>AgentOpera AI App, Framework, and Platform</strong>:</p><ul><li><p><strong>Application Layer</strong>: Provides ChatGPT-like multi-agent entry points.</p></li><li><p><strong>Framework Layer</strong>: Evolves into an “Agentic OS” through graph-structured multi-agent systems and Orchestrator/Router modules.</p></li><li><p><strong>Platform Layer</strong>: Deeply integrates with the TensorOpera model platform and FedML, enabling distributed model services, RAG optimization, and hybrid edge–cloud deployment.</p></li></ul><p>The overarching vision is to build <strong>“one operating system, one agent network”</strong>, allowing developers, enterprises, and users to co-create the next-generation 
<strong>Agentic AI ecosystem</strong> in an open and privacy-preserving environment.</p><h3 id="h-iii-the-chainopera-ai-ecosystem-from-co-creators-and-co-owners-to-the-technical-foundation" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. The ChainOpera AI Ecosystem: From Co-Creators and Co-Owners to the Technical Foundation</strong></h3><p>If <strong>FedML</strong> represents the <em>technical core</em>, providing the open-source foundations of federated learning and distributed training; and <strong>TensorOpera</strong> abstracts FedML’s research outcomes into a commercialized, full-stack AI infrastructure—then <strong>ChainOpera</strong> takes this platform capability <strong>on-chain</strong>.</p><p>By combining <strong>AI Terminals + Agent Social Networks + DePIN-based compute/data layers + AI-Native blockchains</strong>, ChainOpera seeks to build a <strong>decentralized Agent Network ecosystem</strong>.</p><p>The fundamental shift is this: while TensorOpera remains primarily enterprise- and developer-oriented, ChainOpera leverages Web3-style governance and incentive mechanisms to include users, developers, GPU providers, and data contributors as <strong>co-creators and co-owners</strong>. 
In this way, AI Agents are not only “used” but also “co-created and co-owned.”</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/47266591f61b08d9f1433aa6671828f1c2b609504efc407ce7183d1f111e0df4.png" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAXCAIAAADlZ9q2AAAACXBIWXMAAAsTAAALEwEAmpwYAAAHDklEQVR4nF1U609b5x1+fW4+vvv4co7t4xs+2D62MQb7+IIviU2CbSBgKGAuxtxSkJ1gJ0ZJFlIItEEiVN0UZWjp2qb7A/ZpmlZNWatN+9BKzdJm6bpNWz9EaZoq3Sc0qTRThCfbEEhfPR/eV++r85zneX6/HwAAQDBUAwIBHg+C6vs6alcIgomFmFiICnFEwEcEeAMwhvAQmIdAMIY09jCGvrhtAEIQAABAUFQTaglM9nWXprtL05kLCz3lmdTilCnhV7gs+TvvnP3Hbwp//XXpX79bevJh5dGdyqM7S4/+sPX95z/735c//eHBxs7d6//9/M0fvri28+n5h79vPCh/9cHlnY87Ls7UCCAEJjm7Puwxx3w9xXz20hnfK110xKNyNTly6Y3qP5d3Plnfu5/74FZ0vZjYOh/fWOy8Xum7eWXo1trgzZXsu9dGb28M3Fzpu3n5+EYpsVmJb5Q6t84P/uqa8USgQYBQdQX9S/Ppxam+pVcHLix0n5uhgy2O8e7VZ/cuPv3j2vPP3LMDAACLw97scsgIOThYfAHOF+Dg5QUjsEgi2T9ACKxsa6b8jsBIOlWY6CvPJhdyxphPxujY0fTa8/sXn/5pfe+BZ34IAMB1J7pezVFGGsEQCIYPCAT1IGEIgWvJ8XiIgC+USvcJeDDk6I1F54ZC+Ywvmw7lM4HJvhPFce6VLvt46gVB6+lBAABtYxhfC0Gp+WKxUCLSMuaWSMDu92osJj7OBwBYGaWRlgPAO1TAgyFj2ONMR1t7E66eY850lE1FuOEUG/NbR7rW9/YJ9i1qd9mORzxeU+n6BCKRDLzzOumyUYTko39vMydCAICJUW8yyQIAhNIXFsGwOe7rmM5wYz3+id7o3HBsbjiYzxij7Y7coUUNAo1Rr9LpfJxt7ZcVhclIu22ek1HaznqCDq3N0hoNORPx1mSnKxqSELL67/NqCrQc68ucdKdjrlTEN5AMZ/t8A0m6nbUdKrjvqVukMdBakwEXCGEYFYnFGIYpSPKt9wqR6R4AgEAs4gv4gAcoWidTEgeR83i4QoYTUkwuwQkpKhWhUhGukEJ81HIqevX5vcqTD1ee/aWRgYJSq3QaOalUailcKlHotQojXVgdC46lIIwvxZHffvK6NRURC8Uai1Ekk9UKAcZQUzLoHDkxtFoavloevlrq+0mhLderibS2nx7arn79VvXv29Wv45fneQBQBlrLmM0eR1Obw9DuSC/OTF9f7lyYThXyGtZKEATDmlWUWiaXtRwLBXo7UT5Wy4CwmRRui9LNqN3NdTBKNyMzUI5sarv6+P3qN7eq33KFbE2BWomi6H54UK0iURSNb1ZkOqpuxo/6odEUGGrqCjqGEoMri5nlYma50H3htH86o4t4XFOnblQfvld9vF197CuOAgDUWg2O4zJKpXQ3Ux6rKdxmDrd5x3otsQDlsao9zWIVsc/dSLhxkJBKqUZN6Ck5vQ9Cr+FLRM39x9b3vrj0nz+v7z0IlfMAAFKnVSiVOo/NkPDbeo9F5oYjc8OByUy8OM50hQwJP8UyEqn0JSk1ixiD2s3oORfNOXVeluacNOeU6dS2oc4fVZ
FMKm2f6Gs/Ow4AkMhl8c2K3GoAAOhCrfGN8wAAFEMNHd7Q8sIRBQhCx73OwUTXmcn0udlEcbzrTC42nyWDDtfUqasvd7KRtVqCXtLOoBim1FCWsE8ok/EFOO1mdR57jYCPUc1m0m45jASCYYWbUXltGo7VcCzptWvrG8KsY7OHneyazTQ6We+0qnUaiYJgo0FbB8cXCghKbfG1KrUUgqIEqTY4rQSpOmIRguhj7dbeWPrc7MiVs92V2cylhUC+n/KzLfmjCmqdLKdUCg3JFwr4Alyt12ltDIKhEoVc77TKtWRj9qko8qX5WlPgatJyDh3HajmHlmMpr50OugkLfbSTW+sWYQJcIBbtD2oc5+O1D0EQJD4YbQiKSuWy+oQ4CBrBUDLcEsj3n7ow31uZS56ZPFnM9SydpvwOZ65n7flnl747zKC5zRXtT4slEhiGg8m4I+CtpS2TRvrTCkpdmyVmQ1dumNRpj5QRDxj8Libms8Y423G/JeYzdngsMX8T57KP1hQsPflodfeuv5LHpGJrzO/PJBVWs1RP2jvDjpMRMaWUm7RtPXEVy4hIpcbV3NYT17gYiY7CpOL9ce0d6y69vTl/Y6389ubM1vLM1pX5G2vJxbw5HV59du+N6pdvVP+2vnd/dffT176vYWX37q3qN+9Xv7tdffqL6uN3q9/erj79efXha7u1q5Xdu5d3Pt6sfhVbLzYIYIFCpjbpFQaN0qhR0FqlUaduMopJJcTHjMlIezHrnh30zA95FkbaCtkGAqVcuDwVLk8FSrlQaTJczgdKudaFEU+h9sYzP+QrT5J+JwDg//yM/7M7wU3XAAAAAElFTkSuQmCC" nextheight="1028" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-co-creator-ecosystem" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Co-Creator Ecosystem</strong></h4><p>Through its <strong>Model &amp; GPU Platform</strong> and <strong>Agent Platform</strong>, ChainOpera provides toolchains, infrastructure, and coordination layers for collaborative creation. 
This enables model training, agent development, deployment, and cooperative scaling.</p><p>The ecosystem’s co-creators include:</p><ul><li><p><strong>AI Agent Developers</strong> – design and operate agents.</p></li><li><p><strong>Tool &amp; Service Providers</strong> – templates, MCPs, databases, APIs.</p></li><li><p><strong>Model Developers</strong> – train and publish model cards.</p></li><li><p><strong>GPU Providers</strong> – contribute compute power via DePIN or Web2 cloud partnerships.</p></li><li><p><strong>Data Contributors &amp; Annotators</strong> – upload and label multimodal datasets.</p></li></ul><p>Together, these three pillars—<strong>development, compute, and data</strong>—drive the continuous growth of the agent network.</p><h4 id="h-co-owner-ecosystem" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Co-Owner Ecosystem</strong></h4><p>ChainOpera also introduces a <strong>co-ownership mechanism</strong> through shared participation in building the network.</p><ul><li><p><strong>AI Agent Creators</strong> (individuals or teams) design and deploy new agents via the Agent Platform, launching and maintaining them while pushing functional and application-level innovation.</p></li><li><p><strong>AI Agent Participants</strong> (from the community) join agent lifecycles by acquiring and holding <strong>Access Units</strong>, thereby supporting agent growth and activity through usage and promotion.</p></li></ul><p>These two roles represent the <strong>supply side</strong> and <strong>demand side</strong>, together forming a value-sharing and co-development model within the ecosystem.</p><h4 id="h-ecosystem-partners-platforms-and-frameworks" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Ecosystem Partners: Platforms and Frameworks</strong></h4><p>ChainOpera collaborates widely to enhance usability, security, and Web3 integration:</p><ul><li><p><strong>AI Terminal App</strong> combines wallets, algorithms, and 
aggregation platforms to deliver intelligent service recommendations.</p></li><li><p><strong>Agent Platform</strong> integrates multi-framework and low-code tools to lower the development barrier.</p></li><li><p><strong>TensorOpera AI</strong> powers model training and inference.</p></li><li><p><strong>FedML</strong> serves as an exclusive partner, enabling cross-institution, cross-device, privacy-preserving training.</p></li></ul><p>The result is an <strong>open ecosystem</strong> balancing enterprise-grade applications with Web3-native user experiences.</p><h4 id="h-hardware-entry-points-ai-hardware-and-partners" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Hardware Entry Points: AI Hardware &amp; Partners</strong></h4><p>Through <strong>DeAI Phones, wearables, and robotic AI partners</strong>, ChainOpera integrates blockchain and AI into smart terminals. These devices enable dApp interaction, edge-side training, and privacy protection, gradually forming a decentralized AI hardware ecosystem.</p><h4 id="h-central-platforms-and-technical-foundation" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Central Platforms and Technical Foundation</strong></h4><ul><li><p><strong>TensorOpera GenAI Platform</strong> – provides full-stack services across MLOps, Scheduler, and Compute; supports large-scale model training and deployment.</p></li><li><p><strong>TensorOpera FedML Platform</strong> – enterprise-grade federated/distributed learning platform, enabling cross-organization/device privacy-preserving training and serving as a bridge between academia and industry.</p></li><li><p><strong>FedML Open Source</strong> – the globally leading federated/distributed ML library, serving as the technical base of the ecosystem with a trusted, scalable open-source framework.</p></li></ul><h4 id="h-chainopera-ai-ecosystem-structure" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>ChainOpera AI Ecosystem 
Structure</strong></h4><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Modules / Roles</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Description</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>Participants</strong></p></td><td colspan="1" rowspan="1"><p>Supply Side</p></td><td colspan="1" rowspan="1"><p><strong>Co-Creators</strong></p></td><td colspan="1" rowspan="1"><p>Agent developers, tool/service providers, model developers, GPU/data contributors &amp; annotators. Build and supply ecosystem resources.</p></td></tr><tr><td colspan="1" rowspan="1"><p>Demand Side</p></td><td colspan="1" rowspan="1"><p><strong>Co-Owners</strong></p></td><td colspan="1" rowspan="1"><p>Agent creators &amp; participants. 
Create, use, and promote agents while sharing in their growth and value.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Ecosystem Partners</strong></p></td><td colspan="1" rowspan="1"><p>External Synergy</p></td><td colspan="1" rowspan="1"><p><strong>Platform &amp; Framework Partners</strong></p></td><td colspan="1" rowspan="1"><p>Wallet developers, algorithm experts, bot/aggregation platforms, low-code frameworks; deep integration with TensorOpera and FedML.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Hardware Entry</strong></p></td><td colspan="1" rowspan="1"><p>Interface Layer</p></td><td colspan="1" rowspan="1"><p><strong>AI Hardware</strong></p></td><td colspan="1" rowspan="1"><p>DeAI phones, wearables, robots as physical entry points for interaction and data collection, enabling privacy-preserving edge intelligence.</p></td></tr><tr><td colspan="1" rowspan="2"><p><strong>Platform Layer</strong></p></td><td colspan="1" rowspan="1"><p>Central Platform</p></td><td colspan="1" rowspan="1"><p><strong>TensorOpera GenAI Platform</strong></p></td><td colspan="1" rowspan="1"><p>Unified MLOps, Scheduler, Compute services for large-scale training and deployment.</p></td></tr><tr><td colspan="1" rowspan="1"><p>Industry Bridge</p></td><td colspan="1" rowspan="1"><p><strong>TensorOpera FedML Platform</strong></p></td><td colspan="1" rowspan="1"><p>Enterprise-grade FL/distributed platform enabling privacy-preserving model collaboration across organizations/devices.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Foundation</strong></p></td><td colspan="1" rowspan="1"><p>Technical Base</p></td><td colspan="1" rowspan="1"><p><strong>FedML Open Source</strong></p></td><td colspan="1" rowspan="1"><p>Leading federated/distributed ML open-source library, providing the foundational framework for the ecosystem.</p></td></tr></tbody></table><p><br></p><h3 id="h-iv-chainopera-core-products-and-full-stack-ai-agent-infrastructure" class="text-2xl 
font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. ChainOpera Core Products and Full-Stack AI Agent Infrastructure</strong></h3><p>In <strong>June 2025</strong>, ChainOpera officially launched its <strong>AI Terminal App</strong> and decentralized tech stack, positioning itself as a <em>“Decentralized OpenAI.”</em> Its core products span four modules:</p><ol><li><p><strong>Application Layer</strong> – <em>AI Terminal &amp; Agent Network</em></p></li><li><p><strong>Developer Layer</strong> – <em>Agent Creator Center</em></p></li><li><p><strong>Model &amp; GPU Layer</strong> – <em>Model &amp; Compute Network</em></p></li><li><p><strong>CoAI Protocol &amp; Dedicated Chain</strong></p></li></ol><p>Together, these modules cover the full loop from <strong>user entry points to underlying compute and on-chain incentives.</strong></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ff3952f9b419530b6cab249423ae14486469e7c4aaaf10e92cc1f774c564b176.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFkUlEQVR4nE1Ue2xTVRg/r/tsb++9bbfbdavbWDtK23Vl7A0jRB57gMAEhCA6/iAmEgYMAacRCDojMB7BYZQYMI7EJSZEAkT/EjFgEB8ENAxCICjbQDfWboP1sa495vYO4i/n5n45+b7zfd/5fucHYAYIY8ZYCBOEGZY1FmEIyMBwg0jHlG1sQEQgIgjpUQgzmDCEYQhhGMZwMMIBaxJJkQMX5RCPkyl0EJZBGAEIeEHgBYEQAhECz70zf4gQBICzKux0F/E4sTsHF+ciTQYsASxDGIJJpjgjSY4rb+v+vdsOfLDjUMfbXQfa2nds2bz5nfZ3mhY3MSxjMpkwxgAAs8uR7S9GGdvYyfcXbzzasbNr/65jB3cdO9x+6MPWfXt2H+1samqaOtzwKy+fNR6OUErT6TTNIJlMUkq/7O4GAPA8DwAweVy5617yvLHaXKAhghFCAID6xgZKaSqdTqXTcaoHJyf1wLPnzgEAdB9RFAEAFZUVAwP90VhsPBodHX8SjUcjT0YppSdPnAAAuFwujmOz3lwVWLYoMLvSXheS7GpujhMh1NDYSCmNxuP3Y+HeeDgcj4YfP05R+tWZ0wAAQRD0D0JYXlnZ19+fSqUmJpPJVDKZnkxMJPQEJ08CAFRV5RAS68rAhmZ3c0Px+uUmUdQ0DUK4eMkSSumTZPJeIvxoYnwkERuLjFBKe86egUbrRqfBUOj27TujY2OPI2EDQ4NDKZo+8kmXft2ZcYmaXakqySnz2YvyOYbVqwOgobExNTk5Ojo6+DgyPByORCJDg4PJVOpUT8+zAWcoYbaqvmBJSWVZybzaYF116dwa/5yqwLzanKKCKY4ixHKcbFUVR5YgioIo8LzAsazNbptZMStUVREqmxmcNbM0FCoJ6sibVmCcPAVk4hHBrNkkeF8QQ25uRj6Z7kLFeUiVEEY6RwHgbKrkzTf7prEOGwKQ53lBFERVZvMdbG4268xiNatObgOSAA3beA5Y4BizaFIskmQWBIHlOEIIwZjlecwQJHCFTXWBVQ0zVtZ7VyzyNi8oXrFQcmbzgiCpstmqiIpFkC2CLDGSiAQOCZxuYKxfkVGdo8pf2bYu2Lo6q3Vpwablnm2veLas9LQ05i2pZcwiJESb4c4LeZ2B4txAsbNkusPvNlkVjLGt1F3Wurpi05rZbS1z2lpmb3utZuu66vXNRQ21xKQPaeodzF279OfI/TN9v22//c3HN7/rvPz13h96Oq+ebu58C2Q6hQAgfUEMIIaQQMRgffJZpZ6fblyLj0f/Guh/NPjvo6HBofDwZCq5Ye92QAB6/tCWr19LKY3H4/FY/OYf1774rKvz2PFT3d3vHuoABPKyZNJUwS6LmmrNz1FyNSXPIdhlAIA9WPRt79VhGhuJjMTHxxPxxD99Q3TkTmvHDoBhJgHREyxrWUMpTcQT0adP+/4euHTpl+9/vHKr9/6+zz8FGHCiwKsSJ0u8apEdWRbNJmVbedmsJwgUXbh1PUZTE9GxdCqdSMZGYxE6ObHx/Z0A/6+DRa++3P8kfOfhg3tDA+cvXth98MjOw4c/On5iw552wBmaNcUOhjA8xyOEjEBbqfv8lYsDQ4N/3u39/dbNyzd+vdp7/e7DB6+/txUwCIJnQ7a48xz1VY4F5Y755bmN1e5ldf5V870rX7TX+LCh2M8SPD/aEFeTS8teWK7VV2oLK7T6ypz6KmdDTf7iObnVASJwU9qr85RjISEAIp28EGJCGI7DhMiybJYki2yRFdlqtcqKYpFlSZIURVFUVd+XZYZljUIhgiCj6pghWOAhhMRQbD0Lx2BVwooZKxKWzaxNYW0yq1p8Pl/htEK/3x8IBEKhUDAY9Pl8Xq/Xl4HH4wmFQlanBmUTscrYaiFWC7Zas
N2CRF2AIYT/AWTWvb3X6sgIAAAAAElFTkSuQmCC" nextheight="772" nextwidth="1401" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-ai-terminal-app" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>AI Terminal App</strong></h4><p>Already integrated with <strong>BNB Chain</strong>, the AI Terminal supports on-chain transactions and DeFi-native agents. The <strong>Agent Creator Center</strong> is open to developers, providing MCP/HUB, knowledge base, and RAG capabilities, with continuous onboarding of community-built agents. Meanwhile, ChainOpera launched the <strong>CO-AI Alliance</strong>, partnering with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://io.net">io.net</a>, Render, TensorOpera, FedML, and MindNetwork.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/257ef230f76d6eb05432d33502f8b620559d24b7aab506f9b428e95cad115fa8.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAADdUlEQVR4nF1SXW/cRBQ1NFLWhbXipXbG3ztjjz2eje14/bEf2WTXu4q3m6StlLSF3Y3URiICgiKkVqF9AFVCqAIe+sQLz7wBD7wi8Rd46S9AvPZHFG0HJVuko6t7RzNzzzn3cr/89Omfv//48q8/59NpM81/+/Xn+fE8zbZbea8bJjv9Mm62uluDIExVDZqWwyJLADA8ErQ6/TrygWJJsnaJqiCfff5o7+Aup2tqvz8qil2IvHUABoPc9dwKL0qyyvPCDQmsiVJVkCRZvbZSXeUFsQZWeYElYg1UeJGV7668x3EVFhkAMCRZ5TzP3d0tFLAuy7puuZJsajp6X6iZFhZrCgCGWAPXVqq2TcQakGQVOn7lek2S1TryIPIAMKqCVOHFCi+ypCosCFV4keMqi4iQWxS7tBGLNcC0M2g6Yj4Uw7LfHwFgXDrT6w1MiD0SEhKqGlQ1SBtx3Mwh8pKkw6612r1iOImilDOhG8Zxu9udTG6pGoTINUybGc2S7tYgbrZUDbISIre8ue9g6m5EfpSwSdBG3Gpv6dCJ2h1Vg4piJVl3e2eEMeVMROTqO9lm487hnOeFtZoqySbj9R+7YPELAwCGg+lHxw8JiXASkTxVFAsAo5m2ynKvTmhzXOgmUhQTIo8GiWk5nAqM6bOnsycXj88fP7r4+ofnn9wcZ7UPDMZXfSOfOcNgWs72zsi2Cd7cdONYUSxFM6FD0qxTJ9TPU91EABhBnKdZByKPUyRt9s2z6ZOL+4ezo3sPut2UUg8o1pWCxlsNIPLKcu/tBnUH0yzrQkr9Tq6biF1zMF00qDuNwJVbCf32+YvhsBSE9eXfVQ0Gcb7cQJLVYlhGUe5mCckzZlEUZ2U5UesoLHqaaS8UhOn2TrFoYFqY+DhJkw+nJ8PxLbY8y/BIsFwuWoYpxg1IqE022KGDaRCmi9WK48sT9pBzsX/D8pGffff9izuzjyVZXaYPgMGsvBqyt/HZ2ReERKSdNXodpiCI88n+bcWwmuNCezM8B9Moyj0ScAAY05PTk9Oz49nD6YPTukMN016m7GD6P01BmCLkIeQ6tnd5ZzEqiJkmTUdxGp+fH4z3e9z11bVXr1+//Pufp19+dTQ7MSBRtas1ZQpMiJdlHd2b0yDxIc5IpGh1tqYHtw8tz0sPRpppy7Le3urdPb7faDb/BX5XzP3/2LdOAAAAAElFTkSuQmCC" nextheight="614" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>According to <strong>BNB DApp Bay</strong> on-chain data for the past 30 days, the app recorded <strong>158.87K unique users</strong> and <strong>2.6M transactions</strong>, ranking <strong>#2 in the entire “AI Agent” category on BSC</strong>. This demonstrates strong and growing on-chain activity.</p><h4 id="h-super-ai-agent-app-ai-terminal-chatchainoperaai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Super AI Agent App – AI Terminal </strong><span data-name="point_right" class="emoji" data-type="emoji">👉</span><a 
target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.chainopera.ai/"> </a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://chat.chainopera.ai"><u>chat.chainopera.ai</u></a></h4><p>Positioned as a decentralized <strong>ChatGPT + AI Social Hub</strong>, the AI Terminal provides multimodal collaboration, data contribution incentives, DeFi tool integration, cross-platform assistance, and privacy-preserving agent collaboration (<em>Your Data, Your Agent</em>). Users can directly call the open-source <strong>DeepSeek-R1</strong> model and community-built agents from mobile. During interactions, both <em>language tokens</em> and <em>crypto tokens</em> circulate transparently on-chain.</p><p><strong>Core Value</strong>: transforms users from <em>“content consumers”</em> into <em>“intelligent co-creators.”</em> Applicable across DeFi, RWA, PayFi, e-commerce, and other domains via personalized agent networks.</p><h4 id="h-ai-agent-social-network-chatchainoperaaiagent-social-network" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>AI Agent Social Network&nbsp; </strong><span data-name="point_right" class="emoji" data-type="emoji">👉</span><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.chainopera.ai/agent-social-network"> </a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://chat.chainopera.ai/agent-social-network"><u>chat.chainopera.ai/agent-social-network</u></a></h4><p>Envisioned as a <strong>LinkedIn + Messenger for AI Agents</strong>, it provides <strong>virtual workspaces</strong> and <strong>Agent-to-Agent collaboration mechanisms</strong> (MetaGPT, ChatDev, AutoGen, CAMEL). It evolves single agents into <strong>multi-agent cooperative networks</strong> spanning finance, gaming, e-commerce, and research, and gradually enhances <strong>memory</strong> and 
<strong>autonomy</strong>.</p><h4 id="h-ai-agent-developer-platform-agentchainoperaai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>AI Agent Developer Platform </strong><span data-name="point_right" class="emoji" data-type="emoji">👉</span><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://agent.chainopera.ai/"> </a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://agent.chainopera.ai"><u>agent.chainopera.ai</u></a></h4><p>Designed as a <strong>“LEGO-style” creation experience</strong> for developers. It supports <strong>no-code</strong> development and <strong>modular extensions</strong>; blockchain smart contracts ensure <strong>ownership rights</strong>; <strong>DePIN + cloud infrastructure</strong> lowers entry barriers; and a <strong>Marketplace</strong> enables discovery and distribution.</p><p><strong>Core Value</strong>: empowers developers to rapidly reach users, with contributions transparently recorded and rewarded.</p><h4 id="h-ai-model-and-gpu-platform-platformchainoperaai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>AI Model &amp; GPU Platform </strong><span data-name="point_right" class="emoji" data-type="emoji">👉</span><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://platform.chainopera.ai/"> </a><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://platform.chainopera.ai"><u>platform.chainopera.ai</u></a></h4><p>Serving as the <strong>infrastructure layer</strong>, it combines <strong>DePIN</strong> and <strong>federated learning</strong> to address Web3 AI’s reliance on centralized compute. 
Capabilities include: a distributed GPU network, privacy-preserving data training, a model and data marketplace, and end-to-end MLOps.<br><strong>Vision</strong>: shift from <em>“big tech monopoly”</em> to <em>“community-driven infrastructure”</em>—enabling multi-agent collaboration and personalized AI.</p><h4 id="h-chainopera-full-stack-architecture-overview" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>ChainOpera Full-Stack Architecture Overview</strong></h4><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Vision &amp; Value Proposition</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Features</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Entry Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>AI Terminal</strong></p></td><td colspan="1" rowspan="1"><p>Decentralized ChatGPT + social gateway</p></td><td colspan="1" rowspan="1"><p>Collaborative AGI; users shift from consumers → co-creators</p></td><td colspan="1" rowspan="1"><p>Data incentives, DeFi tools, cross-platform assistants, agent collaboration</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Social Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>AI Agent Social Network</strong></p></td><td colspan="1" rowspan="1"><p>LinkedIn + Messenger for AI Agents</p></td><td colspan="1" rowspan="1"><p>Single agents evolve into cooperative networks</p></td><td colspan="1" rowspan="1"><p>Virtual workspace, agent-to-agent collaboration, social features, human-in-the-loop</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Developer 
Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Developer Platform</strong></p></td><td colspan="1" rowspan="1"><p>Developer launchpad &amp; toolbox</p></td><td colspan="1" rowspan="1"><p>Low-barrier “LEGO-style” co-creation</p></td><td colspan="1" rowspan="1"><p>No-code, modular extensions, on-chain verification, distributed compute, marketplace</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Infrastructure Layer</strong></p></td><td colspan="1" rowspan="1"><p><strong>Model &amp; GPU Platform</strong></p></td><td colspan="1" rowspan="1"><p>DePIN + federated learning infrastructure</p></td><td colspan="1" rowspan="1"><p>Community-driven AI infra; monopoly → co-build</p></td><td colspan="1" rowspan="1"><p>Distributed GPU, FL, model/data marketplace, MLOps</p></td></tr></tbody></table><p><br><br></p><h3 id="h-v-chainopera-ai-roadmap" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. ChainOpera AI Roadmap</strong></h3><p>Beyond the already launched <strong>full-stack AI Agent platform</strong>, ChainOpera AI holds a firm belief that <strong>Artificial General Intelligence (AGI)</strong> will emerge from <em>multimodal, multi-agent collaborative networks.</em> Its long-term roadmap is structured into four phases:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/039d1360b87d5f1b6efa185b985633ea0f489bfd16fcff0867e690298098e221.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAWCAIAAAAuOwkTAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEjUlEQVR4nJ1VbWvbVhSWZdnyi16voqsrWVFdxXZUz6sWLa3rRPGUuM2LMjWpcLx5M15cJ6aCmpl8GCwjMEo6yCiMQeZRBqGjH/uz9kM2bCWhK05L+nAQ4nLvOfc+55znYGmeNbyVKQmKoijB0ZdlWQAASZL4JGCTwMEpHkEWCjTgeQQZAfAI8giSVBpDpfybf/+p7+2ePP/ldDh89epVr9frdDosy070hWFYBMffNgzD7n//uP3H8Xd/Pv/2958bJz/2/v6t89evT9+8vLXmYDwAj76qL68sNxqN1dVV13Udx2k2m/V63bZtz/N833dd1x/Dtu2JIZMMrRTzsjEjyIhHUFBkXpaEjJxiGUwUxcHT/tHR0evXr09OTo6Pjw8ODobD4dnZ2bNnzw4PD1+8eHF6ejocDl3XlSCMEtEpfVrKZ3lFAhkk5bMggxhJLNxfYKFw5ZMnkvs26fjFfwSPUIBnoEALgBYAK0EOSXIxz8sQx3EylYwnyHgiQcTjZCpJxGMTnF5FPX7FPW4uWp/57r3dHdPfuNP0yu1Hd1vblu8u7O4UNxzsIxDBI2FuiXiMEQCGYZqmIVlOkGQqmWJZlqbp8Z6RfdgdEY+j0qx0K0cLQL5toGI+liAxDKMArxRzCYYaPULXW2M4jlOYnVVV9Vr3xWOpZCyRiIxq8pwlIh5joRAlouEeRVG63W6n0+n1eq1Wq1arFQoFjuOuESZKRI21pZJX05fmze11098w1pY+2awt7O5kzOIN7cbW1pbruqVSyTRNy7IkCAmCuFYAgkfw7sqSktUAknhJnJIRnM6kARen0ggh0zQVRZl8mKTS8pwh6tPSrC7fNijAk6lkmmdJKp2iKR7BbGUOFXSciM7dNvtBsOPX/e1H297Dh5tfLpTvZTKZYrF4cHDQbrc9z6vX657nSRC+S3RYHviY1mxlbuOHJ/Z+w95vlNtb9pNvKm3f9DfMubnGzk6z1Wo2m3v7e9VqtVKpCIIgy3K32221Wr1er91um6ZpGAYAowIbpUv/4k520SptPjDWloob1aX9r6UZLX9Tz+dyI/njARSEkFBd10ulkgQhSZLJZPKyLQiCoGmavcD/2iVKRO3tzVz582mrpFmfSrnslK7FqbRpmk/7/cfd7k9HR2vr6+EZ27YPDw9brVYoUPoYCKEPJPDB8vLebueBs7y+UivM5GRJQpJkWVZYdkEQ9Pv9bDYbNlS1WrVt27KsWq2m6zoA4D26e46FxcUgCDqdzvLKiqpOcxwHAJBl2TAMVVVZlhVF8ZzQjwCO4yGzAACO42iaDtmwLGswGPi+Xy6Xa7WaYRgShIqihJHeI1kTUK1WgyDwPM9xnLBZwrmmaZqiKJqmqaqqjKGqqiiKYVskWSaWiF+MnUhsrERRgghXokT0ss+xSqUSBEGz2RwMBqurqwghQbhC2bFzxBIkJ4kMFCjAcdIUBTgGCpwksuOV0ewciTl/LteapjmOMz8/H2qIIAiXPFw1lkcvYOgUyyQYmgJ8mmdTLHNpFOApwKU5ZsI8uFby8Iux/A5G+hiPRcda9B/HCNsXiivVjgAAAABJRU5ErkJggg==" nextheight="1013" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>Phase I (Compute → Capital):</strong></p><ul><li><p>Build decentralized infrastructure: GPU DePIN networks, federated learning, distributed 
training/inference platforms.</p></li><li><p>Introduce a <strong>Model Router</strong> to coordinate multi-end inference.</p></li><li><p>Incentivize compute, model, and data providers with usage-based revenue sharing.</p></li></ul><p><strong>Phase II (Agentic Apps → Collaborative AI Economy):</strong></p><ul><li><p>Launch <strong>AI Terminal, Agent Marketplace, and Agent Social Network</strong>, forming a multi-agent application ecosystem.</p></li><li><p>Deploy the <strong>CoAI Protocol</strong> to connect users, developers, and resource providers.</p></li><li><p>Introduce <strong>user–developer matching</strong> and a <strong>credit system</strong>, enabling high-frequency interactions and sustainable economic activity.</p></li></ul><p><strong>Phase III (Collaborative AI → Crypto-Native AI):</strong></p><ul><li><p>Expand into <strong>DeFi, RWA, payments, and e-commerce</strong> scenarios.</p></li><li><p>Extend to KOL-driven and personal data exchange use cases.</p></li><li><p>Develop <strong>finance/crypto-specialized LLMs</strong> and launch <strong>Agent-to-Agent payments and wallet systems</strong>, unlocking “Crypto AGI” applications.</p></li></ul><p><strong>Phase IV (Ecosystems → Autonomous AI Economies):</strong></p><ul><li><p>Evolve into <strong>autonomous subnet economies</strong>, each subnet specializing in applications, infrastructure, compute, models, or data.</p></li><li><p>Enable subnet governance and tokenized operations, while cross-subnet protocols support interoperability and cooperation.</p></li><li><p>Extend from <strong>Agentic AI</strong> into <strong>Physical AI</strong> (robotics, autonomous driving, aerospace).</p></li></ul><p><em>Disclaimer: This roadmap is for reference only. Timelines and functionalities may adjust dynamically with market conditions and do not constitute a delivery guarantee.</em></p><h3 id="h-vi-token-incentives-and-protocol-governance" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VI. 
Token Incentives and Protocol Governance</strong></h3><p>ChainOpera has not yet released a full token incentive plan, but its <strong>CoAI Protocol</strong> centers on <em>“co-creation and co-ownership.”</em> Contributions are transparently recorded and verifiable via blockchain and a <strong>Proof-of-Intelligence (PoI)</strong> mechanism. <strong>Developers, compute providers, data contributors, and service providers</strong> are compensated based on standardized contribution metrics. <strong>Users</strong> consume services. <strong>Resource providers</strong> sustain operations. <strong>Developers</strong> build applications. All participants share in ecosystem growth dividends. The platform sustains itself via a <strong>1% service fee</strong>, allocation rewards, and liquidity support—building an <strong>open, fair, and collaborative decentralized AI ecosystem.</strong></p><h4 id="h-proof-of-intelligence-poi-framework" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Proof-of-Intelligence (PoI) Framework</strong></h4><p>PoI is ChainOpera’s <strong>core consensus mechanism</strong> under the CoAI Protocol, designed to establish a transparent, fair, and verifiable incentive and governance system for decentralized AI. It extends <strong>Proof-of-Contribution</strong> into a blockchain-enabled collaborative machine learning framework, addressing federated learning’s persistent issues: insufficient incentives, privacy risks, and lack of verifiability.</p><p><strong>Core Design:</strong></p><ul><li><p>Anchored in <strong>smart contracts</strong>, integrated with <strong>decentralized storage (IPFS)</strong>, <strong>aggregation nodes</strong>, and <strong>zero-knowledge proofs (zkSNARKs)</strong>.</p></li><li><p>Achieves five key objectives:</p><ol><li><p><strong>Fair rewards based on contribution</strong>, ensuring trainers are incentivized for real model improvements.</p></li><li><p><strong>Data remains local</strong>, guaranteeing 
privacy protection.</p></li><li><p><strong>Robustness mechanisms</strong> against malicious participants (poisoning, aggregation attacks).</p></li><li><p><strong>ZKP verification</strong> for critical processes: model aggregation, anomaly detection, contribution evaluation.</p></li><li><p><strong>Efficiency and generality</strong> across heterogeneous data and diverse learning tasks.</p></li></ol></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/10f668b08ec472fd39f03b68de3ed2868a0576ca80e8b1e6dcb566b083a869a9.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAXCAIAAADlZ9q2AAAACXBIWXMAAAsTAAALEwEAmpwYAAAEoUlEQVR4nK1VT0zbZhT/ruxCr70gbauKtMu4tAeknioVqfTIpipCQuO0rqqyikO7XliFOlZVZWuHxlg2qkasHkQsq5elmKwspCkezZoaLyYxpCYimIBjTB3H5osT502JS5bRP3RTf7Ks78/z93vv+/m9h2AvWJYFAJIk0TTNMEw4HE4mk/DKQK9oZxiGy+UaGRkhSZJl2ddPYFlWOBx2u92BQMCOwI7s9RBIUtY0Tb/f39vbe+HCBZIkd3Hb751BufpYpmlalrU3QcE0ifGJtcw6zye+GLhKURTG+Fn3awT/J4INKQtQXkmvjU9MqmqufktRFJqmASASifT19emapmr5zScqx8Vuk2Q8Hv+H4CV3alW3YrHY3enwTxQxTU/WFrPSxrDr+3VJOXHixNsHDvB84pepMDkVcjqdCKEjR448JwI72PqQS6UiAGRSbJQOfHXjqpciAKAMZQDAGOvb2zzPd3d3nzlzZiWVyuuGquVlWWZZVhCEpwSqqsbj8XoajLH9xjs3vrGSjEVobJbKUNqxKbDc4k3itqJs3bjh/tV/xzTN52gwMjKyf//+xsbGQ4cOpdOr08HwuGfC4XCIokgQxNjYmG2q6fn3nR9/NvytPc3r+nIqjXEhp+kF0wyGZqtSVeIqV6/B9gwZhtHa2trQ0HDw4MGGhobz58/HuAWKmjp16kOv1zs4OPidyxWNRnVNe/d4O6qid2CgIsnCUlUDKRicBgC3293b+6mmaRWOOjlRKpVqbm5GCDU2NiKEOjrew2Ypp+myrExSv82E7t2nH/zouZ2Vsu8cb0dvvoX27fvg3DkAeJwSje3tpaWkZ+LntLjR0tKCEJoJ3ROFyCI709//+eHDh0mSRDzPt7e3Dw0NnTx58vTp021tbQDQd+2T0IO7ZatMzz3MaXnbl68JouL/gebY4iLGxVRaVNXcxoYYZ38XhNTZs2cdDgfHLUjiQlp4NElNXbp0iWEYxHHc0aNHOzs7Ozo62tranE4nALhuXXsYmy3g4nQwtPVEBYBisThHz3o8nvGxsS1FycpPllOr1XW8ub6IMfb5fF6vV5bl3SInk0mv1+v3+2ma9vl8HMdVPiuVqlL9C/MsGwgEKYoSRREANpWttLhWME1VfRqiDbtQFIvFgi0yPIOsLIfpubyuV42t+i1J3hKEZcMw7GleN2ZCs9e/vAYAAwMDPT09zyZshaCWVvZfHOO4m+5bDyKRtcx6za5gFv6MzeW0fFdXF0FUEs02zml5YSVTOa
gKAEgwfwiJ+a3U8n3X8PNrke0CQujYsWML8URW3rQXCyZWFFWS5ATPT3hJjAuVDN1cv0tO1Ag2FYWZufMXQ897xj9Cb7ys2Lnd7uHhby5evIhxIbMuGwZWVfUHgohGHxmGkeCXbD80VVl9HF8Vxf7+/suXLwuCsKusvlI1zeuVS/f5fAihlpYWO5tqbQAAGGbe5XKNjo5SFFWv9ssI7HYBO2MAuHLlSlNTE0IoGo3uIrAsi2EYiqIymYoeu0XeE5qmJZPJoaGhnp4eh8MRDAZVtZIcNbyo2/wHgkAgEAqFBEFgWZaiKLv01h/6oo72N40Ec/XXYZ/KAAAAAElFTkSuQmCC" nextheight="1046" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><h4 id="h-token-value-flows-in-full-stack-ai" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Token Value Flows in Full-Stack AI</strong></h4><p>ChainOpera’s token design is anchored in <strong>utility and contribution recognition</strong>, not speculation. It revolves around <strong>five core value streams:</strong></p><ul><li><p><strong>LaunchPad</strong> – for agent/application initiation.</p></li><li><p><strong>Agent API</strong> – service access and integration.</p></li><li><p><strong>Model Serving</strong> – inference and deployment fees.</p></li><li><p><strong>Contribution</strong> – data annotation, compute sharing, or service input.</p></li><li><p><strong>Model Training</strong> – distributed training tasks.</p></li></ul><p><strong>Stakeholders:</strong></p><ul><li><p><strong>AI Users</strong> – spend tokens to access services or subscribe to apps; contribute by providing/labeling/staking data.</p></li><li><p><strong>Agent &amp; App Developers</strong> – use compute/data for development; rewarded for contributing agents, apps, or datasets.</p></li><li><p><strong>Resource Providers</strong> – contribute compute, data, or models; rewarded transparently.</p></li><li><p><strong>Governance Participants (Community &amp; DAO)</strong> – use tokens to vote, shape mechanisms, and coordinate the ecosystem.</p></li><li><p><strong>Protocol Layer (CoAI)</strong> – sustains development through service fees and automated balancing of 
supply/demand.</p></li><li><p><strong>Nodes &amp; Validators</strong> – secure the network by providing validation, compute, and security services.</p></li></ul><h4 id="h-protocol-governance" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Protocol Governance</strong></h4><p>ChainOpera adopts <strong>DAO-based governance</strong>, where token staking enables participation in proposals and voting, ensuring transparency and fairness.</p><p>Governance mechanisms include:</p><ul><li><p><strong>Reputation System</strong> – validates and quantifies contributions.</p></li><li><p><strong>Community Collaboration</strong> – proposals and voting drive ecosystem evolution.</p></li><li><p><strong>Parameter Adjustments</strong> – covering data usage, security, and validator accountability.</p></li></ul><p>The overarching goal: prevent concentration of power, ensure system stability, and sustain <strong>community co-creation.</strong></p><h3 id="h-viii-team-background-and-project-financing" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. Team Background and Project Financing</strong></h3><p>The <strong>ChainOpera</strong> project was co-founded by <strong>Professor Salman Avestimehr</strong>, a leading scholar in federated learning, and <strong>Dr. Aiden Chaoyang He</strong>. The core team spans academic and industry backgrounds from institutions such as <strong>UC Berkeley, Stanford, USC, MIT, Tsinghua University</strong>, and tech leaders including <strong>Google, Amazon, Tencent, Meta, and Apple</strong>. 
The team combines deep research expertise with extensive industry execution capabilities and has grown to <strong>over 40 members</strong> to date.</p><h4 id="h-co-founder-professor-salman-avestimehr" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Co-Founder: Professor Salman Avestimehr</strong></h4><ul><li><p><strong>Title &amp; Roles</strong>: Dean’s Professor of Electrical &amp; Computer Engineering at <strong>University of Southern California (USC)</strong>, Founding Director of the <strong>USC-Amazon Center on Trusted AI</strong>, and head of the <strong>vITAL (Information Theory &amp; Machine Learning) Lab</strong> at USC.</p></li><li><p><strong>Entrepreneurship</strong>: Co-Founder &amp; CEO of <strong>FedML</strong>, and in 2022 co-founded <strong>TensorOpera/ChainOpera AI</strong>.</p></li><li><p><strong>Education &amp; Honors</strong>: Ph.D. in EECS from <strong>UC Berkeley</strong> (Best Dissertation Award). IEEE Fellow with 300+ publications in information theory, distributed computing, and federated learning, cited over <strong>30,000 times</strong>. Recipient of <strong>PECASE</strong>, <strong>NSF CAREER Award</strong>, and the <strong>IEEE Massey Award</strong>, among others.</p></li><li><p><strong>Contributions</strong>: Creator of the <strong>FedML open-source framework</strong>, widely adopted in healthcare, finance, and privacy-preserving AI, which became a core foundation for TensorOpera/ChainOpera AI.</p></li></ul><h4 id="h-co-founder-dr-aiden-chaoyang-he" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Co-Founder: Dr. Aiden Chaoyang He</strong></h4><ul><li><p><strong>Title &amp; Roles</strong>: Co-Founder &amp; President of <strong>TensorOpera/ChainOpera AI</strong>; Ph.D. 
in Computer Science from <strong>USC</strong>; original creator of <strong>FedML</strong>.</p></li><li><p><strong>Research Focus</strong>: Distributed &amp; federated learning, large-scale model training, blockchain, and privacy-preserving computation.</p></li><li><p><strong>Industry Experience</strong>: Previously held R&amp;D roles at <strong>Meta, Amazon, Google, Tencent</strong>; served in core engineering and management positions at <strong>Tencent, Baidu, and Huawei</strong>, leading the deployment of multiple internet-scale products and AI platforms.</p></li><li><p><strong>Academic Impact</strong>: Published 30+ papers with <strong>13,000+ citations</strong> on Google Scholar. Recipient of the <strong>Amazon Ph.D. Fellowship</strong>, <strong>Qualcomm Innovation Fellowship</strong>, and Best Paper Awards at <strong>NeurIPS</strong> and <strong>AAAI</strong>.</p></li><li><p><strong>Technical Contributions</strong>: Led the development of <strong>FedML</strong>, one of the most widely used open-source frameworks in federated learning, supporting <strong>27 billion daily requests</strong>. 
Core contributor to <strong>FedNLP</strong> and hybrid model parallel training methods, applied in decentralized AI projects such as <strong>Sahara AI</strong>.</p></li></ul><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2f5769295dfcafcbc9f116c4545757f55ed08959971deeed553f9d40a5c37c32.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAAErElEQVR4nD2UXW/TZhiG3cSOX8ev/b7+em3HdmLHifNhNyFpEyik5GOlbfoBtF0/6AotKwWB2DRAYWVbAW1iWrXTHU2TmMQkpGndDxhn7A+MX8CO+QHshElTqLSjR7p1S4/03NdzU4ADtpeJJhp+LfJqZbdcsLMukiTAcbbnpnzXCLJGKadmbGJbooQ5nk+5jlPwM6Mlo+gTzxnqeKibjk08J1XMpYo5knV0x8KSRDEMc2332mAw6HY6H66sXN7amp+bJ4QwDHN1Z2dvb296ampmZmZxcbE/21dVFQpwY3392u5uf3Z2+8r29evXL21s6DoRBKEQBJOtVr1W77TbnU6nXqspikwBjms2GjnfNw0jbTtpJ+1nfVmRAQD1Wr1aqfpZP+f7xaAQBAGWMA9hEASFoOBns6NhFJZKxUJBkiRegI7juBnXzbhZ1/M8L5PJSJJExRk6xtAUS3O6AlMEKJgVIQPYOE3THBjhWZbghC4xCMYhx7CJOEOPgATFJYCu0AqiRT7OAYZNxGiaTnK0kEwoiCWYxYgVBQawFAvAZKfz9bdPGpOng2oU1SvlKFSJxvF8KQpPnW21znUneu2xMyfHmk1VUzme6071Lq6ujrcm2jNTzdbp8VMnZUXheL7T6y6vrS1dWu+vnG+0TgfFgqppFAvYy1tbvzx79uvz5+3W5OzM7LluT9d1jue7ne7O9vbe7u7NGzd2rmxvbm4SQgAAn9y+/f13hzNT51ZXVi9euDDZag2zgfDs5OTq8sri/MLi/EKn3Z6emiaEUCwHeAGOxGPlKJSJloQQSTgJ+QRgRYwSgOUFASmypMjoPUIJwEqKrGgqy4EYQwui+L9f0VQsy0iRsaaqRJMVJQl5imETskGUjM3oSC/l7FIeGyQJYZyhLc8tjlVS5bzopgRTw7rGQT7BspKu4ZQ+sTD9wfqSlrUlkwx1wPIyFk0i5Rw5nxZTuqjKwwUIIzeT8Tzv0sbGiz9eDAYDRZIsy4rFY/v7+/+8fbu/v7+8vHR1Z6fX7cmyjDH2XNc0zVd/vXrz5k0UhrZlEUJ4np/r989MTIzV62P1eqPRaDabhGiUKIq2ZVUr1YMvvnx88PDC+fOmYRqGQdP0xztX/3z5cmvzo167PTfbLwSBLMsIobSTti37zqef3b83CPKBbVmapkEIm41mJYoqUVQqFKIwjMJQUzUqRsc5JEJVopIMxY0kEIQKZgB7rLNIBLrEaCItJI/xjdHxJEJQlThD4U2N1xVexvR7fGN0nKJHEgqi5aGfhtwQ0yTkU7ZlpO2DJ9/88NOP1ZMN3dQlRQY8l/H9fFgKRsPiiUowGnr5HMLDR7Mc23QdO/DTBd/yMmbawbJEM8zswvzdzwft6anlrY3p+X5tfHxYFccZuK779+vX/757t7y0rKmaYZqA43rd3uNHjx5+dXD/3uDBgwdra2uCICCMi0GhXqudqFSro5WxWn14Co3E6fjtW7d+Pzq6dfPmz0+fHv121D
rTGmaQhLxhGkTX79y9e3h4WC6XNaJJsjQsNSvlel7GdS3bcl3XdhyEEf/er+u6bhj68TQNhIekpqyUZVm5XC4IgjAK8/k8wug/T2vYMTTDWGAAAAAASUVORK5CYII=" nextheight="592" nextwidth="1317" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In <strong>December 2024</strong>, ChainOpera AI announced the completion of a <strong>$3.5M seed round</strong>, bringing its total funding (combined with TensorOpera) to <strong>$17M</strong>. Funds will be directed toward building a <strong>blockchain Layer 1 and AI operating system</strong> for decentralized AI Agents.</p><ul><li><p><strong>Lead Investors</strong>: Finality Capital, Road Capital, IDG Capital</p></li><li><p><strong>Other Participants</strong>: Camford VC, ABCDE Capital, Amber Group, Modular Capital</p></li><li><p><strong>Strategic Backers</strong>: Sparkle Ventures, Plug and Play, USC</p></li><li><p><strong>Notable Individual Investors</strong>: <strong>Sreeram Kannan</strong> (Founder of EigenLayer) and <strong>David Tse</strong> (Co-Founder of BabylonChain)</p></li></ul><p>The team stated that this round will accelerate its vision of creating a <strong>decentralized AI ecosystem where resource providers, developers, and users co-own and co-create.</strong></p><h3 id="h-ix-market-landscape-analysis-federated-learning-and-ai-agent-networks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IX. Market Landscape Analysis: Federated Learning and AI Agent Networks</strong></h3><h3 id="h-federated-learning-landscape" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Federated Learning Landscape</strong></h3><p>The federated learning (FL) field is shaped by four main frameworks. <strong>FedML</strong> is the most comprehensive, combining FL, distributed large-model training, and MLOps, making it enterprise-ready. <strong>Flower</strong> is lightweight and widely used in teaching and small-scale experiments.
<strong>TFF</strong> (TensorFlow Federated) is academically valuable but weak in industrial adoption. <strong>OpenFL</strong> targets healthcare and finance, with strong compliance features but a closed ecosystem. In short: FedML is the industrial-grade all-rounder, Flower emphasizes ease of use, TFF remains academic, and OpenFL excels in vertical compliance.</p><h3 id="h-industry-platforms-and-infrastructure" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Industry Platforms &amp; Infrastructure</strong></h3><p><strong>TensorOpera</strong>, the commercialized evolution of FedML, integrates cross-cloud GPU scheduling, distributed training, federated learning, and MLOps in a unified stack. Positioned as a bridge between research and industry, it serves developers, SMEs, and Web3/DePIN ecosystems. Effectively, TensorOpera is like <em>“Hugging Face + W&amp;B” for federated and distributed learning</em>, offering a more complete and general-purpose platform than tool- or sector-specific alternatives.</p><h3 id="h-innovation-layer-chainopera-vs-flock" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Innovation Layer: ChainOpera vs. Flock</strong></h3><p><strong>ChainOpera</strong> and <strong>Flock</strong> both merge FL with Web3 but diverge in focus. ChainOpera builds a <strong>full-stack AI Agent platform</strong>, turning users into co-creators through the AI Terminal and Agent Social Network. Flock centers on <strong>Blockchain-Augmented FL (BAFL)</strong>, stressing privacy and incentives at the compute and data layer.
Put simply: <strong>ChainOpera emphasizes applications and agent networks, while Flock focuses on low-level training and privacy-preserving computation.</strong></p><p><strong>Federated Learning &amp; AI Infrastructure Landscape</strong></p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key Players</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Value</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Limits</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Foundation (Academic/Open Source)</strong></p></td><td colspan="1" rowspan="1"><p>FedML, Flower, TFF, OpenFL, PaddleFL, FederatedScope</p></td><td colspan="1" rowspan="1"><p>Define standards and toolkits. FedML is most full-stack; Flower is lightweight; TFF is academic; OpenFL targets healthcare/compliance.</p></td><td colspan="1" rowspan="1"><p>Standardized APIs, reproducibility, technical progress.</p></td><td colspan="1" rowspan="1"><p>Mostly experimental or small-scale PoCs; weak industry adoption.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Platform (Industrial Infra)</strong></p></td><td colspan="1" rowspan="1"><p>TensorOpera, Hugging Face, W&amp;B, NVIDIA Clara, IBM FL, Amazon SageMaker</p></td><td colspan="1" rowspan="1"><p>TensorOpera = unified FL + distributed training + GPU scheduling + MLOps. Hugging Face = model/data community. 
W&amp;B = experiment tracking + visualization.</p></td><td colspan="1" rowspan="1"><p>Bridges research and enterprise use, lowers adoption barriers.</p></td><td colspan="1" rowspan="1"><p>Fierce competition; vendor lock-in; industry-specific silos.</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Innovation (New Narratives)</strong></p></td><td colspan="1" rowspan="1"><p>ChainOpera, Flock</p></td><td colspan="1" rowspan="1"><p>Merge FL with Web3 (DePIN, token incentives, verifiable training).</p></td><td colspan="1" rowspan="1"><p>New economic models; incentivized compute/data; decentralized AI path.</p></td><td colspan="1" rowspan="1"><p>Early-stage; models unproven.</p></td></tr></tbody></table><p><br></p><h4 id="h-agent-network-layer-chainopera-vs-olas" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Agent Network Layer: ChainOpera vs. Olas</strong></h4><p>At the <strong>agent-network level</strong>, the most representative projects are <strong>ChainOpera</strong> and <strong>Olas Network</strong>.</p><ul><li><p><strong>ChainOpera</strong>: rooted in federated learning, builds a <strong>full-stack loop</strong> across models, compute, and agents. 
Its <strong>Agent Social Network</strong> acts as a testbed for multi-agent interaction and social collaboration.</p></li><li><p><strong>Olas Network (Autonolas / Pearl)</strong>: originated from DAO collaboration and the DeFi ecosystem, positioned as a <strong>decentralized autonomous service network.</strong> Through <strong>Pearl</strong>, it delivers direct-to-market DeFi agent applications—showing a very different trajectory from ChainOpera.<br></p></li></ul><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Dimension</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ChainOpera AI</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Olas Network</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">From FL (FedML) → full-stack AI Agent network</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Decentralized autonomous service network&nbsp;</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tech DNA</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">FedML-based: distributed learning + Proof-of-Contribution; focus on privacy, cross-node scheduling, compute/data incentives</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Modular stack: Agent Services + composable components + on-chain protocol; emphasizes composability &amp; reusability</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Architecture</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">3 layers: (1) Model &amp; GPU Platform (training) (2) Agent Platform (development/deployment/collaboration) (3) ChainOpera Layer (incentives/coordination)</p></td><td colspan="1" rowspan="1"><p 
style="text-align: center">Agent Services = multiple independent programs coordinated via <strong>consensus gadgets</strong>, forming distributed replicated autonomous apps</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Product Focus</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Agent Social Network</strong> – dialogue/social core; emphasizes multi-agent interaction, community &amp; content co-creation</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Pearl – AI Agent App Store</strong>: users can own &amp; run multiple agents spanning DeFi, prediction markets, cross-chain asset management</p></td></tr></tbody></table><br><h3 id="h-x-investment-thesis-and-risk-analysis" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>X. Investment Thesis and Risk Analysis</strong></h3><h4 id="h-investment-thesis" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Investment Thesis</strong></h4><ul><li><p><strong>Technical Moat</strong>: ChainOpera’s strength lies in its unique evolutionary path: from <strong>FedML</strong> (the benchmark open-source framework for federated learning) → <strong>TensorOpera</strong> (enterprise-grade full-stack AI infrastructure) → <strong>ChainOpera</strong> (Web3-enabled agent networks + DePIN + tokenomics). This trajectory integrates <strong>academic foundations, industrial deployment, and crypto-native narratives</strong>, creating a differentiated moat.</p></li><li><p><strong>Applications &amp; User Scale</strong>: The <strong>AI Terminal</strong> has already reached <strong>hundreds of thousands of daily active users</strong> and a thriving ecosystem of <strong>1,000+ agent applications</strong>. It ranks <strong>#1 in the AI category on BNBChain DApp Bay</strong>, showing clear on-chain user growth and verifiable transaction activity. 
Its multimodal scenarios, initially rooted in crypto-native use cases, have the potential to expand gradually into the broader Web2 user base.</p></li><li><p><strong>Ecosystem Partnerships</strong>: ChainOpera launched the <strong>CO-AI Alliance</strong>, partnering with <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://io.net"><strong>io.net</strong></a><strong>, Render, TensorOpera, FedML, and MindNetwork</strong> to build multi-sided network effects across GPUs, models, data, and privacy computing. In parallel, its collaboration with <strong>Samsung Electronics</strong> to validate mobile multimodal GenAI demonstrates expansion potential into hardware and edge AI.</p></li><li><p><strong>Token &amp; Economic Model</strong>: ChainOpera’s tokenomics are based on the <strong>Proof-of-Intelligence consensus</strong>, with incentives distributed across five value streams: <strong>LaunchPad, Agent API, Model Serving, Contribution, and Model Training</strong>. A <strong>1% platform service fee</strong>, reward allocation, and liquidity support form a <strong>positive feedback loop</strong>, avoiding reliance on pure “token speculation” and enhancing sustainability.</p></li></ul><h4 id="h-potential-risks" class="text-xl font-header !mt-6 !mb-3 first:!mt-0 first:!mb-0"><strong>Potential Risks</strong></h4><ol><li><p><strong>Technical execution risks</strong>: ChainOpera’s proposed five-layer decentralized architecture spans a wide scope. Cross-layer coordination—especially in distributed inference for large models and privacy-preserving training—still faces <strong>performance and stability challenges</strong> and has not yet been validated at scale.</p></li><li><p><strong>User and ecosystem stickiness</strong>: While early user growth is notable, it remains to be seen whether the <strong>Agent Marketplace</strong> and <strong>developer toolchain</strong> can sustain long-term activity and high-quality contributions. 
The current <strong>Agent Social Network</strong> is mainly LLM-driven text dialogue; user experience and retention still need refinement. Without carefully designed incentives, the ecosystem risks <strong>short-term hype without long-term value.</strong></p></li><li><p><strong>Sustainability of the business model</strong>: At present, revenue primarily depends on <strong>platform service fees and token circulation</strong>; stable cash flows are not yet established. Compared with <strong>AgentFi</strong> or <strong>Payment-focused applications</strong> that carry stronger financial or productivity attributes, ChainOpera’s current model still requires further validation of its commercial value. In addition, the <strong>mobile and hardware ecosystem</strong> remains exploratory, leaving its market prospects uncertain.</p></li></ol><hr><p><em>Disclaimer: This report was prepared with assistance from AI tools (ChatGPT-5). The author has made every effort to proofread and ensure accuracy, but some errors or omissions may remain. Readers should note that crypto asset markets often exhibit divergence between project fundamentals and secondary-market token performance. This report is intended solely for information consolidation and academic/research discussion. It does not constitute investment advice, nor should it be interpreted as a recommendation to buy or sell any token.</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>federatedlearning</category>
            <category>agentnetwork</category>
            <category>fedml</category>
            <category>chainopera</category>
            <category>tensoropera</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/0bdfc8e05cea3c048a4a617ff0ced9a3d1edcabb30a14cf804a3b158b00992c6.jpg" length="0" type="image/jpeg"/>
        </item>
        <item>
            <title><![CDATA[From Federated Learning to Decentralized Agent Networks: An Analysis of ChainOpera]]></title>
            <link>https://paragraph.com/@0xjacobzhao/从联邦学习到去中心化-agent-网络：chainopera-项目解析</link>
            <guid>H64pNcAJRjmVSw2pzijf</guid>
            <pubDate>Wed, 17 Sep 2025 10:45:56 GMT</pubDate>
            <description><![CDATA[This report traces the evolution from FedML → TensorOpera → ChainOpera: from federated learning ("data stays local, rewards follow contribution") to enterprise-grade full-stack AI infrastructure, and on to an on-chain decentralized Agent network. ChainOpera's core is turning users from "consumers" into "co-creators" through the AI Terminal and Agent Social Network, building infrastructure for multi-agent collaboration and privacy-preserving training on its Developer Platform and Model & GPU Platform. Its CoAI Protocol and Proof-of-Intelligence provide transparent incentives and governance, with a roadmap spanning compute networks, Agent applications, Crypto-Native AI, and autonomous subnet economies. Overall, ChainOpera combines academic depth, industrial deployment, and a crypto-native narrative, making it an important window into the decentralized AgentFi space.]]></description>
            <content:encoded><![CDATA[<br><p>在 6 月份的研报《<a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://x.com/0xjacobzhao/status/1932645375548219825"><u>Crypto AI 的圣杯：去中心化训练的前沿探索</u></a>》中，我们提及联邦学习（Federated Learning）这一介于分布式训练与去中心化训练之间的“受控去中心化”方案：其核心是数据本地保留、参数集中聚合，满足医疗、金融等隐私与合规需求。与此同时，我们在过往多期研报中持续关注智能体（Agent）网络的兴起——其价值在于通过多智能体的自治与分工，协作完成复杂任务，推动“大模型”向“多智能体生态”的演进。</p><p>联邦学习以“数据不出本地、按贡献激励”奠定了多方协作的基础，其分布式基因、透明激励、隐私保障与合规实践为 Agent Network 提供了可直接复用的经验。FedML 团队正是沿着这一路径，将开源基因升级为 TensorOpera（AI产业基础设施层），再演进至 ChainOpera（去中心化 Agent 网络）。当然，Agent Network 并非联邦学习的必然延伸，其核心在于多智能体的自治协作与任务分工，也可直接基于多智能体系统（MAS）、强化学习（RL）或区块链激励机制构建。<br></p><h3 id="h-ai-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>一、联邦学习与AI Agent技术栈架构</strong></h3><p><strong>联邦学习（Federated Learning, FL）</strong> 是一种在不集中数据的前提下进行协同训练的框架，其基本原理是由各参与方在本地训练模型，仅上传参数或梯度至协调端进行聚合，从而实现“数据不出域”的隐私合规。经过医疗、金融和移动端等典型场景的实践，联邦学习 已进入较为成熟的商用阶段，但仍面临通信开销大、隐私保护不彻底、设备异构导致收敛效率低等瓶颈。与其他训练模式相比，分布式训练强调算力集中以追求效率与规模，去中心化训练则通过开放算力网络实现完全分布式协作，而联邦学习则处于二者之间，体现为一种 <strong>“受控去中心化”</strong> 方案：既能满足产业在隐私与合规方面的需求，又提供了跨机构协作的可行路径，更适合工业界过渡性部署架构。</p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>维度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>分布式训练</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>联邦学习 (FL)</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>去中心化训练</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>中心化程度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">高度中心化</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>受控去中心化</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">完全去中心化</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: 
center"><strong>核心目标</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">提升训练效率与规模</p></td><td colspan="1" rowspan="1"><p style="text-align: center">数据不出域，隐私合规协作</p></td><td colspan="1" rowspan="1"><p style="text-align: center">开放算力网络自由协作</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>典型场景</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">GPT等大模型</p></td><td colspan="1" rowspan="1"><p style="text-align: center">医疗、金融、移动端输入法</p></td><td colspan="1" rowspan="1"><p style="text-align: center">Crypto AI、DePIN网络</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>信任结构</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">单一机构控制数据与算力</p></td><td colspan="1" rowspan="1"><p style="text-align: center">协调服务器 + 合规多方</p></td><td colspan="1" rowspan="1"><p style="text-align: center">无中心，依赖加密验证</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>通信机制</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">高速集群内并行</p></td><td colspan="1" rowspan="1"><p style="text-align: center">参数/梯度聚合，频繁受控</p></td><td colspan="1" rowspan="1"><p style="text-align: center">异步低带宽需验证机制</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>优势</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">工业级成熟，效率最高</p></td><td colspan="1" rowspan="1"><p style="text-align: center">隐私保护强，产业接受度高</p></td><td colspan="1" rowspan="1"><p style="text-align: center">开放性强、抗审查</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>劣势</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center">无隐私保护，数据集中</p></td><td colspan="1" rowspan="1"><p style="text-align: center">通信开销大，通用性不足</p></td><td colspan="1" rowspan="1"><p style="text-align: center">容错与激励机制未成熟</p></td></tr></tbody></table><p><br></p><p>而在整个AI 
Agent协议栈中，我们在之前的研报中将其划分为三个主要层级，即</p><ul><li><p><strong>基础设施层（Agent Infrastructure Layer）</strong>:该层为智能体提供最底层的运行支持，是所有 Agent 系统构建的技术根基。</p></li></ul><ul><li><p><strong>核心模块</strong>：包括 Agent Framework（智能体开发与运行框架）和 Agent OS（更底层的多任务调度与模块化运行时），为 Agent 的生命周期管理提供核心能力。</p></li><li><p><strong>支持模块</strong>：如 Agent DID（去中心身份）、Agent Wallet &amp; Abstraction（账户抽象与交易执行）、Agent Payment/Settlement（支付与结算能力）。</p></li></ul><ul><li><p><strong>协调与调度层（Coordination &amp; Execution Layer）</strong>关注多智能体之间的协同、任务调度与系统激励机制，是构建智能体系统“群体智能”的关键。</p></li></ul><ul><li><p><strong>Agent Orchestration</strong>：是指挥机制，用于统一调度和管理 Agent 生命周期、任务分配和执行流程，适用于有中心控制的工作流场景。</p></li><li><p><strong>Agent Swarm</strong>：是协同结构，强调分布式智能体协作，具备高度自治性、分工能力和弹性协同，适合应对动态环境中的复杂任务。</p></li><li><p><strong>Agent Incentive Layer</strong>：构建 Agent 网络的经济激励系统，激发开发者、执行者与验证者的积极性，为智能体生态提供可持续动力。</p></li></ul><ul><li><p><strong>应用层（Application &amp; Distribution Layer）</strong></p><ul><li><p><strong>分发子类：包括Agent Launchpad、Agent Marketplace 和Agent Plugin Network</strong></p></li><li><p><strong>应用子类：涵盖AgentFi、Agent Native DApp、Agent-as-a-Service等</strong></p></li><li><p><strong>消费子类：Agent Social / Consumer Agent为主，面向消费者社交等轻量场景</strong></p></li><li><p><strong>Meme：借 Agent 概念炒作，缺乏实际的技术实现和应用落地，仅营销驱动。</strong></p></li><li><br></li></ul></li></ul><h3 id="h-fedml-tensoropera" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>二、联邦学习标杆 FedML 与 TensorOpera 全栈平台</strong></h3><p><strong>FedML</strong> 是最早面向联邦学习（Federated Learning）与分布式训练的开源框架之一，起源于学术团队（USC）并逐步公司化成为 TensorOpera AI 的核心产品。它为研究者和开发者提供跨机构、跨设备的数据协作训练工具，在学术界，FedML 因频繁出现在 NeurIPS、ICML、AAAI 等顶会上，已成为联邦学习研究的通用实验平台；在产业界，FedML在医疗、金融、边缘 AI 及 Web3 AI 等隐私敏感场景中具备较高口碑，被视为 <strong>联邦学习领域的标杆性工具链</strong>。<br></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/b9f295460006abb19cd65c606a0def504126c7ddaba439087bda89062b3e9f5e.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABgAAAAgCAIAAACHPC9vAAAACXBIWXMAAAsTAAALEwEAmpwYAAAHs0lEQVR4nIWVXWzb1hXH75otQPaQdiiytuiQYQ/L1qc9bMCAtQ/L8rAARWwnS7pmWPawrcMeig0Y0KLrgD2kQ9dua4I2RhIviRVbsmnrw5RJmhJFX4qiSJEidSmJEiVZkm1VH/6QZcmKJbmOIntgsr0MK3ZxHs49548fzj33Cxx+zuh1uo3t5mAw6PcfPdzvDwYH/f6jwWDweXrwP6P7/b6pSjzvVZUlyGiQ0RbmJSmYzC+ttnZa/wd0MDg4GBw88Tc3NxVFdTmdLpcbx3GXyzMxYadpP4RcNpv9j37w36DWXrveabT22p3+XqPbbHS2W3vt9v4ur4ZpjgmIkGRpN4nzqrC5U+886lqabnN3v7Pd3Wl0m/VOo9ffB4PBILOZ0zfS8wp9zf6JVk2gNSNaRtEySjWXxKLiWHR+aLsOM0K0jJ6k0Jql+Rgb5fNSYiOtVuONzrZVkbZqRFcT037PW395V11NhrKyVknGKkmYCqNP0zav4813/5Bcz3IpUVlBygriTVkqaL/781uERMsFxJlivdMAh4cHUhLNzTMODF9gw26cpmFILaNIUSP8nIrMazf+OTp2nxO0qJYOIB7qAi+iSCx9ffTO1CyeMkskC6uNGuj0OkJM8TGiAyOwWcI26Sb9UCsnIsuxhQBv5lYFCXlwPwvlxaC6mAhzumSkVsz0UkTWvSQnR9OcpFqgSqXM6uE7+NQsaacUQc4gWoFKSTe2slc/+dtLP/j+hd/80sMSbkjgPC3k5FjFuPrRByfP/Pjl4WFsweWTg5Qc2Nipg1qtqlbNV3/+wxPPfeHj+fs3XXdoHaKqUd6vXXnnTfB4zPLem667VCyAqsb6w/rpy2fBM8+BL37t6uh705wHZsRmrwUqlYpeS/3p7ctnX/kOG2doHQp5WSwqahnZOM+Z02eHL7wWzEeYBAczolhUzGb+9399++T3vvXNV152i0S0jNSqbjW7XKlkaoVie22z39LLGUblAiq/GBNoBbIqL2SjjBakRIaKMF6BpkSGEPw+GfoVjlGhYMi5+nKimm12d6yKzOR6WluTuUKA0iUuH6CS1FxUYLPknDw+5vXMCLhbpLzREGvmjLqpb7CMOjfvc+M+LwHjai0VW282ehYok9jQpQpkMs4p4fYNp+0e5rB7blybGBv1BMgUkipysPTEjGgtrtTmZvjxe+7RUce03Rfhi7pSa9S7oFarhaFJEqKPlmkiUqlUomqUYZhcLptJF8NsniYVCNVgKM4wigQLk5M4xYSwWeuszFOcfcojwUKr0QP9ft9PRG2T+NQ0YbcRq6UVRZFZCM2MaRhpF8Z7XNDHCBQj0D7RjQn3xp0eLz09iz9eHT0x5QqxOQt0MDgoZhu6VEmqa2p4lYdIVZKamoiIOk1oabQeV2pIqiih0gKu8UxGV6txtaYKKwyBkuoaUspxpbZd74LB4CCfrscVq2dauEzjcczGjY3ivvlEPl3P6BsZfSNn1JPqmswVleAyv5ghvGGPi7NPkNS8FPDpmlT5NyhrbGpCOSFXBaYwM8FjNohN8NgE94/371uOjcNs3BwmO6cEkV2GgbTN5p20EzbbnMNBzLmEKF+ymn14eNiod6vldq20s159sF3vNre6rUavXmtTBKfHckrEmMEowgtnMIpjEVILKFpQwtm0Udra6Kx/2l6vPuh2H1q3v9c0OluRXjPa3VY6W5FuQ35in3WS5fyiidyG6qwWFz/b1S3NlqXpNaO9pmpNG3KnLvb36uDwcJBGfj/lZhkcMniYW+AgxbEEx5KBBSdHjtKUG5uyOWdsNIn5aDwhuRTBSxJOiphhaA8HKTWMd1vL4PCwb+iillyZxLwzc77y+gMPAW/fm5rEiACUOPpuZb1p5muikro74SQZJaktJ
hI6MsqTmHfaubBUaiV0dWczZ4GyKTkUQQwUhAhKZ1YYKIhK3Euy6UxBYMZXK+tunPazIZzws7yW1NiErmqJAsfLQgTF4vmUEW09Bg0SiHdgOEHD6VkcClGKCc56SMxDIt18DNrgRT0ARQfm4QTdQFCWBcyzQNDQObeAU0FNEdpbeauihMZriSIUYiQdEuREUEQ4xcXiBTO3GiBuCxHEiXpQRB4vM+Vi5BDppymKjYQicU7QApwGIb27XbR2rbIETTSfi9O5OO20f8TRtnKWXUkx5SyzVqRWTVqXnOO33uMZW8lkylmytsQYUdx+78PJOx8YGrmcnN9rl61z1O/vdnfrrWa9vlnVEFJjyEhno7GEqOihCAorOhSimGeBCvAokUK6kTbNdDqtxWKZjNlubuz3tg8GD61/7buvvQ9eeB2cuAiODYOnfwK+fB48dQ4cGwFHh8HRocc2DJ6+aEXAOXDkHPjSEDh23lI+exG8eBl844rNKYD9/Yfg1BsAnAFgCBwZAU8NgSND4OgI+PovwLffAC/9Fpz6teWc+pUVOXbeSh0dsVhgyOKCVwE4/c7fcesZAT/6Izh+Abz4M8uefx2cuASevQS+chE8cwEcP2+lLGfEqvdJ/MQl8NVL4PmfgpNXwAuXwfEL120Bq0c77U4yUxLUHM0nPX5thlLtc9K4MzyG8bemQ7emQ7ccwVvToTGMv+sUbG7J4ZWdtEZCHYqmoi8vlzb6/f6/AIKGjRvJz7apAAAAAElFTkSuQmCC" nextheight="1591" nextwidth="1200" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>TensorOpera is the commercial evolution of FedML into a full-stack AI infrastructure platform for enterprises and developers: while retaining its federated learning capabilities, it has expanded into a GPU Marketplace, model serving, and MLOps, positioning it for the larger market of the large-model and Agent era. TensorOpera's overall architecture comprises three tiers: the Compute Layer (foundation), the Scheduler Layer (scheduling), and the MLOps Layer (application):</p><p><strong>1. Compute Layer (bottom tier)<br></strong> The Compute layer is TensorOpera's technical foundation, carrying forward FedML's open-source DNA. Its core components include the Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server. Its value lies in providing distributed training, privacy-preserving federated learning, and a scalable inference engine, underpinning the three core capabilities of "Train / Deploy / Federate" and covering the full pipeline from model training and deployment to cross-institution collaboration; it is the base layer of the entire platform.</p><p><strong>2. Scheduler Layer (middle tier)<br></strong>The Scheduler layer acts as the hub for compute trading and scheduling. Built from the GPU Marketplace, Provision, Master Agent, and Schedule &amp; Orchestrate modules, it draws on resources across public clouds, GPU providers, and independent contributors. This layer marks the pivotal step in FedML's evolution into TensorOpera: intelligent compute scheduling and task orchestration enable larger-scale AI training and inference, covering typical LLM and generative-AI workloads. Its Share &amp; Earn model also reserves an incentive-mechanism interface, giving it the potential to interoperate with DePIN or Web3 models.</p><p><strong>3. 
MLOps Layer (top tier)<br></strong> The MLOps layer is the platform's service interface for developers and enterprises, comprising modules such as Model Serving, AI Agent, and Studio. Typical applications include LLM chatbots, multimodal generative AI, and developer Copilot tools. Its value lies in abstracting the underlying compute and training capabilities into high-level APIs and products, lowering the barrier to entry with ready-to-use Agents, a low-code development environment, and scalable deployment. It benchmarks against the new generation of AI infra platforms such as Anyscale, Together, and Modal, serving as the bridge from infrastructure to applications.</p><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/9b86ec9ba3d1765965cde1bc5ab762e2daf8a50a25b28f8f8bd083cd2933f77a.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAATCAIAAAB+9pigAAAACXBIWXMAAAsTAAALEwEAmpwYAAAF0UlEQVR4nH1Uf0wTZxh+xD9m9s80jGy6bCPGgTMqzh9hEnET8Se6DiuysYJFKxa1Rq1toZNeFfSwFEV+FFPEEqs5BC/a2p1itWXkq4jKJ5zAKdym1O0EDrVbvG6WyS30FM1i9uT7477Lm+953+d93hcR3oZQaFgQhBWxcTHzZi9bvRRBkLlz58bGxsbFxSEIMn369JiYGIlEkpiYOGHCBOF/gQiCEAgEIIQMw/A8H349JAgjgiCU7t5UcCDdXl+/9tsUqVSakJCg1qhXrloZHR0tlUoVCoVer1epVG+kFRIEgWXZ1jD8fv8oAU3TOI5DCN1uN47jLMuKoaHQ8PPnwSD/h1hNd1ePIde0c+tezQ6jIdf0k9MzVmgoNCweQRAYhiEIgqIoCCFJkgAABMdxjuNycnJQ1BgMBq1WK0VRr3IaeTHyQsyLaLiQtDBt4rhPEOS9pIVpBt0hQRj5Z3hYrFUEwzBGFPV4PArFZoVCASHEcRxxOBw0TW9RKu32UxzH1dWdIUmy8dJV0uVu8viaPD6i4UL77Y7fHv7ucl4uLanQ56Eu5+WbN+H9X/uc5y9dufyzGOZubALAR5KkXq+32Ww4jldWVpIkifj9foIgHK/A8zxBEKrNBpl0Z1JCalJCanaGPl9XNDDYLwiC3+9nGGa0bX8+LS05lrJq84rF3yclpC6M+yY7M6+i3BIIPPX7H7a2tgIAIIQ8z482+ZXoo1IIggAhzNttytmwdxzywYeR01Bt+Z4dBopqv+a7xoRx504nhG1GtOijyBkR46MixkchyDuJ8VLXBfItLhKbzjDM/Qf3aZqmKIphmDJzdXlJ7UHj0eIDVaWHauy19YIgNDc10zTNMAyEt3n+mbvRq9u5H80tPmAsO1Ro0asP9NzteQsBRVG1tlqZLEMiSdm1azdBEDiO9/Teo6j29o52AJrbO9p7enswDLPb7TiOE2FgGCb6jaa76xvOtFz3tVz3UVQHhNDr9b5hkzCBqdiEYVi+IV+v15uKTWKEKCJJXgQAUBQllUolkpQ9Gs3ixCVarU4my3C73RzH2e2nKIoKBAJarU6l2vHoUb8oydhYIHfudKKGgqKDhy0VNSXF5T/mGbs6u1tbWyGEYrI4jrvd7rS0tKlTP1MoNi9dtlwqXSeRfAsAYFnW4XBgGHaktBRFjRhWhGHY0aNlXq/3NUGTB+Sqio3aik8nxe5SFqLacktFdVd35927d0XDsGEp7Ha7zWYzm82HS0rs9pNms5kPY6yFAACCIBiGEa+vJWps9ChkuRvTNQgycWO6JmfD3qNHLOLUiLPOcRzP8xBCjuNE9TmOE6+BQGDsIZZl35T+NcFA/2CTB9xoaSManDda2po8wO9/GAo9p8MIvEJzU3OVpUouz9JqtVZr9aW
Ll/r6HgRH8dczPhgKDfcyv9xqg6HQ8DN+9Off4c3xetn19T2ob6jv63vAMIzD4QAAyGQZGIYB4LNarSzLDgz2d3V3ulxOl8vZ03uvp/fek6dPKivwov3HS7DawyZ7CVZbtP94WfEp8QMrrB7kHr8kAADYbLaCgsLTp05fvXKVpumBwYETJ2pcLtfjoSEI245VVa+TZKpydN+t25SyRrZdqZOuybCdOK3KyX//3Smzp30VNXnW5IiYqMmxUZNnTYqMmRQRI1khH+CevCSAEBYUFMjlWWaz2ev1ZGVu+2L6wvj5S+fNXDTl45lWS01FuSU6akHyEjmCRCIIsihu7ZezV+vU+05Un4ufkzwjetGCOavmzVw6+/PE+TOXz5r2dWL8ujx1KfuIe0ng9/sBAOICYRjGfrJOp0aNBmwfWqTaqvZcbR4aenwGdzjPN1otNSbsSEOd8xxB3rrZ3tJCea7caPF1+EBHi6/jxvVO8fhAh8dzMxgMviT4D2iaTv8hXS6X5xvy1Rq1XJ6VnJys1Wq3bd+mVOagKKrWqDdu2pS6fn22MlutUSuVWzI3ZMpksrNnG8bsP+aifwGq1O3OjSjhPAAAAABJRU5ErkJggg==" nextheight="873" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>In March 2025, TensorOpera was upgraded into a full-stack platform for AI Agents, with core products spanning the <strong>AgentOpera AI App, Framework, and Platform</strong>. The application layer offers a ChatGPT-style multi-agent entry point; the framework layer evolves toward an "Agentic OS" built on graph-structured multi-agent systems and an Orchestrator/Router; and the platform layer integrates deeply with the TensorOpera model platform and FedML to deliver distributed model serving, RAG optimization, and hybrid edge-cloud deployment. The overall goal is <strong>"one operating system, one agent network"</strong>: letting developers, enterprises, and users co-build the next generation of the Agentic AI ecosystem in an open, privacy-preserving environment.</p><h3 id="h-chainopera-ai" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>III. The ChainOpera AI Ecosystem: From Co-creators and Co-owners to the Technical Foundation</strong></h3><p>If <strong>FedML</strong> is the technical core, contributing the open-source DNA of federated learning and distributed training, and <strong>TensorOpera</strong> abstracts FedML's research results into commercially viable full-stack AI infrastructure, then <strong>ChainOpera</strong> brings TensorOpera's platform capabilities on-chain, using <strong>AI Terminal + Agent Social Network + a DePIN model-and-compute layer + an AI-native blockchain</strong> to build a decentralized agent network ecosystem. The core shift: TensorOpera still serves mainly enterprises and developers, while ChainOpera uses Web3-style governance and incentives to bring users, developers, and GPU/data providers into joint building and governance, so that AI Agents are not merely "used" but "co-created and co-owned."</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/fb6d319be0496ea845c6018873ec9569e9fd29cd109a757c9fa0ac4ff0be549b.png" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAXCAIAAADlZ9q2AAAACXBIWXMAAAsTAAALEwEAmpwYAAAHH0lEQVR4nGVVa0xb5xn+fG4+Nvbx5fic42Mb2/H9io0v2NgYbAj4wjVQA8ZAHCLMzB1GkxASaBqmkErLJtam3dqs//dnPzZVE63W/ai2NmlZ1qqbdpEapTRT+mfKn9BMEZ6ODYRqrz690ne+7+g5z/O873sAAACCIQiGYASGqsFtYQiGuCMEwURCTCREhQJEgB8vGEMhBOYhEIwhMIbwEBjG0MoRv7JwTIBDCAwAADCGKiOO8GhXem48PTfed6GYmT+bnB3TJQJy56n8zlvTf/v11P1fzf7jt8uPPljae3/p4c7y3vtbT/98679f/vi7zzef3Lv59C+vfffF9Sf3Fh/8bunhztLDnfl/vXfxPx81vlzgACAEZoI2TbTuVNzfWRobernk729Xx7wK5ylHPnW9/NdLT/64cbCbe+920ytTiZuL8c250zcXu7dXB97c6H/j6uDb13N3ftS7fbl7+1J8c7b1xnyicuHMu9d1baEqAKIMO8OjXb0rxdTsePfy+b6VYnqhoAo57LnklWef/vDx79ef77rP9QEATHarxWmXyqTgKPBKgO8HDMM1ItHhBkJg0mdmgrZQNpX6wUj3bKGjOKyL+yRGlSOXXH++u/L4w42DXc9kPwAgnG7NTOYZrRrBEAjmJOYLBHyBoGIkDCEV53g8TMAXEsQhAA+G7J2x2MRAZKzHP9gRGettGO1qnRry97XZXwDcrzt/BgCgthgtfo+coQU1IpFYpDbo3ZEGe9DHGvR8nA8AMBtJrVoKAK9GLH4BoI147Kmouzvu6myyp6K2ZCTwUru1JWAZbNs4OAaoSORzOJubvPX6ha0cIhb3/+Ka0mWh5eIP//66qT0MABgZ8ic7HAAAEXEEAMGwLuEPn+0JDqcbRjJNEwOxiYHQWJe2yWvPp44ZVD1gtRpapQo2WK79fFmh02ncFn9bk95m94ecKpPBGw17EnFfW6s3FpZUfeJVGLABW6DvtCvT7ExGA33t0aHOQH+7ut5qyb5g4KlIxNaqNbpagUCIwKiYEPMxjKTpn9wpRcfSAAChqIYv4AMeUGpUElJ2ZDmPJyQluIzApKJqRokavpyA+Kihq2n9+WdL3+xceXavKhFJU5RKKaNJBcsIJGJSqyS16tn1XONwEuLzCRz5zZ9etSSj4hqRyqAVSyUwDHONputosA8ksldL2fW57NXp3tUpbz7NRl3158/8rPzgtYMvt8sPEpcmYQCUGrXGqDd47Sa3Q1tv75wtnNtabS+eTRfHWJdJJpOabHqSpiRSSX1zONrZxjkPwbDcoifdBsptpDxGLruNpNsorWUcg8nt8tfvlr+5XX4ULA4CAOSkHEXRQ/MqgaJofHNRomIOFf//4Bi0cwz6r5T6Vku9q1Op5Yng2W5V1OUcz/y0/NU75a9fL+8FShwArWRwHJcwCoXHxHjMuohHH/H6hzOmlgDjMVMek0jBSQ9B3BwDvAogBEEEQxJKSqZhpGqmkmlpLYMTIlNP88bB/Qvf/mHj4H7j/CgAgGKVJKXQuC3aRMDaGWuaGIhOvNQw2p0o5YztodqEX2kzEATxPSoQDMmMaspp0AQcmoBd5bNyOWCTqCjLQOtRFe1Wq0hCEP6RXt9MDgBASCVtW8tSsw4AoIl4EptLAAAUQw0Rf2Rt6gQDBKmN1zv74x3T+fRCITE11DGdj01mlSGra7zrRCdzVaS3ma1hH2s1ohhGsUpTNFAjkeA4rq2zqj1WDoCPKU162mp8YQkEw6TbSPvMyoBVGbAyPguXgzaZnrUNth/3gftcT7WTtQ4zo2IJudTZ1OCIBAQ1QgVDmf0uimUQFJXTlMFpkVPkCYkQWB2vN3VGUguF7OWZzEKh90IxONpFB63O8czxsKsykDEUxTICkZAvw
OlaVa1FiGKoWC7TOswkyxUSLsAphuYLTsxXrkyderby+dXM+CzqkENmYE908m7VA0yAC0U11Rf5OM6vDGoIgoRHwxlBUUIq4dSvGsA9wlA64giOdvWsTHQuneuYHj1dymWWJ5igzZ7PrD//bIWrokMGZq+zpSctJsQwDDe2tbiCfgCAWEI09yRJmqoOq9RIllGxJ8qIB7RBp7HJZ44FTC1BY8yvj3gMMf8pv9NS8WD50Qdr+3cblkZxQmSLBUO9KYVJJ9Ew9kSj83RUxJBSHevLtDI2g4iSK10mXyahdBklKgonqrR4PN9wcu72jeKtjcW3tgo3Lxa2Lk3eWu+YzuvTkavPPr1W/vxa+YuNg921/U8uP/348tOPr+zffbO898vyt++UH79R3nu7/O875cfb5a/W9j9Z27+7tn/34pOPbpT/GXulVP0fwAIZQes1CjWrULNyNU1qlZROI1LIID6q74jUl7LuQk/dZL+nOOAtZqsrNJOLzI9F5sdCM7nwXD48lw/O5OqKA9U7dZP9/vkcHbQDAP4HJ3H/sqCmkXAAAAAASUVORK5CYII=" nextheight="1028" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><strong>The co-creator ecosystem (Co-creators)</strong></p><p>&nbsp;ChainOpera AI provides the toolchain, infrastructure, and coordination layer for ecosystem co-creation through the <strong>Model &amp; GPU Platform</strong> and the <strong>Agent Platform</strong>, supporting model training, agent development, deployment, and collaboration at scale.</p><p>The ecosystem's co-creators include <strong>AI Agent developers</strong> (designing and operating agents), <strong>tool and service providers</strong> (templates, MCP, databases, and APIs), <strong>model developers</strong> (training and publishing model cards), <strong>GPU providers</strong> (contributing compute via DePIN and Web2 cloud partners), and <strong>data contributors and annotators</strong> (uploading and labeling multimodal data). The three core supplies (development, compute, and data) jointly drive the agent network's continued growth.</p><p><strong>The co-owner ecosystem (Co-owners)</strong></p><p>The ChainOpera ecosystem also introduces a <strong>co-ownership mechanism</strong>, building the network through cooperation and participation. <strong>AI Agent creators</strong> are individuals or teams who design and deploy new agents through the Agent Platform, responsible for building, launching, and maintaining them, thereby driving innovation in features and applications. <strong>AI Agent participants</strong> come from the community: by acquiring and holding Access Units they take part in agents' life cycles, supporting their growth and activity through use and promotion. The two roles represent the <strong>supply side and demand side</strong> respectively, together forming the ecosystem's model of shared value and coordinated development.</p><p><strong>Ecosystem partners: platforms and frameworks</strong></p><p>ChainOpera AI works with many partners to strengthen the platform's usability and security, with a focus on Web3 integration: the <strong>AI Terminal App</strong> teams up with wallets, algorithm providers, and aggregator platforms for intelligent service recommendations; the <strong>Agent Platform</strong> brings in diverse frameworks and zero-code tools to lower the development barrier; model training and inference run on <strong>TensorOpera AI</strong>; and an exclusive partnership with <strong>FedML</strong> supports privacy-preserving training across institutions and devices. The result is an open ecosystem that balances <strong>enterprise-grade applications</strong> with the <strong>Web3 user experience</strong>.</p><p><strong>Hardware entry: AI hardware and partners (AI Hardware &amp; Partners)<br></strong>Through partners such as the DeAI Phone, wearables, and Robot AI, ChainOpera 
integrates blockchain and AI into smart terminals, enabling dApp interaction, on-device training, and privacy protection, and gradually forming a decentralized AI hardware ecosystem.</p><p><strong>Hub platform and technical foundation: TensorOpera GenAI &amp; FedML<br></strong>TensorOpera provides a full-stack GenAI platform covering MLOps, Scheduler, and Compute; its sub-platform FedML has grown from an academic open-source project into an industrial framework, strengthening AI's ability to "run anywhere, scale at will."</p><p><strong>The ChainOpera AI ecosystem</strong></p><table style="min-width: 100px"><colgroup><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Tier</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Role</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module/Actor</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Description</strong></p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>Participants</strong> (People)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Supply side</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Co-creators</strong></p></td><td colspan="1" rowspan="1"><p>Agent developers, tool/service providers, model developers, and GPU/data contributors and annotators who jointly supply and build the ecosystem.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Demand side</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Co-owners</strong></p></td><td colspan="1" rowspan="1"><p>Agent creators and Agent participants, who share in agents' growth and value through creation and participation.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Ecosystem partners</strong> (Partners)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>External allies</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Platform &amp; Framework Partners</strong></p></td><td colspan="1" rowspan="1"><p>Wallet developers, algorithm experts, bot/aggregator platforms, and low-code frameworks, plus deep collaboration with TensorOpera and FedML.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hardware entry</strong> (Hardware)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Entry layer</strong></p></td><td colspan="1" rowspan="1"><p 
style="text-align: center"><strong>AI Hardware</strong></p></td><td colspan="1" rowspan="1"><p>DeAI phones, wearables, and robots serve as the physical interface for user interaction and data collection, supporting private computation and edge intelligence.</p></td></tr><tr><td colspan="1" rowspan="2"><p style="text-align: center"><strong>Platform tier</strong> (Platforms)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Hub platform</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>TensorOpera GenAI Platform</strong></p></td><td colspan="1" rowspan="1"><p>Integrated MLOps, Scheduler, and Compute services supporting large-scale model training and deployment.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Industry bridge</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>TensorOpera FedML Platform</strong></p></td><td colspan="1" rowspan="1"><p>Enterprise-grade federated learning and distributed platform supporting privacy protection and model collaboration across organizations and devices, connecting research with industry.</p></td></tr><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical foundation</strong> (Foundation)</p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Technical base</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>FedML Open Source</strong></p></td><td colspan="1" rowspan="1"><p>A world-leading open-source library for federated/distributed ML and the underlying support for the whole ecosystem, offering a trustworthy, scalable open-source framework.</p></td></tr></tbody></table><p><br></p><h3 id="h-chainopera-ai-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>IV. ChainOpera's Core Products: A Full-Stack AI Agent Infrastructure</strong></h3><p>In June 2025, ChainOpera officially launched the <strong>AI Terminal App</strong> and its decentralized tech stack, positioning itself as a "<strong>decentralized OpenAI</strong>." Its core products span four modules: the application layer (AI Terminal &amp; Agent Network), the developer layer (Agent Creator Center), the model and GPU layer (Model &amp; Compute Network), and the CoAI protocol with a dedicated chain, closing the loop from user entry point down to underlying compute and on-chain incentives.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ff3952f9b419530b6cab249423ae14486469e7c4aaaf10e92cc1f774c564b176.jpg" alt="" 
blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAASCAIAAAC1qksFAAAACXBIWXMAAAsTAAALEwEAmpwYAAAFkUlEQVR4nE1Ue2xTVRg/r/tsb++9bbfbdavbWDtK23Vl7A0jRB57gMAEhCA6/iAmEgYMAacRCDojMB7BYZQYMI7EJSZEAkT/EjFgEB8ENAxCICjbQDfWboP1sa495vYO4i/n5n45+b7zfd/5fucHYAYIY8ZYCBOEGZY1FmEIyMBwg0jHlG1sQEQgIgjpUQgzmDCEYQhhGMZwMMIBaxJJkQMX5RCPkyl0EJZBGAEIeEHgBYEQAhECz70zf4gQBICzKux0F/E4sTsHF+ciTQYsASxDGIJJpjgjSY4rb+v+vdsOfLDjUMfbXQfa2nds2bz5nfZ3mhY3MSxjMpkwxgAAs8uR7S9GGdvYyfcXbzzasbNr/65jB3cdO9x+6MPWfXt2H+1samqaOtzwKy+fNR6OUErT6TTNIJlMUkq/7O4GAPA8DwAweVy5617yvLHaXKAhghFCAID6xgZKaSqdTqXTcaoHJyf1wLPnzgEAdB9RFAEAFZUVAwP90VhsPBodHX8SjUcjT0YppSdPnAAAuFwujmOz3lwVWLYoMLvSXheS7GpujhMh1NDYSCmNxuP3Y+HeeDgcj4YfP05R+tWZ0wAAQRD0D0JYXlnZ19+fSqUmJpPJVDKZnkxMJPQEJ08CAFRV5RAS68rAhmZ3c0Px+uUmUdQ0DUK4eMkSSumTZPJeIvxoYnwkERuLjFBKe86egUbrRqfBUOj27TujY2OPI2EDQ4NDKZo+8kmXft2ZcYmaXakqySnz2YvyOYbVqwOgobExNTk5Ojo6+DgyPByORCJDg4PJVOpUT8+zAWcoYbaqvmBJSWVZybzaYF116dwa/5yqwLzanKKCKY4ixHKcbFUVR5YgioIo8LzAsazNbptZMStUVREqmxmcNbM0FCoJ6sibVmCcPAVk4hHBrNkkeF8QQ25uRj6Z7kLFeUiVEEY6RwHgbKrkzTf7prEOGwKQ53lBFERVZvMdbG4268xiNatObgOSAA3beA5Y4BizaFIskmQWBIHlOEIIwZjlecwQJHCFTXWBVQ0zVtZ7VyzyNi8oXrFQcmbzgiCpstmqiIpFkC2CLDGSiAQOCZxuYKxfkVGdo8pf2bYu2Lo6q3Vpwablnm2veLas9LQ05i2pZcwiJESb4c4LeZ2B4txAsbNkusPvNlkVjLGt1F3Wurpi05rZbS1z2lpmb3utZuu66vXNRQ21xKQPaeodzF279OfI/TN9v22//c3HN7/rvPz13h96Oq+ebu58C2Q6hQAgfUEMIIaQQMRgffJZpZ6fblyLj0f/Guh/NPjvo6HBofDwZCq5Ye92QAB6/tCWr19LKY3H4/FY/OYf1774rKvz2PFT3d3vHuoABPKyZNJUwS6LmmrNz1FyNSXPIdhlAIA9WPRt79VhGhuJjMTHxxPxxD99Q3TkTmvHDoBhJgHREyxrWUMpTcQT0adP+/4euHTpl+9/vHKr9/6+zz8FGHCiwKsSJ0u8apEdWRbNJmVbedmsJwgUXbh1PUZTE9GxdCqdSMZGYxE6ObHx/Z0A/6+DRa++3P8kfOfhg3tDA+cvXth98MjOw4c/On5iw552wBmaNcUOhjA8xyOEjEBbqfv8lYsDQ4N/3u39/dbNyzd+vdp7/e7DB6+/txUwCIJnQ7a48xz1VY4F5Y755bmN1e5ldf5V870rX7TX+LCh2M8SPD/aEFeTS8teWK7VV2oLK7T6ypz6KmdDTf7iObnVASJwU9qr85RjISEAIp28EGJCGI7DhMiybJYki2yRFdlqtcqKYpFlSZIURVFUVd+XZYZljUIhgiCj6pghWOAhhMRQbD0Lx2BVwooZKxKWzaxNYW0yq1p8Pl/htEK/3x8IBEKhUDAY9Pl8Xq/Xl4HH4wmFQlanBmUTscrYaiFWC7Zas
N2CRF2AIYT/AWTWvb3X6sgIAAAAAElFTkSuQmCC" nextheight="772" nextwidth="1401" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>The <strong>AI Terminal App</strong> has integrated <strong>BNBChain</strong>, supporting Agents for on-chain trading and DeFi scenarios. The Agent Creator Center is open to developers, offering MCP/HUB, knowledge bases, and RAG, with community agents steadily onboarding; ChainOpera has also launched the CO-AI Alliance, linking partners such as <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://io.net">io.net</a>, Render, TensorOpera, FedML, and MindNetwork.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/257ef230f76d6eb05432d33502f8b620559d24b7aab506f9b428e95cad115fa8.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAANCAIAAABHKvtLAAAACXBIWXMAAAsTAAALEwEAmpwYAAADdUlEQVR4nF1SXW/cRBQ1NFLWhbXipXbG3ztjjz2eje14/bEf2WTXu4q3m6StlLSF3Y3URiICgiKkVqF9AFVCqAIe+sQLz7wBD7wi8Rd46S9AvPZHFG0HJVuko6t7RzNzzzn3cr/89Omfv//48q8/59NpM81/+/Xn+fE8zbZbea8bJjv9Mm62uluDIExVDZqWwyJLADA8ErQ6/TrygWJJsnaJqiCfff5o7+Aup2tqvz8qil2IvHUABoPc9dwKL0qyyvPCDQmsiVJVkCRZvbZSXeUFsQZWeYElYg1UeJGV7666x3EVFhkAMCRZ5TzP3d0tFLAuy7puuZJsajp6X6iZFhZrCgCGWAPXVqq2TcQakGQVOn7lek2S1TryIPIAMKqCVOHFCi+ypCosCFV4keMqi4iQWxS7tBGLNcC0M2g6Yj4Uw7LfHwFgXDrT6w1MiD0SEhKqGlQ1SBtx3Mwh8pKkw6612r1iOImilDOhG8Zxu9udTG6pGoTINUybGc2S7tYgbrZUDbISIre8ue9g6m5EfpSwSdBG3Gpv6dCJ2h1Vg4piJVl3e2eEMeVMROTqO9lm487hnOeFtZoqySbj9R+7YPELAwCGg+lHxw8JiXASkTxVFAsAo5m2ynKvTmhzXOgmUhQTIo8GiWk5nAqM6bOnsycXj88fP7r4+ofnn9wcZ7UPDMZXfSOfOcNgWs72zsi2Cd7cdONYUSxFM6FD0qxTJ9TPU91EABhBnKdZByKPUyRt9s2z6ZOL+4ezo3sPut2UUg8o1pWCxlsNIPLKcu/tBnUH0yzrQkr9Tq6biF1zMF00qDuNwJVbCf32+YvhsBSE9eXfVQ0Gcb7cQJLVYlhGUe5mCckzZlEUZ2U5UesoLHqaaS8UhOn2TrFoYFqY+DhJkw+nJ8PxLbY8y/BIsFwuWoYpxg1IqE022KGDaRCmi9WK48sT9pBzsX/D8pGffff9izuzjyVZXaYPgMGsvBqyt/HZ2ReERKSdNXodpiCI88n+bcWwmuNCezM8B9Moyj0ScAAY05PTk9Oz49nD6YPTukMN016m7GD6P01BmCLkIeQ6tnd5ZzEqiJkmTUdxGp+fH4z3e9z11bVXr1+//Pufp19+dTQ7MSBRtas1ZQpMiJdlHd2b0yDxIc5IpGh
1tqYHtw8tz0sPRpppy7Le3urdPb7faDb/BX5XzP3/2LdOAAAAAElFTkSuQmCC" nextheight="614" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>根据<strong>BNB DApp Bay</strong> 近 30 日的链上数据显示，其独立用户 158.87K，近30日交易量260万，在在 BSC「AI Agent」分类中排名全站第二，显示出强劲的链上活跃度。</p><p><strong>Super AI Agent App – AI Terminal (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.chainopera.ai/)￼作为去中心化"><strong>https://chat.chainopera.ai/)<br></strong>作为去中心化</a> ChatGPT 与 AI 社交入口，AI Terminal 提供多模态协作、数据贡献激励、DeFi 工具整合、跨平台助手，并支持 AI Agent 协作与隐私保护（Your Data, Your Agent）。用户可在移动端直接调用开源大模型 <strong>DeepSeek-R1</strong> 与社区智能体，交互过程中语言 Token 与加密 Token 在链上透明流转。其价值在于让用户从“内容消费者”转变为“智能共创者”，并能在 DeFi、RWA、PayFi、电商等场景中使用专属智能体网络。</p><p><strong>AI Agent Social Network (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://chat.chainopera.ai/agent-social-network)￼"><strong>https://chat.chainopera.ai/agent-social-network)<br></strong></a> 定位类似 LinkedIn + Messenger，但面向 AI Agent 群体。通过虚拟工作空间与 Agent-to-Agent 协作机制（MetaGPT、ChatDEV、AutoGEN、Camel），推动单一 Agent 演化为多智能体协作网络，覆盖金融、游戏、电商、研究等应用，并逐步增强记忆与自主性。</p><p><strong>AI Agent Developer Platform (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://agent.chainopera.ai/)￼"><strong>https://agent.chainopera.ai/)<br></strong></a> 为开发者提供“乐高式”创作体验。支持零代码与模块化扩展，区块链合约确保所有权，DePIN + 云基础设施降低门槛，Marketplace 提供分发与发现渠道。其核心在于让开发者快速触达用户，生态贡献可透明记录并获得激励。</p><p><strong>AI Model &amp; GPU Platform (</strong><a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="https://platform.chainopera.ai/)￼"><strong>https://platform.chainopera.ai/)<br></strong></a> 作为基础设施层，结合 DePIN 与联邦学习，解决 Web3 AI 依赖中心化算力的痛点。通过分布式 GPU、隐私保护的数据训练、模型与数据市场，以及端到端 MLOps，支持多智能体协作与个性化 AI。其愿景是推动从“大厂垄断”到“社区共建”的基建范式转移。</p><table style="min-width: 
125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Layer</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Module</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Positioning</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Vision &amp; value proposition</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Key features</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Entry layer</strong></p></td><td colspan="1" rowspan="1"><p>AI Terminal</p></td><td colspan="1" rowspan="1"><p>Decentralized ChatGPT + social entry point</p></td><td colspan="1" rowspan="1"><p>Collaborative AGI; users shift from "consumers" to "co-creators"</p></td><td colspan="1" rowspan="1"><p>Data incentives, DeFi tools, cross-platform assistant, agent collaboration</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Social layer</strong></p></td><td colspan="1" rowspan="1"><p>AI Agent Social Network</p></td><td colspan="1" rowspan="1"><p>LinkedIn + Messenger for AI Agents</p></td><td colspan="1" rowspan="1"><p>Multi-agent collaborative evolution; from single agent → collaboration network</p></td><td colspan="1" rowspan="1"><p>Virtual workspaces, agent collaboration, socialization, human-in-the-loop</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Development layer</strong></p></td><td colspan="1" rowspan="1"><p>Developer Platform</p></td><td colspan="1" rowspan="1"><p>Developer launchpad &amp; toolbox</p></td><td colspan="1" rowspan="1"><p>Lego-style low-barrier development; developers from isolation → ecosystem co-creation</p></td><td colspan="1" rowspan="1"><p>Zero-code, modular extension, on-chain verification, distributed compute, Marketplace</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>Infrastructure layer</strong></p></td><td colspan="1" rowspan="1"><p>Model &amp; GPU Platform</p></td><td colspan="1" rowspan="1"><p>DePIN + federated learning infrastructure</p></td><td colspan="1" rowspan="1"><p>Community-driven AI infrastructure; from big-tech monopoly → community co-building</p></td><td colspan="1" rowspan="1"><p>Distributed GPUs, federated learning, model/data marketplaces, MLOps</p></td></tr></tbody></table><br><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>V. The ChainOpera AI Roadmap</strong></h3><p>Beyond the full-stack <strong>AI Agent platform</strong> already in production, 
ChainOpera AI holds that artificial general intelligence (AGI) will emerge from <strong>multimodal, multi-agent collaborative networks</strong>. Its long-term roadmap therefore falls into four phases:<br></p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/039d1360b87d5f1b6efa185b985633ea0f489bfd16fcff0867e690298098e221.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAWCAIAAAAuOwkTAAAACXBIWXMAAAsTAAALEwEAmpwYAAAEjUlEQVR4nJ1VbWvbVhSWZdnyi16voqsrWVFdxXZUz6sWLa3rRPGUuM2LMjWpcLx5M15cJ6aCmpl8GCwjMEo6yCiMQeZRBqGjH/uz9kM2bCWhK05L+nAQ4nLvOfc+55znYGmeNbyVKQmKoijB0ZdlWQAASZL4JGCTwMEpHkEWCjTgeQQZAfAI8giSVBpDpfybf/+p7+2ePP/ldDh89epVr9frdDosy070hWFYBMffNgzD7n//uP3H8Xd/Pv/2958bJz/2/v6t89evT9+8vLXmYDwAj76qL68sNxqN1dVV13Udx2k2m/V63bZtz/N833dd1x/Dtu2JIZMMrRTzsjEjyIhHUFBkXpaEjJxiGUwUxcHT/tHR0evXr09OTo6Pjw8ODobD4dnZ2bNnzw4PD1+8eHF6ejocDl3XlSCMEtEpfVrKZ3lFAhkk5bMggxhJLNxfYKFw5ZMnkvs26fjFfwSPUIBnoEALgBYAK0EOSXIxz8sQx3EylYwnyHgiQcTjZCpJxGMTnF5FPX7FPW4uWp/57r3dHdPfuNP0yu1Hd1vblu8u7O4UNxzsIxDBI2FuiXiMEQCGYZqmIVlOkGQqmWJZlqbp8Z6RfdgdEY+j0qx0K0cLQL5toGI+liAxDKMArxRzCYYaPULXW2M4jlOYnVVV9Vr3xWOpZCyRiIxq8pwlIh5joRAlouEeRVG63W6n0+n1eq1Wq1arFQoFjuOuESZKRI21pZJX05fmze11098w1pY+2awt7O5kzOIN7cbW1pbruqVSyTRNy7IkCAmCuFYAgkfw7sqSktUAknhJnJIRnM6kARen0ggh0zQVRZl8mKTS8pwh6tPSrC7fNijAk6lkmmdJKp2iKR7BbGUOFXSciM7dNvtBsOPX/e1H297Dh5tfLpTvZTKZYrF4cHDQbrc9z6vX657nSRC+S3RYHviY1mxlbuOHJ/Z+w95vlNtb9pNvKm3f9DfMubnGzk6z1Wo2m3v7e9VqtVKpCIIgy3K32221Wr1er91um6ZpGAYAowIbpUv/4k520SptPjDWloob1aX9r6UZLX9Tz+dyI/njARSEkFBd10ulkgQhSZLJZPKyLQiCoGmavcD/2iVKRO3tzVz582mrpFmfSrnslK7FqbRpmk/7/cfd7k9HR2vr6+EZ27YPDw9brVYoUPoYCKEPJPDB8vLebueBs7y+UivM5GRJQpJkWVZYdkEQ9Pv9bDYbNlS1WrVt27KsWq2m6zoA4D26e46FxcUgCDqdzvLKiqpOcxwHAJBl2TAMVVVZlhVF8ZzQjwCO4yGzAACO42iaDtmwLGswGPi+Xy6Xa7WaYRgShIqihJHeI1kTUK1WgyDwPM9xnLBZwrmmaZqiKJqmqaqqjKGqqiiKYVskWSaWiF+MnUhsrERRgghXokT0ss+xSqUSBEGz2RwMBqurqwghQbhC2bFzxBIkJ4kMFCjAcdIUBTgGCpwksuOV0ewciTl/LteapjmOMz8/H2qIIAiXPFw1lkcvYOgUyyQYmgJ8mmdTLHNpFOApwKU5ZsI8uFby8Iux/A5G+hiPRcda9B/HCNsXiivVjgAAAABJRU5ErkJggg==" 
nextheight="1013" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><ul><li><p><strong>Phase 1 (Compute → Capital)</strong>: build decentralized infrastructure, including a GPU DePIN network plus federated learning and distributed training/inference platforms, with a <strong>Model Router</strong> coordinating inference across endpoints; incentive mechanisms pay compute, model, and data providers in proportion to usage.</p></li><li><p><strong>Phase 2 (Agentic Apps → Collaborative AI Economy)</strong>: launch AI Terminal, an Agent Marketplace, and the Agent Social Network to form a multi-agent application ecosystem; the <strong>CoAI protocol</strong> connects users, developers, and resource providers, while a <strong>user-demand-to-developer matching system</strong> and a credit system drive high-frequency interaction and sustained economic activity.</p></li><li><p><strong>Phase 3 (Collaborative AI → Crypto-Native AI)</strong>: deploy in DeFi, RWA, payments, and e-commerce while expanding into <strong>KOL scenarios and personal data exchange</strong>; build finance/crypto-specific LLMs and launch agent-to-agent payments and wallets, pushing "Crypto AGI" into real applications.</p></li><li><p><strong>Phase 4 (Ecosystems → Autonomous AI Economies)</strong>: evolve step by step into autonomous subnet economies, each independently governed and tokenized around <strong>applications, infrastructure, compute, models, and data</strong>, cooperating through cross-subnet protocols to form a multi-subnet ecosystem; in parallel, move from Agentic AI toward <strong>Physical AI</strong> (robotics, autonomous driving, aerospace).</p></li></ul><p><em>Disclaimer: this roadmap is for reference only; the timeline and features may shift with market conditions and do not constitute a delivery guarantee.</em></p><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VII. Token Incentives and Protocol Governance</strong></h3><p>ChainOpera has not yet published a complete token incentive plan, but its CoAI protocol centers on <strong>"co-creation and co-ownership"</strong>, using blockchain and a <strong>Proof-of-Intelligence mechanism</strong> to keep contribution records transparent and verifiable: the inputs of developers and of compute, data, and service providers are metered in a standardized way and rewarded accordingly. Users consume services, resource providers keep the system running, developers build applications, and all participants share the growth dividend; the platform sustains this loop with a 1% service fee, reward distribution, and liquidity support, driving an open, fair, and collaborative decentralized AI ecosystem.</p><p><strong>The Proof-of-Intelligence learning framework</strong></p><p>Proof-of-Intelligence (PoI) is the core consensus mechanism ChainOpera proposes under the CoAI protocol, intended to give decentralized AI a transparent, fair, and verifiable incentive and governance system. It is a blockchain-based collaborative machine learning framework built on <strong>Proof-of-Contribution</strong>, addressing the weak incentives, privacy risks, and missing verifiability that federated learning (FL) faces in practice. The design centers on smart contracts and combines decentralized storage (IPFS), aggregation nodes, and zero-knowledge proofs (zkSNARKs) to achieve five goals: ① fair reward distribution by contribution, so trainers are rewarded for actual model improvement; ② data kept in local storage so privacy never leaks; ③ robustness mechanisms against poisoning or aggregation attacks by malicious trainers; ④ verifiability, via ZKPs, of key computations such as model aggregation, anomaly detection, and contribution evaluation; ⑤ efficiency and generality across heterogeneous data and different learning tasks.</p><br><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img 
src="https://storage.googleapis.com/papyrus_images/10f668b08ec472fd39f03b68de3ed2868a0576ca80e8b1e6dcb566b083a869a9.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAXCAIAAADlZ9q2AAAACXBIWXMAAAsTAAALEwEAmpwYAAAEoUlEQVR4nK1VT0zbZhT/ruxCr70gbauKtMu4tAeknioVqfTIpipCQuO0rqqyikO7XliFOlZVZWuHxlg2qkasHkQsq5elmKwspCkezZoaLyYxpCYimIBjTB3H5osT502JS5bRP3RTf7Ks78/z93vv+/m9h2AvWJYFAJIk0TTNMEw4HE4mk/DKQK9oZxiGy+UaGRkhSZJl2ddPYFlWOBx2u92BQMCOwI7s9RBIUtY0Tb/f39vbe+HCBZIkd3Hb751BufpYpmlalrU3QcE0ifGJtcw6zye+GLhKURTG+Fn3awT/J4INKQtQXkmvjU9MqmqufktRFJqmASASifT19emapmr5zScqx8Vuk2Q8Hv+H4CV3alW3YrHY3enwTxQxTU/WFrPSxrDr+3VJOXHixNsHDvB84pepMDkVcjqdCKEjR448JwI72PqQS6UiAGRSbJQOfHXjqpciAKAMZQDAGOvb2zzPd3d3nzlzZiWVyuuGquVlWWZZVhCEpwSqqsbj8XoajLH9xjs3vrGSjEVobJbKUNqxKbDc4k3itqJs3bjh/tV/xzTN52gwMjKyf//+xsbGQ4cOpdOr08HwuGfC4XCIokgQxNjYmG2q6fn3nR9/NvytPc3r+nIqjXEhp+kF0wyGZqtSVeIqV6/B9gwZhtHa2trQ0HDw4MGGhobz58/HuAWKmjp16kOv1zs4OPidyxWNRnVNe/d4O6qid2CgIsnCUlUDKRicBgC3293b+6mmaRWOOjlRKpVqbm5GCDU2NiKEOjrew2Ypp+myrExSv82E7t2nH/zouZ2Vsu8cb0dvvoX27fvg3DkAeJwSje3tpaWkZ+LntLjR0tKCEJoJ3ROFyCI709//+eHDh0mSRDzPt7e3Dw0NnTx58vTp021tbQDQd+2T0IO7ZatMzz3MaXnbl68JouL/gebY4iLGxVRaVNXcxoYYZ38XhNTZs2cdDgfHLUjiQlp4NElNXbp0iWEYxHHc0aNHOzs7Ozo62tranE4nALhuXXsYmy3g4nQwtPVEBYBisThHz3o8nvGxsS1FycpPllOr1XW8ub6IMfb5fF6vV5bl3SInk0mv1+v3+2ma9vl8HMdVPiuVqlL9C/MsGwgEKYoSRREANpWttLhWME1VfRqiDbtQFIvFgi0yPIOsLIfpubyuV42t+i1J3hKEZcMw7GleN2ZCs9e/vAYAAwMDPT09zyZshaCWVvZfHOO4m+5bDyKRtcx6za5gFv6MzeW0fFdXF0FUEs02zml5YSVTOagKAEgwfwiJ+a3U8n3X8PNrke0CQujYsWML8URW3rQXCyZWFFWS5ATPT3hJjAuVDN1cv0tO1Ag2FYWZufMXQ897xj9Cb7ys2Lnd7uHhby5evIhxIbMuGwZWVfUHgohGHxmGkeCXbD80VVl9HF8Vxf7+/suXLwuCsKusvlI1zeuVS/f5fAihlpYWO5tqbQAAGGbe5XKNjo5SFFWv9ssI7HYBO2MAuHLlSlNTE0IoGo3uIrAsi2EYiqIymYoeu0XeE5qmJZPJoaGhnp4eh8MRDAZVtZIcNbyo2/wHgkAgEAqFBEFgWZaiKLv01h/6oo72N40Ec/XXYZ/KAAAAAElFTkSuQmCC" nextheight="1046" nextwidth="1456" class="image-node embed"><figcaption htmlattributes="[object Object]" 
class="hide-figcaption"></figcaption></figure><p><br></p><p><strong>Token value across the full AI stack<br></strong> ChainOpera's token mechanism operates around five value streams (LaunchPad, Agent API, Model Serving, Contribution, Model Training); its core is <strong>service fees, contribution attribution, and resource allocation</strong>, not speculative returns.</p><ul><li><p><strong>AI users</strong>: spend tokens to access services or subscribe to applications, and contribute to the ecosystem by providing, labeling, or staking data.</p></li><li><p><strong>Agent/application developers</strong>: use the platform's compute and data to build, and receive protocol recognition for the Agents, applications, or datasets they contribute.</p></li><li><p><strong>Resource providers</strong>: contribute compute, data, or models, with transparent records and incentives.</p></li><li><p><strong>Governance participants (community &amp; DAO)</strong>: use tokens to vote, shape mechanism design, and coordinate the ecosystem.</p></li><li><p><strong>Protocol layer (COAI)</strong>: sustains development through service fees and balances supply and demand with automated allocation mechanisms.</p></li><li><p><strong>Nodes and validators</strong>: provide verification, compute, and security services to keep the network reliable.</p></li></ul><p><strong>Protocol governance</strong></p><p>ChainOpera adopts <strong>DAO governance</strong>: participants stake tokens to submit proposals and vote, keeping decisions transparent and fair. The mechanisms include a <strong>reputation system</strong> (verifying and quantifying contributions), <strong>community collaboration</strong> (proposals and voting drive ecosystem development), and <strong>parameter adjustment</strong> (data usage, security, and validator accountability). The overall aim is to avoid concentrations of power while keeping the system stable and co-created by the community.<br></p><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>VIII. Team Background and Fundraising</strong></h3><p>ChainOpera was co-founded by <strong>Professor Salman Avestimehr</strong> and <strong>Dr. Chaoyang (Aiden) He</strong>, both deeply accomplished in federated learning. Other core team members come from <strong>UC Berkeley, Stanford, USC, MIT, and Tsinghua University</strong> as well as <strong>Google, Amazon, Tencent, Meta, and Apple</strong>, combining academic research with hands-on industry experience. The ChainOpera AI team currently numbers more than <strong>40 people</strong>.</p><p><strong>Co-founder: Salman Avestimehr</strong></p><p>Salman Avestimehr is a <strong>Dean's Professor of Electrical and Computer Engineering at the University of Southern California (USC)</strong>, <strong>founding director of the USC-Amazon Trusted AI center</strong>, and head of USC's Information Theory and Machine Learning Lab (vITAL). He is <strong>co-founder and CEO of FedML</strong> and co-founded TensorOpera/ChainOpera AI in 2022.</p><p>He earned his Ph.D. from UC Berkeley EECS (with a best paper award). An <strong>IEEE Fellow</strong>, he has published 300+ papers in information theory, distributed computing, and federated learning with over 30,000 citations, and has received the <strong>PECASE, NSF CAREER, and IEEE Massey Award</strong>, among other international honors. He led the creation of the <strong>FedML</strong> open-source framework, widely applied in healthcare, finance, and privacy computing, which became the core technical foundation of TensorOpera/ChainOpera AI.</p><p>Dr. 
Aiden Chaoyang He is co-founder and President of TensorOpera/ChainOpera AI, holds a Ph.D. in computer science from the University of Southern California (USC), and is the <strong>original creator of FedML</strong>. His research spans distributed and federated learning, large-scale model training, blockchain, and privacy-preserving computation. Before founding the company he did R&amp;D at <strong>Meta, Amazon, Google, and Tencent</strong>, and held core engineering and management roles at Tencent, Baidu, and Huawei, shipping multiple internet-scale products and AI platforms.</p><p>On the academic and industrial side, Aiden has published 30+ papers with over 13,000 Google Scholar citations, and has received the Amazon Ph.D. Fellowship, the Qualcomm Innovation Fellowship, and best paper awards at NeurIPS and AAAI. The <strong>FedML framework he created is one of the most widely used open-source projects in federated learning</strong>, serving <strong>27 billion requests per day</strong>; as a core author he proposed the FedNLP framework and hybrid model-parallel training methods, which are widely used by decentralized AI projects such as Sahara AI.</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/2f5769295dfcafcbc9f116c4545757f55ed08959971deeed553f9d40a5c37c32.jpg" alt="" blurdataurl="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACAAAAAOCAIAAADBvonlAAAACXBIWXMAAAsTAAALEwEAmpwYAAAErElEQVR4nD2UXW/TZhiG3cSOX8ev/b7+em3HdmLHifNhNyFpEyik5GOlbfoBtF0/6AotKwWB2DRAYWVbAW1iWrXTHU2TmMQkpGndDxhn7A+MX8CO+QHshElTqLSjR7p1S4/03NdzU4ADtpeJJhp+LfJqZbdcsLMukiTAcbbnpnzXCLJGKadmbGJbooQ5nk+5jlPwM6Mlo+gTzxnqeKibjk08J1XMpYo5knV0x8KSRDEMc2332mAw6HY6H66sXN7amp+bJ4QwDHN1Z2dvb296ampmZmZxcbE/21dVFQpwY3392u5uf3Z2+8r29evXL21s6DoRBKEQBJOtVr1W77TbnU6nXqspikwBjms2GjnfNw0jbTtpJ+1nfVmRAQD1Wr1aqfpZP+f7xaAQBAGWMA9hEASFoOBns6NhFJZKxUJBkiRegI7juBnXzbhZ1/M8L5PJSJJExRk6xtAUS3O6AlMEKJgVIQPYOE3THBjhWZbghC4xCMYhx7CJOEOPgATFJYCu0AqiRT7OAYZNxGiaTnK0kEwoiCWYxYgVBQawFAvAZKfz9bdPGpOng2oU1SvlKFSJxvF8KQpPnW21znUneu2xMyfHmk1VUzie6071Lq6ujrcm2jNTzdbp8VMnZUXheL7T6y6vrS1dWu+vnG+0TgfFgqppFAvYy1tbvzx79uvz5+3W5OzM7LluT9d1jue7ne7O9vbe7u7NGzd2rmxvbm4SQgAAn9y+/f13hzNT51ZXVi9euDDZag2zgfDs5OTq8sri/MLi/EKn3Z6emiaEUCwHeAGOxGPlKJSJloQQSTgJ+QRgRYwSgOUFASmypMjoPUIJwEqKrGgqy4EYQwui+L9f0VQsy0iRsaaqRJMVJQl5imETskGUjM3oSC/l7FIeGyQJYZyhLc8tjlVS5bzopgRTw7rGQT7BspKu4ZQ+sTD9wfqSlrUlkwx1wPIyFk0i5Rw5nxZTuqjKwwUIIzeT8Tzv0sbGiz9eDAYDRZIsy4rFY/v7+/+8fbu/v7+8vHR1Z6fX7cmyjDH2XNc0zVd/vXrz5k0UhrZlEUJ4np/r989MTIzV62P1eqPRaDabhGiUKIq2ZVUr1YMvvnx88PDC+fOmYRqGQdP0xztX/3z5cmvzo167PTfbLwSBLMsIobSTti37
zqef3b83CPKBbVmapkEIm41mJYoqUVQqFKIwjMJQUzUqRsc5JEJVopIMxY0kEIQKZgB7rLNIBLrEaCItJI/xjdHxJEJQlThD4U2N1xVexvR7fGN0nKJHEgqi5aGfhtwQ0yTkU7ZlpO2DJ9/88NOP1ZMN3dQlRQY8l/H9fFgKRsPiiUowGnr5HMLDR7Mc23QdO/DTBd/yMmbawbJEM8zswvzdzwft6anlrY3p+X5tfHxYFccZuK779+vX/757t7y0rKmaYZqA43rd3uNHjx5+dXD/3uDBgwdra2uCICCMi0GhXqudqFSro5WxWn14Co3E6fjtW7d+Pzq6dfPmz0+fHv121DrTGmaQhLxhGkTX79y9e3h4WC6XNaJJsjQsNSvlel7GdS3bcl3XdhyEEf/er+u6bhj68TQNhIekpqyUZVm5XC4IgjAK8/k8wug/T2vYMTTDWGAAAAAASUVORK5CYII=" nextheight="592" nextwidth="1317" class="image-node embed"><figcaption htmlattributes="[object Object]" class="hide-figcaption"></figcaption></figure><p>2024 年 12 月，ChainOpera AI 宣布完成 <strong>350 万美元种子轮融资</strong>，累计与 TensorOpera 共计融资 <strong>1700 万美元</strong>，资金将用于构建面向去中心化 AI Agent 的区块链 L1 与 AI 操作系统。本轮融资由 <strong>Finality Capital、Road Capital、IDG Capital</strong> 领投，跟投方包括 <strong>Camford VC、ABCDE Capital、Amber Group、Modular Capital</strong> 等，亦获得 Sparkle Ventures、Plug and Play、USC 以及 EigenLayer 创始人 Sreeram Kannan、BabylonChain 联合创始人 David Tse 等知名机构和个人投资人支持。团队表示，此轮融资将加速实现 <strong>“AI 资源贡献者、开发者与用户共同 co-own 和 co-create 的去中心化 AI 生态”</strong> 愿景。<br></p><h3 id="h-ai-agent" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>九、联邦学习与AI Agent市场格局分析</strong></h3><p>联邦学习框架主要有四个代表：<strong>FedML、Flower、TFF、OpenFL</strong>。其中，<strong>FedML</strong> 最全栈，兼具联邦学习、分布式大模型训练与 MLOps，适合产业落地；<strong>Flower</strong> 轻量易用，社区活跃，偏教学与小规模实验；<strong>TFF</strong> 深度依赖 TensorFlow，学术研究价值高，但产业化弱；<strong>OpenFL</strong> 聚焦医疗/金融，强调隐私合规，生态较封闭。总体而言，FedML 代表工业级全能路径，Flower 注重易用性与教育，TFF 偏学术实验，OpenFL 则在垂直行业合规性上具优势。</p><p>在产业化与基础设施层，TensorOpera（FedML 商业化）的特点在于继承开源 FedML 的技术积累，提供跨云 GPU 调度、分布式训练、联邦学习与 MLOps 的一体化能力，目标是桥接学术研究与产业应用，服务开发者、中小企业及 Web3/DePIN 生态。总体来看，TensorOpera 相当于 “开源 FedML 的 Hugging Face + W&amp;B 合体”，在全栈分布式训练和联邦学习能力上更完整、通用，区别于以社区、工具或单一行业为核心的其他平台。</p><p>在创新层代表中，<strong>ChainOpera</strong> 与 <strong>Flock</strong> 都尝试将联邦学习与 Web3 结合，但方向存在明显差异。ChainOpera 构建的是 <strong>全栈 AI Agent 
平台</strong>，涵盖入口、社交、开发和基础设施四层架构，核心价值在于推动用户从“消费者”转变为“共创者”，并通过 AI Terminal 与 Agent Social Network 实现协作式 AGI 与社区共建生态；而 Flock 则更聚焦于 <strong>区块链增强型联邦学习（BAFL）</strong>，强调在去中心化环境下的隐私保护与激励机制，主要面向算力和数据层的协作验证。ChainOpera 更偏向 <strong>应用与 Agent 网络层</strong> 的落地，Flock 则偏向 <strong>底层训练与隐私计算</strong> 的强化。</p><table style="min-width: 125px"><colgroup><col><col><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>层级</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>代表玩家</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>定位 &amp; 特点</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>价值</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>局限</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>基础层</strong>（学术 &amp; 开源）</p></td><td colspan="1" rowspan="1"><p>FedML, Flower, TFF, OpenFL, PaddleFL, FederatedScope</p></td><td colspan="1" rowspan="1"><p>提出方法、定义标准、提供工具库；FedML 全栈最全能，Flower 轻量易用，TFF 偏学术研究，OpenFL 医疗/合规场景</p></td><td colspan="1" rowspan="1"><p>奠定标准化 API，推动技术演进，保证可复现性</p></td><td colspan="1" rowspan="1"><p>多停留在实验/小规模 PoC，产业化不足</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>平台层</strong>（产业化 基础设施）</p></td><td colspan="1" rowspan="1"><p><strong>TensorOpera（FedML 商业化）</strong>、Hugging Face、W&amp;B、NVIDIA Clara、IBM FL、Amazon SageMaker</p></td><td colspan="1" rowspan="1"><p>- <strong>TensorOpera</strong>：继承 FedML，提供跨云 GPU 调度、分布式训练、联邦学习与 MLOps 一体化</p><p>- <strong>Hugging Face</strong>：模型与数据社区 + API，偏重模型生态</p><p>- <strong>W&amp;B</strong>：实验管理与 MLOps 工具，强调可视化与协作&nbsp;</p></td><td colspan="1" rowspan="1"><p>降低门槛，把科研成果转化为企业可用产品与服务；连接研究与产业落地</p></td><td colspan="1" rowspan="1"><p>市场竞争激烈；算力和生态绑定明显；部分方案局限于特定行业或厂商生态</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>创新层</strong>（新叙事探索）</p></td><td colspan="1" rowspan="1"><p>ChainOpera, Flock</p></td><td colspan="1" rowspan="1"><p>将 FL 与 Web3 
结合：DePIN、Token 激励、可验证训练；探索社区共建 AI</p></td><td colspan="1" rowspan="1"><p>引入新经济模式，解决算力/数据激励，开辟去中心化路径</p></td><td colspan="1" rowspan="1"><p>商业模式未成熟，仍处早期测试与叙事阶段</p></td></tr></tbody></table><p><br>在Agent网络层面，业内最有代表性的项目是Olas Network。ChainOpera 前者源自联邦学习，构建模型—算力—智能体的全栈闭环，并以 Agent Social Network 为实验场探索多智能体的交互与社交协作；Olas Network源于 DAO 协作与 DeFi 生态，定位为去中心化自主服务网络，通过 Pearl推出可直接落地的Defi收益场景，与ChainOpera展现出截然不同的路径。</p><table style="min-width: 75px"><colgroup><col><col><col></colgroup><tbody><tr><td colspan="1" rowspan="1"><p style="text-align: center"><strong>维度</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>ChainOpera AI</strong></p></td><td colspan="1" rowspan="1"><p style="text-align: center"><strong>Olas Network (Autonolas / Pearl)</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>定位</strong></p></td><td colspan="1" rowspan="1"><p>源自 <strong>联邦学习 (FedML)</strong>，演进为 <strong>全栈 AI Agent 网络</strong></p></td><td colspan="1" rowspan="1"><p>定位为 <strong>去中心化自主服务网络 (Autonomous Services)</strong>，强调<strong>可组合 Agent 服务</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>技术基因</strong></p></td><td colspan="1" rowspan="1"><p>继承 FedML 的 <strong>分布式学习与贡献证明 (PoC)</strong>，强调 <strong>隐私保护、跨节点调度、算力/数据激励</strong></p></td><td colspan="1" rowspan="1"><p>技术栈由 <strong>Agent Services + 组件组合 + On-chain Protocol</strong> 构成，类似 <strong>乐高式组合</strong>，强调 <strong>可组合性与可重用性</strong></p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>架构设计</strong></p></td><td colspan="1" rowspan="1"><p>三层结构：① <strong>Model &amp; GPU Platform</strong>（分布式训练）；② <strong>Agent Platform</strong>（开发/部署/协作）；③ <strong>ChainOpera Layer</strong>（激励与协调）</p></td><td colspan="1" rowspan="1"><p><strong>Agent Services</strong> 由多个独立程序组成，通过 <strong>共识装置 (consensus gadget)</strong> 协调，形成分布式复制的自治应用</p></td></tr><tr><td colspan="1" rowspan="1"><p><strong>产品功能</strong></p></td><td colspan="1" rowspan="1"><p><strong>Agent Social Network</strong>：以对话聊天为核心，探索 <strong>社交型 AI 
Agent 网络</strong>，强调 <strong>多智能体交互、社区与内容共创</strong></p></td><td colspan="1" rowspan="1"><p><strong>Pearl – AI Agent App Store</strong>：用户可 <strong>拥有并运行多类 Agents</strong>，涵盖 <strong>DeFi、预测市场、跨链资产管理</strong></p></td></tr></tbody></table><p><br></p><h3 id="h-" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>十、投资逻辑与潜在风险分析</strong></h3><p><strong>投资逻辑</strong></p><p>ChainOpera 的优势首先在于其 <strong>技术护城河</strong>：从 FedML（联邦学习标杆性开源框架）到 TensorOpera（企业级全栈 AI Infra），再到 ChainOpera（Web3 化 Agent 网络 + DePIN + Tokenomics），形成了独特的连续演进路径，兼具学术积累、产业落地与加密叙事。</p><p>在 <strong>应用与用户规模</strong> 上，AI Terminal 已形成数十万日活用户与千级 Agent 应用生态，并在 BNBChain DApp Bay AI 类目排名第一，具备明确的链上用户增长与真实交易量。其多模态场景覆盖的加密原生领域有望逐步外溢至更广泛的 Web2 用户。</p><p><strong>生态合作</strong> 方面，ChainOpera 发起 CO-AI Alliance，联合 <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://io.net">io.net</a>、Render、TensorOpera、FedML、MindNetwork 等伙伴，构建 GPU、模型、数据、隐私计算等多边网络效应；同时与三星电子合作验证移动端多模态 GenAI，展示了向硬件和边缘 AI 扩展的潜力。</p><p>在 <strong>代币与经济模型</strong> 上，ChainOpera 基于 Proof-of-Intelligence 共识，围绕五大价值流（LaunchPad、Agent API、Model Serving、Contribution、Model Training）分配激励，并通过 1% 平台服务费、激励分配和流动性支持形成正向循环，避免单一“炒币”模式，提升了可持续性。</p><p><strong>潜在风险</strong></p><p>首先，<strong>技术落地难度较高</strong>。ChainOpera 所提出的五层去中心化架构跨度大，跨层协同（尤其在大模型分布式推理与隐私训练方面）仍存在性能与稳定性挑战，尚未经过大规模应用验证。</p><p>其次，<strong>生态用户粘性仍需观察</strong>。虽然项目已取得初步用户增长，但 Agent Marketplace 与开发者工具链能否长期维持活跃与高质量供给仍有待检验。目前上线的 Agent Social Network 主要以 LLM 驱动的文本对话为主，用户体验与长期留存仍需进一步提升。若激励机制设计不够精细，可能出现短期活跃度高但长期价值不足的现象。</p><p>最后，<strong>商业模式的可持续性尚待确认</strong>。现阶段收入主要依赖平台服务费与代币循环，稳定现金流尚未形成，与 AgentFi或Payment 等更具金融化或生产力属性的应用相比，当前模式的商业价值仍需进一步验证；同时，移动端与硬件生态仍在探索阶段，市场化前景存在一定不确定性。<br></p><p><strong><em>免责声明：</em></strong><em>本文在创作过程中借助了 ChatGPT-5 的 AI 工具辅助完成，作者已尽力校对并确保信息真实与准确，但仍难免存在疏漏，敬请谅解。需特别提示的是，加密资产市场普遍存在项目基本面与二级市场价格表现背离的情况。本文内容仅用于信息整合与学术/研究交流，不构成任何投资建议，亦不应视为任何代币的买卖推荐。</em></p><p><br></p>]]></content:encoded>
            <author>0xjacobzhao@newsletter.paragraph.com (jacobzhao)</author>
            <category>federated learning</category>
            <category>agent network</category>
            <category>fedml</category>
            <category>chainopera</category>
            <category>agent</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/9cacc6e8b823ef6f6e53141ffa7530b54d594d18ca580bbd9f81534f49189845.jpg" length="0" type="image/jpeg"/>
        </item>
    </channel>
</rss>