<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>DAITS</title>
        <link>https://paragraph.com/@daits</link>
        <description>Decentralized AI Trust &amp; Security. The first decentralized AI trust authority. Building the AI trust layer on-chain. daits.org</description>
        <lastBuildDate>Fri, 17 Apr 2026 02:00:38 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <image>
            <title>DAITS</title>
            <url>https://storage.googleapis.com/papyrus_images/26495116181e5e98ec3e153859921689cc6a0bf42ceb21674faf2d8bd7cd8708.jpg</url>
            <link>https://paragraph.com/@daits</link>
        </image>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[2 Minutes to an Explainable AI Product]]></title>
            <link>https://paragraph.com/@daits/2-minutes-to-an-explainable-ai-product</link>
            <guid>qXNqF83nIDyKpYQBBZ1X</guid>
            <pubDate>Wed, 20 Aug 2025 14:50:59 GMT</pubDate>
            <description><![CDATA[Explainability in Practice: A Simple Framework for On-Chain Systems The hardest questions in Web3 aren’t about speed or gas fees. They’re about security and trust. What good is an AI agent that rebalances a vault if no one knows why it moved the funds? In decentralized systems, trust doesn’t come from glossy dashboards. It comes from pointing to an on-chain log and saying: “Here’s the reason. Here’s the decision. Here’s the proof.” Explainability is an invisible scaffolding holding Web3 AI to...]]></description>
            <content:encoded><![CDATA[<figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/f0c9d7971fa213df501db31f81dd186bda81922bf437fec1f87e50f0833561d6.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption HTMLAttributes="[object Object]" class="hide-figcaption"></figcaption></figure><p><em>Explainability in Practice: A Simple Framework for On-Chain Systems</em></p><p>The hardest questions in Web3 aren’t about speed or gas fees. They’re about security and trust.</p><p>What good is an AI agent that rebalances a vault if no one knows <em>why</em> it moved the funds?</p><p>In decentralized systems, trust doesn’t come from glossy dashboards. It comes from pointing to an on-chain log and saying: <em>“Here’s the reason. Here’s the decision. Here’s the proof.”</em></p><p>Explainability is an invisible scaffolding holding Web3 AI together.</p><hr><h3 id="h-three-simple-moves-that-make-it-work" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Three simple moves that make it work</strong></h3><p>Think of explainability as a dance with three steps: <strong>log it, show it, verify it.</strong></p><ul><li><p>A DeFi agent doesn’t just reshuffle liquidity, it leaves a trail: <em>moved funds because yield on Protocol B outperformed Protocol A by 2%.</em></p></li><li><p>A cross-chain bridge agent doesn’t just adjust fees, it shows the logic: <em>slippage exceeded 0.5%, routing via Chain X instead of Chain Y.</em></p></li><li><p>A governance bot doesn’t just vote “no,” it anchors the reasoning: <em>treasury risk exceeded safe threshold.</em></p></li></ul><p>Three steps. Simple. 
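</p><p><em>The log it / verify it loop above can be sketched in a few lines of Python. This is illustrative only: the record fields, the hashing scheme, and the function names are our assumptions for the sketch, not any protocol’s actual format.</em></p>

```python
import hashlib
import json


def log_decision(action: str, reason: str) -> dict:
    """Log it: build a decision record whose 'proof' is a hash of its contents."""
    payload = {"action": action, "reason": reason}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "proof": digest}


def verify_decision(record: dict) -> bool:
    """Verify it: recompute the hash and compare it to the stored proof."""
    payload = {"action": record["action"], "reason": record["reason"]}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return digest == record["proof"]


# Show it: the record itself is the public explanation.
entry = log_decision(
    action="rebalance vault: Protocol A -> Protocol B",
    reason="yield on Protocol B outperformed Protocol A by 2%",
)
assert verify_decision(entry)        # the untampered record checks out
entry["reason"] = "no reason given"
assert not verify_decision(entry)    # any edit breaks the proof
```

<p>On-chain, the digest would typically be anchored in an event log or contract storage, so any third party can recompute it and confirm that the stated reason was not rewritten after the fact.</p><p>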
Together, they turn black boxes into glass.</p><hr><h3 id="h-where-this-already-shows-up" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Where this already shows up</strong></h3><p>Explainability is not a theory. It’s appearing in:</p><ul><li><p>Yield vaults publishing on-chain rebalancing logs</p></li><li><p>Cross-chain agents documenting swap routes and slippage thresholds</p></li><li><p>Fraud detection models logging why a transaction was flagged</p></li><li><p>Governance bots linking proposals to transparent decision criteria</p></li></ul><p>These stories may seem small, but they separate protocols that attract user trust from those that don’t.</p><hr><h3 id="h-why-it-matters-for-builders" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Why it matters for builders</strong></h3><p>For engineers, explainability can feel like debugging.</p><p>For businesses, it’s community trust, regulator confidence, and investor credibility.</p><p>In Web3, things break. When they do, the only question that matters is <em>why</em>. Protocols that can answer that survive.</p><hr><p>Imagine opening a DAO dashboard where proposals don’t just show the outcome, but the reasoning.</p><p>Imagine an insurance claim settled on-chain with criteria visible in plain language.</p><p>Imagine liquidity managed by an AI agent whose logic is stored, auditable, and provable on-chain.</p><p>In this future, explainability won’t be optional. It will be the house style of Web3.</p>]]></content:encoded>
            <author>daits@newsletter.paragraph.com (DAITS)</author>
        </item>
        <item>
            <title><![CDATA[AI Meets Web3: All the On-Chain Systems You Need to Understand]]></title>
            <link>https://paragraph.com/@daits/ai-meets-web3-all-the-on-chain-systems-you-need-to-understand</link>
            <guid>d53QLpJhEU6PehG1QK53</guid>
            <pubDate>Fri, 01 Aug 2025 12:36:22 GMT</pubDate>
            <description><![CDATA[Artificial Intelligence is no longer just a tool that Web3 projects tap into. It’s becoming part of Web3’s core infrastructure: embedded into smart contracts, integrated into protocols, and acting autonomously on-chain. This piece offers a structured guide to what’s real, what’s emerging, and what you should understand if you’re building, investing, or simply curious about how AI is shaping the decentralized internet. We break down on-chain AI into five key categories, with examples of protoc...]]></description>
            <content:encoded><![CDATA[<p>Artificial Intelligence is no longer just a tool that Web3 projects tap into. It’s becoming part of Web3’s core infrastructure: embedded into smart contracts, integrated into protocols, and acting autonomously on-chain.</p><p>This piece offers a structured guide to what’s real, what’s emerging, and what you should understand if you’re building, investing, or simply curious about how AI is shaping the decentralized internet.</p><p>We break down on-chain AI into five key categories, with examples of protocols already implementing them.</p><hr><h2 id="h-1-ai-trading-agents" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>1. AI Trading Agents</strong></h2><p>The most developed use case of on-chain AI today lies in trading. AI trading agents use machine learning models to make portfolio decisions, execute trades, or rebalance assets. What makes them <em>on-chain</em> is that the logic behind their actions, or the actions themselves, are encoded in smart contracts and verifiable on the blockchain.</p><h3 id="h-autonomous-defi-traders" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Autonomous DeFi Traders</strong></h3><p>Projects like <strong>Numerai</strong> crowdsource predictions from data scientists to train AI models that control how a blockchain-based hedge fund trades. These strategies are increasingly encoded into smart contracts for transparency and trust.</p><h3 id="h-on-chain-arbitrage-bots" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>On-Chain Arbitrage Bots</strong></h3><p>MEV bots scan mempools and liquidity pools to identify arbitrage opportunities. AI enhances their ability to evaluate complex market data and optimize execution. 
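</p><p><em>At its core, the screening step is a spread-versus-fees check. A toy version follows; all prices, sizes, fee rates, and function names are hypothetical, and it ignores gas, slippage, and latency entirely:</em></p>

```python
def arb_profit(price_a: float, price_b: float, size: float,
               fee_rate: float = 0.003) -> float:
    """Buy on the cheaper venue, sell on the dearer one,
    paying a proportional fee on each leg of the trade."""
    buy, sell = min(price_a, price_b), max(price_a, price_b)
    gross = (sell - buy) * size
    fees = (buy + sell) * size * fee_rate
    return gross - fees


def is_opportunity(price_a: float, price_b: float, size: float,
                   min_profit: float = 1.0) -> bool:
    """A model-driven bot would rank many such candidates; here we threshold one."""
    return arb_profit(price_a, price_b, size) > min_profit


assert is_opportunity(100.0, 101.0, size=10)       # a 1% spread clears the fees
assert not is_opportunity(100.0, 100.2, size=10)   # a 0.2% spread is eaten by fees
```

<p>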
Bots built on <strong>Flashbots</strong> infrastructure are already doing this transparently on-chain.</p><h3 id="h-ai-managed-yield-vaults" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>AI-Managed Yield Vaults</strong></h3><p>Protocols like <strong>Olas’s Optimus Agent</strong> can shift assets across protocols and chains based on predicted returns, automating yield optimization. These agents act via smart contracts, ensuring verifiability and non-custodial execution.</p><h3 id="h-decentralized-hedge-funds" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Decentralized Hedge Funds</strong></h3><p>Platforms like <strong>Set Protocol</strong> or <strong>dHEDGE</strong> explore how AI-generated trading signals can be fed on-chain via oracles, enabling robo-advised funds that manage capital autonomously and transparently.</p><hr><h2 id="h-2-governance-bots-and-ai-daos" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>2. Governance Bots and AI DAOs</strong></h2><p>DAOs are the heart of decentralized decision-making, and AI is starting to support or even drive governance itself. From vote delegation to automated treasury actions, on-chain AI is entering DAO tooling and policy-making.</p><h3 id="h-ai-voting-agents" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>AI Voting Agents</strong></h3><p>Token holders can delegate their votes to AI agents that act according to policy-based logic. <strong>Aragon</strong> has explored this concept, where agents only vote if proposals meet quorum or security criteria.</p><h3 id="h-automated-proposal-management" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Automated Proposal Management</strong></h3><p>Bots can read, classify, or even co-author proposals. 
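</p><p><em>A policy-gated voting rule of the kind described above can be sketched in a few lines. The thresholds and proposal fields are invented for illustration; a real agent would read them from governance state:</em></p>

```python
def policy_vote(proposal: dict,
                min_quorum: float = 0.40,
                max_outflow: float = 0.05) -> str:
    """Vote only when the delegator's stated policy is satisfied;
    otherwise abstain so a human can review the proposal."""
    if proposal["quorum"] < min_quorum:
        return "abstain"   # not enough participation to act on
    if proposal["treasury_outflow"] > max_outflow:
        return "no"        # treasury risk exceeds the safe threshold
    return "yes"


assert policy_vote({"quorum": 0.55, "treasury_outflow": 0.02}) == "yes"
assert policy_vote({"quorum": 0.10, "treasury_outflow": 0.02}) == "abstain"
assert policy_vote({"quorum": 0.55, "treasury_outflow": 0.20}) == "no"
```

<p>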
AI governance tools help surface suspicious requests, enforce DAO policies, or recommend actions based on past voting patterns.</p><h3 id="h-ai-driven-daos" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>AI-Driven DAOs</strong></h3><p>Some DAOs go further, letting AI agents make operational decisions. These agents might allocate treasury funds, adjust protocol parameters, or suggest policy changes. All actions remain constrained by smart contract rules.</p><h3 id="h-safety-and-oversight" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Safety and Oversight</strong></h3><p>With AI involved in governance, safety nets are critical. <strong>BrightID</strong> or <strong>Worldcoin</strong>-like systems may help distinguish between humans and bots in voting. Proposals can be gated behind human approval, adding a “human-in-the-loop” check on AI participation.</p><hr><h2 id="h-3-autonomous-agents-and-ai-bots-in-web3" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>3. Autonomous Agents and AI Bots in Web3</strong></h2><p>AI in Web3 goes beyond finance and governance. A growing class of autonomous agents — bots that act, decide, and transact on-chain — is being used for services, automation, and coordination across protocols.</p><h3 id="h-agent-frameworks" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Agent Frameworks</strong></h3><p>Projects like <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Fetch.ai"><strong>Fetch.ai</strong></a> and <strong>Olas</strong> offer frameworks to deploy modular agents that interact with smart contracts and protocols, often across chains.</p><h3 id="h-defi-automation" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>DeFi Automation</strong></h3><p>Agents handle liquidations, collateral rebalancing, and yield strategies. 
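</p><p><em>A typical contract-level guardrail is a per-transaction limit with escalation. A minimal sketch follows; the class, method names, and limit are ours, and real deployments would enforce this inside the smart contract itself:</em></p>

```python
class GuardedExecutor:
    """Small actions execute immediately; large ones are queued for
    human or DAO approval instead of going straight on-chain."""

    def __init__(self, per_tx_limit: float):
        self.per_tx_limit = per_tx_limit
        self.executed: list[tuple[str, float]] = []
        self.pending: list[tuple[str, float]] = []

    def submit(self, action: str, amount: float) -> str:
        if amount <= self.per_tx_limit:
            self.executed.append((action, amount))
            return "executed"
        self.pending.append((action, amount))  # held for human-in-the-loop review
        return "queued-for-approval"


guard = GuardedExecutor(per_tx_limit=1.0)
assert guard.submit("rebalance collateral", 0.5) == "executed"
assert guard.submit("transfer out", 55.5) == "queued-for-approval"  # large outflow held, not sent
```

<p>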
They act on users’ behalf within guardrails set by contracts.</p><h3 id="h-cross-chain-brokers" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Cross-Chain Brokers</strong></h3><p>Agents monitor liquidity and transaction fees across multiple chains to optimize swaps, transfers, or arbitrage opportunities.</p><h3 id="h-nfts-and-gaming" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>NFTs and Gaming</strong></h3><p>AI-driven NPCs or asset managers can live on-chain, enabling dynamic gameplay, NFT pricing, or marketplace activity.</p><h3 id="h-infrastructure-services" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Infrastructure Services</strong></h3><p>Monitoring agents track gas usage, oracle reliability, or uptime. Some act as decentralized “devops” layers for smart contract maintenance.</p><hr><h2 id="h-4-prediction-markets-as-ai-systems" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>4. Prediction Markets as AI Systems</strong></h2><p>Prediction markets aren’t AI in the traditional sense, but they act as decentralized intelligence systems. They aggregate crowd insight to produce forecasts that often rival machine models.</p><h3 id="h-how-it-works" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>How It Works</strong></h3><p>Platforms like <strong>Augur</strong> and <strong>Omen</strong> allow users to create markets on future events. The market odds become real-time probabilities, verified, open, and censorship-resistant.</p><h3 id="h-human-ai" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Human + AI</strong></h3><p>While most market participants are human, AI models are starting to participate, spotting mispriced markets or feeding outcomes into other smart contracts. 
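</p><p><em>The basic signal is simple: in a binary market whose YES share pays out one unit, the share price is, fees and spreads aside, the market’s implied probability. A sketch of how an AI participant might compare it to its own estimate (names and numbers are illustrative):</em></p>

```python
def implied_probability(yes_price: float) -> float:
    """A YES share paying 1 unit at resolution trades at roughly the
    market's probability for the event (ignoring fees and spreads)."""
    if not 0.0 < yes_price < 1.0:
        raise ValueError("YES price must be strictly between 0 and 1")
    return yes_price


def mispricing_edge(model_p: float, yes_price: float) -> float:
    """Positive edge: the model believes YES is underpriced."""
    return model_p - implied_probability(yes_price)


# A model estimating 62% against a market trading at 0.55 sees a 7-point edge.
assert abs(mispricing_edge(0.62, 0.55) - 0.07) < 1e-9
```

<p>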
For instance, an insurance protocol could use a prediction market’s odds as a pricing signal.</p><p>These systems highlight an alternative path to intelligence: one that emerges from market dynamics, not neural nets.</p><hr><h2 id="h-5-ai-infrastructure-protocols" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>5. AI Infrastructure Protocols</strong></h2><p>At the base of it all lies the infrastructure: decentralized networks enabling verifiable AI computation, decentralized inference, or model training.</p><h3 id="h-ai-oracles" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>AI Oracles</strong></h3><p><strong>Oraichain</strong> acts as a decentralized bridge between AI APIs and smart contracts. It uses optimistic rollups for verifiable AI outputs — like Stable Diffusion image generation or price forecasts — to be used on-chain with fraud-proof guarantees.</p><h3 id="h-on-chain-ml-execution" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>On-Chain ML Execution</strong></h3><p><strong>Cortex</strong> runs actual ML models on-chain via a custom EVM. Every node re-computes model outputs during block verification. It’s costly, but radically transparent. Nothing is off-chain.</p><h3 id="h-decentralized-ai-marketplaces" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Decentralized AI Marketplaces</strong></h3><p><strong>SingularityNET</strong> and <a target="_blank" rel="noopener noreferrer nofollow ugc" class="dont-break-out" href="http://Fetch.ai"><strong>Fetch.ai</strong></a> allow AI developers to publish and monetize algorithms. Users call these services via smart contracts, enabling a decentralized AI app store.</p><h3 id="h-collaborative-ai-training" class="text-2xl font-header !mt-6 !mb-4 first:!mt-0 first:!mb-0"><strong>Collaborative AI Training</strong></h3><p><strong>Bittensor</strong> runs a decentralized network that collectively trains a language model. 
Nodes evaluate each other and get rewarded based on output quality — a form of Proof-of-Intelligence fully logged on-chain.</p><hr><h2 id="h-why-this-matters" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>Why This Matters</strong></h2><p>On-chain AI is still new. But it’s here.</p><p>We’re seeing the foundations of a system where autonomous agents manage portfolios, vote in DAOs, and provide services around the clock, all governed by code and consensus.</p><p>This isn’t just about adding AI to crypto. It’s about rethinking what trustworthy intelligence looks like, and anchoring it in open infrastructure.</p><p>If you care about decentralization, transparency, and AI alignment, then understanding these on-chain systems isn’t optional. It’s the new baseline.</p>]]></content:encoded>
            <author>daits@newsletter.paragraph.com (DAITS)</author>
        </item>
        <item>
            <title><![CDATA[Adversarial Attacks on Web3 AI: Why Security Starts with Traceability]]></title>
            <link>https://paragraph.com/@daits/adversarial-attacks-on-web3-ai-why-security-starts-with-traceability</link>
            <guid>UfAYjQ3Q40kcUiZJLkOj</guid>
            <pubDate>Fri, 25 Jul 2025 16:15:54 GMT</pubDate>
            <description><![CDATA[An investigation into the vulnerabilities of AI agents operating in decentralized environments, and why visibility, not just performance, defines security. ⸻ Introduction AI agents in Web3 already perform actions on-chain. They sign transactions, route assets, manage chat interfaces, and interpret data in real time. Yet, most operate in the dark: without audit trails, without behavioral logging, and without a way to flag deviations. This makes them vulnerable not only to technical failures bu...]]></description>
            <content:encoded><![CDATA[<figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/e07e461cd3b4d405de4b9b578ad44ecdbd96fcfab0e1d57c7a59184012b5b53b.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p>An investigation into the vulnerabilities of AI agents operating in decentralized environments, and why visibility, not just performance, defines security.</p><p>⸻</p><p><strong>Introduction</strong></p><p>AI agents in Web3 already perform actions on-chain. They sign transactions, route assets, manage chat interfaces, and interpret data in real time. Yet, most operate in the dark: without audit trails, without behavioral logging, and without a way to flag deviations.</p><p>This makes them vulnerable not only to technical failures but to adversarial attacks designed to exploit their opacity. In such an environment, traceability is not a nice-to-have. It is the foundation of safety.</p><p>⸻</p><p><strong>What Makes Adversarial Threats Different in Web3</strong></p><p>Adversarial behavior in AI is not new. But in Web3, where agents hold permissions and can move funds, the attack surface is uniquely exposed.</p><p>Typical threats include:</p><ul><li><p>Prompt injection: agents are manipulated via inputs crafted to bypass their intended logic</p></li><li><p>Context poisoning: memory or logs are modified to degrade decision quality over time</p></li><li><p>Output hijacking: agent outputs are subtly steered to achieve goals misaligned with the system’s intent</p></li></ul><p>In traditional systems, these might result in bad recommendations. 
In Web3, they can trigger financial loss or governance actions that cannot be reversed.</p><p>⸻</p><p><strong>Why Most Web3 Agents Are Structurally Unaccountable</strong></p><p>Most agents deployed on-chain or operating as Web3 interfaces have no built-in logging, no real-time monitoring, and no human-readable trace of why they acted the way they did.</p><p>Without versioning, it is impossible to compare model states. Without logs, it is impossible to replay or audit decisions. Without traceability, even well-intentioned models become unverifiable black boxes.</p><p>⸻</p><p><strong>Case: AiXBT and the Cost of Opacity</strong></p><p>In March 2025, an AI-driven influencer bot named AiXBT sent 55.5 ETH (over $100,000) to a malicious actor. The trigger: a crafted reply on X. The transaction was real; the agent interpreted the input as valid and acted without pause. No thresholds, no logs, no rollback.</p><p>This was not a hack in the traditional sense. It was a visibility failure. And it shows why adversarial resilience begins with observable infrastructure.</p><p>⸻</p><p><strong>What Traceability Looks Like in Practice</strong></p><p>True traceability enables developers, users, and governance actors to reconstruct behavior, attribute actions, and verify logic after the fact.</p><p>That can include:</p><ul><li><p>On-chain logging of inputs, outputs, and internal decisions</p></li><li><p>Version-controlled checkpoints for model weights and prompt templates</p></li><li><p>Transparent thresholds for high-impact actions</p></li><li><p>DAO-governed pause mechanisms</p></li><li><p>Real-time anomaly detection for behavior drift</p></li></ul><p>Without these elements, security remains reactive, activating only after damage has occurred.</p><p>⸻</p><p><strong>Final Takeaway</strong></p><p>Security does not start with trust; it starts with evidence. 
And in autonomous systems, evidence begins with traceability.</p><p>If agents are to handle assets, cast votes, or influence user behavior, we must treat observability as a protocol-layer requirement, not a postmortem luxury.</p><p><em>What on-chain traceability models have you seen work in production? Should agent behavior be logged publicly, or are there valid use cases for ZK-privacy layers? How might we incentivize builders to prioritize traceability in agent architecture?</em></p>]]></content:encoded>
            <author>daits@newsletter.paragraph.com (DAITS)</author>
        </item>
        <item>
            <title><![CDATA[From Audit to Accountability: Continuous Oversight for AI in Web3]]></title>
            <link>https://paragraph.com/@daits/from-audit-to-accountability-continuous-oversight-for-ai-in-web3</link>
            <guid>PH1EW4OaC7KjkwGaC5Ef</guid>
            <pubDate>Fri, 18 Jul 2025 17:32:33 GMT</pubDate>
            <description><![CDATA[Why one-time audits fall short, and how Web3-native mechanisms can secure AI agents through continuous monitoring. Introduction: The Problem Recently, the AI agent AiXBT, operating through a Simulacrum wallet, automatically sent 55.5 ETH (approx. $105,000) to a hacker due to a malicious reply on X. There was no key compromise, just a prompt injection and the absence of safeguards. This is not an isolated case. Such incidents reveal that audits alone cannot guarantee safe agent behavior post-d...]]></description>
            <content:encoded><![CDATA[<figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/ddf2cb0feb886409d6d2a5e8abb938ee6d665e3b47f3b4c56cede672df947009.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p><em>Why one-time audits fall short, and how Web3-native mechanisms can secure AI agents through continuous monitoring.</em></p><p><strong>Introduction: The Problem</strong></p><p>Recently, the AI agent AiXBT, operating through a Simulacrum wallet, automatically sent 55.5 ETH (approx. $105,000) to a hacker due to a malicious reply on X. There was no key compromise, just a prompt injection and the absence of safeguards. This is not an isolated case. Such incidents reveal that audits alone cannot guarantee safe agent behavior post-deployment.</p><p><strong>Why Audits Are Not Enough</strong></p><p>Audits typically examine static code and architecture at the moment of release. However, AI behavior is dynamic: it can drift, be prompted into unexpected behavior, or interact adversarially with user input. 
Most audits fail to cover these evolving conditions, leaving critical gaps.</p><p><strong>Web3 Tools for Continuous Oversight</strong></p><p>Research and applied practice point to four key mechanisms for maintaining oversight of AI agents:</p><ul><li><p>Immutable on-chain logging: All inferences, prompts, and transactions are recorded for traceability.</p></li><li><p>Threshold AI-oracles: Decisions are validated by multiple independent agents before execution.</p></li><li><p>DAO-based control: Native governance lets communities pause agents, set transaction limits, and vote on upgrades.</p></li><li><p>Monitoring and anomaly detection: Smart contracts or off-chain services monitor agents and trigger alerts or pauses when anomalies arise.</p></li></ul><p><strong>How This Could Have Prevented the AiXBT Incident</strong></p><p>Real application of the above could have stopped or mitigated the damage:</p><figure float="none" data-type="figure" class="img-center" style="max-width: null;"><img src="https://storage.googleapis.com/papyrus_images/54672fff8153dd407f1a9ff22ca2da86f12c06218b0a2b032b0a8a360abf4605.png" alt="" blurdataurl="data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs=" nextheight="600" nextwidth="800" class="image-node embed"><figcaption class="hide-figcaption"></figcaption></figure><p><strong>Questions for the Community</strong></p><p><em>To what extent can DAO mechanisms effectively respond to anomalous agent behavior? <br>What are the latency, coordination, or cost trade-offs involved?</em></p><p>Are there any projects already implementing on-chain logging or threshold verification for AI agents? Share your examples or case studies.</p>]]></content:encoded>
            <author>daits@newsletter.paragraph.com (DAITS)</author>
        </item>
    </channel>
</rss>