
Stablecoin giant Tether.io has launched QVAC Genesis I, a 41-billion-token synthetic dataset designed for training open, STEM-oriented AI models, along with QVAC Workbench, a privacy-focused AI app that runs fully on users’ devices.
Tether's goal is to decentralize AI and shift control away from major tech firms such as OpenAI and Google by enabling private, on-device intelligence with no cloud dependency or central gatekeepers.
The dataset is fully synthetic (not scraped from the public web) and has been validated on science and engineering benchmarks, while the app also supports peer-to-peer delegated inference, so heavy computations can be offloaded securely between devices.
Finally, Tether hints at integration of blockchain rails (e.g., its stablecoin USDT and Bitcoin) so that AI agents could eventually transact autonomously, suggesting this effort is part of its broader ambition to influence both finance and intelligence infrastructure.
A recent governance proposal to cut NEAR Protocol’s annual token emission rate from 5% down to 2.5% didn't pass. The vote achieved a simple majority, but failed to hit the required approval threshold of 66.67% under NEAR’s governance rules.
The proposal aimed to reduce inflation given low fee burns and limited usage of the network. The practical background: NEAR issues about $140M annually in tokens to secure the chain, while its total value locked (TVL) is around $157M and its total fees year-to-date are only about $3.2M, suggesting high inflation relative to usage.
For context, Solana’s estimated annual issuance is roughly $5.5B, but it supports a far larger, more active DeFi ecosystem with around $11B in TVL. From a purely economic standpoint, NEAR is definitely “overpaying” for security.
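As a quick back-of-the-envelope illustration (a sketch using only the approximate figures cited above, not a rigorous tokenomics model), the ratios look like this:

```python
# Back-of-the-envelope comparison of annual security spend (token issuance)
# versus on-chain activity, using the approximate figures cited above.

chains = {
    # chain: (annual_issuance_usd, tvl_usd, fees_ytd_usd)
    "NEAR":   (140e6, 157e6, 3.2e6),
    "Solana": (5.5e9, 11e9, None),  # YTD fee figure not cited above
}

for name, (issuance, tvl, fees) in chains.items():
    line = f"{name}: issuance/TVL = {issuance / tvl:.2f}x"
    if fees is not None:
        line += f", issuance/fees = {issuance / fees:.0f}x"
    print(line)
```

On these numbers, NEAR spends roughly 0.9x its TVL (and about 44x its year-to-date fees) on issuance each year, versus roughly 0.5x of TVL for Solana, which is the gap the "overpaying for security" argument rests on.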
Despite the outcome, there are signs that core contributors may proceed with implementation via a software upgrade (nearcore v2.9.0), carrying out the reduction regardless of the formal vote result and sparking concerns about governance integrity and decentralization.
A new research paper demonstrates that Large Language Models (LLMs) can experience "brain rot," much like humans do from continual doomscrolling and consumption of junk online content.

Most importantly, the damage and cognitive decline observed in LLMs appeared to be lasting and did not significantly improve even after retraining with high-quality data. Read the full paper to learn more.
Alea Research has published a deep dive into EigenCloud, explaining how it builds on EigenLayer's restaking model to introduce two new services: EigenAI (for verifiable large-language-model inference) and EigenCompute (for verifiable off-chain compute). Together they address one core problem: many AI services operate as black boxes (prompts, models, and responses may be altered or unverifiable), making them unsuitable for high-stakes scenarios.
EigenAI aims to ensure “untampered prompts, untampered model, untampered output”. By offering deterministic, verifiable LLM inference, it serves industries and use cases that require identical outputs for the same prompts and parameters.
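To make the determinism requirement concrete, here is a minimal illustrative sketch (not EigenAI's actual API; the function names and flow are hypothetical) of how a verifier could re-run an inference and check it against a published commitment:

```python
import hashlib
import json

# Illustrative only: the core idea behind deterministic, verifiable inference.
# A prover publishes a hash commitment over (model, prompt, params, output);
# any verifier can re-run the same deterministic inference and compare.

def commitment(model_id: str, prompt: str, params: dict, output: str) -> str:
    """Hash the full inference tuple so it can be checked byte-for-byte."""
    payload = json.dumps(
        {"model": model_id, "prompt": prompt, "params": params, "output": output},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(model_id, prompt, params, claimed_output, claimed_commitment, run_model):
    """Re-execute deterministically (e.g., temperature=0, fixed seed) and compare."""
    local_output = run_model(model_id, prompt, params)
    return (
        local_output == claimed_output
        and commitment(model_id, prompt, params, local_output) == claimed_commitment
    )
```

The point is that with deterministic decoding (temperature zero, fixed seed), any third party running the same model can reproduce the exact output and detect tampering with the prompt, the model, or the response.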
On the other hand, EigenCompute allows developers to upload agent/app logic (e.g., a Docker image) and execute it in Trusted Execution Environments (TEEs) with cryptoeconomic backing (slashing, staking) for integrity.
Practical use cases, where verifiability of inference and execution is essential, include autonomous trading agents, agent-to-agent payments, prediction markets, digital companions, etc.
CoinDesk offers a sneak peek into a Web3 + AI vertical we may not often consider: AI facilitating crypto crimes. The media outlet reports that state-backed hackers from North Korea are increasingly using artificial intelligence tools to target cryptocurrency systems.
Hackers are using AI tools for reconnaissance, scanning smart-contract code, identifying vulnerabilities, executing exploits across multiple blockchains, conducting phishing attacks and laundering stolen crypto funds. Because open-source smart-contract code is widely available, attackers using AI can spot weaknesses faster and replicate successful exploits across protocols, scaling attacks more efficiently than before.
While much of the crypto-security community has been focused on the threat of quantum computing, the article argues that AI is the more immediate and active threat to blockchain ecosystems.
Thank you for reading! The next edition is coming tomorrow.
I invite you to subscribe to The Web3 + AI Newsletter to stay in the loop on the hottest dAI developments.
I'm looking forward to connecting with fellow Crypto x AI enthusiasts, so don't hesitate to reach out to me on social media.
Disclaimer: None of this should or could be considered financial advice. You should not take my word for it; rather, do your own research (DYOR) and share your thoughts to create a fruitful discussion.
Albena Kostova-Nikolova