

One of the teams executing best in agentic commerce right now is Rye. Rye added a buy-anything skill to the open-source AI agent framework OpenClaw, enabling agents to purchase products directly from Amazon and Shopify via Rye's Universal Checkout API.
There are currently three approaches to agentic commerce:
Protocol-based (ACP, UCP): Merchants must opt into the protocol. OpenAI's ChatGPT Checkout is the poster child — roughly 12 Shopify merchants adopted it before it effectively stalled. Waiting for millions of merchants to opt in is a structural scaling bottleneck.
Browser scraping: Agents navigate checkout pages directly. As CommerceBench demonstrated, this approach is fragile against anti-bot protections, inaccurate, and carries the security risk of handing login credentials to the agent.
API-first checkout: The agent passes a product URL and a payment token to a backend API, and Rye handles everything downstream. The buy-anything skill falls here.
Here's how buy-anything works. The user shares a product URL. The agent collects shipping information, and card details are tokenized through Stripe. The card number never touches the chat, Rye's API, the agent, or the LLM provider. Rye's API then handles product validation, price confirmation, tax calculation, shipping, and order placement in a single flow. V2 added Shopify support, order status tracking, and spending limit controls.
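The privacy property described above, where the raw card number never enters the chat, the agent, or the API, can be sketched as a payload-construction step. This is a hypothetical illustration, not Rye's actual API: the `build_checkout_request` helper, its field names, and the token format are all assumptions.

```python
import json
from dataclasses import dataclass

@dataclass
class ShippingInfo:
    name: str
    address: str
    city: str
    postal_code: str
    country: str

def build_checkout_request(product_url: str, shipping: ShippingInfo,
                           payment_token: str) -> dict:
    """Assemble the payload an agent would send to an API-first checkout
    backend. Note what is absent: no card number, no CVC, only an opaque
    token produced client-side (e.g. by a Stripe tokenization flow)."""
    return {
        "product_url": product_url,      # the only merchant-side input
        "payment_token": payment_token,  # opaque; useless outside the PSP
        "shipping": shipping.__dict__,
    }

raw_card_number = "4242424242424242"   # stays on the user's device
token = "tok_1AbCdEfGhIjKlMnO"         # hypothetical Stripe-style token

payload = build_checkout_request(
    "https://www.amazon.com/dp/B000000000",
    ShippingInfo("Ada Lovelace", "1 Analytical Way", "London", "EC1A 1BB", "GB"),
    token,
)
# The serialized request carries the token but never the card number.
assert raw_card_number not in json.dumps(payload)
```

The point of the sketch is the asymmetry: the agent forwards a URL and a token, and everything sensitive is resolved server-side by the checkout backend and the payment processor.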
The skill works in practice. In a live demo on the Retailgentic podcast, it passed three challenges specifically designed to break agentic commerce systems, including a complex Amazon purchase with multiple size and expiration options.
The key design decision is that the agent never scrapes or visits the merchant page. It passes only the URL to Rye, and Rye handles the rest. This is precisely why API-first checkout works reliably where browser scraping fails.
Open questions remain. Rye is a centralized intermediary, meaning full trust in Rye's API is required and a single point of failure exists. Spending limits are in place, but the risk of prompt injection manipulating the agent persists. The per-transaction fee structure hasn't been disclosed. Still, with the protocol-based approach stuck behind the structural bottleneck of merchant opt-in, API-first checkout appears to be the only approach that actually works today.
Simon Taylor published Agents Will Use Cards First, Then Stablecoins on Fintech Brainfood, arguing that the "stablecoins will kill Visa" thesis is mostly wrong.
The core argument is that cards and stablecoins are complementary, not competitive. Cards authorize the movement of money; stablecoins move the money. Cards are accepted everywhere and have mature controls (single-use, budget caps, merchant restrictions), but settlement is slow. Stablecoins settle instantly and are programmable, but acceptance is still nearly nonexistent.
Taylor sees agent payments evolving in three stages:
Stage 1 (now): Virtual cards. Virtual cards issued by Ramp or Brex are powerful tools for agents. Single-use cards, budget caps, merchant category restrictions, and single-merchant locks (e.g., usable only at Anthropic) are already mature control mechanisms. The critical inflection point is seeing agents as the customer — not the developer, not the human. Agents become a new customer type. Patrick Collison has said agents will make orders of magnitude more payments than humans. About 20 startups including agentcard.sh, privacy.com, and Stripe Issuing are entering this market.
Stage 2 (next): Cards settling via stablecoins. The user experience stays the same — cards — but the settlement infrastructure changes underneath. Merchants currently wait days for settlement, up to 30 days for cross-border transactions. Stablecoin settlement is instant, 24/7, and global. If an agent buys expensive AI tokens from Anthropic and suddenly scales up, cash can run out before revenue arrives. Instant settlement accelerates the entire cycle. Stablecoins don't need to replace cards. They make cards work better.
Stage 3 (later): Stablecoin-native wallets. Imagine a business running hundreds of agents. With virtual cards, the master agent either makes all purchases on behalf of sub-agents or pays $5 each time to create a new card. With stablecoin wallets, sub-wallets can be spun up as needed, as often as needed. Policy compliance can be verified in real time rather than after the fact. Programmability is the key: a master agent creating sub-wallets with fine-grained spending rules, across borders, without permission, at machine speed — that's something cards simply cannot do.
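Taylor's Stage 3 programmability claim, a master agent minting sub-wallets with fine-grained spending rules, can be sketched as a toy model. Nothing here is a real wallet SDK; every class and method name is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SubWallet:
    owner: str
    budget_cents: int             # hard cap fixed at creation
    allowed_merchants: frozenset  # empty set = any merchant
    spent_cents: int = 0

    def spend(self, merchant: str, amount_cents: int) -> bool:
        """Enforce policy at spend time (real time), not after the fact."""
        if self.allowed_merchants and merchant not in self.allowed_merchants:
            return False
        if self.spent_cents + amount_cents > self.budget_cents:
            return False
        self.spent_cents += amount_cents
        return True

class MasterWallet:
    def __init__(self):
        self.sub_wallets = []

    def mint(self, owner, budget_cents, allowed_merchants=()):
        """Spin up a sub-wallet instantly, with no per-card issuance fee."""
        w = SubWallet(owner, budget_cents, frozenset(allowed_merchants))
        self.sub_wallets.append(w)
        return w

master = MasterWallet()
agent_wallet = master.mint("research-agent-7", budget_cents=2_000,
                           allowed_merchants={"anthropic.com"})
assert agent_wallet.spend("anthropic.com", 1_500)      # within policy
assert not agent_wallet.spend("anthropic.com", 1_000)  # would exceed budget
assert not agent_wallet.spend("example.com", 100)      # merchant not allowed
```

The contrast with virtual cards is in `mint`: creating a policy-scoped sub-wallet is a local operation, whereas each new card carries issuance cost and an after-the-fact reconciliation cycle.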
Taylor raises another insight: agents can become the new merchants. A vibe-coder builds a tool that presents financial data in four hours. No website, no terms of service, no legal entity. Another developer's agent calls it 40,000 times per week, generating $40 in weekly revenue. Existing payment processors struggle to onboard these "merchants" — not because the technology is lacking, but because onboarding a merchant means taking on that merchant's risk.

Anyone interested in crypto has probably thought about — or actually tried — running a prediction market bot with some kind of AI edge. I'd been vaguely thinking along these lines myself, until I stumbled on this paper co-authored by Professor Yongjae Lee (whose class I took during my undergrad years at UNIST) together with Kalshi. In one sentence: the paper proposes using LLMs as an auxiliary tool for prediction market trading.
First, some background.
In prediction markets, lead-lag relationships can exist between different events. For example, when the price of a "Japan recession" event moves first, the price of a "U.S. GDP growth" event may follow days later. Identifying these relationships lets you observe the leader's movement and bet on the follower for profit.
The problem is that the standard statistical method for finding these relationships — Granger causality — produces too many false positives. Granger causality, put simply, is a statistical test that checks whether past data from event A helps predict the future of event B. For instance, it might flag a statistical correlation between "Taylor Swift Tour" and "U.S. GDP," which is pure coincidence. Betting on these spurious links leads to large losses. In other words, Granger causality provides only weak evidence of actual temporal causation.
The paper proposes a better solution: a two-stage framework.
Stage 1 (statistical discovery): Run Granger causality tests on Kalshi prediction market price data and extract the top 100 candidate pairs.
Stage 2 (LLM semantic filtering): For each of the top 100 pairs, ask the LLM: "Is there a plausible economic transmission mechanism between these two events?" The LLM evaluates the presence, strength, direction, and reasoning behind the mechanism, then re-ranks the pairs by plausibility. Only the top 20 enter the portfolio.
The prompt includes the instruction: "Be skeptical — many statistical correlations are spurious." The key point is that the LLM isn't making better predictions. It's filtering out fragile, spurious relationships.
The results are striking. Compared to the pure statistical approach, adding LLM filtering yields:
Win rate: 51.4% → 54.5% (modest improvement)
Average loss size: $649 → $347 (46.5% reduction)
Total PnL: $4,100 → $12,500 (more than tripled)
The driver isn't win-rate improvement — it's loss reduction. LLM filtering cuts the average magnitude of losing trades nearly in half. Pairs that are statistically significant but lack a real economic mechanism are precisely the ones that cause large losses, and the LLM filters them out.
LLM filtering is most valuable during large market moves. When the leader event moves by 10 or more points, the win rate jumps from 53.8% under the statistical approach to 71.4% under the hybrid approach.
There are also cases where the LLM surfaces high-value pairs that statistics miss. The "Japan recession to U.S. GDP growth" pair had a Granger rank of #71 — well outside the top-20 cutoff — but the LLM recognized the mechanism: "recessions weaken domestic demand, and through trade linkages and financial spillovers, downturns in major economies drag on overall growth." It elevated the pair to #5, and it generated $700 in profit.
These results held consistently across holding periods (1 to 21 days), model variants (GPT-5-nano, GPT-5-mini), and post-training-cutoff evaluation windows. What the paper ultimately demonstrates is that the LLM functions not as a better predictor, but as an auxiliary tool for separating signal from noise in the gaps that statistics alone leave behind. Given the chance, I'd like to try incorporating this methodology into an actual trading strategy.



Web Proof, Make more data verifiable
API for everything without permission (and legally)

10 Weeks of Journey into vFHE
I've been working on a deep dive into vFHE (verifiable Fully Homomorphic Encryption) for the last 10 weeks.

I Read the Sentient Whitepaper So You Don't Need To
Sentient, Platform for 'Clopen' AI Models
