
I didn’t plan to start building a trading app.
Like most experiments, it began with curiosity and an airdrop.
I’d been a subscriber to Bankr Bot, one of those AI-native trading assistants that lets you talk your way into a position: clever natural language to go long, check exposure, even manage your risk. One morning, I woke up to find I’d received an AVNT airdrop from Avantis for being a user of the bot. That tiny nudge got me thinking: what if I built my own?
Not another “bot,” but something that could learn: something that didn’t just execute, but remembered.
I’ve run fintech and scaling teams through chaos, but I’m not a coder. That means most of my ideas get bottlenecked at the “who can build this” stage.
Until now.
Thanks to AI-native tools (Cursor and multiple LLMs), you can increasingly build through natural language. So I jumped in, not just to ship a product, but to test a thesis: could I strip emotion from trading and layer in structured learning?
Spoiler: AI makes that possible, but it’s far from smooth.
You learn quickly that AI has its own kind of volatility: death loops, forgotten instructions, prompt drift. I’d spend an hour refining logic, only for the AI agent to suddenly “forget” the rules that kept its code intact. Sometimes the agent would rewrite itself into silence. Other times it would freeze halfway through a trade simulation.
It’s humbling and addictive.
You think “no-code” means no pain. In reality, it’s still engineering, just at a higher level of abstraction. You’re debugging context, not syntax.
After losing a few simulated perps (and too many hours), I realised the real problem wasn’t volatility or emotion; it was forgetfulness.
Most AI tools treat every run like the first one. You start clean, the agent resets, and the last ten mistakes vanish. Human traders at least remember their bad calls; AI ones don’t. That’s where the loop begins.
So I decided to add memory.
I used Honcho, an AI-native memory library from Plastic Labs. It lets an app keep a searchable history of every action and outcome, a sort of “AI recall layer.” I plugged it into my small trading prototype on Base, experimenting with perps on Avantis.
Now, instead of trading blind each round, the app could look back and say:
“Last time we took a 5x long on AVNT with volatility >15%, we lost $250.50. Maybe shrink size by 20%.”
Nothing fancy. But suddenly, the system wasn’t reactive; it was adaptive.
It wasn’t learning alpha; it was learning restraint.
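Under the hood, that lookback step is conceptually simple. Here is a minimal sketch of the idea in plain Python; the class and function names are invented for illustration, and this is not Honcho’s actual API:

```python
from dataclasses import dataclass

@dataclass
class TradeRecord:
    symbol: str
    leverage: float
    volatility: float  # e.g. 0.15 means 15%
    pnl: float         # realised profit/loss in dollars

class TradeMemory:
    """Toy recall layer: log every outcome, let new trades query the past."""
    def __init__(self):
        self.records: list[TradeRecord] = []

    def log(self, record: TradeRecord) -> None:
        self.records.append(record)

    def similar(self, symbol: str, min_vol: float) -> list[TradeRecord]:
        # Crude similarity: same symbol, comparable or higher volatility.
        return [r for r in self.records
                if r.symbol == symbol and r.volatility >= min_vol]

def adjust_size(base_size: float, memory: TradeMemory,
                symbol: str, vol: float) -> float:
    """Shrink position size by 20% if similar past setups lost money."""
    past = memory.similar(symbol, vol)
    if past and sum(r.pnl for r in past) < 0:
        return base_size * 0.8
    return base_size

memory = TradeMemory()
# "Last time we took a 5x long on AVNT with volatility >15%, we lost $250.50."
memory.log(TradeRecord("AVNT", leverage=5, volatility=0.16, pnl=-250.50))

size = adjust_size(100.0, memory, "AVNT", vol=0.15)  # shrinks to 80.0
```

The design choice that matters is the last function: the memory doesn’t predict anything, it just biases sizing away from setups that have already hurt you.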
People love to talk about how AI makes building easy. They rarely talk about how it makes building weird.
You start with momentum and everything feels like magic; then the AI rewrites itself, forgets your architecture, or breaks something subtle. You go from “I’ve built a working prototype” to “why does my code now print emojis instead of orders?”
That’s the gap no one sees in the demo videos: the lag between intention and outcome.
But that friction taught me a lot. It made me slow down, document, and version-control prompts like code. It forced discipline. AI gives you incredible speed, but speed without structure collapses.
Memory, again, was the fix. Not just for the trading logic, but for me. (thanks Lama for the tip)
With memory active, my small Base trading app started behaving differently.
My app’s output:

It began surfacing patterns that I didn’t consciously track, things like:
Trades placed late on Fridays underperformed by ~12%.
Higher-volatility setups needed smaller bet sizes.
Momentum plays worked better when spaced out by 24 hours.
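Pattern-surfacing like this can fall out of simple aggregation over the logged history. A rough sketch of the late-Friday check, using a hard-coded toy log rather than the real memory store (the numbers here are made up for illustration):

```python
from datetime import datetime
from statistics import mean

# Toy trade log: (timestamp, pnl in percent). In the real app these
# would be pulled out of the memory store, not written by hand.
trades = [
    (datetime(2025, 10, 3, 21, 0), -0.8),    # late Friday
    (datetime(2025, 10, 10, 22, 30), -1.4),  # late Friday
    (datetime(2025, 10, 7, 10, 0), 0.9),
    (datetime(2025, 10, 8, 14, 0), 0.5),
]

def is_late_friday(ts: datetime) -> bool:
    # weekday() returns 4 for Friday; "late" here means after 18:00.
    return ts.weekday() == 4 and ts.hour >= 18

late = [pnl for ts, pnl in trades if is_late_friday(ts)]
rest = [pnl for ts, pnl in trades if not is_late_friday(ts)]

# Underperformance gap in percentage points.
gap = mean(rest) - mean(late)
print(f"Late-Friday trades underperform by {gap:.1f} pct points")
```

Nothing here is machine learning; it’s bookkeeping. The point is that once every trade is remembered, even trivial group-by statistics start reading like insight.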
None of this was groundbreaking, but seeing an app “remember” your scars and adjust its behaviour accordingly felt genuinely new.
It felt less like a tool and more like a trading partner.
Perp trading has become one of the strongest metas in crypto: a constant, liquid feedback loop for builders.
As volumes climb past $1T+ a month, we’re seeing the limits of human speed and emotional discipline.
AI gives us automation; memory gives us reflection.
Together, they open up a new design space, one where trading systems don’t just react faster but learn smarter.
Platforms like Base make it cheap and composable to test these ideas, while Avantis gives a clear surface for interaction.
We’re still early, but it’s easy to imagine a world where onchain agents evolve strategies across time, not by being told what to do, but by remembering what not to repeat.
I used to think “building in crypto” meant writing code.
Now I see it more as assembling intelligence: shaping systems that can reason, recall, and adapt.
That doesn’t mean it’s easy. The scars are real: late nights chasing AI loops, prompts that go rogue, memory stores that silently wipe themselves.
But when it works, even once, it’s addictive.
It’s not about replacing coders; it’s about giving more people the language of creation. The ability to experiment with intelligence itself.
That’s what keeps me hooked.
This started with an airdrop, a bit of curiosity, and too much coffee.
Now, it feels like the start of a new pattern: AI systems that remember their mistakes, just like traders do.
If you’re exploring AI-native agents, onchain memory, or DeFi automation, I’d love to compare notes.
Matt Dyer — Builder exploring the edges of AI and crypto.