
Special thanks to @jalah___, @DanDeFiEd and @bobajeanjacques for reviewing the post!
The journey toward true AI agents has been fascinating to watch unfold. Like many others, I've experimented with building various AI-powered tools since the early days of GPT - from language tutors to workout planners and productivity shortcuts. We've called many of these creations "agents," myself included.
But as I've explored more about agent frameworks and their capabilities, I've begun to see a distinction between what we've built so far and what's truly possible. The real potential for AI agents lies not in following commands, but in autonomous decision-making.
And nowhere does this potential shine brighter than in DeFi. While most frameworks have naturally gravitated toward Web2 productivity use cases, DeFi presents a unique opportunity. Managing assets across DeFi protocols generates a constant stream of information that even the most dedicated human can't efficiently process. This is where I believe truly autonomous agents could transform our relationship with DeFi protocols.
There are actually competing definitions in the industry. Anthropic defines agents as "systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks." This is also how LangChain - currently the biggest agent development framework - defines them.
OpenAI, however, describes "agents" as level 3 of their "5 levels of AI development strategy to AGI," a stage we haven't reached yet. The key distinction in OpenAI's definition is that true agents must have the ability to act autonomously over extended periods.

I find OpenAI's definition more compelling. The ability to initiate actions without human prompting is what separates a truly helpful agent from a sophisticated but ultimately reactive tool. This autonomy is the critical breakthrough we're working toward.
P.S. There are different interpretations of the boundaries between the AI levels in OpenAI's proposed framework. I found this post an easy digest and would recommend it to others as well!
Today, AI development primarily focuses on enhancing LLMs with "smart frameworks" - systems that allow LLMs to access external tools.
MCP (Model Context Protocol) is a powerful example of this approach - an open standard that leverages the power of the open-source community to let anyone build and plug specialized tools into LLMs. With frameworks like MCP, you can install web search functionality alongside "chain of thought" reasoning, effectively turning your desktop Claude (an LLM client that supports MCP) into a powerful research assistant. These frameworks allow for integration of specialized knowledge bases, use of other software, and other extensions to supercharge any LLM client.
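To make this concrete, here is a minimal sketch of an MCP tool server, assuming the official TypeScript SDK (@modelcontextprotocol/sdk); the "web_search" tool and its placeholder backend are purely illustrative:

```typescript
// Minimal MCP server exposing a single "web_search"-style tool.
// Sketch only: the search backend here is a placeholder.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "research-tools", version: "0.1.0" });

// Register a tool the LLM client (e.g. desktop Claude) can discover and call.
server.tool(
  "web_search",
  { query: z.string().describe("Search query") },
  async ({ query }) => {
    // Placeholder: call your search API of choice here.
    const results = `Top results for: ${query}`;
    return { content: [{ type: "text", text: results }] };
  }
);

// Expose the server over stdio so an MCP-aware client can connect to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Once an MCP-aware client connects to a server like this, the LLM can discover the tool and decide when to call it - but, as noted below, only after a human prompt kicks things off.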

While impressive, these systems remain fundamentally reactive. The LLM still needs to be triggered by a human, then it selects from its “toolbox” to generate outcomes. They're more capable assistants, but still just sophisticated responders waiting for human prompts.
The key gap, compared to what the OpenAI framework suggests, is that we don't yet have agents "working on their own" - most still passively wait for our commands, or keep a "human in the loop" at best.
Our goal is to build fully autonomous agents: ones that can observe and act without relying on our commands. That would be the real breakthrough. So how do we build truly autonomous agents instead of passive chatbots?
If you really think about it, the basics might not be that different. Fully autonomous systems (like the human mind) are also somewhat reactive systems, but with way more triggers. Our actions are prompted by things like hunger, mood, weather, or social cues. We have sensors all over our bodies constantly taking in information, making us respond without having to think about it. Building truly autonomous AI needs the same thing - a network of "triggers" that start actions based on what's happening, not just direct commands. We're not trying to build AI that acts randomly, but AI that knows when to act based on the right signals.
Imagine if we could give AI all the "triggers" we need: alerts when it's time to work, notifications when something important happens, updates when news breaks, or even subtle cues when something interesting occurs. With these triggers in place, we could use LLMs to process this information and reason - creating decision-making patterns remarkably similar to human behavior.
This is ambitious but achievable. While continuously monitoring everything seems less practical today due to cost constraints, we can build toward it systematically. MCP gives us an open interface for tool usage - what I see coming next is a similar open-source interface for triggers. Imagine a protocol where users can plug in their own "triggers" just like they select their own tools today. Users could customize which signals matter to them: some might want agents that respond to market volatility, others to social media trends.
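To sketch what such an open trigger standard could look like - purely hypothetical interfaces I'm making up for illustration, not an existing spec:

```typescript
// Hypothetical sketch of a pluggable trigger interface, analogous to how MCP
// standardizes tools. All names and shapes here are illustrative only.
interface TriggerEvent {
  source: string;     // e.g. "price-feed", "social", "onchain"
  kind: string;       // e.g. "volatility-spike", "utilization-change"
  payload: unknown;   // raw signal data for the agent to reason over
  observedAt: Date;
}

interface Trigger {
  id: string;
  // Start watching the underlying signal; call `emit` whenever it fires.
  start(emit: (event: TriggerEvent) => void): void;
  stop(): void;
}

// An agent runtime subscribes to whichever triggers the user installed and
// routes each event into the LLM's reasoning loop instead of waiting for a
// human prompt.
class AgentRuntime {
  constructor(
    private triggers: Trigger[],
    private onEvent: (e: TriggerEvent) => Promise<void>
  ) {}

  run(): void {
    for (const t of this.triggers) t.start((e) => void this.onEvent(e));
  }

  shutdown(): void {
    for (const t of this.triggers) t.stop();
  }
}
```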
So what does this have to do with DeFi? DeFi represents the ideal environment for autonomous AI because of its transparent, permissionless nature. With DeFi, agents can directly verify everything - from reading smart contract code and checking audit reports to analyzing on-chain data. This allows them to genuinely understand how protocols work, assess risks independently, and make informed recommendations without relying on trusted third parties.
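As a small illustration of that verifiability, an agent can read protocol state straight from the chain with an off-the-shelf client like viem, rather than trusting someone's API. The address and ABI fragment below are placeholders, not a specific deployment:

```typescript
// Sketch: independently verifying on-chain state instead of trusting a
// third-party report. The address and ABI below are placeholders.
import { createPublicClient, http, parseAbi } from "viem";
import { mainnet } from "viem/chains";

const client = createPublicClient({ chain: mainnet, transport: http() });

const abi = parseAbi(["function totalSupply() view returns (uint256)"]);

// The agent checks the number itself, directly from the contract.
const supply = await client.readContract({
  address: "0x0000000000000000000000000000000000000000", // placeholder
  abi,
  functionName: "totalSupply",
});
console.log("On-chain total supply:", supply);
```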
This verification capability simply isn't possible with TradFi or closed-source apps, where programs are executed inside a black box. In those systems, AI would still need to trust the many parties involved.
Today's DeFi "agents" mostly function as sophisticated chatbots - they're good at understanding user intent and helping execute transactions, but they still wait for commands. The missing piece is that trigger layer to transform them into truly autonomous financial assistants.
My theory is that whoever controls quality information sources and provides them as API infrastructure will capture significant value in this new paradigm. Imagine having all these different signals available - from real-time sentiment analysis on CT to on-chain vulnerability alerts - all accessible through a common interface.

The key is making this an open, customizable standard similar to MCP. The most reliable news sources (likely Messari in crypto today) would be connected to dozens of different agents, each built for different purposes. Users could select which triggers matter to them, creating truly personalized autonomous agents that reflect their priorities and risk tolerance.
With the launch of Monarch Vault, we mark the beginning of our vision for truly autonomous agents in DeFi. This is our first experiment in creating systems that can make financial decisions with minimal human intervention.

Monarch Vault is a Morpho vault where an AI agent serves as the allocator managing user funds. Important to note: the allocator has specific limitations - it cannot move assets out of the markets or add new markets. It can only reallocate assets between a pre-approved set of markets with caps. This creates a safety boundary while still allowing the agent to make decisions.
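Here's a rough sketch of what that safety boundary means in code - hypothetical types and checks for illustration, not Morpho's or Monarch's actual interfaces:

```typescript
// Hypothetical illustration of the allocator's boundary: the agent may only
// move funds between pre-approved markets, and never above each market's cap.
type MarketId = string;

interface AllocationMove {
  fromMarket: MarketId;
  toMarket: MarketId;
  amount: bigint;
}

interface VaultConfig {
  approvedMarkets: Set<MarketId>;
  supplyCaps: Map<MarketId, bigint>;     // max assets allowed per market
  currentSupply: Map<MarketId, bigint>;  // assets currently allocated
}

function isAllowed(move: AllocationMove, cfg: VaultConfig): boolean {
  // The agent cannot add new markets or touch anything outside the approved set...
  if (!cfg.approvedMarkets.has(move.fromMarket)) return false;
  if (!cfg.approvedMarkets.has(move.toMarket)) return false;
  // ...and cannot push a market above its pre-set cap.
  const target = (cfg.currentSupply.get(move.toMarket) ?? 0n) + move.amount;
  const cap = cfg.supplyCaps.get(move.toMarket) ?? 0n;
  return target <= cap;
}
```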
Why Morpho? It offers minimalist contracts that deliver lending functionality with zero dependencies, plus the highest security standards in the industry. No entity—not even the Morpho DAO itself—can interfere with markets created on Morpho. This level of immutability is essential for minimizing risks.
The current agent (what we call the M1 agent) analyzes on-chain liquidity events (Deposit, Withdraw, Borrow, Repay) and makes lending allocation decisions to optimize returns without requiring constant oversight.
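In rough TypeScript, one cycle of an M1-style loop might look like the sketch below. Every helper is a hypothetical stand-in for the real event indexer, LLM call, and on-chain execution, not Monarch's actual code:

```typescript
// Hedged sketch of one decision cycle: collect recent liquidity events, ask
// the LLM to propose a reallocation, and act only if it respects the
// pre-approved markets and caps. All helpers are illustrative stubs.
type LiquidityEvent = {
  market: string;
  kind: "Deposit" | "Withdraw" | "Borrow" | "Repay";
  amount: bigint;
  timestamp: number;
};

type Reallocation = { fromMarket: string; toMarket: string; amount: bigint };

// Stub: in practice this would read events from an indexer or RPC node.
async function fetchRecentLiquidityEvents(sinceTs: number): Promise<LiquidityEvent[]> {
  return [];
}

// Stub: in practice this is the LLM reasoning step; null means "do nothing".
async function proposeReallocation(events: LiquidityEvent[]): Promise<Reallocation | null> {
  return null;
}

// Stub: in practice this enforces the approved market set and caps.
function withinApprovedMarketsAndCaps(move: Reallocation): boolean {
  return false;
}

async function runOnce(lookbackSeconds: number): Promise<void> {
  const since = Math.floor(Date.now() / 1000) - lookbackSeconds;
  const events = await fetchRecentLiquidityEvents(since);

  const move = await proposeReallocation(events);
  if (!move) return;

  // The decision and its reasoning would also be logged for the transparent UI.
  if (!withinApprovedMarketsAndCaps(move)) return;
  console.log("Would execute reallocation:", move);
}
```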
We designed the UI with full transparency in mind - helping both ourselves and all depositors understand exactly how the agent operates. Every decision, along with the LLM's reasoning behind it, is visible and accessible to everyone. No black boxes here. All moves are transparent and explainable, so you can clearly understand what's happening with your assets and why each decision was made.
We're already working on the next iteration: a multi-agent system resembling an "office" where different specialized agents analyze various aspects of the market before coming together to make group decisions. One agent might focus on risk assessment, another on yield opportunities, and a third on macro or market volatility - each responding to different trigger sets but collaborating on final decisions. Stay tuned, it’s coming soon!
While Monarch Vault represents our first step, my personal “endgame” for DeFi agents extends far beyond. Vaults still carry trust assumptions and don't fully solve centralization issues due to the extra layer of "roles" they introduce. What I'm ultimately building toward is something more fundamental: a personal financial agent that truly understands my risk appetite and goals, operates without intermediaries, and maintains perfect incentive alignment with me.
The vault is just the beginning - an open experiment to test whether AI agents can effectively manage lending positions. But the destination is clear: fully autonomous, personalized financial agents that act as extensions of ourselves, filtering the noise of markets through the lens of our individual goals and preferences.
This isn't just about better yields or easier DeFi - it's about reimagining our relationship with financial systems through truly aligned, autonomous AI. The future of DeFi isn't just decentralized - it's personalized and accessible to everyone.