If you were on Twitter this weekend you may have seen the series of posts about “Plaid for Memory.” On Friday night, I started getting tagged in tweets and receiving DMs and Telegram messages about them. The attention was warranted: it felt like I was seeing adverts for what we’re building with Memory. Just as Plaid made integrating bank payments and data as seamless as biometric authentication from your phone, we want Memory to make that possible for all of your internet data.
The first tweet was from Ashley Mayer:
Then came the quote tweets and replies, notably Aaron Levie agreeing with Ashley:
What I found most interesting combing through the quote tweets and responses was a reply linking to a post on Nikunj Kothari's Substack, 'Balancing Act,' titled 'Memory Changes Everything.' In the piece, Kothari agrees with Ashley and Aaron, but also dives deeper into some core beliefs that explain why memory is so important for AI:
Cross-app memory unlocks deeper personalization
AI that can access aggregated context from every tool you use will move from surface-level suggestions to decisions shaped around your long-term patterns. This is not news: a number of recently announced companies are focused specifically on the advantage personal data gives LLMs.
A neutral conduit is required
Something like Anthropic’s Model Context Protocol (MCP) is needed so apps can trade context securely instead of hoarding it in silos. I would add that MCP is also integral to the integration experience. Next week, we are letting users manually upload their ChatGPT history to their Memory Vault and use our MCP tooling to bring it to Claude. This solves a real problem, but it still has too much friction; what we need next is a standard for exporting.
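To make that concrete, here's a rough sketch of what MCP tooling over an exported history could look like. It assumes the @modelcontextprotocol/sdk TypeScript package and a ChatGPT export already flattened into a plain-text file; the file path, tool name, and flattening step are illustrative assumptions, not our actual implementation.

```ts
// Minimal MCP server exposing an exported chat history as a searchable tool.
// Assumptions: @modelcontextprotocol/sdk is installed, and a ChatGPT export has
// already been flattened into newline-delimited text at ./vault/chatgpt-history.txt.
// This is an illustrative sketch, not Memory Protocol's actual tooling.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { readFile } from "node:fs/promises";
import { z } from "zod";

const server = new McpServer({ name: "memory-vault", version: "0.1.0" });

server.tool(
  "search_history",
  "Search the imported ChatGPT history for lines matching a query",
  { query: z.string().describe("Keyword or phrase to look for") },
  async ({ query }) => {
    const history = await readFile("./vault/chatgpt-history.txt", "utf8");
    const hits = history
      .split("\n")
      .filter((line) => line.toLowerCase().includes(query.toLowerCase()))
      .slice(0, 20); // cap results so the returned context stays small
    return {
      content: [{ type: "text", text: hits.join("\n") || "No matches found." }],
    };
  }
);

// Claude (or any MCP client) connects over stdio and can now call search_history.
const transport = new StdioServerTransport();
await server.connect(transport);
```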
Strategic forgetting and user control are non-negotiable
Users must be able to inspect, correct, or delete stored context, and they should see clear benefits for anything they share. Unlike other companies that have recognized the importance of personal data portability, we are taking a user-permissioned approach.
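To sketch what 'inspect, correct, or delete' could look like at the API level, here's a hypothetical set of vault controls. The type and method names are invented for illustration and aren't Memory Protocol's real interface.

```ts
// Hypothetical shape of user-facing vault controls. Names and fields are
// illustrative only; they are not Memory Protocol's actual API.
interface MemoryRecord {
  id: string;
  source: "chatgpt" | "claude" | "spotify" | string; // where the context came from
  summary: string;                                   // human-readable view for inspection
  createdAt: Date;
}

interface VaultControls {
  list(filter?: { source?: string }): Promise<MemoryRecord[]>; // inspect everything stored
  correct(id: string, newSummary: string): Promise<void>;      // fix something that's wrong
  forget(id: string): Promise<void>;                           // delete it permanently
  grants(): Promise<{ app: string; scopes: string[] }[]>;      // see who can read what
}

// Example: a user purging everything imported from a source they no longer trust.
async function purgeSource(vault: VaultControls, source: string): Promise<void> {
  const records = await vault.list({ source });
  await Promise.all(records.map((record) => vault.forget(record.id)));
}
```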
Startups have a short window before giants re-platform
Kothari estimates roughly 18 months for new players to own this layer before incumbents rebuild their stacks around memory. I've said this to several people: the winner of this category already exists today.
AI will move from autocomplete to understanding intent
The real jump isn’t better phrasing; it’s software that grasps why you act and adapts workflows accordingly. What can be done with this kind of memory is limited only by time and imagination, but that demand requires supply, and the supply we're starting with is Memory Vaults.
Kothari's post was also a fascinating read on the 'how' of AI memory, but it's the 'why' that echoes what inspired Memory Protocol.
Plaid unblocked fintech in three moves:
Standardized access: a single, predictable API instead of tons of bank scrapers.
User-centric permissions: OAuth pop-up, tap to grant, tap to revoke.
Developer-friendly economics: small API fees that felt trivial next to the value unlocked.
AI memory is stuck pre-Plaid. Each LLM keeps its own walled-garden memory: ChatGPT knows your topical interests in depth, Claude knows your personal habits, Replit and Cursor know your coding abilities. None of them talk to each other, so you will keep re-teaching assistants facts you've already shared elsewhere... and that’s before you bring in Spotify, Factory.fm, Goodreads, Twitter graphs, or the countless other pockets of context you’ve built over the past decade-plus online.
A “Plaid for Memory” would be the layer that:
Collects your scattered context into a single vault you own.
Normalizes & secures it behind a simple permission pop-up.
Pays you whenever an app or agent taps that vault for value.
Sound familiar?
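For developers, the grant-then-read flow might feel something like the sketch below. The memoryprotocol package, its methods, and the scope strings are all hypothetical, meant only to illustrate the Plaid-style pattern rather than document an actual SDK.

```ts
// Hypothetical "Plaid for Memory" client flow: request a scoped grant, read
// context with the resulting token, and let the vault owner get paid per read.
// Package name, methods, and scopes are invented for illustration.
import { MemoryClient } from "memoryprotocol"; // hypothetical SDK

const client = new MemoryClient({ apiKey: process.env.MEMORY_API_KEY! });

async function personalizeOnboarding(userId: string) {
  // 1. Pop the permission prompt: the user taps to grant (or later revoke)
  //    scopes, exactly like a Plaid/OAuth consent screen.
  const grant = await client.requestGrant({
    userId,
    scopes: ["music.listening_history", "reading.highlights"],
    reason: "Tailor your first-run experience",
  });

  // 2. Read only what was granted; the vault normalizes formats across sources.
  const context = await client.read(grant.token, {
    scope: "music.listening_history",
    limit: 50,
  });

  // 3. Each read is metered, and a share of the fee is credited to the vault owner.
  return context;
}
```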
Interested in learning more? Check out Memory Protocol or get in touch at hello@memoryproto.co.
Jack Spallone