Clawdbot, OpenAI Atlas, Perplexity Comet, the Claude Chrome plugin, and now even Chrome's built-in AI are all vectors for what we call "prompt injection".
Mitsukeru today scans content for scams, watches your clipboard, checks URLs before you visit them, and has a strong social-engineering focus. Naturally this extends to protecting agents that run on your behalf.
e.g. if your Clawdbot browser had LinkSentinel, it would pre-warn the agent about a bad URL, steer it clear of scam pages, etc. It can also watch for clipboard manipulation aimed at the agent (useful).
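The clipboard-manipulation check can be sketched roughly like this. Everything here is hypothetical (names, heuristics, and all); a real implementation would hook the OS clipboard rather than take strings directly. The classic attack it models: malware silently swapping a copied crypto address for the attacker's.

```python
import re

# Hypothetical sketch: a loose Bitcoin-address shape, just for illustration.
BTC_ADDR = re.compile(r"^(bc1|[13])[a-zA-Z0-9]{25,59}$")

def looks_like_address(text: str) -> bool:
    return bool(BTC_ADDR.match(text.strip()))

def clipboard_swap_suspected(copied: str, about_to_paste: str) -> bool:
    """True if the user copied one address but a *different* address is now
    on the clipboard -- a strong clipboard-manipulation signal."""
    return (
        looks_like_address(copied)
        and looks_like_address(about_to_paste)
        and copied.strip() != about_to_paste.strip()
    )
```

The same comparison works for an agent: snapshot what the agent put on the clipboard, and alert before it pastes something that no longer matches.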
What we don't do (yet?) is help with malicious smart-contract interactions. That would require Mitsukeru to be less privacy-focused, less local, and to connect to APIs that can also geo-ban you (iykyk).
We also definitely do not protect against corrupted memories being written (this might be a need) to files that Clawdbot controls.
So we still need to focus on segmentation and policy to make an AI agent run safely on your behalf, so that it neither gets damaged by prompt injections nor does damage itself. Some of it also comes down to how smartly you craft your prompt. Preventing a prompt injection by alerting on blank or display:none text is actually easy (though you wonder why the LLM itself didn't pick up on it).
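That "easy" detection can be sketched in a few lines: scan the page for text the user can't see but the model would still read. This is a simplified illustration using Python's stdlib parser, not Mitsukeru's actual implementation (and a real scanner would also handle void tags, off-screen positioning, tiny fonts, etc.):

```python
from html.parser import HTMLParser

class HiddenTextFinder(HTMLParser):
    """Collect text inside elements hidden from the user via the `hidden`
    attribute, display:none, or visibility:hidden -- common places to
    stash prompt-injection payloads for browsing agents."""

    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth once inside a hidden element
        self.hits = []   # hidden text the agent would still "read"

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        style = (attrs.get("style") or "").replace(" ", "").lower()
        hidden = (
            "hidden" in attrs
            or "display:none" in style
            or "visibility:hidden" in style
        )
        if hidden or self.depth:
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.hits.append(data.strip())

def hidden_text(page: str) -> list:
    finder = HiddenTextFinder()
    finder.feed(page)
    return finder.hits
```

Anything `hidden_text` returns is a candidate for an alert: visible content passes through, hidden instructions get flagged.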
As always, the question is how important this is to the end-user base. And I have actively warned/cautioned against letting agents act on your behalf.
Hope that answers your question, @kenjiquest, and I'd love to see more feature requests and thoughts like this!