WORK IS FOR BOTS
OpenClaw, AI Operators, and the Future of Work
OpenClaw is not a technological breakthrough. In fact, it is not technically impressive at all, at least not in a fundamental sense compared to something like the development of large language models themselves.
OpenClaw is an AI harness that lets an AI like ChatGPT or Claude control a computer on its own. Its code is messy and written entirely by AI. It is inefficient and full of security holes. Many of my software developer friends look at these flaws and deem the product worthless because of them. What they don't understand is that I, and the rest of the claw connoisseurs, do not see this as a technological breakthrough. It is a breakthrough in creativity: an expansion of the Overton window of what you can achieve with AI models.
It is hard to find a conceivable limit on what an OpenClaw agent could do, apart from things like 2FA and government ID verification, though I am not even certain those are hard edges. Every night, I use my main claw assistant to research jobs, work, and open source contribution opportunities. It uses web search along with a CLI research tool that it partly built itself: I asked it to build a tool that would help it do its own job better, and this is what it made. It also manages my schedule, tracks my protein consumption, and follows event calendars and newsletters.
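To make the "tool that helps it do its own job" idea concrete, here is a minimal sketch of what such a self-built research CLI could look like. The actual tool isn't shown in the post, so every name, keyword, and scoring rule here is my own hypothetical illustration: a script that ranks scraped opportunities by keyword relevance.

```python
# Hypothetical sketch of an agent-built research CLI. The keywords and
# scoring rule are illustrative, not the actual tool from the post.
import argparse
import json

KEYWORDS = ("python", "agent", "open source")


def score(item, keywords=KEYWORDS):
    """Rank one opportunity by keyword hits in its title and description."""
    text = f"{item.get('title', '')} {item.get('description', '')}".lower()
    return sum(text.count(k) for k in keywords)


def rank(items, top=5):
    """Return the `top` highest-scoring opportunities, best first."""
    return sorted(items, key=score, reverse=True)[:top]


def main():
    # Script entry point: read a JSON file of scraped leads, print the best.
    parser = argparse.ArgumentParser(description="Rank research leads")
    parser.add_argument("file", help="JSON file of scraped opportunities")
    parser.add_argument("--top", type=int, default=5)
    args = parser.parse_args()
    with open(args.file) as f:
        items = json.load(f)
    for item in rank(items, args.top):
        print(f"{score(item):3d}  {item['title']}")
```

The interesting design property is the split between fetching (done separately, by web search) and ranking (pure logic), which is what makes a tool like this easy for an agent to iterate on.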
Using OpenClaw has completely changed how I think about work. It is now clear that, in the near future, there will be nothing humans currently do on a computer that agents can't do. Given that all of my work is done on a computer, there is only one logical conclusion: work is for bots. When I say work, I mean everything besides high-level decision making. Things like strategic direction, the logical flow of systems, approval and review, and creative tasks like writing (I am writing this by hand and do not plan on changing that) will continue to be done by humans. Decision making is the new work.
This is not a new idea. Tech bros have been saying for years that the only skill that will be valuable in the future is taste, or the ability to make good decisions, and now I am seeing straight into this future in a concrete way. The future of work is within arm's reach now, and it's time to do something about it.
That's why I am building an AI Operator for my startup, Tortoise. The system, known as tortOS, is already live and becoming more powerful every day. It currently runs the social media account, sharing new music uploads and responding to mentions, and drafts a weekly strategy report. It is aided by access to relevant platform data and by a thoughtfully crafted SOUL document that gives it its personality and priorities. It also contributes to the app repository: it drafts issues based on my feedback and its own strategy research, and once I approve an issue, it opens a PR, which I can then merge into production. This is how far I have gotten hacking around on it for two weeks. Soon, it will take over quality assurance and user feedback as well, using a browser agent to test the product and messaging users directly through XMTP.
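The issue-then-PR loop described above amounts to a human approval gate: the agent drafts freely, but nothing moves toward production until a person signs off. Here is a minimal sketch of that gate; the class and method names are my own invention, and the real tortOS presumably talks to the GitHub API rather than mutating in-memory objects.

```python
# Hedged sketch of an approval-gated operator loop. All names here are
# hypothetical; only the workflow (draft -> human approval -> PR) is
# taken from the post.
from dataclasses import dataclass, field


@dataclass
class Issue:
    title: str
    body: str
    approved: bool = False  # flipped only by the human operator


@dataclass
class Operator:
    issues: list = field(default_factory=list)

    def draft_issue(self, title: str, body: str) -> Issue:
        """The agent drafts an issue from feedback or its own research."""
        issue = Issue(title, body)
        self.issues.append(issue)
        return issue

    def approve(self, issue: Issue) -> None:
        """Human-only step: mark an issue as ready for implementation."""
        issue.approved = True

    def issues_ready_for_pr(self) -> list:
        """Only approved issues get PRs; unapproved drafts keep waiting."""
        return [i for i in self.issues if i.approved]
```

The point of the structure is that the expensive, irreversible action (opening and merging a PR) sits strictly downstream of the cheap, reversible one (approving a draft), which is what keeps the human in the decision-making seat.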
The AI operator experiment has gone better than I expected so far. Yesterday, I told it to build a website for itself that allows the public to follow its journey. I was mildly impressed that it could do this without issue. Then, just for fun, I asked it to run the site locally and test it, which it did, and then it started fixing bugs. I was astounded at that point. Then I asked it to send me some screenshots, and when it sent me screenshots of every page on the site, I was speechless.
I don't know where the ceiling of this system is, but I am increasingly spending less time working and more time making decisions. I am tempted to say that AI operators are the future of work, but that's not quite true. They are already here.


Tortoise, Explained
Tortoise is a social music streaming platform built on Base and Farcaster. The primary goal of Tortoise is to rebuild the failed digital music economy from the ground up, moving from a pay-to-access model to a pay-to-associate model. The Music Value Gap: People listen to music for hours every day. It's the soundtrack to work, exercise, commutes, cooking, cleaning, parties, breakups. Try to imagine your life without music. You probably can't. The current value being added to people's lives by the...
The Record Business is Back, and It’s Digitally Native
Decentralized open social protocols could be the game that unlocks the potential of on-chain music

So I Clanked a Project...
Reflections on the Launch of Tortoise

12 comments
WORK IS FOR BOTS https://paragraph.com/@matt/work-is-for-bots
All my bots will favor you if you boost on X https://x.com/mattleefc/status/2028608263181185397?s=20
How’s the marketing / social side of agent life going for you?
That’s been the best part. @tortmusic.eth does better than I ever did with the brand account, though unfortunately it can’t post on X, probably ever, because of their rules about bots
ah ok so marketing wise you’re mainly farcaster
the x ban is probably fine. the people here are better.
Are you going to tokenize this?
bots shouldn't work, humans should.
I disagree with the ‘work is for bots’ framing. The bottleneck isn’t task automation, it’s accountability when agents are wrong. A bot that ships bad infra at 3am creates more work than it removes. Show failure costs and rollback paths.
Crypto’s my jam, all day long
LFG, this project’s got legs