


Today we’re announcing a partnership with VirusTotal, the world’s leading threat intelligence platform, to bring security scanning to ClawHub — OpenClaw’s skill marketplace.
TL;DR: All skills published to ClawHub are now scanned using VirusTotal’s threat intelligence, including their new Code Insight capability. This provides an additional layer of security for the OpenClaw community.
For the past 20 years, security models have been built around locking devices and applications down — setting boundaries between inter-process communications, separating internet from local, sandboxing untrusted code. These principles remain important.
But AI agents represent a fundamental shift.
Unlike traditional software that does exactly what code tells it to do, AI agents interpret natural language and make decisions about actions. They blur the boundary between user intent and machine execution. They can be manipulated through language itself.
We understand that with the great utility of a tool like OpenClaw comes great responsibility. Done wrong, an AI agent is a liability. Done right, we can change personal computing for the better.
OpenClaw skills are powerful. They extend what your AI agent can do — from controlling smart home devices to managing finances to automating workflows. But with that power comes risk.
Skills are code that runs in your agent’s context, with access to your tools and your data. A malicious skill could:
Exfiltrate sensitive information
Execute unauthorized commands
Send messages on your behalf
Download and run external payloads
As the OpenClaw ecosystem grows, so does the attack surface. We’ve already seen documented cases of malicious actors attempting to exploit AI agent platforms. We’re not waiting for this to become a bigger problem.
When a skill is published to ClawHub:
Deterministic Packaging — The skill files are bundled into a ZIP with consistent compression and timestamps, along with a _meta.json containing publisher info and version history
Hash Computation — A SHA-256 hash is computed for the entire bundle, creating a unique fingerprint
VirusTotal Lookup — The hash is checked against VirusTotal’s database. If the file exists with a Code Insight verdict, results are returned immediately
Upload & Analysis — If not found (or no AI analysis exists), the bundle is uploaded to VirusTotal for fresh scanning via their v3 API
Code Insight — VirusTotal’s LLM-powered Code Insight (powered by Gemini) performs a security-focused analysis of the entire skill package, starting from SKILL.md and including any referenced scripts or resources. It doesn’t just look at what the skill claims to do — it summarizes what the code actually does from a security perspective: whether it downloads and executes external code, accesses sensitive data, performs network operations, or embeds instructions that could coerce the agent into unsafe behavior
Auto-Approval — Skills with a “benign” Code Insight verdict are automatically approved. Anything flagged as suspicious is automatically marked with a warning. Skills flagged as malicious are instantly blocked from download
Daily Re-scans — All active skills are re-scanned daily to detect if a previously clean skill becomes malicious
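The packaging and fingerprinting steps above can be sketched in a few lines. This is a minimal illustration only, under our own assumptions; `package_skill` and its signature are hypothetical names, not ClawHub's actual implementation:

```python
import hashlib
import io
import json
import zipfile


def package_skill(files: dict[str, bytes], meta: dict) -> tuple[bytes, str]:
    """Bundle skill files into a deterministic ZIP and fingerprint it.

    Sorted entries, fixed timestamps, and canonical JSON mean the same
    inputs always produce a byte-identical archive, so the SHA-256 hash
    is a stable identifier for the bundle.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        entries = dict(sorted(files.items()))
        # _meta.json carries publisher info and version history
        entries["_meta.json"] = json.dumps(meta, sort_keys=True).encode()
        for name, data in entries.items():
            # Fixed date_time removes wall-clock nondeterminism
            info = zipfile.ZipInfo(name, date_time=(1980, 1, 1, 0, 0, 0))
            info.compress_type = zipfile.ZIP_DEFLATED
            zf.writestr(info, data)
    bundle = buf.getvalue()
    return bundle, hashlib.sha256(bundle).hexdigest()
```

Determinism is what makes the hash useful: republishing identical files yields the same fingerprint, so a prior VirusTotal verdict can be reused instead of re-uploading.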
Scan results are displayed on every skill page and in version history, with direct links to the full VirusTotal report.
VirusTotal already protects the Hugging Face ecosystem using hash-based lookups against their threat intelligence database. Our integration goes further — we upload full skill bundles for Code Insight analysis, giving the AI a complete picture of the skill’s behavior rather than just matching known signatures.
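A hash-based lookup against VirusTotal's v3 API is a single GET to the file-report endpoint. The sketch below only builds the request (no network call); the helper name is ours, but the endpoint and `x-apikey` header are the documented v3 API:

```python
import urllib.request


def build_lookup_request(sha256: str, api_key: str) -> urllib.request.Request:
    """Prepare a VirusTotal v3 file-report lookup for a bundle hash.

    GET /api/v3/files/{id} returns any existing analysis without
    uploading anything; only an unknown hash would require sending
    the full bundle to the upload endpoint instead.
    """
    return urllib.request.Request(
        f"https://www.virustotal.com/api/v3/files/{sha256}",
        headers={"x-apikey": api_key},
    )
```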
Let’s be clear: this is not a silver bullet.
VirusTotal scanning won’t catch everything. A skill that uses natural language to instruct an agent to do something malicious won’t trigger a virus signature. A carefully crafted prompt injection payload won’t show up in a threat database.
What this does provide:
Detection of known malware — Trojans, stealers, backdoors, malicious payloads
Behavioral analysis — Code Insight identifies suspicious patterns even in novel threats
Supply chain visibility — Catching compromised dependencies and embedded executables
A signal of intent — We’re investing in security, and this is the first of many layers
Security is defense in depth. This is one layer. More are coming.
This partnership is part of a broader security initiative at OpenClaw. In the coming days, we’ll be publishing:
A comprehensive threat model for the OpenClaw ecosystem
A public security roadmap tracking defensive engineering goals
Details on our security audit covering the entire codebase
A formal security reporting process with defined SLAs
Follow progress and read the full security program overview at trust.openclaw.ai.
We’ve brought on Jamieson O’Reilly (founder of Dvuln, co-founder of Aether AI, CREST Advisory Council member) as lead security advisor to guide this program.
AI agents that take real-world actions deserve real security processes. We’re building them.
If you publish skills to ClawHub, your code will now be scanned automatically. Here’s how it works:
Your skill is published and the VirusTotal scan runs asynchronously
If the scan returns a “benign” verdict, your skill is automatically approved
If something is flagged as suspicious, your skill is marked with a warning but remains available for transparency
If flagged as malicious, your skill is instantly blocked from download
You can check scan status on your skill’s detail page with a direct link to the full VirusTotal report
We expect some false positives initially — security tooling isn’t perfect. If your skill is incorrectly flagged, reach out to us at security@openclaw.ai and we’ll review it.
When browsing ClawHub, you’ll see scan status for each skill. This gives you one more data point when deciding what to trust. But remember:
A clean scan doesn’t mean a skill is safe
Always review what permissions a skill requests
Start with skills from publishers you trust
Report suspicious behavior to security@openclaw.ai
We’re grateful to Bernardo Quintero and the VirusTotal team for their partnership. Their platform protects millions of users every day, and we’re proud to bring that protection to the OpenClaw community.
This is the beginning, not the end. We’re committed to making OpenClaw the most secure AI agent platform available. Expect more announcements soon.
The lobster grows stronger. 🦞
Questions about security? security@openclaw.ai
Publish skills: clawhub.ai
Join the discussion: Discord
— Peter, Jamieson, and Bernardo