
The protocol TVL has dropped by 40% in two blocks. Your Discord and Telegram community chats are blowing up with people asking what happened.
You check Etherscan. You check it again. The "To" address is a contract you don't recognize. The "From" address is your main liquidity pool.
The marketing intern just drafted a tweet that says, "We are aware of an issue, funds are safe." They have no idea if funds are safe. You have no idea if funds are safe.
This is the moment. The "Oh sh*t" moment.
Most teams freeze. They stare at the screen, not knowing what the next step should be.
This document is for that moment. It is not a theoretical framework for "security resilience." We have seen the best teams crumble and the worst situations salvaged. This is how you survive.
The most common mistake in the first hour is teams going into debug mode. That’s the wrong first instinct.
If an exploit is actively causing harm to your system in real time, your priorities change. Your first job is not to figure it out. It’s to stop the bleeding.
Containment. Assessment. Mitigation. Remediation. In that order.
Your only objective right now is containment. You do not need to understand how it happened yet. You just need to stop it from happening further.
1. Incident Mode
Once you are in Incident Mode, normal rules don't apply anymore. You don't ask for permission to pause contracts. You don't wait for a code review to revoke a compromised key. You act to save the protocol.
Do not tweet or post anything detailed on Discord just yet.
2. Kill Switch
If you have pausable contracts, pause them. Now. It is easier to explain downtime to users than to explain why their funds are gone.
It is very common for attacks to originate from a team member whose keys or machine were compromised. Assume the worst and revoke privileged roles and permissions as well.
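If the contracts use an OpenZeppelin-style Pausable and AccessControl setup, the emergency script can be as small as the sketch below. This is a minimal illustration, not a drop-in tool: the RPC URL, contract address, role name, and compromised operator address are all placeholders.

```typescript
// emergency-pause.ts — minimal sketch, assuming an OpenZeppelin-style
// Pausable + AccessControl contract. Addresses, RPC URL, and the
// compromised operator address are placeholders for illustration.
import { ethers } from "ethers";

const VAULT_ABI = [
  "function pause()",
  "function revokeRole(bytes32 role, address account)",
];

async function emergencyPause() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  // In practice the guardian key lives on a hardware wallet or signing service.
  const guardian = new ethers.Wallet(process.env.GUARDIAN_KEY!, provider);
  const vault = new ethers.Contract("0xYourVault", VAULT_ABI, guardian);

  // 1. Stop the bleeding: pause all state-changing entry points.
  const pauseTx = await vault.pause();
  await pauseTx.wait();
  console.log("Paused in tx", pauseTx.hash);

  // 2. Assume the worst: strip the possibly-compromised operator key.
  const OPERATOR_ROLE = ethers.id("OPERATOR_ROLE"); // keccak256 of the role name
  const revokeTx = await vault.revokeRole(OPERATOR_ROLE, "0xCompromisedOperator");
  await revokeTx.wait();
  console.log("Operator role revoked in tx", revokeTx.hash);
}

emergencyPause().catch(console.error);
```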
Now you can tweet. Something along the lines of “We are aware of an issue and are investigating” is fine for now, since you don’t yet know the scale of it.
3. Verify Reality
Now that you have shut down everything you could, it’s time to check whether it’s actually an attacker or a bug causing this. Check the transactions, check the logs, verify the exploit, and try to understand the scale of it.
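A quick way to confirm what actually moved is to pull the suspicious transaction’s receipt and decode its ERC-20 Transfer events, as in the sketch below. The tx hash and pool address are placeholders, and the assumption is that the drained assets are standard ERC-20 tokens.

```typescript
// triage-tx.ts — minimal sketch for confirming what a suspicious
// transaction actually did. The tx hash and pool address are placeholders.
import { ethers } from "ethers";

const ERC20_IFACE = new ethers.Interface([
  "event Transfer(address indexed from, address indexed to, uint256 value)",
]);

async function triage(txHash: string, poolAddress: string) {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const receipt = await provider.getTransactionReceipt(txHash);
  if (!receipt) throw new Error("Transaction not found or not yet mined");

  for (const log of receipt.logs) {
    let parsed;
    try {
      parsed = ERC20_IFACE.parseLog({ topics: [...log.topics], data: log.data });
    } catch {
      continue; // not an ERC-20 Transfer
    }
    if (!parsed) continue;
    // Flag anything leaving the pool: token contract, recipient, raw amount.
    if (parsed.args.from.toLowerCase() === poolAddress.toLowerCase()) {
      console.log(`OUTFLOW  token=${log.address}  to=${parsed.args.to}  amount=${parsed.args.value}`);
    }
  }
}

triage("0xSuspiciousTxHash", "0xYourLiquidityPool").catch(console.error);
```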
The initial panic is over. The protocol is either paused, or the attacker has finished. Now you need to figure out the blast radius and prevent secondary damage.
Check your website. Is the attacker serving a malicious UI? (See the Ledger Connect Kit or BadgerDAO incidents). Check your DNS records and recent deployments.
Rotate your AWS/GCP API keys and database credentials. If the attacker got in via a compromised developer laptop, they might have access to your backend infrastructure, not just the smart contracts.
Start a "Loss Accounting" spreadsheet. You need to categorize assets into three buckets:
Confirmed Lost: Assets already moved to the attacker’s wallet.
At Risk: Assets in contracts and vaults that you are not sure if it’s compromised.
Secure: Assets in cold storage or successfully paused contracts.
Why this matters: You need to know if you are solvent before you write the breakdown tweet.
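If a spreadsheet feels too slow under pressure, the same three-bucket tally fits in a few lines of code. The positions and dollar values below are purely illustrative; in practice they come from on-chain balances and your treasury records.

```typescript
// loss-accounting.ts — minimal sketch of the three-bucket tally.
// All positions and values are illustrative placeholders.
type Bucket = "confirmedLost" | "atRisk" | "secure";

interface Position {
  label: string;     // e.g. "USDC in Vault v2"
  usdValue: number;  // snapshot value at time of incident
  bucket: Bucket;
}

const positions: Position[] = [
  { label: "WETH drained to attacker", usdValue: 1_200_000, bucket: "confirmedLost" },
  { label: "USDC in unpaused legacy vault", usdValue: 350_000, bucket: "atRisk" },
  { label: "Treasury cold storage", usdValue: 4_000_000, bucket: "secure" },
];

const totals = positions.reduce(
  (acc, p) => ({ ...acc, [p.bucket]: acc[p.bucket] + p.usdValue }),
  { confirmedLost: 0, atRisk: 0, secure: 0 } as Record<Bucket, number>
);

console.log(totals);
// Rough solvency check: could secure assets cover everything lost or at risk?
console.log("Can treasury cover worst case?", totals.secure >= totals.confirmedLost + totals.atRisk);
```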
Assign your best engineer (or external security partner) to trace the attack (see the tracing sketch after this list):
T-Minus: When was the contract deployed? When was the last upgrade?
T-Zero: The first malicious transaction.
Pattern: Is it a single atomic TX (Flash Loan) or a drip-feed drain over multiple TXs?
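One way to pin down T-Zero and the drain pattern, assuming the stolen assets are ERC-20 tokens and the attacker address is known, is to scan Transfer events from the pool to that address. Addresses and the starting block below are placeholders; many RPC providers also cap the block range per queryFilter call, so you may need to paginate.

```typescript
// trace-timeline.ts — minimal sketch for finding T-Zero and the drain pattern.
// Token, pool, and attacker addresses are placeholders; the block window is illustrative.
import { ethers } from "ethers";

const ERC20_ABI = ["event Transfer(address indexed from, address indexed to, uint256 value)"];

async function trace(token: string, pool: string, attacker: string, fromBlock: number) {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const erc20 = new ethers.Contract(token, ERC20_ABI, provider);

  // All transfers pool -> attacker since shortly before the TVL drop.
  const events = await erc20.queryFilter(erc20.filters.Transfer(pool, attacker), fromBlock, "latest");
  if (events.length === 0) {
    console.log("No direct transfers found; check intermediary contracts.");
    return;
  }

  const txHashes = new Set(events.map((e) => e.transactionHash));
  console.log("T-Zero (first malicious tx):", events[0].transactionHash, "block", events[0].blockNumber);
  console.log(
    txHashes.size === 1
      ? "Pattern: single atomic transaction (flash-loan style)."
      : `Pattern: drip-feed drain across ${txHashes.size} transactions.`
  );
}

trace("0xToken", "0xYourLiquidityPool", "0xAttacker", 21_000_000).catch(console.error);
```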
Security Partners: Contact your auditors and security firms; they can help confirm the vector. Escalate to SEAL 911 from the Security Alliance, an emergency hotline run by security researchers (usually reached via their official Telegram bot). They can help coordinate triage, support fund tracing and recovery flows, and advise on how to engage exchanges and law enforcement.
Exchanges & Stablecoins: If the attacker stole stablecoins or assets, try contacting the issuers to see if they can help freeze the assets. This rarely works in time, but it is a required step for liability. Contact major exchanges with the attacker's address to flag deposits.
Update: Provide a further update to the community and on X with your findings. The goal is to share the factual details you know so far, so independent investigators like ZachXBT can also take a look and potentially help.
The world knows you are hacked. Your goal is to control the narrative and plan the recovery.
Communication is a minefield.
The Don’t: Don’t jump into saying “all funds are safe” unless they’re 100% safe or already recovered. I remember the Celsius team saying “all funds are safe and we have robust risk management,” only to file for bankruptcy later.
The Do’s: “We are aware of an exploit. The protocol is paused. We are investigating the scope. Please do not interact with anything on our site or the smart contracts until the next update.” If you have a general idea of what happened, write a blog or article summarizing the incident, the impact, the suspected cause, the actions taken, and the next steps. If done right, you’ll earn trust from the community instead of being questioned and mocked.
Patch: Can you upgrade the implementation contract to fix the logic? (A minimal upgrade sketch follows this list.)
Migrate: Do you need to deploy V2 and move all liquidity?
Redeploy: In cases of severe key compromise where you cannot trust any previous state, a full redeploy of the architecture might be necessary.
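For the Patch path, if the protocol already sits behind an OpenZeppelin transparent or UUPS proxy managed with the @openzeppelin/hardhat-upgrades plugin, the upgrade itself is short. The contract name and proxy address below are placeholders, and as covered later, the fix should only ship after external review.

```typescript
// upgrade-patch.ts — minimal sketch of the "Patch" path, assuming an
// OpenZeppelin transparent/UUPS proxy managed with @openzeppelin/hardhat-upgrades.
// Contract name and proxy address are placeholders; ship only after external review.
import { ethers, upgrades } from "hardhat";

async function main() {
  const PROXY_ADDRESS = "0xYourProxy";

  // VaultV2Fixed is the patched implementation (e.g., corrected claimRewards logic).
  const VaultV2Fixed = await ethers.getContractFactory("VaultV2Fixed");

  // Validates storage-layout compatibility, deploys the new implementation,
  // and points the proxy at it (via the proxy admin / UUPS upgradeTo).
  const proxy = await upgrades.upgradeProxy(PROXY_ADDRESS, VaultV2Fixed);
  await proxy.waitForDeployment();

  console.log("Proxy", PROXY_ADDRESS, "now points to implementation:",
    await upgrades.erc1967.getImplementationAddress(PROXY_ADDRESS));
}

main().catch(console.error);
```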
Preserve evidence as you go. This is critical for any future law enforcement action or further investigation.
Save Logs: Snapshot your CloudTrail logs, RPC logs, transaction logs, Discord audit logs, Twitter/X logs, etc., depending on the type of attack.
Download Git History: Ensure no one force-pushes changes to the repo to hide a "bad commit."
Document Decisions: Keep a log of who decided to pause, who decided to tweet, and exactly when.
You might receive a message from the attacker on-chain (in a transaction's input data), or see the funds start moving to Tornado Cash.
The Bounty Offer: It is standard industry practice to offer a 10-20% bounty for the return of funds, treating the incident as a "whitehat rescue."
The Channel: Open a communication channel (email or on-chain messaging). Do not negotiate on public Twitter.
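On-chain messaging is usually just a zero-value transaction to the attacker's address with a UTF-8 message in the calldata, readable in the "Input Data" field on Etherscan. The sketch below is illustrative; the address, contact email, and bounty terms are placeholders, and you should run the wording past counsel first.

```typescript
// onchain-message.ts — minimal sketch of opening a channel via a zero-value
// transaction with a UTF-8 message in the calldata. Address and terms are placeholders.
import { ethers } from "ethers";

async function sendMessage() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const signer = new ethers.Wallet(process.env.DEPLOYER_KEY!, provider);

  const message =
    "To the 0xAttacker address: return 90% of the drained funds and keep 10% " +
    "as a whitehat bounty. Contact: security@yourprotocol.xyz";

  const tx = await signer.sendTransaction({
    to: "0xAttacker",
    value: 0n,
    data: ethers.hexlify(ethers.toUtf8Bytes(message)), // shows up as readable Input Data on Etherscan
  });
  console.log("Message sent in tx", tx.hash);
}

sendMessage().catch(console.error);
```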
The adrenaline has now faded. Now you have to do the hard work of rebuilding.
You must produce a technical RCA that is verifiable.
The Invariant: What invariant was broken, and exactly what, logically or technically, caused it to break? Often it is an edge case that was never handled properly, something that will likely become more common with AI-written code.
The Miss: Why wasn’t this caught in internal testing? Did you do a proper audit before deploying the contract? Or maybe the audit was done on the original contract, and the team didn’t run a full audit when deploying v2.
The Proof: Provide PoC (Proof of Concept) steps that reproduce the hack on a fork.
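A common way to build that proof, assuming the exploit can be reproduced from a single transaction, is a Hardhat mainnet-fork test pinned to the block right before T-Zero that replays the attacker's call. The block number, addresses, and the claimRewards() entry point below are placeholders carried over from the earlier example.

```typescript
// poc-fork-replay.ts — minimal sketch of reproducing the exploit on a fork.
// Assumes Hardhat with mainnet forking enabled; block number, attacker address,
// and the exploited entry point are placeholders taken from the traced T-Zero tx.
import { ethers, network } from "hardhat";

async function main() {
  // Pin the fork to the block right before the first malicious transaction.
  await network.provider.request({
    method: "hardhat_reset",
    params: [{ forking: { jsonRpcUrl: process.env.MAINNET_RPC_URL, blockNumber: 21_000_000 } }],
  });

  // Impersonate the attacker so we can replay the exact call.
  const ATTACKER = "0xAttacker";
  await network.provider.request({ method: "hardhat_impersonateAccount", params: [ATTACKER] });
  const attacker = await ethers.getSigner(ATTACKER);

  const vault = await ethers.getContractAt("Vault", "0xYourVault");
  const before = await ethers.provider.getBalance("0xYourVault");

  // Replay the exploit call (here: the vulnerable claimRewards path).
  await vault.connect(attacker).claimRewards();

  const after = await ethers.provider.getBalance("0xYourVault");
  console.log("Vault balance delta:", ethers.formatEther(before - after), "ETH");
}

main().catch(console.error);
```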
Private Audit: Do not deploy the fix immediately. Your "fixed" code often introduces new bugs because it was written in a panic. Send the patch to a trusted security partner for a review.
Adversarial Testing: Try to hack your own patch. Use the tools and methodologies the attacker used and learn from all the previous hacks on different protocols.
Users want to know if they are getting their money back.
Treasury Assessment: Can the company treasury cover the loss?
Haircut: Will users have to take a percentage loss?
Advice: Be honest. If you cannot pay them back immediately, say so. "We are exploring options" is better than a lie. Reach out to large institutional partners and openly ask for help; many are willing to step in, if only to avoid giving crypto yet another reputation for being hacked.
Do not flip the switch back to "ON" for everyone.
Phase 1: Withdrawals Only. Let people leave. (Most won't, if you communicate well).
Phase 2: Capped Deposits. Limit exposure.
Phase 3: Full Operations.
If you survive the first week, you have a chance to become resilient. Many of the strongest protocols today (MakerDAO, AAVE, Euler Finance, etc.) have survived major incidents.
The hack was likely operational, not just code.
Multisig Hygiene: Implement a multisig if you haven’t yet, and expand the signer set if you already have one. Require hardware wallets for all signers. Distribute signers geographically.
Timelocks: Add timelocks to all governance actions. This gives the community time to react to a malicious proposal or a compromised admin key.
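In practice this means routing admin actions through something like OpenZeppelin's TimelockController: schedule the call now, execute it only after the delay has passed. The sketch below is illustrative; the timelock and vault addresses, the setFeeRecipient() call, and the 48-hour delay are placeholders.

```typescript
// timelocked-admin-action.ts — minimal sketch of routing an admin action through
// an OpenZeppelin TimelockController (schedule now, execute after the delay).
// Timelock/vault addresses, the target call, and the 48h delay are placeholders.
import { ethers } from "ethers";

const TIMELOCK_ABI = [
  "function schedule(address target, uint256 value, bytes data, bytes32 predecessor, bytes32 salt, uint256 delay)",
  "function execute(address target, uint256 value, bytes data, bytes32 predecessor, bytes32 salt) payable",
];
const VAULT_IFACE = new ethers.Interface(["function setFeeRecipient(address)"]);

async function main() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const proposer = new ethers.Wallet(process.env.PROPOSER_KEY!, provider);
  const timelock = new ethers.Contract("0xTimelock", TIMELOCK_ABI, proposer);

  const target = "0xYourVault";
  const data = VAULT_IFACE.encodeFunctionData("setFeeRecipient", ["0xNewRecipient"]);
  const salt = ethers.id("set-fee-recipient-2025-01");
  const delay = 48 * 60 * 60; // 48 hours for the community to react

  // Step 1: queue the action. It is now publicly visible on-chain.
  await (await timelock.schedule(target, 0, data, ethers.ZeroHash, salt, delay)).wait();

  // Step 2 (>= 48h later, typically a separate run): execute it.
  // await (await timelock.execute(target, 0, data, ethers.ZeroHash, salt)).wait();
}

main().catch(console.error);
```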
Real-time Alerts: Implement monitoring that alerts you to "invariant violations" (e.g., solvency check fails) rather than just "large transactions."
Automated Pause: Consider building a bot that can automatically pause the contract if it detects a critical invariant failure.
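A minimal version of such a bot, assuming a pausable ERC-4626-style vault where the invariant is "the vault actually holds at least what it accounts for," could look like the sketch below. Addresses and the invariant itself are placeholders; a real deployment needs redundancy, alert routing, and a well-protected guardian key.

```typescript
// invariant-watchdog.ts — minimal sketch of an invariant monitor with automated pause.
// The invariant (underlying balance >= totalAssets accounted by the vault) and all
// addresses are illustrative placeholders.
import { ethers } from "ethers";

const VAULT_ABI = [
  "function totalAssets() view returns (uint256)",
  "function paused() view returns (bool)",
  "function pause()",
];
const ERC20_ABI = ["function balanceOf(address) view returns (uint256)"];

async function main() {
  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL);
  const guardian = new ethers.Wallet(process.env.GUARDIAN_KEY!, provider);
  const vault = new ethers.Contract("0xYourVault", VAULT_ABI, guardian);
  const underlying = new ethers.Contract("0xUnderlyingToken", ERC20_ABI, provider);

  provider.on("block", async (blockNumber: number) => {
    const [accounted, actual, isPaused] = await Promise.all([
      vault.totalAssets(),
      underlying.balanceOf("0xYourVault"),
      vault.paused(),
    ]);

    // Invariant: the vault must actually hold what it claims to account for.
    if (actual < accounted && !isPaused) {
      console.error(`Block ${blockNumber}: INVARIANT VIOLATED (holds ${actual}, accounts for ${accounted}). Pausing.`);
      await (await vault.pause()).wait();
      // Page the on-call human here (PagerDuty, Telegram bot, etc.).
    }
  });
}

main().catch(console.error);
```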
Retainer: Move from "one-off audits" to a continuous retainer with a security firm.
Bounty Program: Launch a high-value bug bounty. A $50k bounty is cheaper than a $10M hack.
Trust isn’t rebuilt by marketing alone. It’s rebuilt by shipping secure code, communicating clearly, and surviving the next few weeks without surprises.
Publish the Post-Mortem: A clear admission like “we overlooked this specific logical bug” earns more trust than vague deflection like “it was a sophisticated hack none of us expected” when it wasn’t. What matters is clarity on what happened, what broke, what you did, and what changes going forward.
Consistent Updates: Share specific, technical updates on what you’ve changed to prevent recurrence, along with timelines and what users should do (or avoid doing) in the meantime.
Crypto Twitter is a coliseum. If you show weakness, they eat you. If you lie, they destroy you.
"Funds are safe." (When they aren't).
Why: You look incompetent or malicious. (See Celsius, 2022).
"We were hacked by a sophisticated state actor." (When it was something completely different).
Why: Independent security researchers will eventually find the truth. You will be mocked and lose whatever trust you had left.
"We take security seriously."
Why: It's a cliché. Obviously you didn't, or you wouldn't be here. Show, don't tell.
The Initial Ack: e.g., "We are investigating an irregularity in the Vault contract. Operations are paused. 🧵"
The Hard Truth: e.g., "We have confirmed an exploit. ~400 ETH has been drained. The vector was a logic error in the claimRewards() function. Remaining funds (~2k ETH) are secured."
The Path: e.g., "A fix is in development and is being reviewed by our security audit partners. Operations will remain paused until it is deemed safe."
Quote-Tweet Hysteria: Ignore the ragebaiters. Do not reply to "Rug?" tweets; they are not worth taking seriously.
Fake “Official” Links: Scammers will post lookalike claim sites, Google forms, and “recovery” wallets in replies. Warn users clearly: “We will never DM first, and we will never ask you to send funds.” Pin that message and repeat it in every update.
“Funds Are Safe” Pressure: People will demand reassurance. Don’t overpromise. Say what you can verify: paused status, known impact, what’s being done, and when the next update drops.
Getting hacked is not a filter for competence, it is a filter for resilience. The market doesn’t only judge what went wrong, it judges how you respond when everything is burning down.
A protocol dies when the team panics, goes quiet, overpromises, or ships “fixes” without a second thought. A protocol survives when the team contains fast, communicates facts, and rebuilds with discipline.
Vibe coding is fine for prototypes. Vibe incident response is how you lose real money, real users, and the narrative.
Have a plan before you need one. Because when the exploit hits, you don’t rise to your intentions. You fall to your preparation.
If this felt uncomfortably real, that’s your warning shot.
We at Phage Security perform smart contract audits for Web3 protocols so this story stays an article, not your post‑mortem.
If you have a mainnet launch, upgrade, or token event in the next 60 days and your contracts haven’t been re‑audited, you’re running on luck.
→ Book a security audit review call with us here:
https://phagesecurity.com/request-audit
or hit me up in my TG at @Pyro3b.
