In January 2026, the Bombay Stock Exchange was forced into emergency mode, again. AI-generated videos of its chief executive were circulating on WhatsApp and Telegram, offering fraudulent stock tips and promising implausibly high returns. The exchange quickly issued official advisories confirming the videos were fabricated and urging investors not to act on them, but untold numbers of retail investors saw the deepfakes and were fleeced.
Three months later, the National Stock Exchange was in the Bombay High Court seeking an injunction against deepfake videos of its managing director being used to endorse identical investment scams across Facebook, Instagram, WhatsApp, and Telegram. Justice Sharmila Deshmukh ordered domain suspensions within thirty-six hours, locked the infringing properties against re-registration, and extended the restraint to unidentified John Doe defendants. The videos had already run for weeks.
As AI-generated media becomes both remarkably convincing and remarkably cheap, a new category of corporate risk has emerged: every public company and organization is exposed to a potential deepfake brand disaster. The production cost of the NSE deepfakes, according to subsequent analysis, was roughly five US dollars, and distribution across dozens of WhatsApp funnels and group chats cost effectively nothing.
Other stories are sadly similar. An 82-year-old American retiree named Steve Beauchamp drained his retirement account and invested roughly six hundred ninety thousand dollars in a scam after watching what he believed was a video of Elon Musk endorsing the investment. The video was synthetic. Last year, Italian business leaders received calls from a cloned voice of the country's defense minister soliciting urgent funds.
Legislators on three continents noticed and took action. As of mid-2025, forty-seven US states had enacted laws addressing deepfakes, with new statutes in Pennsylvania and Washington State criminalizing the creation or distribution of forged digital likenesses used to defraud, and Tennessee's ELVIS Act extending publicity rights to AI-generated voice clones. At the federal level, the TAKE IT DOWN Act, signed in May 2025, imposes notice-and-takedown requirements on covered platforms by May 2026. In Europe, Article 50 of the EU AI Act takes effect in August 2026 and requires any deployer of an AI system that generates or manipulates image, audio, video, or public-interest text to disclose that the content has been artificially generated. The European Commission has already published a draft Code of Practice that favors a multilayered approach combining visible disclosure with machine-readable provenance metadata.
India's IT Rules Amendment 2026, effective February 20 of this year, requires provenance metadata on synthetic content and imposes a three-hour takedown mandate on platforms. The regulatory pressure is converging on a single expectation: synthetic content must be marked, authentic content should be verifiable, and the companies producing the content are responsible for both.
Hijacking a trusted reputation to perpetrate fraud is now squarely on the radar of regulators. In India, the Securities and Exchange Board issued a November 2025 warning noting that fraudsters routinely pose as executives of reputed organizations to build credibility. SEBI has since facilitated the removal of more than one hundred twenty thousand misleading influencer posts tied to financial impersonation. Forty-seven percent of Indian adults have been victimized by, or know a victim of, an AI voice or deepfake scam, nearly double the global average.
By one measure, US corporate accounts lost approximately one point one billion dollars to deepfake fraud in 2025, roughly three times the prior year figure. Documented losses in North America exceeded two hundred million dollars in the first quarter of 2025 alone.
Every public company already publishes exactly the training data an attacker needs. Every earnings call, keynote, podcast appearance, LinkedIn video, and media interview featuring a chief executive is also a resource for generating a synthetic version of that executive. The more visible the leader, the more raw material is available, and the more credible the eventual fake.
Legal remedies cannot fix this problem. Cease and desist letters, DMCA notices, trademark suits, and John Doe filings all work, eventually, but none operates at the speed of a WhatsApp forward. By the time an injunction is granted, the scam has already made its money and the reputational damage has already set in. The court order is documentation, not defense.
Every piece of official content a company produces should carry a cryptographically verifiable credential issued at the moment of publication: press releases, executive videos, product announcements, investor communications, recorded interviews, earnings materials, regulatory filings. The open standard for this already exists. It is called C2PA, backed by Adobe, Microsoft, Intel, the BBC, and a growing list of camera manufacturers and media organizations through the Content Authenticity Initiative, and it is in production use at enterprises including Vivendi for press release authentication.
A credential attached at publication changes who bears the burden of verification. Without it, the audience has to trust the company's claim that a video is real, which is exactly the trust that deepfake fraud has destroyed. With it, the audience runs a check, the platform runs a check, the regulator runs a check, and the result is a cryptographic answer rather than a judgment call. Unsigned content carrying the company's brand or an executive's likeness becomes, by default, suspect. That inversion is the point.
The architecture underneath the credential matters as much as the credential itself. A public blockchain as the anchor, an open standard for the signature format, and customer-held keys produce a verification system that does not depend on any single vendor's continued existence, cannot be rewritten by an insider or compromised operator, and costs a small fraction of what traditional certificate-based PKI imposes.
Traditional certificates run tens or hundreds of dollars each, which is unworkable at enterprise scale. At cost points measured in fractions of a cent per signature, a company can authenticate every communication it produces rather than only the ones deemed high risk in advance, which matters because fraudsters target whichever communications the company left unsigned.
Companies that authenticate their output now will be compliant by default across the US, EU, and Indian frameworks already in force or taking effect this year, and will have already shifted the trust burden to the right side of the equation before their competitors do.
The question is not whether to fight deepfakes; their arrival is inevitable. The question is whether, when the deepfake of your chief executive appears on a social platform next quarter, your customers and investors have a tool to verify in seconds that the video is not from you. Brand used to be a marketing asset protected by press strategy. It is now a security surface, and the attack surface is every public appearance your leadership has ever made. Authenticate your own output, or accept that someone else will define what your company said and the market will have no way to tell the difference.
Note: We built Nodle ContentSign to tackle this problem. Drop us a line for a trial. https://www.nodle.com/contentsign




