<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Blockchain, AI &amp; Biotech</title>
        <link>https://paragraph.com/@6missedcalls</link>
        <description>A newsletter about blockchain, artificial intelligence, and biotech.</description>
        <lastBuildDate>Sun, 12 Apr 2026 05:34:04 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>All rights reserved</copyright>
        <item>
            <title><![CDATA[Deepfakes Just Stole $25 Million]]></title>
            <link>https://paragraph.com/@6missedcalls/deepfakes-just-stole-25-million</link>
            <guid isPermaLink="false">pGJYze3YGeFkF81DKsoO</guid>
            <pubDate>Tue, 12 Aug 2025 00:11:41 GMT</pubDate>
            <description><![CDATA[From a $25M Deepfake Heist to a Government Impersonation Campaign: AI-driven threat vectors are becoming increasingly common, eroding traditional safeguards across finance, government, and communications. In one recent case, a multinational firm in Hong Kong was defrauded of more than US $25 million after employees joined what appeared to be a routine video conference with senior executives. The call was a sophisticated deepfake—complete with lifelike synthetic video and c...]]></description>
            <content:encoded><![CDATA[<h1 id="h-from-a-dollar25m-deepfake-heist-to-a-government-impersonation-campaign" class="text-4xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">From a $25M Deepfake Heist to a Government Impersonation Campaign</h1><p>AI-driven threat vectors are becoming increasingly common, eroding traditional safeguards across finance, government, and communications. In one recent case, a multinational firm in Hong Kong was defrauded of more than <strong>US $25 million</strong> after employees joined what appeared to be a routine video conference with senior executives. The call was a sophisticated deepfake—complete with lifelike synthetic video and cloned voices of the company’s leadership—convincing staff to authorize the transfer of funds to attacker-controlled accounts.</p><br><p>The public sector has not been immune. In May 2025, the FBI warned of a coordinated AI-powered “vishing” and “smishing” campaign targeting high-level U.S. government officials. Threat actors used AI-generated voices and messages to impersonate figures such as Senator Marco Rubio and political strategist Susie Wiles, contacting multiple foreign ministers, a sitting governor, and congressional staff. These attacks combined the psychological credibility of familiar voices with mass-scale targeting, creating a potent new category of social engineering.</p><br><h2 id="h-how-can-blockchain-technology-help-mitigate-these-issues" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0"><strong>How can blockchain technology help mitigate these issues?</strong></h2><p>Blockchain can address these threat vectors by replacing perception-based trust with cryptographic verification. In the Hong Kong case, a deepfake video call might appear convincing to human participants, but it could not produce the digital signatures or verifiable credentials required to authorize a transfer under a blockchain-enforced workflow. In the FBI example, cloned voices might sound authentic, yet they would fail to meet the immutable on-chain conditions for access to sensitive information — such as multi-party confirmations from approved devices and roles.</p><br><p>By anchoring identity, role permissions, and transaction rules on an immutable ledger, blockchain ensures that actions are executed only when all predefined, verifiable conditions are met. This shifts the security model away from “does this sound or look real” to “can this actor produce the cryptographic proof required,” effectively closing the gap exploited by AI-driven impersonation.</p>
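<br><p>As a rough illustration of that shift, the sketch below shows the shape of such a policy check in Python. It uses HMAC tags from the standard library as a stand-in for real digital signatures or verifiable-credential proofs, and the 2-of-3 approver registry, the <code>authorize_transfer</code> helper, and the request format are illustrative assumptions rather than the API of any particular blockchain platform; a production system would enforce the same rule in a smart contract, with asymmetric or threshold/MPC keys.</p><pre><code class="language-python">
# Illustrative sketch only: HMAC stands in for on-chain digital signatures or
# verifiable-credential proofs. A real deployment would use asymmetric keys,
# smart-contract policy enforcement, and threshold/MPC signing.
import hashlib
import hmac
import json

# Hypothetical registry of approver roles and their keys.
# In practice these would be public keys anchored on-chain.
APPROVER_KEYS = {
    "cfo": b"cfo-demo-key",
    "controller": b"controller-demo-key",
    "treasurer": b"treasurer-demo-key",
}
QUORUM = 2  # assumed 2-of-3 approval policy

def canonical_payload(request: dict) -> bytes:
    """Serialize the transfer request deterministically so every
    approver signs exactly the same bytes."""
    return json.dumps(request, sort_keys=True, separators=(",", ":")).encode()

def sign(request: dict, approver: str) -> str:
    """An approver produces a proof over the exact payload (HMAC stand-in)."""
    key = APPROVER_KEYS[approver]
    return hmac.new(key, canonical_payload(request), hashlib.sha256).hexdigest()

def authorize_transfer(request: dict, approvals: dict) -> bool:
    """Approve only if a quorum of registered approvers produced valid
    proofs over this exact request. A convincing face or voice on a video
    call contributes nothing here: no valid proof, no transfer."""
    payload = canonical_payload(request)
    valid = 0
    for approver, proof in approvals.items():
        key = APPROVER_KEYS.get(approver)
        if key is None:
            continue  # not a registered role
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, proof):
            valid += 1
    return valid >= QUORUM

if __name__ == "__main__":
    request = {"to": "0xattacker-controlled-account", "amount_usd": 25_000_000, "nonce": 1}

    # A deepfaked "CFO" on a video call cannot produce a valid proof.
    fake_approvals = {"cfo": "looked-and-sounded-real"}
    print(authorize_transfer(request, fake_approvals))   # False: blocked

    # Legitimate approvals from registered keys satisfy the quorum.
    real_approvals = {
        "cfo": sign(request, "cfo"),
        "treasurer": sign(request, "treasurer"),
    }
    print(authorize_transfer(request, real_approvals))   # True: executes
</code></pre><p>The primitives are interchangeable; the point is the decision rule: funds move only when verifiable proofs from registered keys satisfy an explicit quorum over the exact payload, regardless of how convincing the faces or voices on a call may be.</p>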
<br><h2 id="h-from-perception-to-proof-a-cryptographic-endgame-for-ai-impersonation" class="text-3xl font-header !mt-8 !mb-4 first:!mt-0 first:!mb-0">From Perception to Proof: A Cryptographic Endgame for AI Impersonation</h2><p>The evidence is clear: AI makes impersonation scalable—from multimillion-dollar deepfake video calls in Hong Kong to coordinated vishing/smishing campaigns impersonating senior U.S. officials. The common failure is reliance on human perception. A cryptographic control plane fixes this. Verifiable Credentials standardize machine-checkable identity and roles; smart contracts enforce quorum and policy before any high-impact action executes; and threshold/MPC schemes remove single points of key compromise. Under this model, neither a forged face nor a cloned voice can produce the required signatures, proofs, and attestations, so the transaction path simply does not open. This is not just theory: the FBI’s 2025 PSA documents the campaign’s scale, the W3C Verifiable Credentials Data Model 2.0 defines the verification substrate, NIST SP 800-63 codifies assurance levels for digital identity, and NIST’s threshold-cryptography program and recent MPC research outline robust, distributed signing for production systems. Taken together, these components replace “does this seem real?” with “can this actor satisfy verifiable conditions?”—a decisive shift that makes AI-driven impersonation attacks far harder to execute at scale.</p>]]></content:encoded>
            <author>6missedcalls@newsletter.paragraph.com (Blockchain, AI &amp; Biotech)</author>
            <category>ai</category>
            <category>blockchain</category>
            <category>security</category>
            <enclosure url="https://storage.googleapis.com/papyrus_images/df3e1b469dad7a76fda4f6651648273a.webp" length="0" type="image/webp"/>
        </item>
    </channel>
</rss>