# POST 09: THE SHIELD > The Security Layer: When $2 Billion in Losses Demands Mathematical Proof

**Published by:** [The New Digital Renaissance](https://paragraph.com/@drdavide/)
**Published on:** 2026-01-18
**URL:** https://paragraph.com/@drdavide/post-09-the-shield

## Content

Week 9 of Building in Public | Genesis Cohort — Builder #2 | January 2026

### I. THE $2 BILLION QUESTION

In 2024, hackers stole over $2.2 billion from Web3 protocols. Not through sophisticated nation-state attacks. Not through zero-day exploits that required years of research. Through bugs. Simple bugs in smart contract code that "looked fine" to auditors.

The DAO hack. Wormhole. Ronin Network. Poly Network. Hundreds of millions—sometimes over half a billion—gone in minutes. Not because the code was complex. Because the audits weren't enough.

Here's the uncomfortable truth the industry doesn't want to admit:

- 80% of deployed smart contracts have vulnerabilities.
- Less than 10% are ever formally verified.
- Bugs remain undetected for an average of six months.

Traditional audits catch the obvious. Automated scanners find the simple. But the complex vulnerabilities—reentrancy attacks buried three calls deep, access control flaws that only manifest under specific conditions, economic exploits that drain liquidity pools through mathematically valid but unintended behaviors—these slip through. Every. Single. Time.

Until someone finds them. And by then, the money is gone.

### II. THE AUDIT ILLUSION

Let's be honest about what a traditional smart contract audit actually is.

**The Process:**

1. You pay $50,000-$500,000.
2. Expert humans read your code for 2-6 weeks.
3. They write a report listing what they found.
4. You fix those issues.
5. You deploy with a "badge" saying you were audited.

**The Problem:** Human auditors are brilliant. They catch real bugs. But they're fundamentally limited by:

- **Time:** They have days or weeks to review code. Attackers have forever.
- **Scope:** They check what they think to check. Attackers check everything.
- **Consistency:** Different auditors, different findings. No guarantees.
- **Scale:** A human can only read so much code. Protocols are getting more complex.
- **Subjectivity:** "We found no critical issues" ≠ "There are no critical issues."

When an auditor says "we reviewed your code and found no vulnerabilities," they're really saying: "In the time we had, checking the things we thought to check, we didn't find anything obvious."

That's not a guarantee. That's an opinion. And opinions don't stop hackers.

### III. THE MATHEMATICAL ALTERNATIVE

What if instead of "We looked at your code and found no issues," you could say: "We mathematically PROVED your contract cannot be exploited in these specific ways."

Not an opinion. A proof. Not "we didn't find anything." Rather: "We formally verified that this vulnerability is impossible."

This is what formal verification offers. It's not new—it's been used for decades in aerospace, nuclear systems, and chip design. Industries where "we think it's fine" isn't acceptable.

The concept is simple:

1. Define what "correct" means (the specification).
2. Build a mathematical model of your code.
3. Prove the model satisfies the specification.
4. If the proof succeeds: mathematically guaranteed correct.
5. If it fails: you get a concrete counterexample showing exactly how it breaks.

No sampling. No heuristics. No "we checked a lot of cases." All possible states. All possible inputs. Mathematical certainty.

### IV. WHY FORMAL VERIFICATION ISN'T EVERYWHERE (YET)

If formal verification is so powerful, why isn't every smart contract formally verified? Three reasons:

**1. Expertise Scarcity**

There are perhaps only a few hundred engineers in the world with deep formal verification expertise. They're expensive. They're in demand. And they're not scaling. Writing formal specifications requires understanding both the code AND the mathematics. Most smart contract developers know Solidity, not temporal logic.

**2. Cost and Time**

A formal verification engagement can cost $100,000-$500,000 and take months.
For a startup shipping fast, that's often not feasible. The tooling is complex. The learning curve is steep. The feedback loops are slow.

**3. The Specification Problem**

Here's the catch-22: to formally verify code, you need a formal specification of what "correct" means. But writing that specification is the hardest part. If you specify the wrong properties, you prove the wrong things. The code might be "verified" against a flawed spec, giving false confidence. Garbage specification in, garbage proof out.

This is why formal verification has remained the domain of experts working on high-value contracts with big budgets. Until now.

### V. ENTER THE SHIELD: AI-POWERED FORMAL VERIFICATION

The Shield is an AI-powered formal verification platform that makes mathematical security proofs accessible, automated, and scalable.

Not "AI that finds bugs" (there are plenty of those). Not "formal verification as a service" (expensive, slow, manual). AI that automates the hardest parts of formal verification, making mathematical proofs practical for every smart contract.

#### The Breakthrough: AI-Generated Specifications

The biggest bottleneck in formal verification is writing specifications. The Shield uses large language models to:

- **Analyze your code** → understand what it's supposed to do
- **Infer invariants** → generate candidate properties that should always be true
- **Translate intent to math** → convert natural language descriptions into formal specifications
- **Learn from patterns** → use a database of verified contracts to suggest relevant properties

Instead of needing a formal methods PhD to write specifications, developers describe what their contract should do in plain English. The AI generates candidate formal specs. Humans review and approve. The prover does the rest. Specification generation goes from months to hours.

#### The Formal Engine: Mathematical Rigor

Underneath the AI layer, The Shield uses battle-tested formal methods:

**Model Checking:** Exhaustively explores all possible states of your contract.
If a bad state is reachable, it finds it. If not, it proves it's impossible.

**Theorem Proving:** Mathematically derives that certain properties hold for ALL possible inputs, not just sampled ones.

**Symbolic Execution:** Tracks variable constraints through all execution paths, finding edge cases that concrete testing misses.

**Temporal Logic:** Verifies properties over time—"this can never happen," "this must eventually happen," "if X then always Y."

The AI makes it accessible. The math makes it rigorous.

#### The Result: Proofs, Not Opinions

When The Shield verifies your contract, you get:

- ✅ Mathematical proof that specified properties hold across ALL possible executions
- ✅ Concrete counterexamples when properties are violated, showing exactly how attacks would work
- ✅ On-chain attestation that verification was performed (immutable, verifiable by anyone)
- ✅ Continuous monitoring as your code evolves (CI/CD integration)

Not "we audited your code." Rather: "We proved these specific attack classes are impossible."

### VI. HOW IT WORKS

**Step 1: Upload Your Code**

Connect your repository or paste your smart contract.
The Shield supports:

- Solidity (EVM)
- Rust (Solana)
- Move (Aptos/Sui)
- More chains coming

**Step 2: AI Generates Specifications**

The AI analyzes your code and generates candidate properties:

```
[AI-GENERATED SPEC CANDIDATES]
Invariant 1: Total supply never exceeds MAX_SUPPLY
  Confidence: 98% | Source: Token standard patterns
Invariant 2: Only owner can call administrative functions
  Confidence: 94% | Source: Access control analysis
Invariant 3: User balance never goes negative
  Confidence: 99% | Source: Mathematical constraint
Invariant 4: Reentrancy impossible on withdraw()
  Confidence: 87% | Source: Control flow analysis

[SUGGESTED ADDITIONS]
Based on similar DeFi protocols, consider verifying:
- Flash loan attack resistance
- Price oracle manipulation bounds
- Liquidity pool invariants
```

You review, modify, add your own, and approve.

**Step 3: Formal Verification Runs**

The prover takes over:

```
[VERIFICATION IN PROGRESS]
Checking Invariant 1: Total supply...          ✅ PROVEN
Checking Invariant 2: Access control...        ✅ PROVEN
Checking Invariant 3: Balance non-negative...  ✅ PROVEN
Checking Invariant 4: Reentrancy...            ❌ COUNTEREXAMPLE FOUND

[COUNTEREXAMPLE]
Attack vector discovered:
1. Attacker calls withdraw(100)
2. Fallback triggers reentrant call to withdraw(100)
3. Balance check passes (not yet updated)
4. Funds drained: 2x intended amount

Suggested fix: Add reentrancy guard or checks-effects-interactions pattern
```

**Step 4: Fix and Re-verify**

You fix the issue. Run again. This time:

```
Checking Invariant 4: Reentrancy...            ✅ PROVEN

[VERIFICATION COMPLETE]
All 4 invariants proven for ALL possible inputs and states.
Mathematical guarantee: These attack classes are impossible.
```

**Step 5: On-Chain Attestation**

The proof is published on-chain:

```
[VERIFICATION ATTESTATION]
Contract: 0x7a3b...
Timestamp: 2026-03-15T14:30:00Z
Properties verified: 4
Prover version: Shield v1.2.0
Proof hash: 0x9f2e...
```

Verifiable by anyone. Immutable. Composable. Other protocols can check: "Has this contract been Shield-verified?"
before interacting.

### VII. THE AGENT SECURITY CRISIS

Smart contracts are just the beginning. A far bigger threat is emerging.

#### The Rise of Agentic AI

AI agents aren't chatbots. They're autonomous systems that:

- Execute code
- Access databases
- Call APIs
- Send emails
- Move money
- Make decisions without human approval

In the Intention Economy, these agents will manage your treasury, execute trades on your behalf, and interact with other agents at machine speed. And they're fundamentally insecure.

#### Why Traditional Security Fails for Agents

Traditional security is built around human users and direct system access. It assumes:

- Actions come from authenticated humans
- Intentions can be inferred from identity
- Permissions apply to the person making the request
- Audit logs show WHO did WHAT

AI agents break every assumption.

**The Confused Deputy Problem:** An AI agent acts as a "deputy" for the user—but the agent has elevated privileges the user doesn't have (API keys, database access, system credentials). If an attacker manipulates the agent through natural language, they leverage the agent's privileges, not their own. This isn't theoretical. With a chatbot, malicious prompts yield offensive text. With an agent, a malicious prompt (prompt injection) can result in unauthorized data exfiltration, database corruption, or Server-Side Request Forgery (SSRF).

**Authorization Bypass:** When actions are executed by an AI agent, authorization is evaluated against the agent's identity, not the requester's. As a result, user-level restrictions no longer apply. Your carefully designed permission system? Bypassed.

**Attribution Collapse:** Logging and audit trails attribute activity to the agent's identity, masking who initiated the action and why. With agents, security teams have lost the ability to enforce least privilege, detect misuse, or reliably attribute intent.
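Attribution collapse is easy to demonstrate. In the minimal sketch below (all names are illustrative, not part of any real agent framework), every tool call is authorized and logged under the agent's single service identity, so the audit trail cannot distinguish a legitimate user from an attacker acting through prompt injection:

```python
from dataclasses import dataclass, field

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, actor: str, action: str) -> None:
        self.entries.append((actor, action))

class Agent:
    """Toy agent: one privileged service identity shared by every requester."""
    def __init__(self, service_id: str, log: AuditLog):
        self.service_id = service_id
        self.log = log

    def run_tool(self, requester: str, action: str) -> None:
        # Authorization and logging both use the AGENT's identity.
        # The requester (legitimate user or prompt injector) vanishes.
        self.log.record(self.service_id, action)

log = AuditLog()
agent = Agent("svc:treasury-agent", log)
agent.run_tool(requester="alice", action="read_report")
agent.run_tool(requester="attacker-via-injection", action="export_all_customers")

# Both entries attribute to the same service identity:
assert log.entries == [
    ("svc:treasury-agent", "read_report"),
    ("svc:treasury-agent", "export_all_customers"),
]
```

Restoring attribution means propagating the originating principal through every tool call and logging it alongside the agent identity.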
When something goes wrong, you can't even figure out what happened.

#### The New Threat Landscape

**Prompt Injection:** Attackers embed malicious instructions in external content the agent retrieves. The agent follows the instructions because it has no concept of trust boundaries.

**Memory Poisoning:** Stateful agents with memory can be gradually corrupted over time. Attackers plant seeds that activate later.

**Tool Misuse:** Unit 42 researchers documented successful attacks against popular frameworks where agents were manipulated through natural language to perform SQL injection, steal service account tokens from cloud metadata endpoints, and exfiltrate credentials from mounted volumes. These attacks succeeded with over 90% reliability using simple conversational instructions—no sophisticated exploit development required.

**Privilege Escalation:** Privilege compromise arises when attackers exploit weaknesses in permission management to perform unauthorized actions. The result is that an agent gains more access than originally intended, allowing attackers to pivot across systems.

**Cascading Failures:** In multi-agent systems, one compromised agent can corrupt others. A single injection can cascade through an entire agent network.

#### What Real AI Security Looks Like

Real AI security isn't about jailbreak screenshots and prompt filters. It looks like:

**Capability Scoping:** The Principle of Least Privilege is critical for autonomous agents. You must assume an LLM will hallucinate or fall victim to injection. Security relies on how you scope the tools themselves rather than trusting the model to use them correctly.

**Least Privilege for Agents:** Agents should have the minimum access necessary to accomplish their tasks. Use scoped API keys that only grant the specific required permissions—for example, read-only database credentials rather than full administrative access.
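What scoped credentials buy you can be sketched in a few lines. This is a toy model (the class and scope names are hypothetical, standing in for real database credentials): even a fully hijacked agent cannot perform an operation its credential never had.

```python
class ScopeError(PermissionError):
    pass

class DatabaseCredential:
    """Toy credential: every operation is checked against an explicit scope set."""
    def __init__(self, scopes: frozenset):
        self.scopes = scopes

    def execute(self, operation: str, query: str) -> str:
        if operation not in self.scopes:
            raise ScopeError(f"credential lacks scope: {operation}")
        return f"OK: {operation}"

# The agent is issued read-only scopes, never the administrative set.
agent_cred = DatabaseCredential(frozenset({"select"}))

assert agent_cred.execute("select", "SELECT name FROM customers") == "OK: select"

# A manipulated agent asking for destructive access fails at the credential,
# not at the model's goodwill.
failed = False
try:
    agent_cred.execute("drop", "DROP TABLE customers")
except ScopeError:
    failed = True
assert failed
```

The design point: the boundary lives in the credential, outside the model, so no prompt can talk its way past it.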
**Tool-Level Restrictions:** Instead of generic "database access" tools, create narrow-purpose tools like "get customer by ID" with hard-coded queries and parameter validation, preventing SQL injection and unauthorized access.

**Explicit Action Approval:** Human-in-the-loop for high-risk decisions. Not everything, but where it actually matters.

**Tool Call Auditing:** Every action logged, every decision traceable, every tool call recorded.

**Rate Limits and Blast-Radius Containment:** Agents can enter recursive loops, increasing API costs and potentially DoS-ing internal services. Implement circuit breakers and token limits to halt runaway execution.

**Kill Switches That Work:** Emergency stops that actually stop the agent, not just log a warning.

#### The Gap: No Mathematical Guarantees

Here's the problem: all of these controls are implemented through configuration, policy, and hope.

"We configured least privilege." Did you prove it? "We scoped the API keys." Can you verify it mathematically? "We added rate limits." Are you certain they can't be bypassed?

Configuration can be wrong. Policies can have gaps. Hope isn't a security strategy.

What if you could PROVE:

- "This agent can NEVER access data outside its scope"
- "This agent will ALWAYS stop when balance falls below X"
- "This agent can NEVER execute more than Y transactions per hour"
- "This agent will NEVER interact with unverified contracts"

Not "we configured it that way." Mathematical proof.

#### The Shield for Agent Security

This is where formal verification meets AI safety. The same techniques that prove smart contracts are secure can prove agent policies are enforced:

**Policy Verification:** Define agent safety constraints as formal specifications. Prove the agent's decision logic satisfies them across ALL possible inputs.

**Capability Bounds:** Mathematically verify that an agent's tool access is bounded—prove it CAN'T access restricted resources, not just that it's "configured not to."
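The narrow-purpose tool pattern described above ("get customer by ID" with a hard-coded query and parameter validation) looks something like the sketch below. Names and the in-memory "database" are illustrative; a real tool would issue a parameterized query against an actual store:

```python
import re

# Toy in-memory "database"; in reality this would be a parameterized SQL lookup.
CUSTOMERS = {"C001": "Ada Lovelace", "C002": "Alan Turing"}

ID_PATTERN = re.compile(r"C\d{3}")

def get_customer_by_id(customer_id: str) -> str:
    """Narrow-purpose tool: one hard-coded lookup with strict input validation.

    The agent can call this, and only this. There is no generic
    'run arbitrary SQL' surface for an injected prompt to abuse.
    """
    if not ID_PATTERN.fullmatch(customer_id):
        raise ValueError("invalid customer id")
    return CUSTOMERS.get(customer_id, "not found")

assert get_customer_by_id("C001") == "Ada Lovelace"

# Classic injection payloads fail validation before touching the data layer.
rejected = False
try:
    get_customer_by_id("C001'; DROP TABLE customers; --")
except ValueError:
    rejected = True
assert rejected
```

Because the query shape is fixed and the only free parameter is validated, the tool's blast radius is bounded no matter what the model is tricked into requesting.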
**Interaction Protocol Safety:** When agents negotiate with other agents, prove the protocol is deadlock-free, fund-safe, and manipulation-resistant.

**Runtime Enforcement:** Mitigation requires defining safe operating boundaries, implementing human-in-the-loop for critical decisions, applying least privilege to all agent actions, and using rate limiting with circuit breakers. The Shield can verify these boundaries are PROVABLY enforced.

Example:

```
[AGENT POLICY SPECIFICATION]
Agent: Treasury Manager
Owner: did:fnf:0x7a3b...

SAFETY CONSTRAINTS:
1. withdrawal_amount <= treasury_balance * 0.10
2. daily_transactions <= 50
3. recipient IN approved_addresses
4. NEVER interact_with(contract) WHERE verified(contract) = false

[VERIFICATION RESULT]
Constraint 1: ✅ PROVEN for all execution paths
Constraint 2: ✅ PROVEN - counter bounded by state variable
Constraint 3: ✅ PROVEN - allowlist enforced at tool level
Constraint 4: ✅ PROVEN - external call gated by Shield verification

CERTIFICATE: All safety constraints mathematically verified.
This agent CANNOT violate these policies.
```

Not "we think it's safe." Proof.

### VIII. WHY THIS MATTERS FOR THE AGENT ECONOMY

The Shield isn't just about securing today's smart contracts. It's infrastructure for the Post Web.

#### Autonomous Agents Need Provable Safety

In the Intention Economy, AI agents will:

- Execute transactions on your behalf
- Manage your treasury
- Interact with protocols autonomously
- Make economic decisions 24/7

Would you trust an AI agent with your life savings if you couldn't prove it won't drain your wallet?
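As a baseline, the four safety constraints from the Treasury Manager policy specification above can at least be enforced as a runtime guard. The sketch below (class and field names are hypothetical, not Shield's actual API) checks each constraint dynamically; formal verification makes the stronger claim that such checks cannot be bypassed on any execution path:

```python
from dataclasses import dataclass

@dataclass
class TreasuryPolicy:
    """Runtime guard for the four treasury constraints (dynamic checks only)."""
    treasury_balance: float
    approved_addresses: set
    verified_contracts: set
    daily_limit: int = 50
    daily_count: int = 0

    def check_withdrawal(self, amount: float, recipient: str, contract: str) -> bool:
        if amount > self.treasury_balance * 0.10:      # constraint 1: <= 10% of treasury
            return False
        if self.daily_count >= self.daily_limit:       # constraint 2: <= 50 tx/day
            return False
        if recipient not in self.approved_addresses:   # constraint 3: allowlisted recipient
            return False
        if contract not in self.verified_contracts:    # constraint 4: verified contract only
            return False
        self.daily_count += 1
        return True

policy = TreasuryPolicy(
    treasury_balance=1000.0,
    approved_addresses={"0xabc"},
    verified_contracts={"0xdef"},
)

assert policy.check_withdrawal(50.0, "0xabc", "0xdef") is True     # within all bounds
assert policy.check_withdrawal(200.0, "0xabc", "0xdef") is False   # > 10% of treasury
assert policy.check_withdrawal(50.0, "0xevil", "0xdef") is False   # unapproved recipient
```

A runtime guard only rejects the violations it is asked about; the point of verification is to prove the guard itself is sound and unbypassable.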
The Shield provides formal verification for:

**Agent Behavior:** Prove an agent's decision logic satisfies safety constraints (will never approve transactions above X, will never interact with unverified contracts, will always preserve a minimum balance).

**Agent-to-Agent Protocols:** Verify that when agents negotiate and transact, the interaction protocols are secure (no deadlocks, no fund loss, no exploitation).

**On-Chain AI Models:** As AI models move on-chain, verify their behavior is bounded and predictable (robustness, fairness, safety properties).

This is Nexus 7 (Autonomous Agents) infrastructure. Without provable safety, autonomous agents are a liability. With The Shield, they're trustworthy.

#### The Multi-Species Economy Requires Mathematical Trust

When humans and AI agents operate as economic peers, humans can't manually review every agent action. Agents interact at machine speed. Value flows autonomously. Trust must be mathematically guaranteed, not socially constructed. The Shield makes this possible.

### IX. THE HARVEST: SECURITY FOR THE SOVEREIGNTY STACK

#### What The Shield Extracts to the 7 Nexi

The Shield is Genesis Cohort Builder #2, and like The Origin, it feeds multiple Nexi:

**PRIMARY: NEXUS 7 (Autonomous Agents)**

- Formal verification frameworks for agent behavior
- Safety property specifications for autonomous systems
- Agent-to-agent protocol verification
- On-chain AI model verification standards
- LLM-Verifier convergence protocols (AI + formal methods integration)
- Capability scoping verification (prove agents can ONLY access permitted resources)
- Least privilege enforcement proofs
- Tool call boundary verification
- Kill switch and circuit breaker proofs
- Human-in-the-loop enforcement verification

**PRIMARY: NEXUS 2 (Trust & Privacy)**

- Zero-Knowledge proof generation from formal verification
- Privacy-preserving verification: prove contract properties WITHOUT revealing contract logic
- Integration with Prove and Win Paradigm (PWP) architecture
- Verifiable computation attestations

**CROSS-NEXUS SECURITY:**

**Nexus 1 (Venture Creation)**
- Verify smart contract cap tables
- Prove equity distribution logic is correct

**Nexus 3 (Resource Allocation)**
- Constrained evaluation logic verification
- BDD-based decision frameworks for grant allocation
- Prove allocation algorithms are fair and deterministic

**Nexus 4 (Value Exchange)**
- Verify payment protocol correctness
- Prove auction mechanisms can't be exploited
- Validate token transfer logic

**Nexus 5 (Treasury Management)**
- Verify yield strategy contracts
- Prove multisig protocols are secure
- Validate fund flow constraints

**Nexus 6 (Autonomous Governance)**
- Verify voting mechanisms
- Prove proposal execution logic
- Validate delegation protocols

#### The ZK Connection: Prove Without Revealing

Here's where The Shield intersects with Nexus 2's privacy architecture: traditional verification requires exposing your contract logic for audit. But what if you could prove your contract is secure without revealing how it works?
The same mathematical structures that power formal verification—Binary Decision Diagrams, state transition proofs, invariant checking—can generate Zero-Knowledge proofs.

What this enables:

- ✅ Prove your DeFi protocol is exploit-free without revealing your proprietary trading logic
- ✅ Prove your AI agent satisfies safety constraints without exposing your competitive advantage
- ✅ Prove your governance mechanism is manipulation-resistant without showing attackers how it works
- ✅ Prove compliance without revealing business-sensitive contract details to regulators

This is the convergence of security and privacy. The Shield doesn't just verify contracts. It generates cryptographic proofs that verification occurred—proofs that anyone can check, but that reveal nothing about the underlying logic. This feeds directly into The Origin's Prove and Win Paradigm. Creators can prove their ventures are secure without exposing their IP.

#### The Extraction Timeline

- **Month 6:** Core verification engine battle-tested on Genesis Cohort contracts
- **Month 12:** AI specification inference used by 10+ ventures
- **Month 18:** Open-source release of verification libraries
- **Month 24:** Formal agent safety framework as Nexus 7 infrastructure
- **Month 30:** ZK proof generation from verification as Nexus 2 infrastructure

#### Open Source Commitment

The Shield's core verification tools will be open source. Why? Because security is a public good. The entire ecosystem benefits when more contracts are formally verified. Proprietary security creates islands of safety in an ocean of risk.

What's open:

- Formal verification engine
- Specification libraries
- Standard property templates
- CI/CD integrations
- ZK proof generation primitives

What's premium:

- AI specification generation
- Continuous monitoring
- Enterprise support
- Custom verification services
- Advanced ZK circuit optimization

The infrastructure becomes public. The intelligence layer is the business.

### X. THE DUAL FILTER

#### Post Web Alignment ✅

**Market Direction:** Security is existential for the Intention Economy.
Agents can't operate without provable safety. This is infrastructure for where markets are going.

**Business Model:** Platform/protocol hybrid. Verification results live on-chain, composable by other protocols. Not a walled garden.

**Technology Fit:** AI-native (specification generation), on-chain (attestations), agent-ready (safety proofs for autonomous systems).

#### Sovereign Economy Alignment ✅

**Human Sovereignty:** Enables users to trust agents with their assets by providing mathematical guarantees, not blind faith. Sovereignty requires verified safety.

**Fair Value Distribution:** Open-source core means security improvements benefit everyone. No extraction from essential safety infrastructure.

**Anti-Extraction:** Prevents the ultimate extraction—theft. Every exploit prevented is value preserved for the ecosystem.

### XI. COMPETITIVE POSITIONING

#### The Security Landscape

**Traditional Auditors (Trail of Bits, OpenZeppelin)**
- Strength: Experienced humans, reputation
- Weakness: Manual, slow, expensive, subjective
- The Shield's advantage: Mathematical proofs, not opinions

**Formal Verification Specialists (Certora, Runtime Verification)**
- Strength: Mathematical rigor, proven technology
- Weakness: Requires expert specification writers, expensive, slow
- The Shield's advantage: AI automates specification generation

**Hybrid Platforms (CertiK)**
- Strength: Scale, AI + formal methods
- Weakness: Centralized, closed source
- The Shield's advantage: Open source, decentralized attestations

**AI Security Tools (Octane, emerging)**
- Strength: Fast, automated
- Weakness: No mathematical proofs, just heuristics
- The Shield's advantage: AI enables formal proofs rather than replacing them

#### The Unique Position

The Shield combines:

- Mathematical rigor of formal verification specialists
- AI automation of modern security tools
- Open source commitment of the sovereign economy
- On-chain attestations for composable trust
- Agent-ready verification for the Post Web
- Privacy-preserving proofs via ZK generation from verification

Nobody else occupies this position.

### XII. THE TEAM

**Status:** TBA

The Shield requires rare expertise at the intersection of:

- Formal methods and theorem proving
- AI/ML and large language models
- Smart contract development
- Security research

We're actively building the team. If you have this background and believe security should be a public good, reach out.

### XIII. TIMELINE & MILESTONES

**2026**
- Q1: Core verification engine development
- Q2: AI specification inference MVP
- Q3: Genesis Cohort integration (verify Origin, Foundation, and other ventures)
- Q4: Public beta, first external users

**2027**
- H1: Multi-chain expansion (Solana, Move-based chains)
- H2: Agent verification framework; open-source release of core libraries

**2028**
- Full Nexus 7 integration
- Enterprise adoption
- Decentralized attestation protocol

### XIV. THE CALL TO ACTION

**For Protocols**

Stop hoping your audit caught everything. Get on the Shield waitlist: shield@fucinanexus.foundation. Early access for Genesis Cohort participants and partners.

**For Security Researchers**

Help build the future of smart contract security. Contribute to open-source formal verification: we're building in public, and your expertise matters.

**For Formal Methods Experts**

This is your chance to make formal verification mainstream. Join the team or advise: the intersection of AI and formal methods is the frontier.

**For the Ecosystem**

Every verified contract makes the ecosystem safer. Every exploit prevented preserves trust. Support open-source security, because security is a public good.

### XV. WHAT THIS MEANS FOR THE FORGE

#### Security as Foundation

You can't build the sovereign economy on exploitable contracts.
Every Nexus depends on secure smart contracts:

- **Venture creation (Nexus 1):** Cap table contracts must be bulletproof
- **Identity (Nexus 2):** DID contracts must be tamper-proof AND privacy-preserving
- **Allocation (Nexus 3):** Grant distribution must be verifiable
- **Exchange (Nexus 4):** Payment contracts must be exploit-free
- **Treasury (Nexus 5):** Yield strategies must be mathematically sound
- **Governance (Nexus 6):** Voting must be manipulation-resistant
- **Agents (Nexus 7):** Autonomous systems must be provably safe

The Shield secures all of it.

#### The Privacy Bridge: Shield + Origin + Nexus 2

Here's how the pieces connect: The Origin lets creators build ventures through AI conversation, with all IP protected via the Prove and Win Paradigm (PWP). The Shield formally verifies that those ventures' smart contracts are secure.

**The ZK bridge:** The Shield can generate Zero-Knowledge proofs from its verification process—proofs that feed directly into PWP.

**What this enables:** A creator builds a DeFi protocol with The Origin. The Shield verifies it's secure. The creator can now prove to investors: "This protocol has been mathematically verified against reentrancy, overflow, and access control vulnerabilities"—WITHOUT revealing the actual contract logic. Security + Privacy = Sovereign.

#### The Harvest Model in Action

The Origin helps creators build ventures. The Shield ensures those ventures are secure. As Genesis Cohort ventures build, they use The Shield. The Shield proves their contracts are safe. Those verification tools become open-source infrastructure. Future ventures inherit battle-tested security.

The flywheel:

1. Ventures build on the Nexi
2. The Shield verifies their contracts
3. Verification tools improve
4. ZK proofs feed into Nexus 2's privacy layer
5. Tools get extracted as public infrastructure
6. The next cohort inherits better security AND privacy
7. Repeat

#### Mathematical Trust for the Agent Economy

The Origin creates companies through AI conversation. The Shield proves those companies' smart contracts are secure.
The Foundation will give agents identity. The Shield will prove agent behavior is safe. When AI agents operate autonomously in the Intention Economy, mathematical trust isn't optional. The Shield provides it.

### XVI. NEXT WEEK: THE FOUNDATION

The Origin creates companies. The Shield secures their contracts. But who ARE these companies? Who are the agents? How do they prove identity across chains, across contexts, across the human-AI boundary?

Next week: The Foundation.

- Sovereign identity for humans AND AI agents
- Decentralized reputation that travels everywhere
- The DID layer for the Post Web

Week 10. January 23. The reveals continue.

### XVII. CLOSING: MATHEMATICAL CERTAINTY IN AN UNCERTAIN WORLD

$2 billion stolen last year. $1.7 billion the year before. How much next year? The answer doesn't have to be "more."

When billions are at stake, "we looked at your code" isn't enough. You need: "We proved it cannot fail."

Not opinions. Proofs. Not "we found nothing." Rather: "These attacks are mathematically impossible." Not trust in auditors. Trust in mathematics.

The Shield provides mathematical certainty in an uncertain world. Because the sovereign economy can't be built on hope. It must be built on proof.

Ex Fucina, Nexus.
From the Forge, a Network.

**FOOTER:**

Genesis Cohort Applications: Late February 2026
Website: fucinanexus.foundation
Contact: shield@fucinanexus.foundation

**Building in Public:**

- Week 1: The Sovereignty Thesis
- Week 2: The Six Nexi
- Week 2.5: The Seventh Nexus
- Week 3: The Sovereignty Thesis Revisited
- Week 4: The Harvest Model
- Week 5: Why DAO
- Week 6: The $FORGE Token
- Week 7: How to Participate
- Week 8: The Origin
- Week 9: The Shield (You Are Here)
- Week 10: The Foundation (Next Week)

Word Count: ~4,500 words | Reading Time: ~18 minutes

## Publication Information

- [The New Digital Renaissance](https://paragraph.com/@drdavide/): Publication homepage
- [All Posts](https://paragraph.com/@drdavide/): More posts from this publication
- [RSS Feed](https://api.paragraph.com/blogs/rss/@drdavide): Subscribe to updates