Ethereum co-founder Vitalik Buterin argues in his latest article that, as digital technology integrates deeply into daily life, health, finance, and governance, building a technological system with "full-stack openness" and "verifiability" is crucial to avoiding the risk of a "digital dystopia." The article elaborates on three core areas:
* Health Sector: Using inequitable vaccine R&D and distribution and health data collection as examples, he illustrates how proprietary technology leads to access inequality and trust crises. Open, verifiable technologies (e.g., low-cost open-source vaccines, personal health devices) can enhance data security and fairness, reducing hacking and monopoly risks.
* Personal & Commercial Digital Technology: Comparing traditional signatures with blockchain transaction efficiency, he points out security vulnerabilities and power concentration issues in closed systems. Open-source, verifiable hardware and software (e.g., multisig wallets, secure chips) can strengthen user control over assets and data, resisting external interference.
* Digital Civic Technology: Government digital tools like e-voting, if operating as black boxes, undermine electoral credibility. Open, verifiable systems ensure procedural transparency, support local governance innovation, and balance public safety with civil liberties through open-source physical security devices (e.g., surveillance cameras).
The article proposes that achieving this path relies on cryptographic technology, formal verification, open-source operating systems, and hardware. It calls for prioritizing implementation in high-security-need scenarios like healthcare, finance, and elections to build a "retro-futurist" world combining technological benefits with individual sovereignty.
---
Summary
Author: Vitalik Buterin
Compiled by: Saoirse, Foresight News
Perhaps the most significant trend of this century so far can be summarized by the phrase "the internet has become real life." This trend began with email and instant messaging – private conversations, conducted for millennia via word-of-mouth and pen/paper, moved to digital infrastructure. Then came digital finance, encompassing both crypto-finance and the traditional financial sector's own digital transformation. Later, digital technology permeated health: through smartphones, personal health tracking watches, and data inferred from consumption habits, information about our bodies is being processed by computers and networks. In the next two decades, I expect this trend to sweep through more areas, including various government functions (potentially even elections eventually), monitoring physical/biological metrics and potential threats in public environments, and ultimately, via brain-computer interfaces, potentially touching our very thoughts.
I consider these trends inevitable: the benefits are too great, and in a competitive global environment, civilizations rejecting these technologies would first lose competitiveness, then cede sovereignty to those embracing them. However, besides offering powerful benefits, these technologies profoundly impact power dynamics within and between nations.
Civilizations benefiting most from the new technological wave aren't the "consumers" but the "producers." Centrally planned equal-access projects designed around closed platforms/interfaces can, at best, capture only a fraction of the value, and often fail outside preset "normal" scenarios. Furthermore, in the future technological landscape, our trust in technology will increase significantly. Once that trust is broken (e.g., by backdoors or security vulnerabilities), serious problems arise. Even the mere possibility of broken trust forces reliance on inherently exclusive social trust models ("Was this made by someone I trust?"). This creates chain reactions throughout the technology stack: the "dominant actor" becomes the one who defines the "special circumstances."
To avoid these problems, technologies across the stack – including software, hardware, and biotechnology – need two interrelated core properties: genuine openness (i.e., open source, including free licensing) and verifiability (ideally, end-users should be able to verify directly).
The internet is real life. We want it to be a utopia, not a dystopia.
The Importance of Openness and Verifiability in Health
During COVID-19, the consequences of unequal access to production means were exposed. Vaccines were produced in few countries, leading to vast gaps in access times: wealthy nations got high-quality vaccines in 2021, while others received lower-quality versions only in 2022 or 2023. Initiatives for equitable access had limited effect because vaccine production relied on capital-intensive proprietary processes feasible only in few locations.
A second major vaccine issue was opacity in related science and communication: attempts to tell the public vaccines were "completely risk-free, with no side effects," a claim contradicting facts, ultimately greatly exacerbated public distrust. This distrust has now escalated to questioning half a century of scientific achievements.
Solutions exist for both problems. For instance, the vaccines developed by PopVax (funded by Balvi) have lower R&D costs and greater openness in the R&D process – reducing access inequality and making safety/efficacy analysis and verification easier. Future vaccines could even be designed with "verifiability" as a core goal.
Similar issues exist in digitizing biotechnology. Speaking with longevity researchers, they almost all say the future of anti-aging medicine is "personalized" and "data-driven." Providing precise medication/nutrition advice requires understanding one's real-time bodily state; achieving this necessitates large-scale, real-time digital data collection/processing.
This logic also applies to defensive biotech aimed at "risk prevention," like pandemic control. The earlier a pandemic is detected, the more likely containment at source is; even if not contained, each week gained aids preparedness/R&D. During a pandemic, real-time knowledge of outbreak locations is valuable for deploying measures: if infected individuals self-isolate within 1 hour of knowing, spread is 72 times less than if active for 3 days while infectious; identifying "20% of locations causing 80% of spread" allows targeted air quality improvements. Achieving this requires (1) deploying many sensors and (2) sensors having real-time communication capability.
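The "72 times" figure follows from simple arithmetic if onward spread is taken as proportional to the time an infectious person spends circulating in public – an illustrative simplification, not a claim from the article itself:

```python
# Back-of-envelope model (illustrative assumption, not from the article):
# onward spread is treated as proportional to the hours an infectious
# person spends circulating in public before isolating.

HOURS_PER_DAY = 24

def spread_reduction_factor(fast_isolation_hours: float,
                            slow_isolation_days: float) -> float:
    """Ratio of public-infectious time between slow and fast isolation."""
    slow_hours = slow_isolation_days * HOURS_PER_DAY
    return slow_hours / fast_isolation_hours

# Isolating within 1 hour vs. circulating for 3 days:
print(spread_reduction_factor(1, 3))  # 72.0
```

Under this toy model, every hour shaved off the notification-to-isolation delay reduces exposure time linearly, which is why real-time sensing and communication matter so much.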
Looking further at "sci-fi" directions, brain-computer interfaces (BCIs) promise significant productivity gains, better mutual understanding via "telepathic communication," and safer paths to high-intelligence AI.
If bio/health-tracking infrastructure (personal/spatial) is proprietary, data defaults to large corporations. They control application development, excluding others. While API access might be offered, it's often limited, used for "monopoly rent-seeking," and revocable. This means few individuals/firms control core 21st-century resources, limiting others' economic benefits.
Conversely, if personal health data lacks security, hackers could blackmail using health issues, extract value via optimized insurance/product pricing, or even plan kidnaps using location data. Your location data (frequently hacked) could also infer health status. If BCIs are hacked, attackers could directly "read" (or worse, "alter") your thoughts. This isn't sci-fi: research shows hacked BCIs could cause loss of motor control.
In summary, these technologies offer great benefits but significant risks – and strong emphasis on "openness" and "verifiability" is an effective way to mitigate them.
The Importance of Openness and Verifiability in Personal/Commercial Digital Tech
Earlier this month, I needed to sign a legally binding document while abroad. Although the country had a national e-signature system, I hadn't preregistered. I ended up printing the document, signing it by hand, going to a DHL office, spending considerable time filling out paper forms, and paying to express-mail it internationally. The process took 30 minutes and cost $119. The same day, I also needed to sign a digital transaction on the Ethereum blockchain – it took 5 seconds and cost $0.10 (to be fair, digital signatures can be done for free without a blockchain).
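For readers unfamiliar with how digital signing works, the sign/verify flow can be illustrated with a toy Lamport one-time signature built only from SHA-256. This is a hash-based scheme chosen because it fits in a few stdlib-only lines, not the ECDSA scheme Ethereum actually uses, and each key pair must sign only one message:

```python
# Toy Lamport one-time signature over SHA-256 (hashlib only).
# Illustrative sketch: NOT Ethereum's ECDSA, and one signature per key.
import hashlib
import secrets

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _bits(digest: bytes):
    # Yield the 256 bits of a 32-byte digest, LSB-first per byte.
    for byte in digest:
        for i in range(8):
            yield (byte >> i) & 1

def keygen():
    # 256 pairs of secret preimages; the public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[_h(s0), _h(s1)] for s0, s1 in sk]
    return sk, pk

def sign(sk, message: bytes):
    # Reveal one preimage per bit of the message digest.
    return [sk[i][b] for i, b in enumerate(_bits(_h(message)))]

def verify(pk, message: bytes, sig) -> bool:
    return all(_h(sig[i]) == pk[i][b] for i, b in enumerate(_bits(_h(message))))

sk, pk = keygen()
sig = sign(sk, b"I agree to the contract")
print(verify(pk, b"I agree to the contract", sig))   # True
print(verify(pk, b"I agree to something else", sig)) # False
```

The point of the comparison stands either way: verification is a pure computation anyone can run locally, with no courier, office hours, or fee involved.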
Such cases are common in corporate/non-profit governance, IP management, etc. Similar "efficiency comparisons" filled blockchain startup pitches over the past decade. Beyond this, the core application of "exercising personal rights digitally" is payments/finance.
Of course, this carries a major risk: what if the software or hardware is hacked? Crypto realized this early – blockchains are "permissionless" and "decentralized"; if you lose access, there's no helpline ("not your keys, not your coins"). Thus, crypto explored solutions like multisig wallets, social recovery wallets, and hardware wallets early on. But in reality, in many scenarios the absence of a trusted third party isn't an ideological choice – it's inherent. Even in traditional finance, trusted third parties hardly protect most people – e.g., only 4% of scam victims recover their losses. And in scenarios involving custody of personal data, once leaked, data cannot be "recalled." Therefore, we need genuine verifiability and security – in software and, ultimately, in hardware.
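The threshold logic behind a multisig wallet can be sketched in a few lines. HMAC-SHA256 stands in for real public-key signatures purely to keep the sketch stdlib-only; an actual wallet would verify ECDSA or similar signatures on-chain:

```python
# Minimal sketch of k-of-n multisig *approval logic* (the policy layer
# of a multisig wallet). HMAC-SHA256 is a stand-in for real signatures.
import hmac
import hashlib

def approve(key: bytes, tx: bytes) -> bytes:
    """One signer's approval of a specific transaction."""
    return hmac.new(key, tx, hashlib.sha256).digest()

def valid_approvals(signer_keys: list, tx: bytes, approvals: list) -> int:
    """Count distinct approvals matching some registered signer."""
    expected = {approve(k, tx) for k in signer_keys}
    return len(expected & set(approvals))

def execute(signer_keys: list, threshold: int,
            tx: bytes, approvals: list) -> bool:
    """The transaction goes through only with >= threshold valid approvals."""
    return valid_approvals(signer_keys, tx, approvals) >= threshold

keys = [b"alice-key", b"bob-key", b"carol-key"]
tx = b"send 1 ETH to 0xabc"

two_sigs = [approve(keys[0], tx), approve(keys[2], tx)]
print(execute(keys, 2, tx, two_sigs))  # True: 2-of-3 met
one_sig = [approve(keys[1], tx)]
print(execute(keys, 2, tx, one_sig))   # False: below threshold
```

Note the set intersection: a signer submitting the same approval twice still counts once, which is the property that makes the "k distinct parties" guarantee meaningful.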
Critically, in hardware, the risks we try to guard against extend far beyond "malicious manufacturers." The core issue is: hardware R&D relies on many external components, mostly closed-source. A lapse in any component can lead to unacceptable security consequences. Research shows that even if software is proven "secure" in an isolated model, microarchitecture choices can break its side-channel resistance. Vulnerabilities like EUCLEAK are harder to detect precisely because components are proprietary. Furthermore, if AI models train on compromised hardware, backdoors can be inserted during training.
Another problem: even if closed centralized systems are secure, they create other drawbacks. Centralization creates "persistent power levers" between individuals, firms, or nations – if your core infrastructure is built/maintained by a "potentially untrustworthy firm" from a "potentially untrustworthy country," you become vulnerable to external pressure. This is what crypto aims to solve – but such problems exist far beyond finance.
The Importance of Openness and Verifiability in Digital Civic Tech
I often interact with people exploring governance models better suited for 21st-century contexts. For example, Audrey Tang works on upgrading existing political systems by empowering local open-source communities and adopting mechanisms like "citizen assemblies," "sortition," and "quadratic voting." Others start from "first principles" – some Russian political scientists drafted a new constitution for Russia, explicitly guaranteeing individual freedoms/local autonomy, emphasizing "pro-peace, anti-aggression" institutional design, and giving direct democracy unprecedented importance. Others (e.g., economists studying land-value taxes/congestion pricing) work to improve their national economies.
Acceptance of these ideas varies, but they share a need for "high-bandwidth participation," making any practical implementation necessarily digital. Pen and paper might suffice for simple property registration or quadrennial elections, but they are useless for scenarios requiring higher frequency and information throughput.
However, historically, security researchers have been skeptical or opposed to digital civic tech like e-voting. Research summarizes core objections well:
"First, e-voting relies on 'black-box software' – the public cannot access the code controlling voting machines. While companies claim 'protecting software prevents fraud/competition,' it also prevents public scrutiny. Companies could easily manipulate software to produce false results. Furthermore, vendors compete, with no guarantee they prioritize 'voter interests' or 'ballot accuracy'."
Numerous real-world cases justify this skepticism.
These objections apply to other similar scenarios. But I predict that as technology advances, a stance of "complete digital refusal" will become increasingly unrealistic. Technology pushes the world towards higher efficiency (for better or worse); if a system resists, people will gradually bypass it, reducing its influence. Therefore, we need another approach: tackle the hard problem of making complex tech solutions "secure" and "verifiable."
In theory, "secure/verifiable" and "open source" are distinct. Proprietary tech can be secure – e.g., aircraft tech is highly proprietary, yet commercial aviation is very safe. However, what the proprietary model cannot achieve is "security consensus" – the ability for mutually distrusting parties to agree on its security.
Civic systems like elections are prime examples needing "security consensus." Another is court evidence collection. Recently, a Massachusetts court invalidated a large volume of breathalyzer evidence because the state crime lab was found to have hidden information about widespread device failures. The ruling noted:
"Were all results problematic? Not necessarily. Most devices involved had no calibration issues. But investigators found the lab concealed evidence of 'broader failure scope,' leading Judge Frank Gaziano to rule that all relevant defendants' due process rights were violated."
"Due process" inherently requires not just fairness/accuracy, but consensus on fairness/accuracy – if the public cannot confirm courts are "following the law," society risks devolving into vigilante justice.
Furthermore, "openness" has intrinsic value. It allows local communities to design governance, identity, etc., systems fitting their goals. If voting systems are proprietary, a country (or province/city) wanting to experiment faces huge hurdles: either convince a company to add their preferred rules as a "new feature," or build from scratch and validate security – drastically increasing the cost of political innovation.
Adopting the "open-source hacker ethic" (encouraging sharing, collaboration, innovation) in these areas grants more agency to local implementers – whether acting individually or as part of governments/businesses. This requires both widely available "easy-to-build-with open-source tools" and infrastructure/codebases under "free licenses" allowing derivative works. If the goal is "reducing power disparities," "copyleft" is especially important.
Another significant direction in civic tech is physical security. Over the past two decades, ubiquitous surveillance cameras raised civil liberty concerns. Unfortunately, the rise of drone warfare makes "not adopting high-tech security" a non-viable option. Even if a country's laws protect liberties, failure to protect citizens from illegal intervention by other states (or malicious actors) renders "freedom" meaningless – and drones make such attacks easier. Thus, defenses are needed, potentially involving many "counter-drone systems," sensors, and cameras.
If these tools are proprietary, data collection is opaque and highly centralized; if "open" and "verifiable," we can explore better models: security devices outputting limited data only in specific scenarios, auto-deleting the rest. Thus, the future of digital physical security could resemble a "digital watchdog" rather than a "digital panopticon." Imagine a world where public surveillance devices must be open-source/verifiable, any citizen has the legal right to randomly select a device, dismantle and verify its compliance, and university clubs could conduct such verification as teaching exercises.
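The "limited output, auto-delete the rest" behavior can be sketched as a simple filter; the threshold, event fields, and names below are invented for illustration, not taken from any real system:

```python
# Sketch of the "digital watchdog" idea: a sensor pipeline that emits
# only a narrow alert in predefined scenarios and discards raw data.
# Threshold and field names are invented for illustration.

ALERT_THRESHOLD = 0.9  # assumed confidence cutoff for a genuine threat

def watchdog(frames):
    """Scan raw frames; retain alerts only, never the frames themselves."""
    alerts = []
    for frame in frames:
        if frame["threat_score"] >= ALERT_THRESHOLD:
            # Emit only the minimal facts needed for a response.
            alerts.append({"time": frame["time"], "kind": frame["kind"]})
        # The raw frame content is dropped here regardless of outcome.
    return alerts

frames = [
    {"time": 1, "kind": "drone", "threat_score": 0.95, "raw": b"..."},
    {"time": 2, "kind": "bird",  "threat_score": 0.10, "raw": b"..."},
]
print(watchdog(frames))  # [{'time': 1, 'kind': 'drone'}]
```

The open-source/verifiability argument is precisely that anyone should be able to confirm a deployed device really runs logic of this shape, rather than quietly retaining the raw stream.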
The Path to Achieving Open Source and Verifiability
We cannot prevent digital computers from deeply embedding into individual/collective life. Left unchecked, future digital tech will likely be developed/operated centrally, serving profit motives for few, containing government backdoors, with most people globally unable to participate in creation or judge safety. But we can strive for a better path.
Imagine a world where:
* You own a secure personal electronic device – combining phone-like computing power with crypto-hardware-wallet security, inspectable nearly as well as a mechanical watch.
* All your encrypted messaging apps hide metadata via mixnets, with all code formally verified. You can be confident private conversations stay private.
* Your financial assets are on-chain standardized tokens (or on servers publishing hashes/proofs to a blockchain for accuracy), managed by wallets controlled by your device. If lost, you can recover access via self-chosen methods (combining your other devices, family/friends' devices, or institutions – not necessarily governmental; churches might offer this service if easy enough).
* Open-source Starlink-level infrastructure ensures global communication reliability, without relying on few operators.
* Your device runs a local, open-source LLM, scanning your actions, offering suggestions, automating tasks, and warning of misinformation/mistakes.
* The device's OS is open-source and formally verified.
* You wear a 24/7 personal health tracker, also open-source/inspectable – you access your data, ensured no one else can without permission.
* We have advanced governance models using sortition, citizen assemblies, quadratic voting, etc., cleverly combining democratic voting to set goals and expert screening to determine paths. You can be sure the system runs by the rules you understand.
* Public spaces have monitors tracking biological variables (CO2, AQI, airborne pathogens, wastewater metrics). But these devices (and all cameras/defensive drones) are open-source/verifiable, with legal frameworks allowing random citizen inspections.
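One of the governance mechanisms named above, quadratic voting, can be sketched in a few lines: casting n votes on an issue costs n² credits, so expressing very strong preferences gets progressively more expensive. The ballots and credit budget here are invented for illustration:

```python
# Minimal quadratic-voting sketch: cost of n votes on one issue is n^2.
# Ballots and budget are illustrative, not a real system's parameters.

def qv_cost(votes: int) -> int:
    return votes * votes

def tally(ballots: dict, budget: int) -> dict:
    """Sum votes per option, rejecting any ballot over its credit budget."""
    totals: dict = {}
    for voter, choices in ballots.items():
        spent = sum(qv_cost(v) for v in choices.values())
        if spent > budget:
            raise ValueError(f"{voter} overspent: {spent} > {budget}")
        for option, v in choices.items():
            totals[option] = totals.get(option, 0) + v
    return totals

ballots = {
    "alice": {"parks": 3, "roads": 1},  # cost 9 + 1 = 10
    "bob":   {"roads": 2},              # cost 4
}
print(tally(ballots, budget=10))  # {'parks': 3, 'roads': 3}
```

The quadratic cost curve is the whole trick: it lets voters signal intensity of preference while making it expensive for any one participant to dominate an outcome.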
In such a world, we would have more security, freedom, and equal global economic participation than today. Achieving this requires increased investment in:
1. Advanced Cryptography: ZK-SNARKs, fully homomorphic encryption, and obfuscation are powerful for running arbitrary computations on data from multiple parties, ensuring reliable outputs while keeping data/processes private, enabling stronger privacy-preserving apps. Related tools (e.g., blockchains ensuring data integrity/user inclusion, differential privacy adding noise) are also important.
2. Application & User-Level Security: An app's security promises are only real if understandable/verifiable by users. This requires software frameworks simplifying high-security-app development. Crucially, browsers, OSs, and middleware (e.g., local monitoring LLMs) must work together: verifying app security, assessing risk levels, and presenting this clearly to users.
3. Formal Verification: Using automated proof methods to algorithmically verify programs meet key properties (e.g., no data leaks, preventing unauthorized modifications). Tools like Lean are gaining traction. These technologies already verify ZK proofs for the EVM and other high-value crypto use cases, with similar applications elsewhere. Further breakthroughs in foundational security practices are needed.
4. Open-Source, Security-Focused Operating Systems: Examples include security-focused Android derivative GrapheneOS, minimal secure kernels (e.g., Asterinas), and Huawei's HarmonyOS (which has an open-source version and uses formal verification). Skepticism about Huawei highlights the point: regardless of developer, if a product is open and anyone can verify it, the developer's identity shouldn't matter. Openness/verifiability can counter global tech fragmentation.
5. Secure Open-Source Hardware: Secure software is useless if the hardware doesn't run it correctly and secretly leaks data. Key short-term goals include:
* Personal Secure Electronic Devices: "Hardware wallets" in crypto and "secure phones" for open-source enthusiasts – these will converge given the dual need for security and versatility.
* Public Space Physical Infrastructure: Smart locks, bio-monitors, IoT tech. Openness/verifiability are prerequisites for public trust.
6. Secure Open-Source Toolchains for Building Open-Source Hardware: Current hardware design relies on many closed-source components, increasing cost, raising barriers, and hindering verification – if chip design tools are closed, verification standards can't be determined. This can change.
7. Hardware Verification Technologies (e.g., IRIS, X-ray scanning): Needed to confirm chip logic matches the design, with no extra components for malicious data extraction/modification. Verification can be:
* Destructive: Auditors randomly purchase products, dismantle chips to verify logic.
* Non-Destructive: Using IRIS or X-ray scanning, theoretically enabling per-chip inspection.
Achieving "security consensus" ideally requires broad access to verification tech. Currently, X-ray equipment isn't ubiquitous. Improvement paths include making verification devices/chip designs for verifiability more accessible, and supplementing full verification with simpler methods (e.g., ID tag verification via smartphone, signature verification using PUF-generated keys) to confirm devices are from verified batches.
8. Open-Source, Low-Cost Local Environmental/Bio-Monitoring Equipment: Communities/individuals should monitor their environment/health and identify biological risks. Devices range from personal medical devices (e.g., OpenWater), air quality sensors, general airborne pathogen sensors (e.g., Varro), to larger environmental monitors.
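Of the cryptographic tools listed under item 1, differential privacy is the simplest to sketch: add calibrated Laplace noise to a query result so no individual record can be inferred. The dataset, epsilon, and query below are invented for illustration:

```python
# Sketch of "differential privacy adding noise" from item 1: a Laplace
# mechanism for a counting query. Dataset and epsilon are illustrative.
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(records, predicate, epsilon: float, rng) -> float:
    """A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)  # seeded for reproducibility
ages = [23, 35, 41, 52, 29, 61]
# How many people are over 40? True answer is 3; output is 3 plus noise.
print(noisy_count(ages, lambda a: a > 40, epsilon=1.0, rng=rng))
```

Smaller epsilon means more noise and stronger privacy; the aggregate stays useful while any single record's influence is masked.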
From Vision to Implementation: Paths and Challenges
Compared to traditional tech visions, the "full-stack open verifiable" vision crucially emphasizes safeguarding local sovereignty, empowering individual rights, and enabling freedom. Its security logic shifts from "eliminating all global threats" to "enhancing robustness at each layer." Its definition of "openness" extends beyond "centrally planned API access" to "every layer being improvable, optimized, and built upon." Its "verification" property is not the exclusive domain of proprietary auditors (who may have conflicts of interest), but a basic public right and encouraged practice – anyone can verify, not just passively accept "security promises."
This vision better adapts to the 21st century's fragmented global reality, but the window for implementation is short. Centralized security solutions are advancing rapidly, built on a logic of multiplying centralized data collection points, presupposing backdoors, and reducing verification to a single standard – "from a trusted developer/manufacturer." Decades of attempts to replace genuinely open access with centralized schemes – from Facebook's internet.org to ever more sophisticated monopolistic models, each more insidious than the last – leave us a dual task: accelerate the R&D and adoption of open, verifiable tech so it can compete, and clearly communicate that safer, fairer technological alternatives are feasible, not fantasy.
Achieving this vision leads to a "retro-futurist" world: we enjoy cutting-edge tech benefits – better health via powerful tools, more efficient/robust social organization, defense against new/old threats (pandemics, drone attacks) – while recapturing a core quality of the 1900s tech ecosystem: infrastructure isn't an "untouchable black box" but a tool that can be dismantled, verified, and adapted; anyone can innovate at any layer (from chip design to OS security logic), beyond being just a "consumer" or "app developer"; crucially, people can truly trust technology – confident devices function as advertised, without secretly stealing data or executing unauthorized operations.
Achieving "full-stack open verifiability" isn't cost-free – software/hardware performance optimizations often sacrifice understandability and increase fragility, and the open-source model conflicts with many traditional business models. While these issues are often overstated, shifting public/market perception takes time. Therefore, a pragmatic short-term goal is clear: prioritize building full-stack open verifiable tech for high-security-need, non-performance-critical applications, covering consumer/institutional scenarios, remote/local contexts, and software/hardware/bio-monitoring domains.
This choice is rational because most scenarios with extreme "security" demands (e.g., health data storage, voting systems, financial key management) are not critically performance-sensitive. Even for performance-needy scenarios, a combination strategy using "high-performance untrusted components + lower-performance trusted components" can balance security and efficiency – e.g., using high-performance chips for general data and open-source verified secure chips for sensitive information.
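The combination strategy above can be sketched as a simple dispatcher; the field names and both handler functions are placeholders for illustration, not a real secure-chip or enclave API:

```python
# Sketch of the hybrid strategy: route sensitive fields to a slower
# "trusted" path and bulk data to a fast "untrusted" path.
# Field names and handlers are invented placeholders.

SENSITIVE_FIELDS = {"health_record", "private_key", "ballot"}  # assumed

def trusted_process(value):
    # Stand-in for an open-source, verified secure chip / enclave.
    return ("trusted", value)

def fast_process(value):
    # Stand-in for a high-performance, unverified component.
    return ("fast", value)

def route(records):
    """Dispatch each (field, value) pair by sensitivity."""
    return [
        trusted_process(value) if field in SENSITIVE_FIELDS
        else fast_process(value)
        for field, value in records
    ]

batch = [("video_frame", b"..."), ("health_record", {"hr": 62})]
print(route(batch))  # [('fast', b'...'), ('trusted', {'hr': 62})]
```

The design point is that the trusted component only ever sees the small, security-critical slice of traffic, so its lower performance never becomes the system's bottleneck.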
We needn't pursue "ultimate security/openness for all areas" – it's unrealistic and unnecessary. But we must ensure that in core areas directly affecting individual rights, social equity, and public safety (like healthcare, democratic participation, financial security), "open and verifiable" becomes the technological standard, allowing everyone to enjoy safe, trustworthy digital services.