
Subverting the privacy-security trade-off with 21st century information technologies.
In the 18th century Jeremy Bentham conceived of a structure designed in such a way that a single observer could simultaneously supervise each occupant. In Bentham’s “panopticon” security would be enhanced, both because the central observer could detect and disrupt misbehavior, and because the observed would be less likely to break the rules if they knew they were being watched.
Bentham’s thought experiment holds particular relevance as we construct and deploy a network of connected sensors into every corner of the world. Such connected sensors — including IoT devices, remote sensors mounted on drones and orbiting satellites and, above all, smartphones — observe their surroundings, and can record and transmit information. Through these digital periscopes observers can peer intimately into the lives of a growing proportion of the global population, as well as into the operations of enterprises, governments and militaries. Edward Snowden’s revelations suggest that not only is a truly global panopticon possible — it has already been built.

While the extent and character of the NSA’s surveillance apparatus are deeply troubling, the motives behind the decision to build it are not: protecting lives and public health is a just aim. But the methods and implementation matter deeply.
Conventional wisdom suggests that a human’s right to security sits in tension with their right to privacy — that an inherent trade-off exists between the two, and that to improve security, privacy must be sacrificed. This rests on the assumption that the state (or similar authority) must attain knowledge of potential harm in order to impede a violent actor from carrying out that harm.
I accept that agents with a harmful intent will exist in any population, and recognize that with adequate awareness, coordinated intervention can mitigate the extent of the harm caused. Emerging technologies for observation, coupled with advancements in global coordination tools, suggest that the understanding of the privacy-security tradeoff is misguided: in the 21st century, it appears that a benevolent panopticon could be built. A surveillance apparatus that optimizes both for public security and individual privacy is possible.
The primary ethical question with the panopticon is, of course: who is entitled to sit at its center? Who is chosen to see all, under what circumstances, and what are they empowered to do with the information they receive?
If the guardian sitting in the panopticon’s central observation room is incorruptible, then the security benefits of such a system may be justifiable. History, of course, tells us that no authority is incorruptible, and even less so a centralized authority.
Knowledge is, after all, power. Knowledge is distilled from information within a mind, and, fundamentally, a panopticon is a system in which all information is funneled into one mind. The one who sits at the panopticon’s point of informational convergence is thus in a position of power. The all-knowing watcher has the power to detect and (possibly) disrupt malicious behavior, as well as the power to abuse their knowledge to the detriment of the subjects observed.
When subjects are prisoners (individuals who have forfeited their rights by breaking a law), the violation of their privacy may be ethical. However, when subjects are law-abiding citizens, such surveillance is categorically unethical. In any case, we should always be striving to evolve our systems and more deeply enshrine human dignity and planetary stewardship into them — so understanding the prospect of privacy-preserving monitoring is worthwhile.
Jeremy Bentham drew up designs for the panopticon over a century before Charles Babbage conceived of his “analytical engine.” In the time since these men lived and died — as Babbage foresaw — non-human entities that are both aware and able to communicate have emerged.

Modern computers perceive their environments through camera, microphone, keyboard, antenna and Ethernet cable. They then process their perceptions, and act in response. What’s more, their actions occur in a logical space governed by transparent and objective laws, which can be understood by anyone. Operating systems and programs can be read, dissected and disputed. Algorithms and protocols can exist independent of any individual human developer, collectively governed by an open community. Ciphertexts can be processed without revealing the plaintext to anyone.
So — the question naturally arises: could a computer system sit at the center of the panopticon? Could a synthetic mind watch over us, only intervening when some threat to security is discerned? And could such an informational agent respect the privacy of the subjects of observation?
I am increasingly convinced that the answer is yes.
With properly designed systems at the point of informational convergence of the global panopticon, we can preserve each human’s right to privacy without sacrificing the potential gains to security afforded in the digital age.
Numerous approaches to surveillance are emerging that protect the privacy of the monitored. This is by no means an exhaustive review. Instead, it is intended to illustrate that the idea of a tradeoff existing between privacy and security is inaccurate.
Smartphones and laptops are powerful computers in their own right, and the interface between private citizens and the Internet. Many situations that require surveillance and intervention for the public good can be addressed on the edge device itself, with personal data never having to leave a user’s phone.
By designing applications with an eye toward maintaining users’ privacy, and by leveraging the computation and storage capabilities of mobile and IoT devices, edge computing can capture much of the potential our connected sensor networks afford to improve public health and security — without sacrificing the individual’s right to privacy.
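To make the pattern concrete, here is a minimal sketch in Python of on-device screening. The names (read_sensor, send_alert) and the threshold are hypothetical stand-ins, not any particular framework’s API; the point is simply that raw readings stay on the device and only a minimal signal is ever transmitted.

```python
# A minimal sketch of on-device screening, assuming hypothetical helpers:
# raw sensor data stays on the phone; only a boolean alert leaves it.

THRESHOLD = 0.9  # illustrative decision boundary

def read_sensor() -> float:
    """Stand-in for a local reading, e.g. an on-device model's threat score."""
    return 0.42

def send_alert() -> None:
    """Stand-in for a network call carrying only the fact of a detection."""
    print("alert: threat score exceeded threshold")

def screen_locally() -> None:
    score = read_sensor()   # the raw data never leaves this function
    if score > THRESHOLD:
        send_alert()        # transmit only the minimal signal, nothing more

screen_locally()
```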
Computers operate on symbols representing informational units — bits — within an execution environment. Algorithms detect patterns in binary sequences (which can represent text, images, audio, etc.), and adapt their actions accordingly. Edge devices often face storage, bandwidth and memory constraints, so sometimes datasets that may contain a signal indicating a threat to public safety must be analyzed on central servers.
A drawback to traditional execution environments is that the data being analyzed can be exposed to other programs within the operating system. Thus, the controller of the computer can theoretically access all information processed — a potentially unacceptable violation of privacy.
Secure enclaves are execution environments that provide “confidentiality, integrity and attestation”. Within a secure enclave, information processed remains private — “an adversary outside of the enclave cannot inspect the state of execution inside the enclave”. Algorithms run exactly as intended, and enclaves can provide “an unforgeable proof that enables a remote party to verify what has run inside the enclave even if they don’t have physical access to the machine.” (Oasis / Keystone)
Put another way, enclaves enable secure computation on private information without revealing the information to … anyone. Furthermore, secure enclaves can have the computing, storage and connectivity resources required for advanced analytics and machine learning applications. In instances where edge devices lack the computational power, or the access to information, required for a certain analytical task, secure enclaves could analyze personal private data without violating the data subjects’ privacy. Again, only if threats to public health or security were detected would the enclave expose the necessary information to the appropriate human actors authorized to intervene.
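As a hedged sketch of the workflow this implies, consider remote attestation: before any private data is sent, the client checks a hardware-signed measurement of the code loaded into the enclave against a publicly approved build hash. The names below (verify_quote, submit_private_data, EXPECTED_MEASUREMENT) are invented stand-ins for a real TEE SDK such as SGX or Keystone tooling, not actual APIs.

```python
# A sketch of attestation-gated data submission, under the assumption that
# the enclave quote carries a signed measurement of the loaded code.

EXPECTED_MEASUREMENT = "sha256:publicly-reviewed-algorithm-build"

def verify_quote(quote: dict) -> bool:
    # A real verifier would also check the hardware vendor's signature chain;
    # here we only compare the code measurement to the approved build hash.
    return quote.get("measurement") == EXPECTED_MEASUREMENT

def submit_private_data(quote: dict, data: bytes) -> None:
    if not verify_quote(quote):
        raise RuntimeError("enclave is not running the approved code")
    # Only after attestation succeeds would data be encrypted to the
    # enclave's public key and sent; the host OS sees only ciphertext.
    ...

submit_private_data({"measurement": EXPECTED_MEASUREMENT}, b"sensitive")
```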
In some instances, security could be enhanced by one party providing a provable attestation of a fact. This might include that they were or were not at a particular location at a particular time, that they have tested positive for a certain antibody, that they are a certain age or have a certain qualification, and so on.
In a zero-knowledge protocol a prover can mathematically prove a statement without revealing any additional information to the verifying party. Applications of zero-knowledge proofs range from buying cocktails (proving you are of age without showing your ID) to privacy-preserving digital currencies to verifying nuclear disarmament.
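As an illustrative sketch — not any production protocol — here is a toy Schnorr proof of knowledge in Python, made non-interactive with the Fiat–Shamir heuristic: the prover demonstrates knowledge of a secret exponent x satisfying y = g^x mod p without revealing x. The tiny hardcoded group is for readability only.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge (Fiat-Shamir). Tiny demo group:
# p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 1019, 509, 4

def prove(x: int, y: int) -> tuple[int, int]:
    k = secrets.randbelow(q)                       # ephemeral nonce
    t = pow(g, k, p)                               # commitment
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
    s = (k + c * x) % q                            # response
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{t}|{y}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p  # g^s == t * y^c

x = secrets.randbelow(q)         # the prover's secret
y = pow(g, x, p)                 # the public value
assert verify(y, *prove(x, y))   # proof verifies; x never leaves the prover
```

The verifier learns only that the equation holds, which is exactly the “no additional information” property described above.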
In a bafflingly obvious inversion of the norm, compute-to-data systems send the algorithm to the dataset. This has the dual benefit of minimizing the volume of data transferred over the network (algorithms tend to be much smaller than the datasets they’re designed to analyze) and ensuring that individually-identifiable records never leave the data custodian’s control.
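A minimal sketch of this pattern, with an invented custodian and dataset: the requester’s vetted query function travels to the data, and only an aggregate comes back. The names and the disclosure check are hypothetical, chosen purely for illustration.

```python
# Compute-to-data sketch: the query moves; the records do not.

RECORDS = [
    {"age": 34, "positive": True},
    {"age": 51, "positive": False},
    {"age": 29, "positive": True},
]

def approved_query(records: list[dict]) -> int:
    """A reviewed, whitelisted algorithm: returns a count, never records."""
    return sum(r["positive"] for r in records)

def custodian_execute(query) -> int:
    result = query(RECORDS)        # raw records never leave the custodian
    if len(RECORDS) < 3:           # crude disclosure check (illustrative)
        raise RuntimeError("result too identifying to release")
    return result

print(custodian_execute(approved_query))  # -> 2
```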
Homomorphic ciphers allow “computation on ciphertexts, [that generate] an encrypted result which, when decrypted, matches the result of the operations as if they had been performed on the plaintext.”
The plaintext information encoded in the encrypted message is never revealed to the computer processing it. These computations can be executed on untrusted devices (i.e. within traditional execution environments) while preserving the privacy of their subjects.
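The property is easy to demonstrate with the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The sketch below uses deliberately tiny hardcoded primes so the arithmetic is legible; it is an illustration, never production cryptography.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# WARNING: tiny hardcoded primes, for illustration only.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                               # standard generator choice
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                    # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n) * mu % n

a, b = encrypt(17), encrypt(25)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt(a * b % n2) == 42
```

The machine computing `a * b % n2` learns nothing about 17 or 25; only the key-holder who decrypts sees the sum.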
Of course, a computer is only as trustworthy as its programmers. So how do we ensure that malicious developers and hardware engineers don’t let themselves into the panopticon’s control room to peek into everyone’s cells — or cell phones?
Clearly several viable options exist for placing a constellation of privacy-preserving computers at the center of the panopticon. How, then, to ensure that such a guardian is benevolent?
The answer must rely heavily on the collective governance of open source code repositories. The algorithms and, when appropriate, training datasets used to process personal data on the edge or inside a secure enclave must be visible to all. “Given enough eyeballs, all bugs are shallow.”
Fortunately, such a system of collective governance already exists. Many commonly used libraries and packages are open and inspected by thousands of developers all over the world. Communities of experts debate the merits of changes to the repositories.
A blockchain-based system can enable an electorate of developer-citizens to cast reputation-weighted votes on updates to code. This could ensure that the algorithms we use to surveil ourselves are trustworthy — reviewed by the public. I can think of no better way to ensure that our code is designed to serve the public than to make sure that it is written by the very people it surveils, agreed on through a process forming intersubjective consensus. The integrity guarantees afforded to algorithms executed inside a secure enclave would give us confidence that the software agents watching over us were publicly approved, fetched from a decentralized code registry.
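To illustrate the mechanism — not any existing chain’s implementation — here is a toy reputation-weighted tally of the kind an on-chain code registry might enforce before accepting a new algorithm version. The voters, weights and threshold are invented for the example.

```python
# Reputation-weighted approval of a code update (toy model).

REPUTATION = {"alice": 40, "bob": 25, "carol": 35}  # earned, not bought
APPROVAL_THRESHOLD = 0.66                           # supermajority of weight

def approved(votes: dict[str, bool]) -> bool:
    total = sum(REPUTATION.values())
    in_favor = sum(REPUTATION[voter] for voter, yes in votes.items() if yes)
    return in_favor / total >= APPROVAL_THRESHOLD

# alice and carol approve the update; bob rejects: 75% of weight in favor.
print(approved({"alice": True, "bob": False, "carol": True}))  # True
```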
System architecture must be decentralized to the greatest extent possible, to minimize the consequences of a failure. Consensus networks of trusted institutions running clusters of secure enclaves could provide the public additional layers of confidence that all participants are behaving appropriately. As users become self-sovereign, they will choose open source and privacy-preserving software and hardware — and could even be compensated for opting in to sharing their data with these surveillance architectures.
The pace of innovation inevitably outstrips our capacity to govern it effectively; we need to live with our inventions for a while before we know how to regulate them. In this envelope between invention and effective regulation, actors — some ignorant, some unscrupulous — take advantage of the new technology in unjust ways. We are now witnessing this happen on a scale never seen before, a perilous development as we construct informational infrastructures we will occupy for a long time.
Human beings have a fundamental right to privacy, and to choose how their personal information is used. This right is being violated wholesale, trampled upon by firms and governments all around the world. As we transition out of adolescence and into maturity as an informational society, we must unweave the threads of institutional injustice that are being entwined into our tapestry, and replace them with just and equitable ones.
Calls to roll back advancements in AI and machine learning may come from a righteous place, but they are misguided. The potential benefits to be gleaned from the new capabilities we are developing — both the algorithms and the connected sensor networks that capture the data they consume — are too great. A benevolent panopticon will help us achieve our potential as a species coexisting harmoniously on Earth.