
There are many things to work on in the world. Science today is incredible and moving forward fast; the range of problems to solve is enormous, from biology to astrophysics, from AI security to information storage and everything in between.
However, out of all that variety, I made my choice: to keep digging into a creature called ‘zero-knowledge cryptography’ (ZK), which most people in computer science, cybersecurity, and other digital verticals have never heard of.
ZK appeared in academia in 1989. But come on, hundreds of ideas and concepts pop up in academia every decade and die there without anyone ever incorporating them into industry and the everyday world we live in.
However, for ZK the story is different. In 2014, ZK was picked up by the blockchain industry to solve the scaling problem while preserving security. Since then, several dozen companies all over the world have contributed to zero-knowledge cryptography R&D and funded research in academia. A rough estimate is that at least $500M has been invested, directly and indirectly, into ZK R&D by the blockchain industry, which has made the technology far more mature and attractive for other industries.
Today we are at the point where ZK is good enough for prototyping and PoCs. That makes this a great moment to start experimenting with placing it in other industries and checking what problems it can solve and how feasible the technology actually is.
In this essay, I will lay out why zero-knowledge cryptography meets the current sentiment of the world and why I am putting the best years of my life into giving this technology a chance to change the world as we know it.
When computers were invented, they acted like boxes executing human commands. I give a command, the computer executes it. That was the deal.
Today, computers have taken the role of an intermediary, or even an interface, between the human and a bunch of algorithms that have their own opinion on what should be done and how.
The first stage of the transition was the adoption of recommendation systems built on various ML techniques. Say my intention is to choose the best laptop for my needs, while the intention of the recommendation algorithm is to push me to buy the laptop whose seller placed the highest bid for its ads. The computer here is just an intermediary, silently facilitating the interaction between me and the algorithm.
The second stage started with the wide adoption of LLMs, when we humans decided that quite a few decisions about our lives can be outsourced to LLMs that know better. They know better what the optimal learning path to understand a subject is, what disease I might have, or where the terrorists are located.
The core difference between classic ML models and LLMs is that the decisions of LLMs are more obfuscated and less obvious. Think of Google Maps: if you ask for a route from Munich to Berlin, you can validate the output. It should be more or less direct, and it definitely shouldn’t go through Paris or London; if it does, you can easily tell that something is wrong with the model.
However, with LLMs it’s different. Assume a doctor is using an LLM to diagnose a disease. The doctor doesn’t know the answer for sure; he follows the protocols. The LLM follows its training and fine-tuning. How does the doctor know that the LLM is acting in good faith? How does the doctor know that the LLM is not lying? The doctor doesn’t.
Combine this with our behavioural pattern of blindly believing all types of chats (compared to search engines, which we took with a pinch of salt). The result will be a world ruled by a bunch of models: some hallucinating, some corrupted, some just poorly trained and fine-tuned, and some completely fine, though the last group is probably the minority.
I want to amplify the urgency around hallucinating models being pushed to market: all the core LLMs we use are produced by profit-oriented organizations. How many years did their financial models allow before they had to start earning money? What is the projected revenue growth quarter over quarter? The world is driven by return on investment, not by noble intentions.
So computers, as we used to know them, are now just intermediaries between humans and models. When we give the computer the command ‘do this’, we can’t trust that ‘this’ will be done. We should require cryptographic guarantees that ‘this’ has actually been done and that our command wasn’t modified or tweaked on the fly because the model knows better what is good for us.
The classical approach in cyber defense assumes that we want to make attacks hard enough, but not impossible, so that the attacker has no incentive to attack because the cost would be higher than the gain. What does ‘hard enough’ mean in practice? Adding layers that hackers need to get through to reach the target: multi-factor authentication, special requirements on particular means of authentication (e.g. password complexity and rotation), anomaly detection (e.g. monitoring everyone’s sessions in the system and flagging suspicious ones), and other defenses. Each of these makes it harder to compromise the target. But not impossible.
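The layered idea can be sketched in a few lines. This is a toy illustration under my own assumptions (the function names, the anomaly threshold, and SHA-256 for password hashing are all placeholders; real systems use a slow KDF such as Argon2 or scrypt): an action is authorized only if every independent layer passes, so breaking any single layer is not enough.

```python
import hashlib
import hmac

# Layer 1: something you know (password). Toy hash for illustration only;
# production systems use a slow KDF (Argon2, scrypt, bcrypt).
def layer_password_ok(stored_hash: bytes, password: str) -> bool:
    return hmac.compare_digest(stored_hash, hashlib.sha256(password.encode()).digest())

# Layer 2: something you have (a one-time code from a second factor).
def layer_otp_ok(expected_otp: str, provided_otp: str) -> bool:
    return hmac.compare_digest(expected_otp, provided_otp)

# Layer 3: behavioural anomaly detection, stubbed as a score check.
def layer_session_ok(anomaly_score: float, threshold: float = 0.8) -> bool:
    return anomaly_score < threshold

def authorize(stored_hash: bytes, password: str,
              expected_otp: str, otp: str, anomaly_score: float) -> bool:
    # All layers must pass; each one raises the attacker's cost.
    return (layer_password_ok(stored_hash, password)
            and layer_otp_ok(expected_otp, otp)
            and layer_session_ok(anomaly_score))
```

Note that even with all three layers passing, this only makes compromise expensive, not impossible, which is exactly the point of the paragraph above.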
LLMs give a larger advantage to offensive tools than to defensive ones, adding asymmetry. Models are good at producing outputs that are good on average (with quality distributed unevenly). That is a fit for offense, where a single success is enough to get in. Defense, by contrast, needs 100% accuracy, and defending properly always except for one time can carry an incredibly high cost (especially if we are talking about state-sponsored attacks).
The number of LLMs surrounding us in day-to-day life is enormous: managing hardware from personal computers to medical and military devices, managing facilities including critical infrastructure, improving efficiency and ROI everywhere from the biggest manufacturers to small businesses all over the world, processing personal data, etc. If something looks like a commercially motivated attack today (e.g. malware), it is only a question of time before it turns into a strategic geopolitical weapon. And one day it will. Every technology that can be used on the offensive battlefield will be used on the offensive battlefield. (Or maybe it is already being used and we just don’t know? Or maybe we pretend we don’t know?)
The wars will continue. This is how superpowers stay superpowers, powers become superpowers, and small players turn into medium players by ticking the required checkboxes. The share of cyber in military conflicts is growing steadily, and I do not see a single reason why that would change. Better defense is the most natural thing to work on under those assumptions.
However, wars continuing as they are today is a relatively good outcome. I would assume they will become worse, for two reasons:
First, the shift towards cyber battlefields and weapons will eventually result in a massive drop in the amount of human empathy involved in decision-making. Death is unnatural to most people on Earth (except when one is brought up with the idea of death being the most precious reward). Killing others is unnatural to most people. However, killing others by pushing a button without seeing the result is far easier than doing it with one’s own hands. So the more automated the battlefield is, the more casualties and cruelty we should expect.
Second, autonomous LLM-powered weapons will eventually be deployed. The reason is that we already have more or less all the required technologies; the question is one of assembling them correctly and fine-tuning them to operate together. It is no secret that superpowers are working on this type of weapon, which is to say: it will come. It might look like we are quite far from approved regulations, but at some point, when superpowers have a real need for it, the approval will happen very fast. The battlefield operates according to game theory far more often than according to noble intentions. And game theory tells us that when one party raises the stakes, everyone has to raise: if one party can now make a strike decision in 30 seconds, another party can’t take 30 minutes to respond. It has to switch to a 30-second window too, even if morally it would prefer the decision to be more thoughtful and thorough. Otherwise, it loses immediately.
A bonus question: how will the allocation of superpowers change as the role of cyber on the battlefield grows? Who is the most ruthless technology (not general) superpower today? Why is it not attacking the US today to overthrow the world order?
(China is not attacking the US in cyberspace because that is part of a diplomatic deal. Not because America is nice, and not because China is incapable. Will it last forever? I am very much not sure; frankly speaking, I am almost sure it will not.)
The computer world is developing very fast. That doesn’t necessarily mean it is becoming more advanced, but it definitely means adding tons of code every day. Can you imagine the codebase of Amazon or of an electricity facility today? Some of the people who wrote that code have died by now. There is a huge amount of legacy software and hardware, libraries no one maintains anymore, niche languages, and millions upon millions of lines of code. Only the Lord knows what all this code does or doesn’t do.
Dozens or even hundreds of trusted third parties serve organizations in accounting, sales, security, legal, and other domains. Each of them needs to push updates remotely from time to time, which provides an efficient vector to compromise thousands of organizations through a single one. Take SolarWinds infecting 17,000 organizations including US agencies, or CrowdStrike’s internal mistake disrupting the operations of hundreds of organizations all over the world, including international airports, airlines, hotels, stock markets, broadcasters, gas stations, etc.
The world is very fragile and ruled by digital systems at the same time, which is a very unlucky combination if you think about it in a vacuum, but this is the world we live in. What happens to a country left without electricity for one hour? How long can a port stay closed before other domains and facilities are affected? Which systems today should be labelled critical infrastructure? Say, are ports critical infrastructure? On one hand, it might seem they are not: what will happen if a port is closed for a week? In reality, within several hours some facilities will run out of components and have to stop production. The cascade effect starts within hours, pulling in everyone with even a slight dependency on sea logistics.
So we need to defend systems better: from possibly malicious LLMs, on battlefields, and simply from human mistakes. Furthermore, we need to defend not only the most critical systems and most privileged accounts, but also regular systems and accounts, because the number of dependencies and the system complexity of 2025 leave too many entry points at the level of ordinary access.
While for the most privileged systems and accounts we rely on better tools (e.g. biometric authentication, cold keys, or even cold devices), for ordinary systems and accounts we usually rely on people’s conscientiousness (secure passwords, not clicking links, proper reviews, reading what exactly they are signing, etc.).
However, relying on people is incredibly unreliable. What if, instead, we relied on mathematics?
The root problem is that when someone intends to perform an action (e.g. getting access to a database), they do not know what the computer actually does, because most of the time we are just clicking buttons in an interface and signing everything it suggests.
The whole idea of authorizing actions by signing them does not fit the complexity of existing digital systems. It creates layers of trust on top of layers of trust, where each new layer in the chain has no clue why the previous one authorized the action, or even what exactly it authorized.
Think of access management provided by Okta or network access provided by Cloudflare: a trusted third party authorizing (i.e. signing with its secret key) millions of requests all over the world every minute.
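To make the chain-of-trust problem concrete, here is a toy sketch (all service names are hypothetical, and HMAC stands in for real digital signatures): each layer signs only what the previous layer handed it, so by the last hop the original request and the reason for access are no longer visible in what is being signed.

```python
import hashlib
import hmac

# Hypothetical three-layer authorization chain; each party holds its own key.
KEYS = {"identity_provider": b"idp-secret",
        "gateway": b"gw-secret",
        "service": b"svc-secret"}

def sign(layer: str, payload: bytes) -> str:
    """HMAC as a stand-in for a real digital signature."""
    return hmac.new(KEYS[layer], payload, hashlib.sha256).hexdigest()

request = b"GET /customers/rows?limit=all"

token_idp = sign("identity_provider", request)   # "this user is logged in"
token_gw  = sign("gateway", token_idp.encode())  # "the IdP said yes"
token_svc = sign("service", token_gw.encode())   # "the gateway said yes"

# The final token proves only that the gateway approved *something*.
# The original request, and why it was authorized, are invisible here.
```

Each hop verifies a signature, never the statement, which is exactly the layered-trust pattern the paragraph above describes.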
Now recall everything we discussed before: the growing penetration of LLMs and AI agents, the growing share of cyber on battlefields, the enormous system complexity. Can we really use secure private key storage as an assumption underpinning world security? I do not think so.
The alternative to authorization through digital signatures is authorization using zero-knowledge proofs, where the access right is guaranteed by mathematics: we verify the statement itself, not the mere fact that some trusted third party has looked at the statement and approved it. This brings us into the world of provable computation, which fits the world of today, and of ten years from now, far better than multiplying chains of trust again and again.
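As a minimal illustration of "verifying the statement itself", here is a sketch of the oldest building block in this family: a Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat–Shamir heuristic. The tiny group parameters below are purely didactic (real systems use ~256-bit elliptic-curve groups, and real ZK authorization uses much richer proof systems such as SNARKs/STARKs); the point is that anyone can check the proof from public values alone, with no trusted signer in the loop.

```python
import hashlib
import secrets

# Toy group: P = 2*Q + 1 with Q prime; G generates the order-Q subgroup.
# Didactic numbers only -- far too small for any real security.
P, Q, G = 23, 11, 2

def _challenge(*values: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the public transcript."""
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(secret_x: int) -> tuple[int, int, int]:
    """Prove knowledge of x with y = G^x mod P, revealing nothing about x."""
    y = pow(G, secret_x, P)
    r = secrets.randbelow(Q)        # one-time random nonce
    t = pow(G, r, P)                # commitment
    c = _challenge(G, y, t)         # non-interactive challenge
    s = (r + c * secret_x) % Q      # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check the statement itself: G^s == t * y^c (mod P)."""
    c = _challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x = secrets.randbelow(Q)            # the prover's secret
y, t, s = prove(x)
assert verify(y, t, s)              # anyone can verify from public values
assert not verify(y, t, (s + 1) % Q)  # a tampered response fails
```

The verifier here never learns `x` and never consults a third party: the access right (knowledge of the secret) is checked mathematically, which is the shift from chains of trust to provable statements.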
This is not a trivial suggestion. While adopting new, superior types of software is regular practice, shifts to new types of cryptography do not happen often. But I do not see any other way around it. At some point (five years? Ten?), it will be a question of survival. We do not need real AGI to lose control over the machines and models we have created and injected into every inch of our lives; it is enough to give them enough autonomy. That is what we are already doing, and it will neither reverse nor slow down. The only thing we can do now is require cryptographic accountability for every action they perform. Provable computation is the only way to keep machines accountable. It is not too late today. It will be too late in ten years.
Thank you for reading. You are welcome to share your thoughts and argue with me here: lisaakselrod@gmail.com