
Looking for PMF for a technology is not canonically correct (according to YC and other venture capitalists). However, if we step outside the YC mindset, this is exactly what has been happening with a lot of scientific inventions turning into deep-tech companies. So let’s do it!
I suggest a four-layer approach that helps us think about where else (besides blockchain) we can find PMF for zero-knowledge proofs.
This is non-trivial. Even though marketing reports say we can ‘prove everything’, we can actually prove a very limited set of things.
For example, we can only prove programs with synchronous execution, while most programs in the world execute asynchronously. So we can’t prove most programs in the world.
We can also only prove programs written in languages that compile to the RISC-V instruction set.
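To make this concrete, here is a minimal host-side sketch of the typical zkVM proving flow, loosely modeled on RISC Zero’s Rust API (the methods crate and the GUEST_ELF / GUEST_ID constants are produced by their build tooling; treat exact names and signatures as assumptions):

```rust
// Host-side sketch: proving a synchronous guest program in a RISC-V zkVM.
// Loosely modeled on RISC Zero's Rust API; exact names are assumptions.
use risc0_zkvm::{default_prover, ExecutorEnv};

// GUEST_ELF / GUEST_ID are assumed to be generated from a guest crate
// compiled to the RISC-V target. The guest must be synchronous and
// deterministic: no threads, no async runtime, no direct network or disk I/O.
use methods::{GUEST_ELF, GUEST_ID};

fn main() {
    // Private input, serialized into the guest's execution environment.
    let input: u64 = 42;
    let env = ExecutorEnv::builder()
        .write(&input)
        .unwrap()
        .build()
        .unwrap();

    // Run the guest inside the VM and produce a proof (a "receipt").
    let receipt = default_prover()
        .prove(env, GUEST_ELF)
        .unwrap()
        .receipt;

    // Anyone holding the receipt can check it against the guest's image ID
    // without re-executing the program.
    receipt.verify(GUEST_ID).unwrap();
}
```

Anything that doesn’t fit this mold (an async service juggling network calls, for instance) has no straightforward way into the prover.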
Let’s not think about efficiency at this stage, just about what we can actually prove with reasonable resources; say, the cap is one hour of proof generation on a 64-GPU cluster.
At this stage, we are looking at existing problems. We do not yet validate commercial feasibility (who will pay us for it, when, and how much). Let’s say the problem of industrial espionage (IP documents stolen from labs and manufacturing sites) is a perfectly fine problem.
There are quite a few problems in the world where zk can be a solution: think of almost everything related to privacy and integrity of computation. Recall how many ideas we have heard within the last ten years about using zk to prove statements about documents, identity management, data processing, and AI. Dozens, if not hundreds. And at the problem layer, most of them are legit.
Whether it is a consumer product running on mobile, an enterprise SaaS, or hardware firmware, the existing environment where the application is expected to sit is complex and regulated. There are probably dozens of other applications, standard infrastructure, and a list of standard requirements. Can your zk solution to a particular problem actually fit into all of that?
This circles back to who will pay us for it, when, and how much.
Is the problem we can solve actually urgent, and is it already incorporated into our clients’ existing budgets?
Is there a market for this problem, or do we actually need to create one?
How large are the competitors, and is the zk-powered solution actually 10x better (like, for real for real)? These incumbents will fight back when you try to push them out (though they can also buy you, which is probably a good outcome).
And other questions around market, competition, sales cycle, etc.
I haven’t said anything about efficiency so far. The reason is that if there is a real incentive (i.e., a real market), many things can be optimized.
We can optimize proving systems and ZK-VMs (possibly with alternative architectures and custom instruction sets), and we can design custom chips. How much can we optimize? God knows.
But this is actually a critical issue: if we can get 100 ms latency, that is one set of products; if we can get, say, 1 ms latency, it unlocks another category of products; and if we can get 0.01 ms latency, we can actually go into dynamic critical systems that may benefit from the resilience far more than anything else.
Coming back down to earth: hash-based SNARKs over binary fields look quite good, and even though more optimizations will come, intuitively it doesn’t seem like we need to invent something brand new.
On ZK-VM efficiency… ZK-VMs with custom ZK ISAs are still to be built (or finished). Alternative designs (e.g., graph-based) feel like too heavy a cost, so I would say they probably won’t happen. At least today, I do not see who would invest in designing alternative-architecture ZK-VMs without clarity about the payoff.
On ZK chips: a hard matter. Overall, as things stand today, I would say the sunk cost is too heavy for anyone to have an incentive to invest. However, what if chip design costs come down drastically because LLMs become incredibly good at it? Then we can actually expect a whole variety of custom ZK chips (the next question, however, will be convincing Intel, Apple, and the rest to ship a ZK chip in their devices by default).
Bottom line: the efficiency we have today is good enough to build prototypes but not enough to build products. A number of things need to happen to make this efficiency production-ready, which at a rough estimate will take another $100-200M. How convincing do the market opportunities have to be to make this happen?
In almost all use cases, a plain SNARK is perfectly fine and we do not actually need the zero-knowledge property, which is great news for efficiency (but we will call it ZK anyway; it sounds better). To recover the hidden statement from the SNARK, one would need a bunch of random noise they have no way to obtain. It won’t be perfect from the theoretical perspective, but it will be good enough from the practical perspective.
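One way to get that ‘good enough’ hiding is to salt the private inputs with fresh randomness before committing to them, so the proof only exposes a randomized commitment. A minimal sketch in plain Rust using the sha2 and rand crates (the scheme itself is illustrative, not a vetted construction):

```rust
use rand::RngCore;
use sha2::{Digest, Sha256};

/// Commit to a secret document by hashing it together with fresh random
/// noise. A proof can then reference the commitment instead of the raw
/// document: without the salt, brute-forcing candidate documents against
/// the commitment is infeasible in practice.
fn salted_commitment(document: &[u8]) -> ([u8; 32], [u8; 32]) {
    let mut salt = [0u8; 32];
    rand::thread_rng().fill_bytes(&mut salt);

    let mut hasher = Sha256::new();
    hasher.update(salt);
    hasher.update(document);
    let commitment: [u8; 32] = hasher.finalize().into();

    (commitment, salt) // the salt stays with the prover
}

fn main() {
    let (commitment, _salt) = salted_commitment(b"confidential lab notebook");
    // A SNARK without the formal ZK property could expose this commitment
    // as a public input; the document itself stays hidden in practice.
    println!("commitment: {:02x?}", commitment);
}
```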
Thank you for reading. You are welcome to share your thoughts and argue with me here: lisaakselrod@gmail.com