Chasing waves and verifiable computations
We think of AI as a new technology, but it is more like a natural phenomenon. It should be treated as an ocean rather than as the internet or even electricity.
Before AI, even the brightest inventions were deterministic. Whether electricity or the internet, from the very beginning the people working on it had a good understanding of how it worked and what to expect.
AI is the opposite: the more we work on it, the less we understand how it works.
In that sense it resembles the ocean: immense power, full of beauty and inspiration, yet dangerous, with countless ways to hurt people. On its own, the ocean has no good or bad intentions; it simply exists. Its impact on us humans depends mostly on how we interact with it.
We can try to control it or fight it, but its power exceeds any human capacity for control. We can pray to it and beg it to be kind (a practice some religions preserve to this day), but the efficacy of that method is doubtful.
Or we can study it, try to understand it, and collaborate with it, using its power for our own good. Just as people build hydroelectric stations and surfers ride down 20-meter waves, we can collaborate with AI to advance science, technology, quality of life, the arts, and whatever else humanity cares about today.
It can also be used to damage people and the planet. Like the internet today: it is one of the greatest inventions of humanity, and among all the good things it brought us, it also brought a lot of bad ones, with no escape from them. We had better be sober and realistic and call a spade a spade from the very beginning.
In surfing, safety means knowing what can go wrong and what to do in each case. You want to know the landscape of the ocean floor, whether it is sand or reef, and how deep the water is, because that determines how you will fall. It doesn't mean you won't fall, and it doesn't mean you won't get hurt, but it means that when you do fall, you maximize your chance of getting hurt as little as possible.
In surfing, you try not to end up under a wave breaking onto your head. But from time to time it happens. So you train yourself to spend as little energy and struggle as possible on each wipeout, and to stay attentive and thoughtful so that you end up there as rarely as possible.
You know there is no 100% guarantee, but you want to be prepared. Everything that can go wrong will go wrong. That is true for surfing, and it is the same for AI.
AI is one of the most exciting and empowering technologies humanity has ever had. We can't imagine today how far it will take us. But we should think today about using it thoughtfully and understanding what risks we are taking at every moment.
The larger the potential gain, the higher the risk people will be willing to take. Think of two people with a medical issue. The first has a lethal condition. They are offered a surgery with a low but non-negligible chance of success and a high probability of a lethal outcome. The optimal strategy is to take the risk, because the potential gain is worth it.
The second person has a condition that is not lethal, though it still bothers them. They are offered a surgery with a small but non-negligible probability of a lethal outcome and a high probability of full recovery. The optimal strategy is to decline the surgery, because the risk of death outweighs the benefit.
But the truth is that the optimal strategy varies from person to person. The same holds for AI: different people will be willing to take different risks.
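The two-patient comparison is, at heart, an expected-utility calculation, and the person-to-person variation shows up in how catastrophically each person weighs the worst outcome. Here is a minimal sketch of that reasoning; every probability and utility value below is invented for illustration, not taken from any medical source:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one choice."""
    return sum(p * u for p, u in outcomes)

# Patient A: the condition itself is lethal, so doing nothing is worth 0.
no_surgery_a = expected_utility([(1.0, 0.0)])               # certain death
surgery_a = expected_utility([(0.2, 1.0), (0.8, 0.0)])      # long-shot cure
# surgery_a (0.2) > no_surgery_a (0.0): the risky surgery is worth taking.

# Patient B: the condition is survivable but unpleasant (baseline 0.8).
# u_death encodes how catastrophically this particular person weighs dying;
# a risk-averse person assigns it a large negative value.
def decide_b(u_death):
    no_surgery = expected_utility([(1.0, 0.8)])             # live with the issue
    surgery = expected_utility([(0.95, 1.0), (0.05, u_death)])
    return "operate" if surgery > no_surgery else "decline"

print(decide_b(u_death=0.0))    # risk-neutral framing
print(decide_b(u_death=-10.0))  # risk-averse framing
```

With the same probabilities, a risk-neutral patient B would operate (0.95 > 0.8), while a risk-averse one declines (0.95 - 0.5 < 0.8). The flip lives entirely in the personal utility, which is exactly why there is no single optimal strategy.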
One company might optimize purely for speed, accepting the risk of four hours of downtime in exchange for faster development. Another might optimize for security and control over its code, preferring everything battle-tested and well understood.
A research institution might optimize for accelerating an experiment in an emergency (e.g., COVID) while optimizing for precise adherence to process in all other cases.
One founder might optimize for breadth of outreach and accept the risk of sending clumsy messages from their account, addressing someone the wrong way, or sounding completely irrelevant (say, the one selling a $10 SaaS). Another might optimize for the quality of relationships and prefer double-checked personalization (say, the one selling cyber-defense products to governments).
But the really interesting thing is that the same person who is a risk-taker in one situation today will be completely risk-averse in another situation tomorrow, and somewhere in the middle on a third aspect the night in between.
So the question is this: how do we get as much value as possible out of collaborating with AI while feeling as little pain as possible when AI fails? The task is not trivial at all.
We want it to stay powerful while compartmentalizing the potential damage. A form factor that really works has yet to be found. But I have a personal guess: the answer will be cryptography.