
It is not a question or a hypothesis today; it is a fact. Day-to-day human life will be fully automated: opening a door, making coffee, all vehicles, all indoor and outdoor machines, all medical monitoring and interventions, surgery, hospital care, and much more. But that is not all. These are only the routine things we can think of today.
Putting spiritual and religious aspects aside, humans are bio-robots: chemistry goes up, we feel happy; chemistry goes down, we feel frustrated and sad. We say we are sad because someone has died, or because we were rejected by a job or a romantic partner, because of global events, or because of the number of homeless cats. But the truth is we are sad because the body's chemistry went down.
In the machine world, the goal will be to maintain chemistry at the optimal level. Today it is also the goal, but it is harder to achieve because direct access to human chemistry is quite limited. That will change.
What do we call the 'optimal level' of human chemistry? The level that allows extracting as much value from an individual as possible. Who are the value extractors? Corporations and political powers. Corporations are rather straightforward: they need humans to consume more, because their quarterly reports need to show the numbers climbing. Political powers' KPI is belief: they need humans to believe in a particular branch of reality.
The optimal level of human chemistry is not too sad but also not too happy: pretty much just fine, coping with reality, holding no hopes, but also not committing suicide. Why is this 'optimal'? At this level, an individual will settle into maintaining a basic lifestyle until the end of their life: boring work, just enough money to survive, mediocre relationships, mechanical sex, okay-ish health, low energy, no dreams, no ambitions, no hunger for challenges. In this state, an individual is Netflix's 'good boy'. In this state, it is trivial to make an individual adopt and propagate any idea with a strong emotional hook and one layer of reasoning.
Today, Twitter is already optimizing our chemistry: giving you some views but not too many, the algorithms staying reluctant toward your content but not completely indifferent. It gives us little injections of engagement with our account, just enough to make us want more.
But what if your coffee machine starts acting like this too? What if it gives you just enough coffee that you won't commit suicide today, while keeping your chemistry at its most vulnerable level, so that other machines can affect you however they want?
Systems act according to their design. Everything that can be exploited will be exploited. The assumption that humans have good intentions on average is wrong.
It is very hard to go back and reverse a design once one realizes that the correct baseline assumption is that humans have malicious intentions and will do their best to exploit the system. Think of privacy: many modern digital systems were created without privacy in mind, and that is why we lost privacy. We will never catch up by trying to fix things that were already produced according to those protocols.
But in the cyborg reality we are still in quite early days, and we have every chance to learn from previous mistakes and design the machines of the future to be accountable.
Scanning an iris to prove that you are a human means that unless you do it, for the system you are not a human. And only one particular entity decides whether what is being scanned is actually an iris. This system is a presumption of guilt: you are not a human until you prove the opposite. But what if someone doesn't want to share their biometric data for ideological or ethical reasons? What if someone has lost their eyes, or was born without them? Does relying on the automatic system mean they are not human?
Instead of making humans prove they are humans, we should make computers prove that they act in good faith according to humans' will. We live in a human-centric world, built (or at least planned to be built) by humans for humans, not for machines to run the show.
All machines that affect human life, chemistry, brains, and safety must prove that they are acting in good faith. And these proofs should be cryptographic and verifiable by the end user at any time.
If a device is connected to the internet, its firmware can be remotely updated, including by malicious actors, and you will never be aware of it, because we do not check the firmware before using the device.
For example, when you open a car door, the car looks fine: its wheels and windows are in their regular places. But if one bit of its firmware was remotely flipped, you will never see it.
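A minimal sketch of what "checking the firmware before using the device" could look like: hash the firmware image the device is actually running and compare it against a reference digest published by the manufacturer. Everything here (the firmware blob, the published digest) is a hypothetical stand-in; note also that this only works if the reference digest itself is authentic, which in practice requires a signature chain.

```python
import hashlib

def firmware_matches(image: bytes, published_sha256: str) -> bool:
    """Return True if the firmware image hashes to the vendor-published digest."""
    return hashlib.sha256(image).hexdigest() == published_sha256

# Hypothetical example: a stand-in firmware blob and its published digest.
firmware = bytes(1024)  # pretend this was dumped from the car's controller
published = hashlib.sha256(firmware).hexdigest()

print(firmware_matches(firmware, published))  # True: image is intact

tampered = bytearray(firmware)
tampered[0] ^= 1  # flip a single bit, as a remote attacker might
print(firmware_matches(bytes(tampered), published))  # False: the flip is caught
```

Even a one-bit change produces a completely different SHA-256 digest, which is exactly why a flipped firmware bit that is invisible to the eye is trivially visible to a hash check.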
If someone has an implanted device that monitors body processes and reports to a doctor, how will they know that the data reported is the data actually collected from their body?
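One way such a device could make its reports verifiable is to attach an authentication tag to every reading, sketched here with a standard-library MAC. The key name and the reading format are assumptions for illustration; a real design would use asymmetric signatures so the verifier never holds the signing key.

```python
import hashlib
import hmac

# Hypothetical secret provisioned into the implant at manufacture.
DEVICE_KEY = b"per-device-secret"

def sign_reading(reading: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Compute an authentication tag so tampering in transit is detectable."""
    return hmac.new(key, reading, hashlib.sha256).digest()

def verify_reading(reading: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Constant-time check that the tag matches the reading."""
    expected = hmac.new(key, reading, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

reading = b"heart_rate=62;ts=1700000000"
tag = sign_reading(reading)
print(verify_reading(reading, tag))  # True: the report is authentic
print(verify_reading(b"heart_rate=190;ts=1700000000", tag))  # False: altered in transit
```

The doctor's side can then reject any report whose tag does not verify, instead of trusting whatever bytes arrive over the network.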
Remember the type of attack where people accidentally sign the wrong transaction because it looks okay? What if, instead of your wallet being drained, it is your physical body? Your brain activity and body chemistry?
A machine's behavior should be fixed, and its compliance should be proven cryptographically. Otherwise, the future is very dark.
It is happening right now. While I am writing this article, autonomous robots all over the world are performing surgery, driving people around, carrying goods through the air, cultivating crops, serving coffee. And it will go one step further every day. What is the final destination? Is it a doomer place? Is it a hopeful place? It depends on what we are doing today, what future we picture as the desired one, and how we are moving toward it.
Thank you for reading! Feel free to share your thoughts about enforcing machine accountability: lisaakselrod@gmail.com