Let's look into Ethereum's consensus. In today's lecture by Alex Stokes we will examine the history of consensus, its fundamentals, and how it manifests as the heartbeat of Ethereum.

Alex argues that blockchains are useful because they generate digital scarcity.
Prior to blockchain, digital objects were "copyable".
An easy way to think of this is the double spend problem: say you have 100 apples and you sell one of them. After the exchange you must have 99 apples; you can't "sell it twice". Doing so would, in the real world, defy the laws of nature (physical things are intrinsically scarce). Digital objects had no such constraint prior to blockchains, so blockchains manufacture this digital scarcity.
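To make the double spend concrete, here is a minimal sketch (not from the lecture; the `Ledger` class and its method names are hypothetical) of a ledger that tracks who owns each apple and rejects any attempt to sell the same one twice:

```python
# Minimal double-spend sketch: a ledger tracks the owner of each unique
# asset and rejects any attempt to transfer an asset the seller no longer owns.
class Ledger:
    def __init__(self, owner: str, n_apples: int):
        # Map each apple id to its current owner.
        self.owners = {i: owner for i in range(n_apples)}

    def transfer(self, apple_id: int, seller: str, buyer: str) -> bool:
        # The transfer only succeeds if the seller still owns the apple.
        if self.owners.get(apple_id) != seller:
            return False  # double spend (or never owned): rejected
        self.owners[apple_id] = buyer
        return True

ledger = Ledger("alice", 100)
assert ledger.transfer(0, "alice", "bob")        # first sale succeeds
assert not ledger.transfer(0, "alice", "carol")  # selling it twice fails
assert sum(o == "alice" for o in ledger.owners.values()) == 99
```

The hard part, of course, is getting everyone to agree on the contents of this ledger without trusting a single operator, which is what the rest of the lecture is about.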
Money is a measurement of some relative work. It needs to have controlled scarcity.
That is, as money is issued into the system, the amount has to track what it's worth (a few cheeseburgers' worth of work, for example).
You need scarcity to have relative value and monetization; otherwise exchange ratios between things aren't meaningful, they just go towards zero or infinity, and worse, those ratios can drift over time.
Money is good when it is scarce; of course, money would not work if it were unlimited. Unlimited money has zero value, because it cannot measure any scarce resource as a medium of exchange.
Let's say we want to make a money protocol (and of course, money is scarce).
In this example we trust an operator on a web server to respect the money protocol.
Think about it: how would you protect such a system? Even if the operator wanted to uphold their reputation, there could be problems. Any security would come from hardening wall after wall of defenses, but there will always be at least one entity (or mix of entities) holding the single "key" that the whole system trusts.
On top of that, scarcity by definition means these things are valuable, which creates an incentive to attack the system!
So... this operator sucks, let's get rid of it! (This is the beginning of decades of research on how to do exactly that.)
For this system to be maximally useful, it needs to be minimally trusted.
Let's make everyone do the computation.
Consensus via "state machine replication".
Consensus meaning the honest nodes must all end up with the same output when N nodes perform the same computation.
And as the number of nodes increases, it gets harder to attack.
And you only need an honest majority of nodes here: even if some are faulty, the majority still shares a single view of the world.
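To make this concrete, here is a minimal sketch of state machine replication (not from the lecture; `apply_log` and the faulty-node model are hypothetical): every node runs the same deterministic computation over the same log, and the honest majority's output is the consensus view.

```python
from collections import Counter

def apply_log(state: int, log: list[int]) -> int:
    # The deterministic state machine: here, just summing transactions.
    for tx in log:
        state += tx
    return state

def replicated_consensus(n_nodes: int, log: list[int], faulty: set[int]) -> int:
    # Every node runs the same computation; faulty nodes report garbage.
    outputs = []
    for node in range(n_nodes):
        if node in faulty:
            outputs.append(-1)  # arbitrary wrong answer
        else:
            outputs.append(apply_log(0, log))
    # With an honest majority, the most common output is the true one.
    winner, _ = Counter(outputs).most_common(1)[0]
    return winner

assert replicated_consensus(7, [1, 2, 3], faulty={0, 5}) == 6
```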
We want to make sure our consensus protocol is tolerant to any kind of bad thing that may happen:
There could be a bug
Missed messages
Hardware failure
Active attack
But eventually things will come back to normal: after any of these faults, the protocol must recover to a state where a majority of the nodes share the same view.
Otherwise, that's a consensus failure, and our protocol is broken.
Thus, a Byzantine fault tolerant (BFT) system is one that keeps working despite some fraction of faulty nodes (classically, up to f faults out of n = 3f + 1 nodes).
A two-phase commit is one way to achieve consistency across a distributed system.
There is a trust assumption (an honest majority) needed to guarantee consensus.
The first phase is the prepare: the coordinator sends out the proposed state update and asks whether everyone agrees to it.
If enough nodes agree (e.g., 2/3 of them), the coordinator will command the other nodes to update their state.
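A minimal sketch of this flow, assuming crash faults only (no malice); the `Node` interface, the coordinator loop, and the 2/3 quorum threshold are illustrative choices, not the lecture's specification:

```python
def two_phase_commit(nodes, update, quorum=2/3) -> bool:
    # Phase 1 (prepare): broadcast the proposed update and collect votes.
    votes = [node.prepare(update) for node in nodes]
    if sum(votes) < quorum * len(nodes):
        return False  # not enough agreement: abort
    # Phase 2 (commit): the coordinator commands everyone to apply it.
    for node in nodes:
        node.commit(update)
    return True

class Node:
    def __init__(self):
        self.state = {}
    def prepare(self, update) -> bool:
        return True  # an honest, live node accepts any valid update
    def commit(self, update):
        self.state.update(update)

nodes = [Node() for _ in range(4)]
assert two_phase_commit(nodes, {"alice": 99, "bob": 1})
assert all(n.state["alice"] == 99 for n in nodes)
```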
But two-phase commit doesn't really work with malicious nodes; for that we can use PBFT (Practical Byzantine Fault Tolerance).
This consensus algorithm uses two prepare-like phases (pre-prepare and prepare) and a commit.
It only works with small consensus sets because the message exchange is really heavy (on the order of N² messages for N nodes, since the prepare and commit phases are all-to-all).
This doesn't scale: latency grows far too high to reach consensus, so the system is not practical for large networks.
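A quick back-of-the-envelope count of that quadratic growth (assuming one leader broadcast plus two all-to-all phases, a simplification of PBFT's actual message pattern):

```python
def pbft_messages(n: int) -> int:
    # pre-prepare: leader to all (~n); prepare and commit: all-to-all (~n^2 each)
    return n + 2 * n * n

for n in (4, 16, 64, 256):
    print(n, pbft_messages(n))  # 4: 36, 16: 528, 64: 8256, 256: 131328
```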
It's also susceptible to a Sybil attack (one attacker cheaply pretending to be many nodes).
Thus, PBFT alone does not solve the Byzantine generals problem in an open, permissionless setting.
While it may be practically tolerant to Byzantine faults, the system is no good for two main reasons:
We can't have a large consensus set (and a large set is what makes the system hard to attack, protecting our digital scarcity)
It is susceptible to Sybil attacks, again ruining our scarcity property
The way that Bitcoin finds the correct head of the chain is by summing the work on each fork and following the chain with the most accumulated work.
Any malicious forks are orphaned off; the heaviest chain wins.
When clients sync, they are not adding any work; they are just verifying the transactions for correctness.
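A minimal sketch of that heaviest-chain fork choice (the `Block` structure and the example numbers are hypothetical): walk from each tip back to genesis, total the work, and pick the heaviest branch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    parent: Optional["Block"]
    work: int  # difficulty of this block's proof of work

def total_work(tip: Block) -> int:
    # Sum the work along the branch from this tip back to genesis.
    work, block = 0, tip
    while block is not None:
        work += block.work
        block = block.parent
    return work

def heaviest_tip(tips: list[Block]) -> Block:
    # The canonical head is the tip whose branch has the most total work.
    return max(tips, key=total_work)

genesis = Block(None, 1)
honest = Block(Block(genesis, 10), 10)   # two blocks of work 10 each
attacker = Block(genesis, 12)            # one heavier block, less total work
assert heaviest_tip([honest, attacker]) is honest  # 21 total vs 13
```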
Proof of stake is endogenous (the stake lives inside the protocol), whereas proof of work is exogenous (the protocol cannot see the physical work itself, only a proof of it).
Because the stake is held in-protocol, we can also penalize misbehavior (whereas with work we can really only reward).
Proof of stake looks closer to traditional BFT consensus protocols (e.g., something like two-phase commit).
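A minimal sketch of why endogenous stake matters (the `StakeRegistry` and its numbers are hypothetical, not Ethereum's actual accounting): a deposit the protocol holds can be rewarded or burned, while external hardware doing work can only be rewarded.

```python
class StakeRegistry:
    def __init__(self):
        self.balances: dict[str, int] = {}

    def deposit(self, validator: str, amount: int):
        # The stake is endogenous: the protocol holds it and can see it.
        self.balances[validator] = self.balances.get(validator, 0) + amount

    def reward(self, validator: str, amount: int):
        self.balances[validator] += amount

    def slash(self, validator: str, fraction: float = 0.5):
        # Penalize provable misbehavior by burning part of the deposit;
        # with exogenous work, there is nothing in-protocol to burn.
        self.balances[validator] -= int(self.balances[validator] * fraction)

registry = StakeRegistry()
registry.deposit("validator_1", 32)
registry.slash("validator_1")
assert registry.balances["validator_1"] == 16
```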
Timestamp: 53:16 - This is where these notes currently leave off.
Two phase commits: https://www.youtube.com/watch?v=-_rdWB9hN1c