First, let’s define two terms, scalability and performance, which have standard computer science meanings that are often misused in blockchain contexts. Performance measures what a system is currently capable of achieving. As we’ll discuss below, performance metrics might include transactions per second or median transaction confirmation time. Scalability, on the other hand, measures the ability of a system to improve performance by adding resources.
This distinction is important: Many approaches to improving performance do not improve scalability at all, when properly defined. A simple example is using a more efficient digital signature scheme, such as BLS signatures, which are roughly half the size of Schnorr or ECDSA signatures. If Bitcoin switched from ECDSA to BLS, the number of transactions per block could go up by 20-30%, improving performance overnight. But we can only do this once — there isn’t an even more space-efficient signature scheme to switch to (BLS signatures can also be aggregated to save more space, but this is another one-off trick).
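To make the one-off nature of this trick concrete, here is a back-of-the-envelope sketch of the throughput gain from shrinking signatures. All sizes are rough assumptions for illustration (a DER-encoded ECDSA signature is around 72 bytes, a BLS signature around 48 bytes, and the rest of a transaction is fixed here at 130 bytes), not measurements of any real chain:

```python
# Back-of-the-envelope: how much does a smaller signature scheme raise
# transactions per block? All byte sizes below are rough assumptions.
ECDSA_SIG_BYTES = 72    # typical DER-encoded ECDSA signature
BLS_SIG_BYTES = 48      # BLS signature (one group element)
OTHER_TX_BYTES = 130    # assumed non-signature transaction data
BLOCK_BYTES = 1_000_000 # ~1 MB block

def txs_per_block(sig_bytes: int) -> int:
    """Transactions that fit in a block at a given signature size."""
    return BLOCK_BYTES // (sig_bytes + OTHER_TX_BYTES)

before = txs_per_block(ECDSA_SIG_BYTES)
after = txs_per_block(BLS_SIG_BYTES)
gain = 100 * (after - before) / before
print(f"ECDSA: {before} txs/block, BLS: {after} txs/block (+{gain:.0f}%)")
```

The exact percentage depends on how much of a transaction is signature data (transactions with many inputs carry many signatures, so they benefit more), but the structure of the calculation shows why the gain happens exactly once: after the switch, there is no smaller scheme left to adopt.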
A number of other one-off tricks (such as SegWit) are possible in blockchains, but you need a scalable architecture to achieve continual performance improvement, where adding more resources improves performance over time. This is the conventional wisdom in many other computer systems as well, such as building a web server. With a few common tricks, you can build one very fast server; but ultimately, you need a multi-server architecture that can meet ever-growing demand by continually adding extra servers.
Understanding the distinction also helps avoid the common category error found in statements like, “Blockchain X is highly scalable, it can handle Y transactions per second!” The second claim may be impressive, but it’s a performance metric, not a scalability metric. It doesn’t speak to the ability to improve performance by adding resources.
Scalability inherently requires exploiting parallelism. In the blockchain space, Layer 1 scaling appears to require sharding or something that looks like sharding. The basic concept of sharding — splitting state into pieces so that different validators can process independently — closely matches the definition of scalability. Layer 2 offers even more options for adding parallel processing, including off-chain channels, rollup servers, and sidechains.
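The core idea of sharding can be sketched in a few lines: partition state (here, toy account balances) by hashing each account name into a shard, so that disjoint shards can be processed by independent workers in parallel. The names and structure below are illustrative only and do not correspond to any real protocol:

```python
# Minimal sharding sketch: assign each account to a shard by hashing its
# name, then let independent workers process each shard's transactions
# in parallel. Illustrative only; real sharding must also handle
# cross-shard transactions, which is the hard part.
import hashlib
from concurrent.futures import ThreadPoolExecutor

NUM_SHARDS = 4

def shard_of(account: str) -> int:
    """Deterministically map an account name to a shard index."""
    return hashlib.sha256(account.encode()).digest()[0] % NUM_SHARDS

def process_shard(txs):
    """Apply a shard's transactions to its own slice of state."""
    balances = {}
    for sender, amount in txs:
        balances[sender] = balances.get(sender, 0) - amount
    return balances

txs = [("alice", 5), ("bob", 3), ("carol", 7), ("dave", 2)]
shards = {i: [] for i in range(NUM_SHARDS)}
for tx in txs:
    shards[shard_of(tx[0])].append(tx)

# Each shard can be validated by a different worker; adding shards
# (and workers) adds capacity, which is what makes this scalable.
with ThreadPoolExecutor(max_workers=NUM_SHARDS) as pool:
    results = list(pool.map(process_shard, shards.values()))
```

Because each worker touches only its own shard's state, adding more shards and validators increases total throughput — the defining property of a scalable architecture, in contrast to the one-off tricks above.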