whiskoy.eth
What are the benefits of the Cancun upgrade?

Background on Scaling:#

  • Each block has a gas limit. Ethereum caps how much computation and data a block can hold by metering everything in gas: a block can hold at most 30 million gas.
  • Large blocks come at a cost. Ethereum does not want every block to be full, so each block also has a gas target of 15 million gas.
  • Automatic gas-price adjustment. When a block's gas consumption exceeds the gas target (15 million gas), the base fee of the next block rises by up to 12.5%; when consumption falls below the target, the base fee drops. Fees therefore rise during peak demand to relieve congestion, and fall during quiet periods to attract more transactions.
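The adjustment rule above can be sketched in a few lines of Python. This is a simplified illustration of the EIP-1559 mechanism, not consensus code; real clients handle additional edge cases.

```python
# Sketch of the base-fee adjustment rule described above.
# Constants follow the mainnet parameters mentioned in the text.

GAS_LIMIT = 30_000_000        # maximum gas per block
GAS_TARGET = 15_000_000       # target gas per block
MAX_CHANGE_DENOMINATOR = 8    # bounds each step at 1/8 = 12.5%

def next_base_fee(base_fee: int, gas_used: int) -> int:
    """Return the next block's base fee given this block's gas usage."""
    if gas_used == GAS_TARGET:
        return base_fee
    delta = gas_used - GAS_TARGET
    # The fee moves proportionally to how far usage is from the target,
    # capped at +/-12.5% when the block is completely full or empty.
    change = base_fee * abs(delta) // GAS_TARGET // MAX_CHANGE_DENOMINATOR
    return base_fee + change if delta > 0 else base_fee - change

# A completely full block raises the base fee by exactly 12.5%.
print(next_base_fee(100_000_000_000, 30_000_000))  # 112500000000
```

A completely empty block drops the fee by the same 12.5%, so sustained congestion compounds the fee upward until demand falls back to the target.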

Scaling Solution (Layer2+Sharding):#

Layer2:

  • Optimistic Rollup: Transactions are assumed honest by default; many transactions are compressed into one and submitted to Ethereum. After submission there is a time window (the challenge period, currently one week) during which anyone can challenge and verify the authenticity of the transactions. The downside is that a user who wants to withdraw ETH from an Optimistic Rollup back to Ethereum must wait until the challenge period ends for final confirmation.
  • ZK Rollup: Only a zero-knowledge validity proof and the final state changes need to be posted, so it compresses more data than an Optimistic Rollup and needs no challenge period. The trade-off is much higher technical complexity.
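The core economics of both rollup types is the same: one fixed L1 submission cost is amortized across every transaction in the batch. Back-of-envelope arithmetic makes this concrete (all numbers here are hypothetical, purely for illustration):

```python
# Illustrative arithmetic, not real fee data: a rollup batch pays one
# fixed L1 data-publication cost, shared by all transactions in it.

def per_tx_l1_cost(batch_l1_gas: int, num_txs: int) -> float:
    """Average L1 gas each L2 transaction pays for data publication."""
    return batch_l1_gas / num_txs

# Hypothetical batch: 1,000,000 L1 gas carrying 2,000 compressed
# transactions costs each user only 500 gas of L1 data.
print(per_tx_l1_cost(1_000_000, 2_000))  # 500.0
```

The more transactions a rollup can compress into one submission, the cheaper each one becomes, which is why ZK Rollups' tighter compression is a scalability advantage.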

Sharding:

Before explaining Sharding, let's review the block generation process of ETH.

Post-merge Ethereum has transitioned from PoW to PoS: one block is produced per slot of 12 seconds, and 32 slots form an epoch of 6.4 minutes. Anyone who stakes 32 ETH becomes a validator, and the beacon chain pseudo-randomly selects one validator as the block proposer for each slot. Within each epoch, every slot is also assigned a committee of roughly 1/32 of all validators, which votes to attest to the block the proposer produces. Once the proposer has packaged the block, it is accepted if more than two-thirds of the committee votes in favor.
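The timing and voting arithmetic above can be sketched as follows (a simplified illustration using the parameters from the text; real committee assignment and attestation rules are considerably more involved):

```python
# Sketch of the PoS timing and voting arithmetic described above.

SECONDS_PER_SLOT = 12
SLOTS_PER_EPOCH = 32

def epoch_duration_minutes() -> float:
    """One epoch = 32 slots x 12 seconds = 6.4 minutes."""
    return SECONDS_PER_SLOT * SLOTS_PER_EPOCH / 60

def committee_size(total_validators: int) -> int:
    """Each slot in an epoch gets roughly 1/32 of all validators."""
    return total_validators // SLOTS_PER_EPOCH

def block_accepted(votes_for: int, committee: int) -> bool:
    """A block needs more than two-thirds of the committee's votes."""
    return 3 * votes_for > 2 * committee

print(epoch_duration_minutes())   # 6.4
print(block_accepted(67, 100))    # True
print(block_accepted(66, 100))    # False
```

The integer comparison `3 * votes_for > 2 * committee` avoids floating-point rounding when checking the two-thirds supermajority.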

  • Sharding 1.0 (deprecated, can be skipped)

Sharding 1.0 is a general term for Ethereum's sharding designs before Danksharding, and a rough understanding of it helps. Its design concept was essentially "state sharding": instead of a single main chain, Ethereum would run up to 64 shard chains, scaling by adding new chains. Each shard chain would process its own share of Ethereum's data and submit it to the beacon chain, which coordinates the whole network; the block producers and committees of each shard chain would be randomly assigned by the beacon chain. The core unsolved problems, however, were data synchronization and ever-growing MEV.


Danksharding (today's protagonist)#

Danksharding takes a new approach to sharding to solve Ethereum's scalability problem. It is a sharding design built around Layer2 Rollups: it scales Ethereum without significantly increasing the burden on nodes, preserves decentralization and security, and also mitigates the negative impact of MEV.


EIP-4844:#

EIP-4844 introduces a new transaction type, the Blob Transaction, to Ethereum. The Blobs it carries act as additional external data space for Ethereum (essentially, large data sidecars attached to blocks):

  • The size of a Blob is approximately 128 KB (the average size of a block is about 60-70 KB).
  • A transaction can carry up to two Blobs (256 KB).
  • The target number of Blobs per block is 8 (1 MB), with a maximum of 16 Blobs (2 MB) per block.
  • Blobs do not need to be stored permanently, unlike calldata (deletion after about 30 days is currently recommended).
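A quick sanity check of the capacity figures above, using the draft parameters quoted in this article:

```python
# Blob capacity arithmetic from the figures quoted above
# (per the EIP-4844 draft parameters cited in this article).

BLOB_SIZE_KB = 128
TARGET_BLOBS_PER_BLOCK = 8
MAX_BLOBS_PER_BLOCK = 16

target_kb = BLOB_SIZE_KB * TARGET_BLOBS_PER_BLOCK   # 1024 KB = 1 MB
max_kb = BLOB_SIZE_KB * MAX_BLOBS_PER_BLOCK         # 2048 KB = 2 MB
print(target_kb // 1024, "MB target,", max_kb // 1024, "MB max")
```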


Blobs bring a huge amount of additional data space to Ethereum. For perspective, the entire Ethereum ledger accumulated since genesis totals only about 1 TB, while Blobs can add roughly 2.5 TB to 5 TB of data per year, several times the size of the whole existing ledger.
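The 2.5-5 TB figure can be checked with back-of-envelope arithmetic, assuming a 12-second block time and the 1-2 MB of blob data per block quoted above:

```python
# Back-of-envelope check of the "2.5-5 TB per year" claim,
# assuming a 12-second block time and 1-2 MB of blob data per block.

SECONDS_PER_YEAR = 365 * 24 * 3600
SECONDS_PER_SLOT = 12
blocks_per_year = SECONDS_PER_YEAR // SECONDS_PER_SLOT  # 2,628,000

target_tb = blocks_per_year * 1 / 1_000_000   # ~2.6 TB at 1 MB/block
max_tb = blocks_per_year * 2 / 1_000_000      # ~5.3 TB at 2 MB/block
print(round(target_tb, 1), "to", round(max_tb, 1), "TB per year")
```

So the article's 2.5-5 TB range corresponds to blocks running between the blob target and the blob maximum all year.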

The Blob transactions introduced by EIP-4844 are tailor-made for Rollup. Rollup data is uploaded to Ethereum in the form of Blobs, and the additional data space allows Rollup to achieve higher TPS and lower costs, while also freeing up block space occupied by Rollup for more users.

Danksharding Scaling Solution#

The complete Danksharding solution further expands the data capacity that Blobs can carry from 1-2 MB per block to 16-32 MB per block, and introduces a new mechanism called Proposer-Builder Separation (PBS) to address the problems caused by MEV.

Danksharding proposes a solution called Data Availability Sampling (DAS) to reduce the burden on nodes while ensuring data availability.

  • Data Availability Sampling (DAS) reduces nodes' storage burden and avoids excessive node centralization (via erasure coding and KZG polynomial commitments)

The idea of Data Availability Sampling (DAS) is to split Blob data into fragments and have nodes randomly sample fragments instead of downloading whole Blobs, so that the fragments end up scattered across every node in Ethereum. The complete Blob data is still held by the network as a whole, provided there is a sufficient number of decentralized nodes.

For example, suppose Blob data is divided into 10 fragments and there are 100 nodes in the network. Each node randomly samples and downloads one fragment and submits the sampled fragment's index with the block. As long as every fragment index can be collected for a block, Ethereum assumes the Blob's data is available, since the original data can be restored by reassembling the fragments. There remains a very small probability that none of the 100 nodes samples a particular fragment, leaving data missing; this reduces security slightly, but the probability is acceptably low.
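That "very small probability" can be quantified. The sketch below uses the numbers from the example and a simplified model that ignores erasure coding, which in practice lets the data be recovered even when some fragments are missing:

```python
# Probability estimate for the sampling scheme in the example above:
# 100 nodes each randomly sample 1 of 10 fragments. How likely is it
# that some fragment is never sampled by anyone?

FRAGMENTS = 10
NODES = 100

# Chance one particular fragment is missed by every node:
p_one_missed = (1 - 1 / FRAGMENTS) ** NODES   # (9/10)^100 ~ 2.7e-5

# Union bound over all fragments gives an upper estimate:
p_any_missed = FRAGMENTS * p_one_missed       # ~ 2.7e-4

print(f"one fragment missed: {p_one_missed:.2e}")
print(f"any fragment missed: <= {p_any_missed:.2e}")
```

Even in this pessimistic model the failure probability is on the order of 10^-4, and it shrinks rapidly as the number of sampling nodes grows.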


However, there is a problem: who is responsible for encoding the original data?

  • Proposer-Builder Separation (PBS) solves the division of labor between block proposers and block builders

To encode the original data of Blobs, the encoding nodes must hold the complete original data, which places higher requirements on them. As mentioned earlier, Danksharding introduces a new mechanism called Proposer-Builder Separation (PBS) to address the problems caused by MEV. In fact, this mechanism solves not only the MEV problem but also the encoding problem.

  1. Nodes with high-performance configurations can become builders, who only need to download Blob data for encoding and create blocks, which are then broadcasted to other nodes for sampling. Builders have higher synchronization data volume and bandwidth requirements, so they tend to be more centralized.
  2. Nodes with lower-performance configurations can become proposers, who only need to verify the validity of the data and create and broadcast block headers. Proposers have lower synchronization data volume and bandwidth requirements, so they tend to be more decentralized.
  • The Anti-Censorship List (crList) restricts builders' ability to exploit MEV

However, builders have the ability to censor transactions and manipulate their order to exploit MEV. Builders can intentionally ignore certain transactions, arbitrarily sort and insert their own transactions to gain MEV. The Anti-Censorship List (crList) solves these problems.


The mechanism of the Anti-Censorship List (crList) is as follows:

  1. Before the builder packages block transactions, the proposer will first publish an Anti-Censorship List (crList), which contains all transactions in the mempool.
  2. The builder can only package and sort transactions from the crList. This means that the builder cannot insert private transactions to gain MEV or intentionally reject a transaction (unless the gas limit is reached).
  3. After the builder has packaged the transactions, they will broadcast the hash of the final transaction list to the proposer. The proposer will select one of the transaction lists to generate a block header and broadcast it.
  4. When nodes synchronize data, they will obtain block headers from the proposer and block bodies from the builder to ensure that the block body is the final selected version.
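Steps 1-2 above can be sketched as a simple packing rule: the builder orders transactions freely but may only draw from the crList, and may only drop a listed transaction when the gas limit is hit. Names and gas numbers here are purely illustrative:

```python
# Sketch of the crList constraint described above. The builder picks
# the ordering, but every included tx must appear on the proposer's
# anti-censorship list; private insertions are rejected.

def build_block(cr_list: set[str], builder_order: list[str],
                gas_limit: int, tx_gas: dict[str, int]) -> list[str]:
    """Pack txs in the builder's chosen order, but only from crList."""
    block, used = [], 0
    for tx in builder_order:
        if tx not in cr_list:
            continue  # private/unlisted txs cannot be inserted
        if used + tx_gas[tx] > gas_limit:
            break     # the only legitimate reason to omit a listed tx
        block.append(tx)
        used += tx_gas[tx]
    return block

cr = {"tx1", "tx2", "tx3"}
gas = {"tx1": 50, "tx2": 30, "tx3": 40, "private": 10}
# The builder tries to sneak in "private" and reorder the rest:
print(build_block(cr, ["private", "tx2", "tx1", "tx3"], 100, gas))
# ['tx2', 'tx1']  -- "private" is excluded; "tx3" is dropped only
#                    because it would exceed the gas limit
```

The builder keeps its ordering freedom (and thus some MEV), but it can no longer censor listed transactions or insert unlisted ones.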

Conclusion#

Danksharding provides a revolutionary solution for Ethereum to solve the "blockchain trilemma", achieving scalability while ensuring decentralization and security:

  • The introduction of EIP-4844: Proto-Danksharding introduces a new transaction type, Blob, which can help Ethereum achieve higher TPS and lower costs on Rollup.
  • Data Availability Sampling (DAS) reduces the burden on nodes and ensures data availability through erasure coding and KZG polynomial commitment.
  • Building on Data Availability Sampling (DAS), the data capacity of Blobs is expanded to 16-32 MB per block, further enhancing scalability.
  • Proposer-Builder Separation (PBS) splits block validation and block building between two node roles, achieving decentralization for validators while tolerating centralization for builders.
  • The Anti-Censorship List (crList) and dual-slot PBS greatly reduce the negative impact of MEV, preventing builders from inserting private transactions or censoring transactions.

If everything goes as planned, EIP-4844, the precursor to Danksharding, will be implemented in Ethereum's upcoming Cancun upgrade. Once EIP-4844 is live, the most direct beneficiaries will be Layer2 Rollups and the ecosystems built on them: higher TPS and lower costs suit high-frequency on-chain applications well, and we can imagine some "killer applications" emerging.

Reference: https://research.web3caff.com/zh/archives/6259
