Blob aggregation does not modify the original data submitted by rollups, so their KZG commitments remain valid after aggregation, as long as the data can be losslessly recovered.
EIP-4844 blobs have a fixed size of 128KB. Because this data is too large to pass around, a lighter commitment to the blob data is calculated and used for blob verification. KZG commitments allow validators and nodes to efficiently prove that some data (like a blob) exists and has not been altered, without actually transmitting or storing the full data on-chain.
This raises a question: what happens to a blob's KZG commitment when its data is aggregated into a shared blob? In short, nothing. The aggregation process never alters the original blob data in a way that would change its KZG commitment after it has been unpacked.
To understand why this is, we need to look at how data is encoded into a blob, and how KZG commitments are calculated. Blobs in EIP-4844 have a fixed length of exactly 4096 field elements, each field element being 32 bytes, thus totaling 128KB.
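The size arithmetic is easy to check directly. The constants below mirror the values fixed by EIP-4844:

```rust
// Blob dimensions fixed by EIP-4844.
const FIELD_ELEMENTS_PER_BLOB: usize = 4096;
const BYTES_PER_FIELD_ELEMENT: usize = 32;

fn main() {
    let blob_bytes = FIELD_ELEMENTS_PER_BLOB * BYTES_PER_FIELD_ELEMENT;
    assert_eq!(blob_bytes, 131_072); // exactly 128 KiB
    println!("blob size: {} bytes ({} KiB)", blob_bytes, blob_bytes / 1024);
}
```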
KZG commitments operate on a polynomial constructed from these 4096 fixed-size field elements, not on arbitrarily sized data directly. So if a rollup's original data is smaller than 128KB, it must first be padded or encoded up to exactly 128KB before the KZG commitment is calculated. This is what libraries like alloy do when encoding data into EIP-4844 blobs.
The point to keep in mind is that blob encoding and KZG commitment computation are both deterministic processes. As long as the input rollup data doesn't change, the KZG commitment won't change. And as we'll see, the aggregation/deaggregation process does not mutate the original rollup data.
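A dependency-free way to see the determinism argument: the sketch below uses zero-padding to a fixed size as a stand-in for the real blob coder, and Rust's std hasher as a stand-in for the real KZG commitment (both are hypothetical simplifications, not the actual cryptography). Encoding the same bytes twice always yields the same blob, and therefore the same commitment:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const BLOB_SIZE: usize = 131_072; // 128 KiB, as in EIP-4844

// Stand-in for blob encoding: deterministic zero-padding to a fixed size.
fn blobify(data: &[u8]) -> Vec<u8> {
    let mut blob = data.to_vec();
    blob.resize(BLOB_SIZE, 0);
    blob
}

// Stand-in for a KZG commitment: any deterministic function of the blob.
fn commit(blob: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    blob.hash(&mut h);
    h.finish()
}

fn main() {
    let data = b"Transaction data from rollup 1";
    // Same input, encoded twice, yields the same blob and the same commitment.
    assert_eq!(blobify(data), blobify(data));
    assert_eq!(commit(&blobify(data)), commit(&blobify(data)));
    println!("deterministic: commitments match");
}
```

The real pipeline swaps in a field-element-aware coder and an elliptic-curve commitment, but both remain pure functions of the input bytes, which is all the argument needs.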
Overview
We’ll walk through a simple code example using the open source alloy crate, which is what reth uses internally for its EIP-4844 support.
In this example, we’re going to take two data strings:
let string1 = "Transaction data from rollup 1";
let string2 = "Transaction data from rollup 2";
We’ll then “blobify” these strings individually and calculate their KZG commitments. Next we’ll perform a very basic form of aggregation by concatenating the two strings and blobifying the concatenated data, and finally we’ll reverse that process to show that each KZG commitment stays intact.
Walkthrough
We start with some simulated rollup data, and convert it into bytes:
let rollup_1_data = "Transaction data from rollup 1";
let rollup_2_data = "Transaction data from rollup 2";
let rollup_1_data_len = rollup_1_data.len();
let rollup_2_data_len = rollup_2_data.len();
let rollup_1_data_bytes = rollup_1_data.as_bytes();
let rollup_2_data_bytes = rollup_2_data.as_bytes();
Next, we use alloy’s SimpleCoder and SidecarBuilder to encode each rollup’s data into a 128KB blob, and retrieve its KZG commitment:
let mut rollup_1_blob_builder = SidecarBuilder::<SimpleCoder>::new();
rollup_1_blob_builder.ingest(rollup_1_data_bytes);
let rollup_1_sidecar: BlobTransactionSidecar = rollup_1_blob_builder.build()?;
let rollup_1_blob = rollup_1_sidecar.blobs.get(0).ok_or("Sidecar1 has no blobs")?;
let rollup_1_commitment = rollup_1_sidecar.commitments.get(0).ok_or("Sidecar1 has no commitments")?;
We do the same for rollup 2. It’s worth noting these two KZG commitments, as they are what the rollups would keep track of.
Next, we “aggregate” the rollup data by concatenating the raw bytes of the original rollup data strings (not their blob-encoded versions):
let mut aggregated_data_bytes = Vec::new();
aggregated_data_bytes.extend_from_slice(rollup_1_data_bytes);
aggregated_data_bytes.extend_from_slice(rollup_2_data_bytes);
We then encode this aggregated data as a blob and calculate its KZG commitment. Note that this “aggregated” KZG commitment is not of much use in this example, although it will be different from both rollup 1’s and rollup 2’s individual KZG commitments.
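We can make that last point concrete with the same toy stand-ins as before (zero-padding in place of the real blob coder, a std hash in place of real KZG; both hypothetical). The aggregated blob's bytes differ from either input blob's bytes, so it commits to a distinct value:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const BLOB_SIZE: usize = 131_072;

// Stand-in for blob encoding: zero-pad to a fixed size.
fn blobify(data: &[u8]) -> Vec<u8> {
    let mut blob = data.to_vec();
    blob.resize(BLOB_SIZE, 0);
    blob
}

// Stand-in for a KZG commitment.
fn commit(blob: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    blob.hash(&mut h);
    h.finish()
}

fn main() {
    let a = b"Transaction data from rollup 1".to_vec();
    let b = b"Transaction data from rollup 2".to_vec();
    let aggregated: Vec<u8> = [a.clone(), b.clone()].concat();

    // The aggregated blob differs byte-for-byte from either input blob,
    // so its commitment is a new, distinct value.
    assert_ne!(blobify(&aggregated), blobify(&a));
    assert_ne!(blobify(&aggregated), blobify(&b));
    assert_ne!(commit(&blobify(&aggregated)), commit(&blobify(&a)));
    println!("aggregated commitment is distinct");
}
```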
Finally, we deaggregate the aggregated blob by stripping its blob encoding and manually parsing out the bytes for rollup 1 and rollup 2:
let mut coder = SimpleCoder::default();
let decoded_data = coder.decode_all(&[owned_aggregated_blob])
.and_then(|v| v.into_iter().next())
.ok_or_else(|| eyre::eyre!("Failed to decode or find data in aggregated blob"))?;
let decoded_data_str = String::from_utf8(decoded_data)?;
println!("Decoded Data: {}", decoded_data_str);
// prints out Decoded Data: Transaction data from rollup 1Transaction data from rollup 2
// We know the lengths of the original data, so we can split the decoded data into the two original strings
// In reality, aggregated blobs encode offsets and lengths into a common header format
let rollup_1_data_after_aggregation = decoded_data_str[..rollup_1_data_len].to_string();
let rollup_2_data_after_aggregation = decoded_data_str[rollup_1_data_len..].to_string();
println!("Rollup 1 Data After Aggregation and Decoding: {}", rollup_1_data_after_aggregation);
println!("Rollup 2 Data After Aggregation and Decoding: {}", rollup_2_data_after_aggregation);
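In this example we split by lengths we happen to know. A production aggregator would carry that information in the payload itself. The sketch below shows one hypothetical framing, a u32 length prefix per segment; this is an illustration only, not any particular aggregator's actual header format:

```rust
// Hypothetical framing: each segment is written as [u32 little-endian length | bytes].
fn frame(segments: &[&[u8]]) -> Vec<u8> {
    let mut out = Vec::new();
    for seg in segments {
        out.extend_from_slice(&(seg.len() as u32).to_le_bytes());
        out.extend_from_slice(seg);
    }
    out
}

// Reverse the framing: read a length prefix, then that many bytes, repeat.
fn deframe(mut data: &[u8]) -> Vec<Vec<u8>> {
    let mut segments = Vec::new();
    while data.len() >= 4 {
        let len = u32::from_le_bytes(data[..4].try_into().unwrap()) as usize;
        segments.push(data[4..4 + len].to_vec());
        data = &data[4 + len..];
    }
    segments
}

fn main() {
    let inputs: [&[u8]; 2] = [
        b"Transaction data from rollup 1",
        b"Transaction data from rollup 2",
    ];
    let framed = frame(&inputs);
    let segments = deframe(&framed);
    // Each rollup's original bytes come back out untouched.
    assert_eq!(segments[0], b"Transaction data from rollup 1");
    assert_eq!(segments[1], b"Transaction data from rollup 2");
    println!("recovered {} segments", segments.len());
}
```

Because framing only wraps the original bytes with metadata, deframing returns them unmodified, which is exactly the property the KZG commitments depend on.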
We now have the original string data that the rollups submitted. If this were transaction data, the rollup nodes could begin running it through the rest of their derivation pipeline. As a last step, we re-encode each string as a blob, calculate its KZG commitment, and confirm that it hasn’t changed since the rollup initially submitted it.
// Now, let's encode the two original strings back into blobs and verify the KZG commitments
let mut rollup_1_blob_builder = SidecarBuilder::<SimpleCoder>::new();
rollup_1_blob_builder.ingest(rollup_1_data_after_aggregation.as_bytes());
let rollup_1_sidecar: BlobTransactionSidecar = rollup_1_blob_builder.build()?;
let rollup_1_blob = rollup_1_sidecar.blobs.get(0).ok_or("Sidecar1 has no blobs")?;
let rollup_1_commitment_after_aggregation = rollup_1_sidecar.commitments.get(0).ok_or("Sidecar1 has no commitments")?;
// println!("Rollup 1 Blob After Encoding: {}", hex::encode(rollup_1_blob));
println!("Rollup 1 Commitment After Encoding: {}", hex::encode(rollup_1_commitment_after_aggregation));
let mut rollup_2_blob_builder = SidecarBuilder::<SimpleCoder>::new();
rollup_2_blob_builder.ingest(rollup_2_data_after_aggregation.as_bytes());
let rollup_2_sidecar: BlobTransactionSidecar = rollup_2_blob_builder.build()?;
let rollup_2_blob = rollup_2_sidecar.blobs.get(0).ok_or("Sidecar2 has no blobs")?;
let rollup_2_commitment_after_aggregation = rollup_2_sidecar.commitments.get(0).ok_or("Sidecar2 has no commitments")?;
// println!("Rollup 2 Blob After Encoding: {}", hex::encode(rollup_2_blob));
println!("Rollup 2 Commitment After Encoding: {}", hex::encode(rollup_2_commitment_after_aggregation));
// assert that the commitments are the same before and after aggregation
assert_eq!(rollup_1_commitment, rollup_1_commitment_after_aggregation);
assert_eq!(rollup_2_commitment, rollup_2_commitment_after_aggregation);
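The whole walkthrough condenses into a dependency-free sketch. As before, zero-padding stands in for the blob coder and a std hash stands in for KZG (both hypothetical simplifications), but the determinism argument is identical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for blob encoding: zero-pad to 128 KiB.
fn blobify(data: &[u8]) -> Vec<u8> {
    let mut blob = data.to_vec();
    blob.resize(131_072, 0);
    blob
}

// Stand-in for a KZG commitment.
fn commit(blob: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    blob.hash(&mut h);
    h.finish()
}

fn main() {
    let r1 = b"Transaction data from rollup 1".to_vec();
    let r2 = b"Transaction data from rollup 2".to_vec();

    // Commitments the rollups record before aggregation.
    let c1 = commit(&blobify(&r1));
    let c2 = commit(&blobify(&r2));

    // Aggregate by concatenation, then deaggregate using the known lengths.
    let aggregated = [r1.clone(), r2.clone()].concat();
    let (r1_after, r2_after) = aggregated.split_at(r1.len());

    // Re-encode and re-commit: the bytes were never mutated,
    // so the commitments are unchanged.
    assert_eq!(commit(&blobify(r1_after)), c1);
    assert_eq!(commit(&blobify(r2_after)), c2);
    println!("commitments preserved through aggregation");
}
```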
We’ve seen that aggregation preserves a rollup blob’s original KZG commitment, as long as the aggregation process doesn’t irreversibly alter the rollup’s data.
If you’re an L2 and want to save costs on Ethereum DA without code changes, get started with us now: https://spire.deform.cc/DABuilder/
Spire Labs and Antony Denyer