
The Quest for a Content Addressable SQLite
Happy Birthday SQLite! SQLite 1.0 was released 23 years ago today, so to celebrate this momentous occasion, join us on an exciting quest as we blend the old with the new, the familiar with the cutting-edge, and dare to dream of a future where edge databases and content addressable storage work together. Our destination? An experimental project to combine SQLite's features with the high performance, scalability, and deduplication capabilities of Content Addressable Storage (CAS). We’ll pr...

We're moving to Paragraph.xyz!
We started this blog and our Weeknotes newsletter on Substack to give the community insight into the latest happenings with Tableland. Since then, we began research and experiments around web3-native data needs—which led to the MVP for Basin. Our blogs started to incorporate topics outside of the core Tableland database, but everything we’ve shared has been built by the team behind the protocol—Textile. ICYMI—Mirror and Paragraph are joining forces (see here), so moving from Mirror to Paragra...

Discord Roles from Chain-driven Application Data
What Is It? The Tableland team is excited to introduce our new Discord<>Tableland bot integration—linking on/off-chain activity back to Discord roles! Namely, developers can create/deploy this bot as an extension to Vulcan using its native features. It allows the bot to read data from an application’s Tableland tables and use it in Discord user/role management—all with a decentralized cloud database! Why Did We Do It? Vulcan is great for checking Discord member NFT ownership and creating role...
Tableland is a permissionless database that allows developers to use relational data and SQL from any contract, wallet, or app.




Data Availability (”DA”) is a hot topic now that EIP-4844 is live! There are a few major players in the space, and all of them take similar approaches but with a few differences. We'll take a look at EigenDA, Celestia, Avail, and Arbitrum AnyTrust. But first, let's start off with a bit of background information.
DA layers ensure that block data is provably published so that applications and rollups can know what the state of the chain is—but once the data is published, DA layers do not guarantee that historical data will be permanently stored and remain retrievable. DAs either deploy a validity proof or a fraud/fault proof (validity proofs are more common). Data availability sampling (”DAS”) by light clients is also a common feature to ensure data can be recovered. However, the term DA typically refers to simple blocks/transaction data, so it differs from large, arbitrary, long-term data availability and storage.
The following section outlines common terms across DA protocols. Skip if familiar.
To clarify, here's a quick recap of what DAs implement:
Validity proofs: use zk-SNARKs/STARKs to ensure that all data and transactions are valid before they are included onchain.
Computationally intensive but provides strong security guarantees.
Fraud/fault proofs: allow data to be posted onchain before it is guaranteed valid, and use a challenge period for transaction dispute resolution.
Less computationally intensive but lower security guarantees (i.e., requires the network to actively generate fraud proofs).
KZG commitment scheme: commits to erasure-encoded data and proves its correctness without needing a fraud proof.
E.g., full nodes can prove transaction inclusion to light nodes using a succinct proof.
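A full KZG implementation needs pairing-friendly elliptic curves, but the underlying idea of a succinct inclusion proof can be illustrated with the simpler Merkle tree construction that many chains also rely on. Below is a minimal Python sketch (function names are illustrative, not from any particular library): a light node verifies that a transaction is included in a block using only O(log n) sibling hashes, never the full block.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a Merkle root over hashed leaves (duplicating the last on odd levels)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect sibling hashes from leaf to root -- the succinct inclusion proof."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """A light client recomputes the root from the leaf and siblings alone."""
    node = h(leaf)
    for sibling, leaf_is_left in proof:
        node = h(node + sibling) if leaf_is_left else h(sibling + node)
    return node == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)
proof = merkle_proof(txs, 2)
assert verify(b"tx2", proof, root)
```

KZG proofs improve on this by being constant-size regardless of the data, which is why DA layers favor them for commitment schemes.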
Erasure encoding: reduce per-node storage requirements by splitting data up across many nodes while ensuring the original data can be recovered if lost.
This involves decreasing an individual node’s storage requirement by increasing the total size of a piece of data (splitting into blocks & adding additional redundancy/erasure encoded blocks).
Then, distribute the blocks across many nodes. If you need the original data, it should be recoverable by piecing blocks back together from the network—assuming some defined tolerance threshold is held.
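As a toy illustration of the redundancy idea above, here is a minimal Python sketch using a single XOR parity block, which tolerates the loss of any one block. Production DA layers use Reed-Solomon codes, which tolerate far more losses; the function names here are hypothetical.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal blocks plus one XOR parity block (a toy 'k+1' code)."""
    assert len(data) % k == 0
    size = len(data) // k
    blocks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = blocks[0]
    for blk in blocks[1:]:
        parity = xor_bytes(parity, blk)
    return blocks + [parity]

def recover(blocks):
    """Recover the single missing block (marked None) by XOR-ing the survivors."""
    missing = blocks.index(None)
    acc = None
    for i, blk in enumerate(blocks):
        if i != missing:
            acc = blk if acc is None else xor_bytes(acc, blk)
    return acc

blocks = encode(b"abcdefgh", 4)  # 4 data blocks + 1 parity block
blocks[1] = None                 # simulate one node losing its block
assert recover(blocks) == b"cd"
```

The trade-off is visible even in this toy: total stored data grows (5 blocks instead of 4), but each node holds only a fraction, and the original survives a loss.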
Data availability sampling: ensure data availability without requiring nodes to hold the entire dataset; complements erasure encoding to help guarantee data is available.
I.e., randomly sampled pieces of erasure-coded block data to assure the entire block is available in the network for reconstruction—else, slash nodes.
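The sampling guarantee can be quantified: if a fraction f of erasure-coded chunks is withheld, a client taking k independent random samples detects the withholding with probability 1 - (1 - f)^k. A small sketch (the function name is illustrative):

```python
def detection_probability(withheld_fraction: float, samples: int) -> float:
    """Chance that at least one random sample lands on a withheld chunk.

    With 2x erasure coding, an adversary must withhold at least half the
    chunks to prevent reconstruction, so each independent sample detects
    the attack with probability >= 0.5.
    """
    return 1 - (1 - withheld_fraction) ** samples

# A light client taking 30 samples against 50% withholding detects it
# with probability 1 - 2**-30, i.e. all but certainly.
p = detection_probability(0.5, 30)
```

This is why a handful of cheap random samples per light client, aggregated across many clients, gives strong assurance that the full block is recoverable.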
Data availability committee: a trusted set of nodes—or validators in a DA PoS network—that store full copies of data and publish onchain attestations to prove ownership.
The following section outlines common L2s and how they approach DA.
According to Avail, there are a few different approaches that L2s take to DA. Note this is in the sense of block/transaction DA and differs from the “arbitrary” / large DA approach Textile focuses on with Basin and object storage:
Rollups: post proofs (validity or fraud) onchain along with state commitments.
Plasma: all data and computation, except for deposits, withdrawals, and Merkle roots, are kept offchain.
Optimiums: adaptations of Optimistic rollups that also take data availability offchain while using fraud proofs for verification.
I.e., they differ from traditional rollups in that transaction data lives entirely in offchain storage.
E.g., Optimism offers a “plasma mode” where data is uploaded to the DA storage layer via plain HTTP calls.
Validiums: adaptations of ZK rollups that shift data availability offchain while continuing to use validity proofs.
E.g., Starknet posts a STARK validity proof and also sends a state diff, which represents the changes made to the L2 state since the last validity proof was sent.