With the approaching era of modular blockchains, current PoS validators and PoW miners are expected to encounter a variety of new business opportunities.
The potential infrastructure roles to consider are 1) rollup sequencer, 2) rollup prover, 3) shared sequencer, and 4) DA layer validator.
While most of these roles seem approachable for current validators, the prover role might face challenges during the early stages of network bootstrapping.
In January 2009, the advent of the Bitcoin network marked the beginning of blockchain's history. Central to blockchain is its decentralized nature: the network is collectively maintained by a multitude of participants. In the Bitcoin network, Proof of Work (PoW) was introduced to allocate the right to create blocks. Under PoW, a mining node that finds a nonce producing a hash below the target for a specific block earns the right to create that block and receive the reward.
Source: Leo Lu
Mining nodes don't do anything particularly remarkable to find the nonce: they simply and continually feed candidate values through their hardware until they obtain a hash below the target. In the early days of the network, there were fewer miners and the overall hash rate was low, so mining was easily possible with just a CPU. However, as the hash rate of the Bitcoin network gradually increased, the mining process evolved from GPUs and FPGAs to today's ASICs specialized for Bitcoin mining. Currently, major Bitcoin mining companies include Marathon Digital Holdings, Riot Blockchain, Hut 8, and Bitfarms, among others.
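The brute-force search described above can be sketched in a few lines of Python. This is a toy illustration of Bitcoin-style double-SHA-256 mining — the header bytes and difficulty are made up for the example, and real miners run this same loop on ASICs at incomparably higher speed:

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the double SHA-256 of header+nonce
    falls below the target (i.e. starts with `difficulty_bits` zero bits)."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(block_header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# With a tiny illustrative difficulty the search finishes almost instantly;
# Bitcoin's real difficulty makes the same loop astronomically expensive.
nonce = mine(b"example header", difficulty_bits=16)
```

Nothing about the loop itself changes as difficulty rises — only the expected number of attempts, which is why the hardware arms race from CPU to ASIC was purely about raw hash throughput.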
However, as many are aware, Proof of Work requires massive equipment and significant electricity. To mitigate these concerns, networks based on Proof of Stake (PoS) were introduced. These networks allocate block creation rights based on the proportion of tokens staked rather than computational effort. Unlike PoW, PoS doesn't require extensive computational power, meaning participants can join the network with relatively low hardware specifications or even through cloud services. While high-scalability networks like Solana have higher requirements, such as a 12+ core CPU and 256GB RAM, others like Polygon PoS and Aptos need only an 8+ core CPU and 32GB RAM, and for the Ethereum network, just a 2+ core CPU with 16GB RAM is sufficient. Major infrastructure companies participating as validators in PoS networks include Figment, Chorus One, P2P, and DSRV.
The blockchain industry, having experienced an era dominated by PoW and PoS infrastructure, is now shifting towards the age of modular blockchains. Traditional blockchains operated as monolithic systems, where a single network performed all functions, including:
Execution - Processing transactions to update the network state.
Sequencing - Aggregating user transactions and determining their order.
Settlement - Confirming the states of rollup networks through fraud proofs and validity proofs.
Data Availability - Storing transaction data of rollup networks, ensuring data availability.
Consensus - Establishing the order of transactions, bundles, and rollup blocks.
However, monolithic blockchains demand that every node fulfill all of these functions, which limits improvements in scalability and decentralization. In modular blockchains, these responsibilities are distributed across different networks: execution is handled by rollups, sequencing by shared sequencing networks, settlement by settlement layers, data availability by DA layers, and consensus by consensus layers. This division shows the potential to address the blockchain trilemma more efficiently.
As we look forward to the debut of multiple modular blockchain projects soon, the roles assumed by infrastructure players may not deviate significantly from those in PoW or PoS systems. Yet, with networks specializing in specific functions, new roles, such as the Prover, will come into play. Therefore, infrastructure companies ought to remain vigilant about emerging opportunities.
First, let's delve into potential business opportunities for infrastructure players within rollups and sequencing networks. In simple terms, a rollup is a blockchain that stores its transaction data on another blockchain. If malicious activities occur on the rollup network, the state can be restored using the stored transaction data on another blockchain, leveraging its security.
Similar to how PoW blockchains utilize miners and PoS blockchains rely on validators, rollup networks employ Sequencers, with zk-rollups uniquely introducing the role of the Prover.
A Sequencer's role closely resembles that of validators in traditional PoS networks. They gather L2 user transactions, determine their order, and submit them in batches to L1. There are primarily three methods by which rollup networks handle sequencing.
2.1.1 Centralized Sequencer
Firstly, some rollup networks have a single sequencer. Notable rollup networks like Optimism, Arbitrum, zkSync Era, Starknet, Base, and Polygon zkEVM all employ this approach. Unlike PoS networks, rollup networks inherit the security of the base layer, so centralizing the sequencer doesn't compromise the network's safety. In fact, having a centralized sequencer eliminates the need for an additional consensus algorithm, offering enhanced scalability.
For infrastructure companies, participating as a block producer in rollup networks using a centralized sequencer is currently impossible. However, since most of these rollup networks plan to decentralize their sequencers in the future, I've compiled the specifications of a Full node in a table. We can observe that the minimum specifications for most rollup sequencers are not significantly different from the specifications of Ethereum's Geth client. Therefore, if most rollups decentralize their sequencers in the future, existing PoS validator companies can easily participate and are expected to receive rewards.
2.1.2 Decentralized Sequencer
The second case involves operating a decentralized sequencer in rollup networks. Even with a single sequencer, rollup networks can sufficiently rely on the security of the base layer. However, decentralizing sequencers offers various advantages.
Firstly, there's an improvement in liveness. In a rollup network with only a single sequencer, if the sequencer goes offline, the service of the rollup network may be halted. But in a decentralized sequencer system, if one sequencer goes offline, others remain operational, preventing any disruptions. Secondly, it enhances censorship resistance. If a single sequencer acts maliciously, it could reject certain transactions or maliciously extract MEV. Although some rollups have systems in place that allow transactions to be forcibly sent from L1 to L2, mitigating some of these disadvantages, they don't entirely solve the issue.
Therefore, while most rollups initially use a centralized sequencer, they plan to decentralize in the future. One project, however, plans to use a decentralized sequencer from the outset of its mainnet launch: Taiko.
Source: Taiko
Taiko is a zk-rollup targeting a type-1 zkEVM. It employs an intriguing sequencing mechanism known as Based Rollup (for more details, see “Based rollup: Sequenced by Ethereum”). In a Based Rollup, the L1 network handles sequencing for the rollup. This means that, for Taiko, the entities creating transaction bundles in the Ethereum network can also act as block builders for Taiko L2.
The process of block creation in Taiko L2 is an extended version of the block building pipeline existing in the Ethereum network. An L2 searcher gathers transactions from the L2 mempool into a bundle and hands it over to the L2 block builder. This L2 block builder then, through a bidding process, transfers it to the L1 block builder to get the L2 block added to the Ethereum network. In Taiko, anyone can participate as a sequencer and create blocks. To receive a reward, the created block must be included in the Ethereum network. Of course, staking of the native token is required to prevent malicious activities.
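The pipeline above can be sketched as a toy simulation of the bidding step. All names, bundles, and bid amounts below are illustrative, not Taiko's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class Bundle:
    txs: list[str]
    bid: float  # payment offered to the L1 builder for inclusion

def l2_build(mempool: list[str], bid: float) -> Bundle:
    """L2 searcher/builder step: gather pending L2 transactions into an
    ordered bundle and attach a bid for L1 inclusion."""
    return Bundle(txs=list(mempool), bid=bid)

def l1_select(bundles: list[Bundle]) -> Bundle:
    """L1 block builder step: take the highest-bidding L2 bundle — only once
    it lands in an Ethereum block does the L2 sequencer earn its reward."""
    return max(bundles, key=lambda b: b.bid)

# Two competing L2 builders bid for inclusion; the higher bid wins.
competing = [l2_build(["tx1", "tx2"], bid=0.03), l2_build(["tx3"], bid=0.05)]
winner = l1_select(competing)
assert winner.txs == ["tx3"]
```

The point of the sketch is that L2 block production becomes an extension of Ethereum's existing builder auction rather than a separate consensus process.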
Despite Taiko being in its testnet stages, it presents an enticing chance for direct involvement in its decentralized sequencer system. Those already functioning as searchers or proposers within the Ethereum landscape might find it beneficial to engage.
2.1.3 Outsourcing to Shared Sequencing Layer
Instead of handling sequencing internally in rollup networks, outsourcing to a specialized shared sequencing layer is also an effective approach. Since the shared sequencing layer offers sequencing services to rollup networks, it not only addresses single points of failure and malicious MEV issues but also provides atomic interoperability between rollup networks. In the traditional system, executing atomic transactions across multiple rollups was impossible. However, the shared sequencing layer, which creates blocks for multiple rollup networks, makes this feasible. This also offers a solution to the liquidity fragmentation problem across multiple rollup networks.
Currently, there are various shared sequencing layer projects such as Astria, Espresso Sequencer, Fairblock, Nodekit, and Radius. Further details can be found in Four Pillars’ Modular Odyssey. This article will only examine a few examples.
Astria is a shared sequencing layer based on the Celestia ecosystem. Astria Shared Sequencer collects user transactions to form sequenced blocks (ordered blocks) and submits them to the DA layer. Typically, rollup networks fetch blocks from the DA layer and execute transactions (hard commitment). However, if one wishes to reduce latency further, transactions can be directly fetched and executed from the Astria Shared Sequencer before their submission to the DA layer (soft commitment).
Espresso Sequencer is a shared sequencing layer project within the Ethereum ecosystem, focusing on the consensus algorithm between decentralized sequencers. It employs a customized version of the HotStuff BFT, dubbed the HotShot consensus algorithm. To become a sequencer, one must re-stake ETH in the EigenLayer. Sequencers collect transactions from rollups, determine their sequence, and submit the corresponding commitment to L1.
Radius introduces a cryptographic scheme named Practical Verifiable Delay Encryption (PVDE) to uniquely solve the malicious MEV problem even with a single sequencer. Here, the transaction order is exclusively determined by an auction, and transaction details are only disclosed after this sequencing. This ensures that even a single sequencer cannot exploit malicious MEV. The Radius sequencer receives encrypted transactions, validates their legitimacy through proofs, determines their sequence to form blocks, and then decrypts and forwards them to the rollup network.
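The core idea — the sequencer fixes an order over ciphertexts it cannot yet read — can be illustrated with a toy stand-in for delay encryption. The sequential hash chain below merely mimics the "work must be done before decryption" property; it is not Radius's actual PVDE scheme, and the seed stands in for the time-lock puzzle a user would publish:

```python
import hashlib

def delay_key(seed: bytes, iterations: int) -> bytes:
    """Derive a key by sequential hashing — a toy stand-in for a delay
    function: the chain cannot be parallelized, so the key (and thus the
    plaintext) is unavailable until the sequential work is done."""
    key = seed
    for _ in range(iterations):
        key = hashlib.sha256(key).digest()
    return key

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR against a hash-derived keystream.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

# A user encrypts a transaction; the sequencer commits to an order over
# the ciphertext first, and can only read contents after grinding the key.
tx = b"swap 10 ETH for USDC"
ct = xor_cipher(tx, delay_key(b"user-seed", 100_000))

sequenced_block = [ct]                   # order is fixed before decryption
key = delay_key(b"user-seed", 100_000)   # the sequential work happens here
assert xor_cipher(sequenced_block[0], key) == tx
```

Because the order is committed before the contents are visible, even a single sequencer gains no information it could use to front-run or reorder for MEV.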
Since only a few shared sequencing layers have launched testnets, not many have disclosed the hardware requirements to become a sequencer. However, it's expected to be relatively modest, as their functions are not significantly different from traditional rollup sequencers. For instance, the Espresso Sequencer has shared benchmark results where the employed hardware specification was 2vCPUs, 16GB RAM. With shared sequencing layers garnering immense attention lately and numerous projects gearing up for launch, existing validator enterprises should prepare to participate as sequencers.
2.2.1 Client-Side vs. Server-Side Proving
Source: Figment Capital
A prover is responsible for generating Zero-Knowledge Proofs (ZKP). By creating a ZKP, the prover can validate the correctness of specific executions or states without revealing any information about them. Thanks to this feature, ZKPs are not only utilized in zk rollups but also in various areas such as privacy protocols, cross-chain bridges, storage services, data compression, and identity protocols.
The process of creating a ZKP, known as proving, can be categorized into client-side proving and server-side proving. In client-side proving, the user who submits a transaction also creates and sends the ZKP alongside it. Generating ZKPs for relatively simple operations is not difficult, making client-side proving a suitable method; examples include ZCash and Tornado Cash. However, for applications like zk-rollups, where creating a ZKP requires substantial computational power, it's impractical for the user submitting the transaction to also handle the proving. Hence, specialized provers within the protocol generate the ZKP on behalf of the users, which is known as server-side proving.
When classifying server-side proving, it can be divided into a centralized method, where only a single prover exists, and a decentralized method with multiple provers participating. Similarly to sequencers, even if the Prover operates in a centralized manner, it doesn't pose significant security concerns for the protocol. This is because ZKPs certify the validity of executions. However, for ensuring liveness and censorship resistance, transitioning to a decentralized method is preferable.
2.2.2 Proof Market vs. Prover Network
Source: =nil; Foundation
There are two methods to decentralize the prover: 1) Proof Market and 2) Prover Network. The proof market is a marketplace that matches services requiring ZKPs with provers. A typical example of this is the =nil; Foundation's proof market, which employs an order book method, as can be seen in the above picture. Using the proof market for zk rollups is not ideal because zk rollups require consistent and continuous ZKP generation. The proof market is more suitable for protocols where ZKP creation is sporadic.
The prover network involves the protocol establishing its own decentralized network of provers. A single prover network operates exclusively for one service. So, when multiple provers exist, how can ZKP creation rights be granted? There are three main methods:
Stake-based - Similar to a PoS network. To become a prover, tokens must be staked, and the right to create ZKPs is given based on the proportion of staked tokens. Given the probabilistic nature of granting rights, certain provers may struggle to produce ZKPs promptly if significant computational resources are needed, potentially causing liveness problems. While this method is highly decentralized, its performance can be lacking. Taiko's Alpha-4 testnet used this approach.
Proof mining - Similar to a PoW network. Provers continue to generate ZKPs until a hash value meeting certain conditions is produced. This method is also highly decentralized but can waste computational power. Aleo uses this approach.
Proof racing - Rewards are given to the prover who generates the ZKP first. Since the ZKP creation time is directly tied to hardware specifications, the same prover might monopolize ZKP creation rights by consistently winning the race, making this the most centralized method. However, since ZKP creation is left to free-market competition, it promises the highest performance. Taiko's Alpha-3 testnet employed this method.
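The stake-based option above can be sketched as follows. The prover names, stake amounts, and use of Python's PRNG are all illustrative — a real network like Taiko's Alpha-4 testnet would derive its randomness verifiably on-chain:

```python
import random

def pick_prover(stakes: dict[str, int], seed: int) -> str:
    """Stake-based selection: each prover's chance of winning the next
    ZKP slot is proportional to its staked tokens."""
    rng = random.Random(seed)   # stand-in for verifiable on-chain randomness
    provers = sorted(stakes)    # deterministic ordering of candidates
    weights = [stakes[p] for p in provers]
    return rng.choices(provers, weights=weights, k=1)[0]

# prover_a holds 70% of the stake, so over many slots it should win
# roughly 70% of the ZKP creation rights.
stakes = {"prover_a": 700, "prover_b": 200, "prover_c": 100}
wins = sum(pick_prover(stakes, slot) == "prover_a" for slot in range(1000))
```

The sketch also makes the liveness caveat concrete: a small prover can win a slot regardless of its hardware, and if it cannot generate the proof in time, that slot stalls.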
2.2.3 Examples of Prover Network in Rollup
While almost all rollup networks operate with a centralized prover, Taiko has been operating a decentralized prover from its testnet stage. The minimum specifications to become a prover in Taiko are an 8 core CPU and 32GB RAM. These specs are currently on the lower side since all features haven't been fully introduced yet. Also, these are just minimum requirements and might not be sufficient in reality.
Taiko's Alpha-3 testnet used the proof racing method. Here, there's a target window that serves as an appropriate interval for submitting ZKPs. If ZKPs are submitted too frequently, users would have to pay high network fees. Conversely, if submitted too slowly, withdrawal latency would increase. Hence, it's essential that ZKPs are submitted within each target window.
Source: Taiko
In the Taiko Alpha-3 testnet, an EIP-1559-style incentive scheme was introduced to encourage provers to submit ZKPs close to the target window. If a ZKP is submitted earlier than the target window, the reward is reduced, and the base reward for the next window is slightly decreased. This nudges provers to submit ZKPs later in the subsequent window. Conversely, if a ZKP is submitted after the target window, the reward is increased, and the base reward for the following window is also marginally raised. This provides an incentive for provers to submit ZKPs earlier in the next window than they did in the previous one. Meanwhile, the Taiko Alpha-4 testnet employs a stake-based method. The top 32 provers who have staked the most tokens can participate, and the right to produce ZKPs is probabilistically granted based on the staking ratio.
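A minimal sketch of such a feedback rule, assuming a simple linear adjustment — the actual Taiko formula and parameters may differ, and the `adjustment` factor here is purely illustrative:

```python
def next_reward(base_reward: float, target_window: float,
                actual_delay: float, adjustment: float = 0.125) -> tuple[float, float]:
    """EIP-1559-style nudge: proofs landing before the target window earn
    less and push the next base reward down; late proofs earn more and
    push it up, steering submissions toward the target interval."""
    ratio = actual_delay / target_window
    paid = base_reward * ratio                       # early => ratio < 1 => smaller payout
    new_base = base_reward * (1 + adjustment * (ratio - 1))
    return paid, new_base

# An early proof (45s against a 60s target) is paid 75.0,
# and the next window's base reward drops to 96.875.
paid, base = next_reward(base_reward=100.0, target_window=60.0, actual_delay=45.0)
```

A late proof has the mirror effect — with a 75-second delay the payout rises to 125.0 and the next base to 103.125 — so the system self-corrects toward the target window, just as EIP-1559 steers blocks toward the gas target.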
Source: Opside
Another example of a decentralized prover network is Opside, a zk-RaaS project that assists in easily creating zk-rollups. Rollup networks developed with Opside can utilize Opside ZK-PoW Cloud, a prover network spread across multiple networks. To become a prover, ten machines are required per cluster, with each machine necessitating a 48-core CPU and 1TB RAM. While Opside ZK-PoW primarily uses the stake-based approach, unlike traditional methods, every prover that submits a valid ZKP within a window receives a reward.
2.2.4 ZK Hardware
Unlike sequencers discussed earlier, provers in rollup networks need to produce ZKPs for computations, requiring highly advanced hardware. Looking at examples of centralized prover-operated zk-rollup networks, Polygon zkEVM demands a 96-core CPU with 768GB RAM, while Linea requires a 96-core CPU with 384GB RAM. Therefore, in the short term, there will likely be few infrastructure players capable of participating as provers. However, as hardware advances, it is anticipated that the barriers to entry will gradually lower.
Source: Amber
After the CPU, the hardware that can be used for ZKP generation includes 1) GPUs, 2) FPGAs, and 3) ASICs. CPUs and GPUs are expected to dominate in the short term, FPGAs in the medium term, and both FPGAs and ASICs in the long term. These three types of hardware have distinct trade-offs.
Firstly, GPUs are easily available in the market and are flexible in utilizing various algorithms. Moreover, their development environment is well-established, making them developer-friendly. However, compared to FPGA and ASIC, they have a lower power efficiency and limited peak performance.
Secondly, FPGAs, being programmable hardware, can use various ZK algorithms like GPUs. They also have a slightly better performance and power efficiency than GPUs. However, they are harder to procure in the market and require more specialized personnel to operate.
Lastly, ASICs have the highest power efficiency and performance. But, they require significant time and funds to manufacture. Once they are produced, they are limited to a specific algorithm. Given the variety of existing ZK algorithms, they are currently an unsuitable solution. Nevertheless, if the number of ZK algorithms in use gets limited in the future, ASICs might emerge as the most efficient solution.
The ZK hardware sector is still in its infancy, and many ZK hardware startups are actively researching and developing. Notable ZK hardware companies include Aligned, Ingonyama, Cysic, and Ulvetanna, all of which are focused on FPGA solutions. It remains to be seen who will dominate the future prover market—these new players, existing PoS validators, or PoW miners. Opinions vary, and each party has distinct infrastructure advantages.
The Data Availability (DA) Layer is a specialized blockchain network for storing transaction data of rollup networks. Unlike other blockchains, it doesn't perform transaction computations. Instead, it ensures the security of the rollup network by safely storing transaction data whenever a block is created. This means that even if malicious activity occurs on the rollup network, the state can be restored using the transaction data stored on the DA Layer.
To store transaction data, the DA layer must have very large block sizes, which could lead to network centralization. To address this, the DA layer uses erasure coding and DA sampling. Erasure coding adds redundancy to transaction data, ensuring that even with partial data, the original can be restored. This means that if various light nodes of the DA layer sample only parts of the block data (DA sampling), they can easily recover the original transaction data and verify the block safely. Prominent examples of the DA layer include Celestia and Avail.
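Erasure coding can be illustrated with the simplest possible code — a single XOR parity chunk that lets any one missing chunk be rebuilt. This is only a toy to convey the principle; DA layers like Celestia actually use Reed-Solomon coding over a 2D arrangement, which tolerates far more loss:

```python
from typing import Optional

def encode(chunks: list[bytes]) -> list[bytes]:
    """Toy erasure code: append one XOR parity chunk so that any single
    missing chunk of the extended data can be rebuilt."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return chunks + [parity]

def recover(extended: list[Optional[bytes]]) -> list[bytes]:
    """Rebuild the single missing chunk by XOR-ing all the others,
    then return the original data chunks (parity dropped)."""
    missing = extended.index(None)
    rebuilt = bytes(len(next(c for c in extended if c is not None)))
    for c in extended:
        if c is not None:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, c))
    extended[missing] = rebuilt
    return extended[:-1]

# A sampling node failed to fetch chunk 1, yet the block survives.
data = [b"tx-batch", b"roll-up!", b"block-42"]
extended = encode(data)
extended[1] = None
assert recover(extended) == data
```

The redundancy is what makes sampling safe: light nodes each check random chunks, and as long as enough chunks are held somewhere in the network, the full block remains reconstructible.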
So, how different is the role of a validator on the DA layer compared to traditional PoS network validators? Fundamentally, both networks require token staking and probabilistically grant block creation rights based on the staking weight. Also, both receive transactions, determine their order, and create blocks. However, due to the unique nature of the DA layer, there are some differences.
Validators on the DA layer essentially don't execute transaction operations. Instead, they perform erasure coding, expanding transaction data, which means they handle much larger block sizes than traditional PoS networks. Because of this, DA layer validators require a significantly higher bandwidth.
Source: Celestia
The hardware requirements for participating as a node in the Celestia network, a leading DA layer project, are as shown in the above picture. Light nodes, which only participate in DA sampling, can join with relatively low specs, but they are not eligible for incentives. Validators, who receive block rewards, need a 6-core CPU and 8GB RAM, and the bandwidth requirement is relatively high at 1 Gbps. In the case of Avail, it requires validators to have at least a 2-core CPU, 4GB RAM, and a 20GB SSD. In reality, this level of hardware specification is similar to existing PoS networks, so existing validators can easily participate.
While existing PoS validators can easily participate as sequencers in the shared sequencing layer or as validators in the DA layer, the prover market is expected to open a third new market beyond PoW and PoS. What does the future hold for the prover market? Paradigm predicts that ZKP will serve as a de facto medium to prove computational integrity on the web. They forecast the size of the ZKP market to be on par with the PoW mining market. Aligned has a lower estimate than Paradigm, predicting the ZKP market size to reach $10B by 2030. I also agree with the bright future of the ZKP market, but I am concerned from the perspective of initial market bootstrapping.
PoW networks like Bitcoin were able to grow even when the token price was low and incentives were weak, because the overall hash rate was low and the barrier to entry for miners was therefore low. In the case of ZKP, however, even when the network is in its infancy, the computational power required to generate proofs is not insignificant. This means that even if the token price is low initially, the hardware required to become a prover might be expensive, posing a potential challenge to initial network bootstrapping.
The concept of modular blockchain became popular in 2021, but in reality, there aren't many projects that have launched mainnets other than computation layers like rollups. However, currently, many shared sequencing layers and DA layers are in the testnet phase, and there are gradual movements to decentralize sequencers and provers in computation layers. In the near future, there will likely be many new business opportunities for blockchain infrastructure players such as miners and validators.
Thanks to Kate for designing the graphics for this article.