
The Ethereum Foundation's 13th AMA key points: native Rollups, the blob fee model, DA value capture, and more


Reprinted from panewslab

03/06/2025

Compiled by GaryMa, Wu Shuo Blockchain

The Ethereum Foundation research team held its 13th AMA on the Reddit forum on February 25, 2025. Community members posted questions in the thread and research team members answered, covering topics such as the EXECUTE precompile, native Rollups, the blob fee model, DA value capture, endgame block construction, reflections on the L2 strategy, the Verge, VDFs, encrypted mempools, and academic funding. Wu Shuo summarizes the questions and technical points from the AMA below:

Question 1: Native Rollups and the EXECUTE precompile

Asked:

You may have seen Martin Köppelmann's talk proposing the concept of "native Rollups", similar to the "execution shards" we envisioned earlier.

In addition, Justin Drake has proposed a "native Rollup" design in which some L2 functions are folded into the consensus layer.

This matters to me because today's L2s don't provide what I expect from Ethereum; for example, they have administrator backdoors. I also don't see them solving these problems in the future, because if they cannot upgrade they will sooner or later become outdated. How are these proposals progressing? Has the community reached consensus on these ideas, or is the prevailing view that Rollups should remain organizationally independent from Ethereum? Are there other related proposals?

Answer (Justin Drake — Ethereum Foundation):

To avoid confusion, I suggest calling Martin's proposal simply "execution sharding", a concept that dates back nearly a decade. The main difference between execution shards and native Rollups is flexibility. Execution shards are preset-template chains, e.g. exact replicas of the L1 EVM, typically instantiated top-down in a fixed number via hard fork. Native Rollups are customizable chains with flexible sequencing, data availability, governance, bridging, and fee settings, created bottom-up and permissionlessly through a programmable precompile. I think native Rollups better fit Ethereum's programmable spirit.

EXECUTE gives EVM-equivalent L2s a path to drop their security councils while retaining full L1 security and staying EVM-equivalent across L1 hard forks. Execution sharding, lacking flexibility, cannot meet the needs of existing L2s. Native Rollups may open new design space by introducing an EXECUTE precompile (and possibly an auxiliary DERIVE precompile to support derivation functions).

About "Community Consensus":

Discussion of native Rollups is still early. But I have found it is not hard to pitch the concept to developers of EVM-equivalent Rollups: if a Rollup can opt in to being "native", that is almost a free upgrade from L1, so why not take it? Notably, founders of top Rollups including Arbitrum, Base, Namechain, Optimism, Scroll and Unichain expressed interest at Sequencing Call #17 and elsewhere.

By comparison, I think native Rollups are at least 10x easier to pitch than based Rollups. A based Rollup does not look like a free upgrade at first glance: it gives up MEV revenue, and 12-second block times can hurt user experience. In reality, with incentive-compatible sequencing and preconfirmation mechanisms it can deliver a better experience; it just takes more time to explain and digest.

Technically, the EXECUTE precompile sets a gas limit and adopts an EIP-1559-style dynamic fee mechanism to prevent DoS attacks. For optimistic L2s this is not a problem, since EXECUTE is only invoked when a fraud proof is being adjudicated. For pessimistic Rollups, data availability (DA) is likely a bigger bottleneck than execution, since validators can cheaply verify SNARKs while home network bandwidth is a fundamental limit.
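To make the metering point concrete, here is a minimal Python sketch of EIP-1559-style fee adjustment applied to a hypothetical per-block EXECUTE gas dimension. All names and parameter values are illustrative assumptions, not part of any spec or proposal:

```python
# Hypothetical EIP-1559-style metering for an EXECUTE gas dimension.
# Names and parameters are illustrative assumptions, not from any spec.

EXECUTE_GAS_TARGET = 1_000_000       # assumed per-block target for EXECUTE gas
BASEFEE_CHANGE_DENOMINATOR = 8       # damping factor, same value EIP-1559 uses

def update_execute_basefee(parent_basefee: int, parent_gas_used: int) -> int:
    """Fee rises when EXECUTE usage was above target, falls when below."""
    delta = parent_gas_used - EXECUTE_GAS_TARGET
    step = parent_basefee * abs(delta) // EXECUTE_GAS_TARGET // BASEFEE_CHANGE_DENOMINATOR
    if delta > 0:
        return parent_basefee + max(step, 1)   # always move up on over-target blocks
    return max(parent_basefee - step, 1)       # floor at 1 wei

# Example: a block that used 1.5x the target raises the fee by ~6.25%.
print(update_execute_basefee(1_000_000_000, 1_500_000))
```

The point of the dynamic fee is that sustained heavy EXECUTE usage prices itself out, which is what bounds the DoS surface.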

On the current status:

Looking back: Vitalik proposed an EXECTX precompile in 2017, before the terms "native" or "Rollup" existed. It was too early then, but in 2025 the idea of letting the EVM reflect on itself (verifying EVM execution inside the EVM) is attracting attention again.

Regarding "whether Rollups should remain organizationally independent from Ethereum":

An ideal endgame model treats native Rollups and based Rollups as L1 smart contracts with lower fees: they enjoy L1's network effects and security while remaining scalable.

For example, ENS today is an L1 smart contract. In the future I expect Namechain to become a native and based appchain, essentially a scalable L1 smart contract: it keeps organizational independence (token economics, governance) while integrating deeply into the Ethereum ecosystem.

Follow-up questions:

Q: In many people's eyes execution shards would be an advantage; without built-in execution shards as an option, native L2s look like a suboptimal choice that happens to be the only one.

Answer (Justin Drake):

The EXECUTE precompile is more flexible and powerful than execution shards. In fact it can emulate execution shards, but not the other way around. If anyone wants an exact replica of the L1 EVM, native Rollups can offer that option too.

Q: The problem I want solved requires a neutral, credible, Ethereum-branded Rollup; outsourcing that responsibility to company-operated Rollups does not seem to meet the need.

Answer (Justin Drake):

That can be achieved with the EXECUTE precompile. As a rough idea, the Ethereum Foundation could use it to deploy 128 "shards".

Q: You said native L2s are customizable chains generated bottom-up through precompiles, which better fits Ethereum's programmable spirit; you also said EVM-equivalent L2s need a path off their security councils. But if the base layer does not implement sequencing, bridging, and some governance mechanism, can we really remove the security councils? Failing to keep up with EVM changes is only one way of becoming outdated. With execution shards we solve these problems through hard-fork upgrades, benefiting from subjective governance. But if something is built on the upper layer and the base layer does not interfere with upper-layer programs, then when there is a bug we will not risk a fork to rescue the application layer. Have the teams you are in contact with made it clear that, if Ethereum ships EXECUTE, they will completely remove their security councils and achieve full trustlessness?

Answer (Max Gillett):

The main reason security councils exist is that fraud-proof and validity-proof systems are very complex, and a single implementation bug in a verifier can be catastrophic. If this complex logic (at least for fraud proofs) moves into L1 consensus, client diversity reduces the risk, which is an important step toward removing security councils. I think that with a well-designed EXECUTE precompile, most of the "Rollup application logic" (bridging, messaging, and so on) becomes easy to audit, meeting the bar of DeFi smart contracts, which usually operate without a security council.

Subjective governance is indeed a simple upgrade path, but it is only practical while there is little competition among shards. Part of the point of programmable native Rollups is to let existing L2s keep experimenting with sequencing, governance and other dimensions, with the market ultimately deciding. I expect a spectrum of native Rollups, from zero-governance community-deployed versions (tracking the L1 EVM as closely as possible) to versions with token governance and experimental precompiles.

Answer (Justin Drake):

Regarding "whether the teams have committed to full trustlessness":

What I can be sure of is:

1. Many L2 teams want to achieve complete trustlessness.

2. The EXECUTE mechanism is a necessary condition for getting there.

3. For some applications (such as the minimal execution shard Martin wants), EXECUTE alone is sufficient for complete trustlessness.

These three points are enough to set us on the EXECUTE path. Of course, EXECUTE may not suffice for some specific L2s, which is why a DERIVE precompile came up in earlier discussions.

Question 2: Blob fee model optimization

Asked:

The blob fee model seems incomplete and overly simple: the minimum fee is just 1 Wei (the smallest unit of ETH). Combined with the EIP-1559 pricing mechanism, if blob capacity expands substantially we may not see blob fees rise for a long time. That is not ideal: we want to encourage blob usage, but we don't want the network hosting this data for free. Are there plans to adjust the blob fee model? If so, how? What alternatives or adjustments are under consideration?

Answer (Vitalik Buterin):

I think the protocol should stay simple, avoid over-optimizing for short-term conditions, and unify the market logic for gas and blob gas. EIP-7706 (which adds an independent gas dimension for calldata) is one natural direction.

I support introducing super-exponential basefee adjustment, which has been proposed repeatedly in different contexts: if over-capacity blocks appear consecutively, fees rise super-exponentially and quickly find a new equilibrium. With sensible parameters, almost any gas-price spike can settle back down within minutes.
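As a rough illustration of the idea (a toy model, not Vitalik's actual proposal; all parameters are invented), the sketch below compounds the fee-adjustment rate while blocks stay over target, so a sustained spike is repriced within a few blocks:

```python
# Toy model of super-exponential basefee adjustment (invented parameters,
# for illustration only). In vanilla EIP-1559 each consecutive full block
# raises the fee by a fixed ~12.5%; here the step itself grows while
# blocks stay over target, so sustained overload is priced out quickly.

TARGET = 15_000_000        # assumed gas target
BASE_RATE = 0.125          # ordinary per-block step at a completely full block
ESCALATION = 1.5           # how fast the step compounds during a streak

def update_basefee(basefee: float, gas_used: int, streak: int):
    """Returns (new_basefee, new streak of consecutive over-target blocks)."""
    fullness = (gas_used - TARGET) / TARGET      # -1.0 .. +1.0 for a 2x-target limit
    if gas_used > TARGET:
        streak += 1
        rate = BASE_RATE * ESCALATION ** streak  # step grows with the streak
    else:
        streak = 0
        rate = BASE_RATE
    return basefee * (1 + rate * fullness), streak

# Five consecutive completely full blocks:
fee, streak = 1.0, 0
for _ in range(5):
    fee, streak = update_basefee(fee, 30_000_000, streak)
print(fee)   # ~6.9x, versus the ~1.8x that fixed 12.5% steps would give
```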

A separate idea is simply raising the minimum blob fee. That would shorten usage spikes (good for network stability) and add a more consistent fee burn.

Answer (Ansgar Dietrichs — Ethereum Foundation):

Your concerns about the blob fee model are reasonable, especially while efficiency is still improving. This really is a big part of the "L1 value accrual" question, but I want to focus on efficiency first.

We discussed this during EIP-4844's development and settled on a 1 Wei minimum as a "neutral" initial value. Later observation showed this indeed creates problems for L2s during the transition from no congestion to congestion. Max Resnick proposed a fix in EIP-7762, raising the minimum fee so that prices respond faster as demand picks up.

The proposal arrived late in Pectra fork development, and implementing it could have delayed the fork. We discussed on RollCall #9 (an L2 feedback forum) whether a delay was warranted; L2 feedback indicated the issue was no longer urgent, so we kept the status quo for Pectra. If ecosystem demand is strong, a future fork may adjust it.

Answer (Barnabé Monnot — Ethereum Foundation):

Thanks for the question. Indeed, an earlier study of EIP-4844 (by u/dcrapis) showed that the transition from 1 Wei to a reasonable market price can be disruptive under congestion, and we see this every time blobs get congested; hence EIP-7762's proposal to raise the minimum blob base fee.

That said, even at a base fee of 1 Wei, blobs are not "free-riding" on the network. First, blobs usually pay a priority fee to compensate block proposers. Second, to call them free we would have to show that blobs consume unreasonably priced resources. Some argue that blobs' added reorg risk (affecting liveness) goes uncompensated; I responded to that view on X.

I think the discussion should focus on compensating liveness risk. Some tie the blob base fee to value accrual because base fees are burned (EIP-1559): if the base fee is low and little value accrues to the network, shouldn't we raise it and tax L2s more? I find this short-sighted. First, the network would have to define a "reasonable tax rate" (like fiscal policy); second, I believe growth of the Ethereum economy will bring more value, and raising the price of blobs (the raw material of that growth) without justification will backfire.

Answer (Dankrad Feist — Ethereum Foundation):

To be clear, I think concerns about cheap blobs are overstated and a little short-sighted. The crypto space may grow substantially over the next 2-3 years; right now we should extract as little as possible and prioritize long-term growth.

That said, I don't think Ethereum's current pure congestion-pricing resource model is ideal, either for price stability or for ETH's long-term value accrual. Once Rollup usage stabilizes, a minimum-price model that occasionally degrades to congestion pricing would be better. In the short term, I also support a higher minimum blob price as the better choice.

Answer (Justin Drake — Ethereum Foundation):

On "whether a redesign is planned":

Yes. EIP-7762 proposes raising the minimum base fee from 1 Wei to a higher value such as 2²⁵ Wei (about 0.034 Gwei).

Answer (Davide Crapis — Ethereum Foundation):

I support raising the minimum base fee, which I mentioned in my initial 4844 analysis; core developers were somewhat opposed at the time, but consensus now leans toward it being useful. I think a minimum base fee (even a slightly lower one) is meaningful rather than short-sighted: demand will grow, but so will supply, and we may again see blob fees pinned at the minimum for long stretches, as we have over the past year.

More broadly, blobs also consume network bandwidth and mempool resources, which are currently unpriced. We are looking at upgrades that could price blobs along these dimensions.

Follow-up questions:

Q: I want to emphasize that this is not an attempt to extract maximum value from L2; questions about blob pricing tend to get dismissed on that assumption.

Answer:

Thanks for the clarification, and quite right. The point is not to maximize extraction, but to design a fee mechanism that encourages adoption while pricing resources fairly, so a healthy fee market can develop.

Question 3: DA vs. L1/L2 value capture

Asked:

L2 scaling sharply reduces value accrual to L1 (the Ethereum mainnet), affecting ETH's value. Beyond the line that "L2 will burn more ETH as it handles more transactions", what concrete plans do you have to address this?

Answer (Justin Drake — Ethereum Foundation):

Blockchain revenue (whether L1 or L2) comes mainly from two sources: congestion fees (the "base fee") and contention fees (MEV, maximal extractable value).

First, contention fees. As application and wallet design advances, I think MEV will increasingly be captured upstream (by apps, wallets, or users), eventually almost entirely taken by entities close to the source of order flow, leaving downstream infrastructure (L1s and L2s) only scraps. In the long run, chasing MEV may be futile for L1s and L2s.

Now, congestion fees. Historically L1's bottleneck was EVM execution: consensus participants' hardware requirements (disk I/O, state growth) limited execution gas. But in modern designs that scale execution with SNARKs or fraud proofs, execution enters a "post-scarcity era" and the bottleneck shifts to data availability (DA). DA is fundamentally scarce because validators rely on limited home network bandwidth; data availability sampling (DAS) only buys roughly 100x of linear scaling, unlike SNARKs or fraud proofs, which are nearly unbounded.

So we focus on DA economics, which I think is the only sustainable source of L1 income. EIP-4844 (which introduced blob DA supply) has been live for less than a year. Blob demand has grown steadily, from an average of 1 blob per block to 2 and then 3. Now that supply is saturated, price discovery is just beginning: low-value spam transactions are being squeezed out by transactions with higher economic density.

If DA supply stays flat for several months, I expect hundreds of ETH per day to be burned through DA. But L1 is currently in "growth mode": the upcoming Pectra hard fork (expected within months) will raise the blob target from 3 to 6. That will reset the blob fee market, and demand will take months to catch up. Over the next few years, as Danksharding rolls out fully, DA supply and demand will play cat and mouse.

In the long run, I think DA demand will outstrip supply. Supply is limited by home network bandwidth, and even roughly 100x of home-network throughput may not meet global demand, especially since humans always find new ways to consume bandwidth. I expect Ethereum to stabilize around 10 million TPS within the next 10 years (about 100 transactions per person per day); even at only US$0.001 per transaction, that is roughly US$1 billion of revenue per day.

Of course, DA income is only one part of ETH value accrual. Issuance and monetary premium are also crucial, so I suggest checking out my 2022 Devcon talk.

Follow-up questions:

Q: You said that if DA supply stays unchanged for several months, hundreds of ETH will be burned through DA every day. Why this prediction? Data from the past 4 months of blob-target saturation does not seem to support that kind of growth in paid demand. How do you infer from this data that "high-paying demand" will rise significantly within a few months?

Answer (Justin Drake):

My rough model is that "real" economic transactions (such as users trading tokens) can afford small fees, say $0.01 per transaction. My guess is that many spam transactions (bot-generated) are now being displaced by real ones. Once real transaction demand exceeds DA supply, price discovery will begin.

Answer (Vitalik Buterin):

Many L2s currently either use off-chain DA or have postponed launch, because if they used on-chain DA as planned they would fill the blob space single-handedly and fees would spike. L1 transactions reflect the daily decisions of many small actors, while L2 blob-space usage reflects the long-term decisions of a few large actors, so you cannot simply extrapolate from the day-to-day market. I think that even if blob capacity increases dramatically, there is a good chance huge demand will show up willing to pay reasonable fees.

Q: 10 million TPS? This seems unrealistic. Can you explain how it is possible?

Answer (Justin Drake):

I recommend watching my 2022 Devcon talk.

Simply put:

● L1 raw throughput: 10 TPS

● Rollups: 100x

● Danksharding: another 100x

● Nielsen's Law (10 years of bandwidth growth): another 100x (see the arithmetic sketch below)
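Multiplying these factors gives the 10 million TPS figure, and the earlier revenue estimate follows from it (illustrative arithmetic only; the variable names are mine):

```python
# Illustrative arithmetic only: multiplying the factors above, then the
# revenue estimate at $0.001 per transaction.
l1_tps = 10
rollup_gain = 100        # rollups: ~100x
danksharding_gain = 100  # danksharding: ~100x
nielsen_gain = 100       # Nielsen's Law over ~10 years: ~100x

tps = l1_tps * rollup_gain * danksharding_gain * nielsen_gain
print(tps)                       # 10_000_000 TPS

tx_per_day = tps * 86_400        # seconds per day
print(tx_per_day * 0.001)        # ≈ $8.6e8, i.e. roughly $1B per day
```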

Q: I believe that the supply side can do it, but what about the demand side?

Answer (Dankrad Feist — Ethereum Foundation):

Every blockchain has a value-accrual problem, and there is no perfect answer. If Visa charged a fixed fee per transaction regardless of amount, its revenue would shrink dramatically, yet that is the current situation for blockchains. The execution layer is slightly better off than the data layer: it can extract priority fees that reflect urgency, while the data layer only earns a flat fee.

My advice is to create value first; without value creation there is nothing to accrue. To that end, we should maximize the Ethereum data layer so that alternative DA becomes unnecessary, scale L1 so that high-value applications can run there, and encourage projects like EigenLayer to expand the use of ETH as (non-financial) collateral. (Purely financial collateral is harder to scale and may exacerbate death-spiral risk.)

Q: Aren't "encouraging EigenLayer" and "making alternative DA unnecessary" contradictory? If DA is the only sustainable source of income, doesn't supporting EigenLayer risk letting EigenDA capture that potential 10 million TPS, or $1 billion per day of revenue? As a solo validator and EigenLayer operator, I feel like I am wheeling in a Trojan horse; it is very conflicting.

Answer (Dankrad Feist):

I see EigenLayer more as a decentralized insurance product collateralized by ETH (EigenDA is just one offering). I want Ethereum DA to scale enough to make EigenDA unnecessary for financial use cases.

Justin's belief that DA is the main source of Ethereum's revenue may be wrong. Ethereum already has something more valuable: a high-liquidity execution layer. DA is just a small part of the picture (though useful for white-label Ethereum and high-scalability applications). DA has a moat, but it prices far lower than the execution layer, so it needs far more scale.

Answer (Justin Drake):

Haha, Dankrad and I have been arguing about this for years. I think the execution layer is not defensible: MEV will be captured by applications, and SNARKs mean execution is no longer a bottleneck. Time will tell.

Answer (Dankrad Feist):

SNARKs don't change this. Synchronous state access is both the source of the execution layer's value and its limitation; SNARKs do nothing about what a single core can execute. I don't think DA is worthless, but the execution layer's ability to charge per transaction may exceed DA's by 2–3 orders of magnitude. What can charge a premium is DA bundled with sequencing, not generic DA.

Answer (Justin Drake):

You believe "contention" (state-access restrictions, or sequencing constraints) is valuable. I agree it is valuable, but I don't believe it will accrue to L1 or L2 in the long run. Apps, wallets, and users close to the source of order flow will recapture contention value.

L1 DA is irreplaceable for applications that want top-tier security and composability. EigenDA is the "best fit" among alternative DAs, often serving as an "overflow" option for high-volume, low-value applications (like games).

Question 4: Endgame block construction

Asked:

How will Ethereum's endgame block construction work? The trusted-gateway model Justin proposed looks like a centralized sequencer and may be incompatible with APS and ePBS (enshrined proposer-builder separation). The current FOCIL (fork-choice enforced inclusion lists) design is not suited to MEV-carrying transactions, so block building seems tilted toward L1's non-financial applications, which may drive applications toward L2s with fast centralized sequencers.

Digging deeper: can we design an L1 sequencing system that is efficient without maximizing extraction? Do efficient, low-extraction transactions always require a principal agent (such as a centralized sequencer or preconfirmations/gateways)? Is multiple concurrent proposers (MCP), as in BRAID, still being explored?

Answer (Justin Drake — Ethereum Foundation):

I don't quite understand what you mean. A few points to clarify:

1. APS (attester-proposer separation) and ePBS (enshrined proposer-builder separation) are different design areas; this is probably the first time I have seen the combination "APS ePBS".

2. As I understand it, a gateway is similar to a "preconfirmation relay". Just as ePBS eliminates the relay's middleman role, APS eliminates the need for gateways: under APS, L1 execution proposers (if sufficiently professional) can issue preconfirmations directly, without delegating to a gateway.

3. Saying "gateways are incompatible with APS" is like saying "relays are incompatible with ePBS"; removing the middleman is the whole point of the design! Gateways are just a temporary, complicated stopgap until APS arrives.

4. Even before APS, I don't see why gateways are equated with centralized sequencing. Centralized sequencing is permissioned, while the gateway market (and the set of L1 proposers delegating to gateways) is permissionless. Is it because only a single gateway sequences each slot? By that logic L1 is also centrally sequenced, since each slot has a single proposer. The core of decentralized sequencing is rotating ephemeral sequencers drawn from a permissionless set.

I think MCP (multiple concurrent proposers) is a suboptimal design, for several reasons: it introduces centralizing multi-block games, complicates fee handling, and requires complex infrastructure (such as VDFs, verifiable delay functions) to prevent last-moment bidding.

If MCP is as excellent as Max Resnick says, we should see results on Solana soon: Max now works full-time in the Solana ecosystem, Anatoly also supports MCP for latency reduction, and Solana iterates quickly™. By the way, I would also love to see L2s test MCP permissionlessly. But while at Consensys (the company behind MetaMask and Linea), Max failed to convince its in-house L2 Linea to switch to MCP.

Answer (Barnabé Monnot — Ethereum Foundation):

I want to offer an alternative perspective on the endgame. My initial roadmap is as follows, and it is already a big challenge:

● Deploy FOCIL to ensure censorship resistance and start decoupling scaling limits from local block-building limits.

● Deploy SSF (single-slot finality) as soon as possible and shorten slot times. This requires deploying Orbit to keep the validator-set size consistent with SSF and slot-time targets.

Meanwhile, I believe application-layer improvements (such as BuilderNet, various Rollups, and based Rollups on L1) can keep block construction innovating and support new applications.

At the same time, we should seriously consider different L1 block-construction architectures, including BRAID. Maybe the endgame never concludes, who knows. But once FOCIL and SSF/shorter slots are deployed, the next step will be far better informed.

Question 5: Do you regret focusing on L2?

Asked:

Given the community’s sentiment, do you still believe that focusing on L2 is the right choice? If you could go back to the past, what would you change?

Answer (Ansgar Dietrichs — Ethereum Foundation):

My view is that Ethereum's strategy has always pursued principled architectural solutions. In the long run, Rollups are the only principled way to scale a blockchain into the base layer of the global economy. Monolithic chains require every participant to validate everything, while Rollups sharply reduce the verification burden through execution compression; only the latter can scale to billions of users (and perhaps AI agents).

Looking back, I think we paid too little attention to the path toward the end goal and to the user experience along the way. Even in a Rollup-dominated world, L1 still needs to scale significantly, as Vitalik has noted recently. We should have realized earlier that continuing to scale L1 while promoting L2 would deliver more value to users during the transition.

I think Ethereum long lacked real competitors and grew somewhat complacent. Fiercer competition now exposes these misjudgments and is pushing us to deliver better "products", not just theoretically correct solutions.

But to reiterate: Rollups are crucial to the scaling endgame. The specific architecture is still evolving (Justin's native-Rollup exploration shows the approach is still being adjusted), but the general direction is clearly right.

Answer (Dankrad Feist — Ethereum Foundation):

I disagree in part. If a Rollup is defined as "scaled DA plus execution verification", how does it differ from an execution shard?

In practice we treat Rollups more as "white-label Ethereum", and to be fair, that unlocked a lot of energy and money. Had we focused only on execution shards in 2020, today's zkEVM and interoperability research would not exist.

Technically, we can now reach any of these goals: a highly scaled L1, an extremely scaled sharded chain, or a base layer for Rollups. The best path for Ethereum is to combine the first and the third.

Question 6: ETH Economic Security Risk

Asked:

If the US dollar price of ETH falls below a certain level, will it threaten the economic security of Ethereum?

Answer (Justin Drake — Ethereum Foundation):

High economic security is crucial if we want Ethereum to effectively resist attacks, including nation-state-level attacks. Ethereum currently has approximately $80 billion of slashable economic security (33,644,183 ETH staked at roughly $2,385 per ETH), the highest of any blockchain. By comparison, Bitcoin has only about $10 billion of (non-slashable) economic security.
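As a quick sanity check on the quoted figure (illustrative arithmetic only):

```python
# Sanity-checking the quoted numbers (illustrative arithmetic only):
staked_eth = 33_644_183
usd_per_eth = 2_385
print(staked_eth * usd_per_eth)   # ≈ 8.02e10, i.e. roughly $80B slashable
```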

Question 7: Mainnet expansion and cost reduction plan

Asked:

What plans will the Ethereum Foundation have to improve the scalability of the main network and reduce transaction fees in the next few years?

Answer (Vitalik Buterin):

1. Scale L2: add more blobs, e.g. PeerDAS in Fusaka, to further increase data capacity.

2. Improve interoperability and user experience: smoother cross-L2 interaction, e.g. the recent Open Intents Framework.

3. Moderately raise the L1 gas limit.

Question 8: Future application scenarios and L1/L2 coordination

Asked:

What applications and usage scenarios do you envision for Ethereum over the following time horizons:

● Short term (<1 year)

● Medium term (1–3 years)

● Long-term (4+ years)

How do L1 and L2 activities work together during these time periods?

Answer (Ansgar Dietrichs — Ethereum Foundation):

This is a broad question; here are some observations on overall trends:

● Short-term (<1 year): Focus on stablecoins, already the vanguard of real-world adoption given their relatively light regulatory constraints; smaller cases such as Polymarket are also starting to show influence.

● Medium-term (1–3 years): Expand to more real-world assets (stocks, bonds), seamless interoperability with DeFi building blocks, and other innovations such as on-chain business processes, governance, and prediction markets.

● Long-term (4+ years): Realize "real world Ethereum" (DC Posch's vision): real products for billions of users and AI agents, with crypto as an enabler rather than a selling point.

● L1/L2 relationship: The original vision of "L1 only for settlement and rebalancing" needs updating. Scaling L1 remains important, while L2s remain the main scaling force; the relationship will keep evolving over the coming months.

Answer (Carl Beekhuizen — Ethereum Foundation):

We focus on scaling the whole stack rather than designing for specific applications. Ethereum's strength is neutrality toward whatever runs in the EVM, offering developers the best platform. The core theme is scaling: building the most capable system while staying decentralized and censorship-resistant.

● Short-term (<1 year): The focus is launching PeerDAS, which significantly increases blobs per block, and improving the EVM, e.g. shipping EOF (EVM Object Format) as soon as possible. Research continues on statelessness, gas repricing, zero-knowledge proving of the EVM, and more.

● Medium-term (1–3 years): Further expand blob throughput and advance early-stage research projects such as the zkEVM effort tracked at ethproofs.org.

● Long-term (4+ years): Major EVM scaling (which also benefits L2s), much higher blob throughput, censorship-resistance improvements such as FOCIL, and zero-knowledge acceleration.

Question 9: Verge selection and hash function

Asked:

Vitalik mentioned in a recent post about the Verge that we will soon face three options: (i) Verkle trees, (ii) a STARK-friendly hash function, and (iii) a conservative hash function. Have you decided which way to go?

Answer (Vitalik Buterin):

This is still being actively debated. Personally, I feel the mood has drifted slightly toward (ii) over the past few months, but nothing is finalized.

I think these options should be weighed in the context of the overall roadmap. The realistic choices are roughly:

● Option A:

● 2025: Pectra, possibly adding EOF

● 2026: Verkle trees

● 2027: L1 execution optimizations (delayed execution, multidimensional gas, repricing)

● Option B:

● 2025: Pectra, possibly adding EOF

● 2026: L1 execution optimizations (delayed execution, multidimensional gas, repricing)

● 2027: Initial Poseidon rollout (at first only a few clients are encouraged to go stateless, to limit risk)

● 2028: Gradually increase the share of stateless clients

Option B is also compatible with a conservative hash function, but I still prefer proceeding step by step: even with a hash function less risky than Poseidon, a proof system carries elevated risk in its early days.

Answer (Justin Drake — Ethereum Foundation):

As Vitalik said, the near-term choice is still under discussion. But on long-term fundamentals, (ii) is clearly the direction: (i) is not post-quantum secure and (iii) is inefficient.

Question 10: VDF progress

Asked:

What is the latest progress on VDFs (verifiable delay functions)? I remember a 2024 paper pointed out some fundamental problems.

Answer (Dmitry Khovratovich — Ethereum Foundation):

We currently lack an ideal VDF candidate. As new models (for analysis) and new constructions (heuristic or not) develop, things may change. But at the current state of the art we cannot confidently claim that any construction resists, say, a 5x speedup. So the consensus is to put VDFs on hold for now.

Question 11: Block time and finality time adjustment

Asked:

From a developer's perspective, is the inclination to gradually shorten block times, to reduce time-to-finality, or to keep both unchanged until single-slot finality (SSF) lands?

Answer (Barnabé Monnot — Ethereum Foundation):

I'm not sure there is a worthwhile intermediate path for shortening finality between now and SSF. I think introducing SSF is the best opportunity to reduce finality delay and slot time at the same time. We could tweak the existing protocol, but if SSF can land in the short term, the effort on the current protocol may not be worth it.

Answer (Francesco D'Amato — Ethereum Foundation):

Before SSF we could certainly reduce block time (to 6–9 seconds, say), but it is best to first confirm compatibility with SSF and the rest of the roadmap (ePBS, for example). My current understanding is that it should be compatible with SSF, but that does not mean we should do it right away; the SSF design is not yet fully settled.

Question 12: FOCIL vs. encrypted mempools

Asked:

Why not skip FOCIL (fork-choice enforced inclusion lists) and go straight to encrypted mempools?

Answer (Justin Drake — Ethereum Foundation):

Unfortunately, encrypted mempools are not sufficient to guarantee forced inclusion. This is already visible in the TEE-based (Trusted Execution Environment) BuilderNet running on mainnet: Flashbots filters OFAC-sanctioned transactions out of its BuilderNet blocks. A TEE, which can access the unencrypted transactions, can filter easily. More advanced mempools based on MPC (multi-party computation) or FHE (fully homomorphic encryption) have a similar problem: a sequencer can demand zero-knowledge proofs in order to exclude transactions it does not want to include.

More broadly, encrypted mempools and FOCIL are orthogonal and complementary. Encrypted mempools target private inclusion; FOCIL targets forced inclusion. They also sit at different layers of the stack: FOCIL is L1 built-in infrastructure, while encrypted mempools live off-chain or at the application layer.

Answer (Julian Ma — Ethereum Foundation):

Although both FOCIL and encrypted mempools aim to improve censorship resistance, they are complements rather than substitutes, so FOCIL is not a stopgap on the way to encrypted mempools. The main reason we don't have encrypted mempools is the lack of a satisfactory proposal, though work is underway; deploying one today would impose honesty assumptions on Ethereum's liveness.

FOCIL should be deployed because its proposal is robust, the community has confidence in it, and it is relatively lightweight. Once the two are combined, encrypted transactions inside FOCIL lists can bound the economic damage that reordering inflicts on users.

Question 13: Gas and blob limit voting

Asked:

Will the number of blobs be decided by staker vote, like the gas limit? Large players could collude to raise the limits and squeeze out home stakers with insufficient hardware or bandwidth, centralizing staking and destroying decentralization. And if these limits can rise without bound, won't objecting via hard fork become difficult? If hardware and bandwidth requirements are determined by vote, what is the point of having requirements at all? Stakers' interests may not align with the network as a whole; is such voting appropriate?

Answer (Vitalik Buterin):

I personally support (i) letting the blob count be decided by staker vote, like the gas limit, and (ii) having clients coordinate more frequent updates to the default voting parameters. This is equivalent to the "blob-parameter-only (BPO) fork" feature but more robust: if a client fails to upgrade in time or ships a botched implementation, it does not cause a consensus failure. Many BPO-fork supporters actually mean this idea.
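For intuition, today's gas-limit "vote" works by letting each proposer nudge the limit a bounded step toward its configured target; a blob-count vote could take the same shape. A minimal sketch follows, where the 1/1024 step and the 5,000 floor reflect L1's actual gas-limit rule, but the extension to blob parameters is purely hypothetical:

```python
# Today's gas-limit vote: each block may move the limit by at most
# parent_limit // 1024, so changes require sustained proposer consensus.
# Applying the same shape to a blob target is purely hypothetical here.

ADJUSTMENT_QUOTIENT = 1024
MIN_GAS_LIMIT = 5_000            # protocol minimum for the gas limit

def vote_gas_limit(parent_limit: int, my_target: int) -> int:
    """Nudge the limit one bounded step toward this proposer's target."""
    max_step = parent_limit // ADJUSTMENT_QUOTIENT
    if my_target > parent_limit:
        return min(my_target, parent_limit + max_step)
    return max(my_target, parent_limit - max_step, MIN_GAS_LIMIT)

# Example: moving 36M -> 60M takes ~520 consecutive blocks voting upward.
limit, blocks = 36_000_000, 0
while limit < 60_000_000:
    limit, blocks = vote_gas_limit(limit, 60_000_000), blocks + 1
print(blocks)
```

The bounded step is what makes the mechanism robust: no single proposer, or small clique, can jump the limit; sustained majority preference is required.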

Question 14: Fusaka and Glamsterdam upgrade features

Asked:

What features should Fusaka and Glamsterdam upgrades include to significantly advance the roadmap?

Answer (Francesco D'Amato — Ethereum Foundation):

As mentioned, Fusaka will significantly improve data availability (DA). I would like Glamsterdam to make a similar leap on the execution layer (EL), which by then will have the most room for improvement (and over a year to settle on direction). The ongoing repricing work could bring significant changes in Glamsterdam, but it is not the only option.

Beyond that, FOCIL can be viewed as a scaling EIP of sorts: it better separates the roles of block builders and validators, and together with its censorship-resistance goal and reduced reliance on altruism, it will push Ethereum forward. These are my current priorities, but by no means the whole list.

Answer (Barnabé Monnot — Ethereum Foundation):

Fusaka is focused on PeerDAS, which is crucial for L2 scaling, and almost nobody wants other features to delay it. I would like Glamsterdam to include FOCIL and Orbit, paving the way for SSF.

The above leans toward the consensus layer (CL) and DA, but Glamsterdam should also carry an execution-layer (EL) effort that significantly advances L1 scaling. Discussion of the concrete feature set is underway.

Question 15: Forced L2 decentralization

Asked:

Given the slow progress of L2 decentralization, could an EIP be used to "force" L2s to adopt Stage 1 or Stage 2 decentralization?

Answer (Vitalik Buterin):

Native Rollups (e.g. the EXECUTE precompile) achieve this to some extent. L2s remain free to ignore it and keep coded-in backdoors, but they gain the option of a simple, high-security proof system built into L1. L2s that pursue EVM compatibility will very likely take it.

Question 16: The biggest risk of Ethereum survival

Asked:

What is the biggest survival risk facing Ethereum?

Answer (Vitalik Buterin):

Superintelligent AI could lead to a single entity controlling most of the world's resources and power, making blockchains irrelevant.

Question 17: The impact of Alt-DA on ETH holders

Asked:

Is alt-DA (DA provided outside the Ethereum mainnet) a bug or a feature for ETH holders in the short, medium, and long term?

Answer (Vitalik Buterin):

I still stubbornly hope for a dedicated R&D team studying ideal Plasma-like designs, so that chains relying on Ethereum L1 can still give users stronger (if imperfect) security guarantees while using alternative DA. There are many overlooked opportunities here that would improve user safety and be valuable to DA teams.

Question 18: Future prospects of hardware wallets

Asked:

What is your vision for the future of hardware wallets?

Answer (Justin Drake — Ethereum Foundation):

In the future I expect most hardware wallets to be based on secure isolation inside phones rather than standalone devices like Ledger USB sticks. Account abstraction has made infrastructure such as passkeys usable. I hope to see native integrations (as in Apple Pay) within this decade.

Answer (Vitalik Buterin):

Hardware wallets need to be "actually secure" in several respects:

1. Secure hardware: based on an open-source, verifiable stack (such as IRIS), reducing the risk of backdoors and side-channel attacks.

2. Interface security: Provide sufficient transaction information to prevent the computer from tricking users into signing unexpected content.

3. Ubiquity: ideally a device that doubles as a crypto wallet and serves other security purposes, encouraging more people to obtain and use it.

Question 19: 2025 L1 gas limit target

Asked:

What is the target for the L1 gas limit in 2025?

Answer (Toni Wahrstätter — Ethereum Foundation):

Opinions on the gas limit differ, but the core question is: should we scale L1 by raising the gas limit, or focus on L2 and add blobs via technologies like DAS?

Vitalik's recent blog post lays out reasons to moderately scale L1. But raising the gas limit has trade-offs:

● Higher hardware requirements

● State and history growth, increasing the burden on nodes

● Larger bandwidth requirements

On the other hand, the Rollup-centric vision aims to improve scalability without raising node requirements. PeerDAS (short term) and full DAS (medium to long term) will unlock significant capacity while keeping resource requirements in check.

I wouldn't be surprised if validators push the gas limit to 60 million after the Pectra hard fork (April). But in the long run, the scaling focus will likely be DAS solutions rather than simply raising the gas limit.

Question 20: Beam Client Transition

Asked:

If the Ethereum Beam client experiment (or whatever it is renamed to) succeeds and several implementations are ready within 2–3 years, will we need a phase where the current PoS chain and Beam PoS run in parallel, both earning staking rewards, as in the PoW-to-PoS transition?

Answer (Vitalik Buterin):

I think a direct upgrade is possible.

The reasons for running two chains during the Merge were:

● PoS had not been battle-tested and needed time running alongside to ensure the switch was safe.

● PoW is subject to reorgs, so the switching mechanism had to be robust.

PoS has finality, and most infrastructure (such as staking) carries over. We can switch the validation rules from the beacon chain to a new design via hard fork. Economic finality might lapse briefly at the transition point, but that is an acceptable small price.

Answer (Justin Drake — Ethereum Foundation):

I assume the upgrade from the beacon chain to Beam will be handled like a normal fork, with no "Merge 2.0". Some thoughts:

1. Consensus participants (ETH stakers) are the same on both sides of the fork, unlike the Merge, where miners were being replaced and might have interfered.

2. The "clocks" on both sides of the fork agree, unlike the PoW-to-PoS transition from probabilistic to fixed slots.

3. Infrastructure such as libp2p, SSZ, and slashing-protection databases is mature and can be reused.

4. There is no rush this time, unlike the Merge's pressure to disable PoW and stop extra issuance; we can take the time for due diligence and quality assurance (multiple testnet runs) to ensure a smooth mainnet fork.

Question 21: Academic Grant Program 2025

Asked:

The Ethereum Foundation launched a $2 million academic grant program for 2025. Which research areas are prioritized? How will results feed into the Ethereum roadmap?

Answer (Fredrik Svantes — Ethereum Foundation):

The protocol security team is interested in:

● P2P security: many vulnerabilities involve network-layer DoS attacks (e.g. against libp2p or devp2p), so improvements here are valuable.

● Fuzzing: EVM and consensus-layer clients are well covered, but areas such as the network layer could be explored in more depth.

● Supply-chain risk: understanding Ethereum's current dependency risks.

● LLM applications: how large language models can improve protocol security (e.g. code auditing, automated fuzzing).

Answer (Alexander Hicks — Ethereum Foundation):

On integration: we keep at it by connecting with academia, funding research, and participating in it. Ethereum's systems are unique, so academic research does not always feed the roadmap directly (consensus protocols, for instance, are idiosyncratic and academic results are hard to transfer), but the impact is obvious in areas like zero-knowledge proofs.

The academic grant program is one part of our internal and external research, and this round deliberately explores topics that are interesting but not necessarily on the roadmap's critical path. For example, I added formal verification and AI-related topics. The practical usefulness of AI for Ethereum tasks still has to be proven, but I want to drive progress over the next year or two. It is a great opportunity to assess the state of the art and improve our methods, and it can attract researchers from other fields who know little about Ethereum but are curious.
