Ethereum's unexpected drop in blob usage suggests the network may have solved the wrong problem with the Fusaka upgrade.
Ethereum implemented the Fusaka upgrade on December 3, 2025, enhancing the network’s data availability capacity via Blob Parameter Overrides that progressively increased blob targets and maximum limits.
Two subsequent modifications elevated the target from 6 blobs per block to 10, and then to 14, with a maximum limit of 21. The objective was to lower layer-2 rollup expenses by boosting throughput for blob data, which are the compressed transaction bundles that rollups submit to Ethereum for security and finality.
Three months of data point to a gap between capacity and actual utilization. A MigaLabs analysis of more than 750,000 slots since Fusaka's activation finds that the network is not reaching the target blob count of 14.
The median blob usage actually decreased following the initial parameter adjustment, and blocks containing 16 or more blobs show increased miss rates, indicating reliability issues at the extremes of the new capacity.
The report's conclusion is straightforward: blob parameters should not be raised further until miss rates at high blob counts stabilize and demand materializes for the capacity that has already been added.
Changes introduced by Fusaka and their timeline
Prior to Fusaka, Ethereum’s baseline, established through EIP-7691, set the target at 6 blobs per block with a maximum of 9. The Fusaka upgrade brought in two consecutive Blob Parameter Override adjustments.
The first adjustment was enacted on December 9, increasing the target to 10 and the maximum to 15. The second adjustment took effect on January 7, 2026, raising the target to 14 and the maximum to 21.
These modifications did not necessitate hard forks, and the mechanism permits Ethereum to adjust capacity through client coordination rather than requiring protocol-level upgrades.
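For readers who want the schedule in a machine-checkable form, the following Python sketch encodes the parameter steps described above. The dates and values come from this article; the class, field, and function names are illustrative and not part of any client or specification.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class BlobParams:
    effective: date   # activation date of this parameter set
    target: int       # target blobs per block
    maximum: int      # maximum blobs per block

# Parameter steps described in the article (structure and names are illustrative).
SCHEDULE = [
    BlobParams(date(2025, 12, 3), target=6, maximum=9),    # Fusaka activation, EIP-7691 baseline
    BlobParams(date(2025, 12, 9), target=10, maximum=15),  # first Blob Parameter Override
    BlobParams(date(2026, 1, 7), target=14, maximum=21),   # second Blob Parameter Override
]

def active_params(on: date) -> BlobParams:
    """Return the parameter set in force on a given date."""
    current = SCHEDULE[0]
    for step in SCHEDULE:
        if step.effective <= on:
            current = step
    return current

print(active_params(date(2026, 2, 1)))  # target=14, maximum=21
```

Encoding the steps this way makes it straightforward to line up per-slot observations with whichever parameter set was in force at the time.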
The MigaLabs analysis, which made reproducible code and methodology available, monitored blob usage and network performance during this transition.
It found that the median blob count per block decreased from 6 before the first override to 4 afterward, despite the network’s capacity expansion. Blocks containing 16 or more blobs remain exceedingly rare, occurring between 165 and 259 times each throughout the observation period, depending on the specific blob count.
The network has unused capacity.
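As a rough illustration of how a utilization figure like the median blob count can be derived from per-slot data, here is a minimal Python sketch. The record layout and the toy numbers are hypothetical; MigaLabs' actual pipeline and schema are in the reproducible code the report published.

```python
from statistics import median

# Hypothetical per-slot records: (slot_number, blob_count, missed).
# In practice these would come from a beacon-node API or an indexer.
slots = [
    (1001, 6, False),
    (1002, 4, False),
    (1003, 0, True),   # missed slot: no canonical block for this slot
    (1004, 8, False),
    (1005, 3, False),
]

def median_blob_count(records):
    """Median blobs per block, counting only slots that produced a block."""
    produced = [blobs for _, blobs, missed in records if not missed]
    return median(produced)

print(median_blob_count(slots))  # 5.0 with the toy data above
```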
One parameter inconsistency is worth flagging: the report's timeline states that the first override raised the target from 6 to 12, while the Ethereum Foundation's mainnet announcement and client documentation describe the change as 6 to 10.
We rely on the Ethereum Foundation's parameters here: a 6/9 baseline, 10/15 after the first override, and 14/21 after the second. The report's dataset remains the empirical basis for observed utilization and miss-rate patterns.
Ethereum's Fusaka upgrade timeline shows blob parameters rising from the 6/9 baseline through two overrides to 14/21 during December 2025 and January 2026.
Increasing miss rates with high blob counts
Network reliability, measured through missed slots (slots in which a block fails to propagate or gather attestations in time), shows a distinct pattern.
At lower blob counts, the baseline miss rate hovers around 0.5%. Once blocks contain 16 or more blobs, miss rates rise to between 0.77% and 1.79%. At 21 blobs, the maximum capacity introduced in the second override, the miss rate reaches 1.79%, more than three times the baseline.
The analysis dissects this across blob counts from 10 to 21, revealing a gradual degradation curve that accelerates past the 14-blob target.
This degradation matters because it suggests that the network's infrastructure, including validator hardware, network bandwidth, and attestation timing, struggles to accommodate blocks at the upper limits of the new capacity.
If demand ultimately increases to fill the 14-blob target or approaches the 21-blob maximum, the elevated miss rates could lead to considerable finality delays or reorganization risks. The report presents this as a stability threshold: the network can technically process high-blob blocks, but maintaining consistent and reliable performance remains uncertain.
Miss rates stay below 0.75% for blocks with fewer than 16 blobs but exceed 1% at higher counts, reaching 1.79% at 21 blobs.
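A sketch of how miss rates could be bucketed by blob count is shown below. Note that attributing a blob count to a missed slot is itself non-trivial, since the block never lands on chain and the count has to come from what was observed on the gossip layer; the sketch assumes each slot record already carries a blob count and a missed flag, which is a simplification of whatever methodology the report actually used.

```python
from collections import defaultdict

def miss_rate_by_blob_count(slot_records):
    """Compute the miss rate per blob-count bucket.

    `slot_records` is an iterable of (blob_count, missed) pairs; the input shape
    is illustrative, not the report's exact schema.
    """
    totals = defaultdict(lambda: [0, 0])  # blob_count -> [missed, total]
    for blob_count, missed in slot_records:
        totals[blob_count][0] += int(missed)
        totals[blob_count][1] += 1
    return {count: missed / total for count, (missed, total) in sorted(totals.items())}

# Toy data echoing the reported pattern: roughly 0.5% misses at 10 blobs,
# roughly 1.8% at the 21-blob maximum.
sample = [(10, False)] * 995 + [(10, True)] * 5 + [(21, False)] * 982 + [(21, True)] * 18
print(miss_rate_by_blob_count(sample))  # {10: 0.005, 21: 0.018}
```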
Blob economics: the significance of the reserve price floor
Fusaka not only increased capacity but also modified blob pricing via EIP-7918, which implements a reserve price floor to prevent blob auctions from collapsing to 1 wei.
Prior to this adjustment, when execution costs dominated and blob demand remained low, the blob base fee could plummet until it effectively vanished as a price signal. Layer-2 rollups incur blob fees to submit their transaction data to Ethereum, and these fees should reflect the computational and network costs that blobs impose.
When fees approach zero, the economic feedback loop breaks down, resulting in rollups consuming capacity without proportionate payment. This leads to the network losing insight into actual demand.
The reserve price floor established by EIP-7918 links blob fees to execution costs, ensuring that even during times of low demand, the price remains a meaningful indicator.
This mitigates the free-rider problem in which near-free blobs encourage wasteful usage, and it provides clearer data for future capacity decisions: if blob fees stay elevated despite the added capacity, demand is genuine; if they sit at the floor, there is still headroom.
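To illustrate the intended effect, and only the effect, here is a deliberately simplified Python sketch of a reserve floor that clamps the blob base fee to a level proportional to the execution base fee. This is not the actual EIP-7918 update rule, which works by adjusting how the blob fee evolves from block to block; the floor constant below is an assumption chosen purely for illustration.

```python
# Simplified reserve-floor illustration. NOT the exact EIP-7918 mechanism; it only
# shows the intended outcome: the blob base fee cannot decay to an economically
# meaningless level while execution gas still carries a real price.

GAS_PER_BLOB = 2**17                 # blob gas per blob (EIP-4844 constant)
FLOOR_EXEC_GAS_PER_BLOB = 2**13      # assumed: execution gas a blob is "worth" at the floor

def effective_blob_base_fee(market_blob_base_fee: int, exec_base_fee: int) -> int:
    """Clamp the market blob base fee (wei per blob gas) to an execution-linked floor."""
    floor = FLOOR_EXEC_GAS_PER_BLOB * exec_base_fee // GAS_PER_BLOB
    return max(market_blob_base_fee, floor)

# With a 10 gwei execution base fee, a 1 wei market price is lifted to the floor,
# so the fee still functions as a demand signal.
print(effective_blob_base_fee(1, 10_000_000_000))  # 625000000 wei per blob gas
```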
Early insights from Hildobby’s Dune dashboard, which tracks Ethereum blobs, indicate that blob fees have stabilized post-Fusaka instead of continuing the downward trend seen in earlier periods.
The average blob count per block corroborates MigaLabs’ finding that utilization has not surged to fill the new capacity. Blocks typically contain fewer than the 14-blob target, and the distribution remains heavily weighted toward lower counts.
Blob fees peaked above $2 million in early 2024 and late 2024 before declining through 2025, with consistently low activity extending into 2026.
What the data says about effectiveness
Fusaka successfully expanded technical capacity and demonstrated that the Blob Parameter Override mechanism functions without necessitating contentious hard forks.
The reserve price floor appears to be operating as intended, preventing blob fees from becoming economically insignificant. However, utilization remains behind capacity, and reliability at the edges of new capacity shows measurable decline.
The miss rate curve suggests that Ethereum's current infrastructure handles the pre-Fusaka baseline and the first override's 10/15 parameters comfortably, but begins to falter at 16 blobs and above.
This creates a risk profile: if layer-2 activity increases and consistently pushes blocks toward the 21-blob maximum, the network could experience heightened miss rates that undermine finality and resistance to reorganization.
Demand trends provide another indicator. The median blob usage decline following the first override, despite capacity increases, suggests that layer-2 rollups are not currently constrained by blob availability.
Either their transaction volumes have not escalated sufficiently to require more blobs per block, or they are optimizing compression and batching to fit within existing capacity rather than increasing usage.
Blobscan, a dedicated blob explorer, indicates that individual rollups maintain relatively stable blob counts over time instead of ramping up to exploit new capacity.
The pre-Fusaka concern was that limited blob capacity would hinder Layer 2 scaling and keep rollup fees high as networks competed for scarce data availability. Fusaka addressed the capacity constraint, yet the bottleneck seems to have shifted.
Rollups are not utilizing the available space, suggesting that either demand has yet to materialize or other factors, such as sequencer economics, user engagement, and cross-rollup fragmentation, are constraining growth more than blob availability did.
Future directions
Ethereum's roadmap calls for extending data availability sampling beyond the PeerDAS design introduced with Fusaka, which would raise blob capacity further while improving decentralization and security properties.
However, the findings from Fusaka imply that raw capacity is not the primary limiting factor at present.
The network has room to grow into the 14/21 parameters before another expansion becomes necessary, and the reliability curve at high blob counts indicates that infrastructure improvements may be needed before capacity is raised again.
The miss rate data outlines a clear boundary condition. If Ethereum raises capacity while blocks with 16+ blobs continue to show high miss rates, it risks baking in systemic instability that would surface during periods of high demand.
The more prudent approach is to allow utilization to approach the current target, observe whether miss rates improve as clients optimize for higher blob loads, and adjust parameters only once the network proves it can reliably manage edge cases.
The effectiveness of Fusaka varies depending on the metric employed. It successfully expanded capacity and stabilized blob pricing via the reserve floor. However, it did not drive immediate increases in utilization or resolve the reliability issues at maximum capacity.
The upgrade created room for future growth, but whether that growth will materialize remains an unresolved question that the data has yet to clarify.