Hoskinson may be mistaken regarding the future of decentralized computing.


Cardano’s founder recently made a case for hyperscalers that warrants closer scrutiny, according to Fan.

Charles Hoskinson during Consensus Hong Kong 2026. (Photo by Michael Perini/Consensus)

The blockchain trilemma resurfaced at Consensus Hong Kong in February, with Cardano founder Charles Hoskinson reassuring attendees that hyperscalers such as Google Cloud and Microsoft Azure do not pose a threat to decentralization.

He argued that major blockchain projects need hyperscalers, and that concerns about a single point of failure are unwarranted because:

  • Advanced cryptography mitigates the risk
  • Multi-party computation shares key material
  • Confidential computing protects data during use

The argument rested on the notion that ‘if the cloud cannot access the data, the cloud cannot control the system,’ and the session ended there for lack of time.

However, there is another side to Hoskinson’s case for hyperscalers that deserves further exploration.

MPC and Confidential Computing Mitigate Exposure

This was the cornerstone of Hoskinson’s argument: that technologies such as multi-party computation (MPC) and confidential computing guarantee that hardware providers never gain access to the underlying data.

These technologies are effective tools. However, they do not eliminate the inherent risk.

MPC distributes key material among several parties, ensuring that no individual participant can reconstruct a secret. This significantly lowers the risk of a solitary compromised node. Nevertheless, the security perimeter expands in other areas. The coordination layer, communication channels, and governance of the participating nodes become essential factors.

Rather than relying on a single key holder, the system now relies on a dispersed set of actors to function correctly and on the appropriate execution of the protocol. The single point of failure does not vanish; instead, it transforms into a distributed trust surface.
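To make the threshold idea concrete, here is a minimal sketch of the kind of key splitting MPC builds on, using Shamir secret sharing over a toy prime field in Python. It illustrates the concept only; it is not a production MPC protocol, and the coordination layer, communication channels, and node governance described above are exactly the parts it leaves out.

```python
# Toy illustration of the threshold idea behind MPC-style key splitting.
# Shamir secret sharing over a small prime field, NOT a real MPC protocol:
# coordination, channels, and governance of the nodes are out of scope.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for this toy example


def split_secret(secret: int, n_shares: int, threshold: int):
    """Split `secret` into n_shares points; any `threshold` of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    shares = []
    for x in range(1, n_shares + 1):
        y = sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        shares.append((x, y))
    return shares


def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    shares = split_secret(secret=123456789, n_shares=5, threshold=3)
    print(recover_secret(shares[:3]))                 # any 3 of 5 shares reconstruct the secret
    print(recover_secret(shares[:2]) == 123456789)    # 2 shares are not enough: False
```

Any three of the five shares reconstruct the key, so no single compromised node is fatal; but the system as a whole still depends on enough share holders running the protocol correctly, which is the distributed trust surface in question.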

Confidential computing, particularly through trusted execution environments, presents a different trade-off. Data is encrypted during processing, which limits exposure to the hosting provider.

However, Trusted Execution Environments (TEEs) are contingent on hardware assumptions. They rely on microarchitectural isolation, firmware integrity, and accurate implementation. Academic studies, for instance, have consistently shown that side-channel and architectural vulnerabilities continue to surface across enclave technologies. The security boundary is tighter than in traditional cloud setups, but it is not absolute.

Moreover, both MPC and TEEs frequently operate on hyperscaler infrastructure. The physical hardware, virtualization layer, and supply chain remain concentrated. If an infrastructure provider controls access to machines, bandwidth, or geographical areas, it retains operational leverage. While cryptography may obstruct data inspection, it does not prevent throughput limitations, shutdowns, or policy interventions.

Advanced cryptographic mechanisms make certain attacks more challenging, yet they do not eliminate the risk of infrastructure-level failures. They merely replace a visible concentration with a more intricate one.

The ‘No L1 Can Handle Global Compute’ Argument

Hoskinson asserted that hyperscalers are essential because no single Layer 1 can accommodate the computational needs of global systems, pointing to the trillions of dollars invested in establishing such data centers.

Indeed, Layer 1 networks were not designed to execute AI training loops, high-frequency trading engines, or enterprise analytics pipelines. Their purpose is to maintain consensus, verify state transitions, and ensure durable data availability.

He is right about the role of Layer 1. However, global systems primarily require outcomes that anyone can verify, even if the computation occurs elsewhere.

In contemporary crypto infrastructure, substantial computation increasingly occurs off-chain. What is crucial is that results can be demonstrated and verified on-chain. This forms the basis of rollups, zero-knowledge systems, and verifiable compute networks.
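A toy Python sketch of that pattern follows, under the simplifying assumption that the off-chain work is integer factoring: producing the answer is expensive, while checking a claimed answer is a single multiplication. It is not a zero-knowledge proof or a real rollup verifier, only an illustration of the asymmetry that makes verify-don’t-recompute possible.

```python
# Toy illustration of the verify-don't-recompute pattern behind rollups and
# verifiable compute networks. NOT a zero-knowledge proof; it only shows the
# asymmetry: producing the answer is expensive, checking it is cheap enough
# to do "on-chain".

def expensive_offchain_work(n: int) -> tuple[int, int]:
    """Prover side: find a non-trivial factorization by trial division (slow)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime; no non-trivial factors")


def cheap_onchain_check(n: int, claim: tuple[int, int]) -> bool:
    """Verifier side: one multiplication and two range checks (fast)."""
    p, q = claim
    return 1 < p < n and 1 < q < n and p * q == n


if __name__ == "__main__":
    n = 1_000_003 * 999_983                # composite with two large prime factors
    claim = expensive_offchain_work(n)     # heavy step, done off-chain
    print(cheap_onchain_check(n, claim))   # light step, done "on-chain": True
    print(cheap_onchain_check(n, (3, 5)))  # a bogus claim is rejected: False
```

The settlement layer only ever runs the cheap check, which is why the question of who owns and operates the machines doing the heavy step matters so much.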

Concentrating on whether an L1 can execute global compute overlooks the fundamental issue of who governs the execution and storage infrastructure behind verification.

If computation occurs off-chain but relies on centralized infrastructure, the system adopts centralized failure modes. While settlement remains decentralized in theory, the pathway to generating valid state transitions is concentrated in practice.

The focus should be on dependency at the infrastructure layer, rather than computational capability within Layer 1.

Cryptographic Neutrality Is Not the Same as Participation Neutrality

Cryptographic neutrality is a compelling concept and something Hoskinson referenced in his argument. It signifies that rules cannot be altered arbitrarily, hidden backdoors cannot be introduced, and the protocol remains equitable.

However, cryptography operates on hardware.

This physical layer dictates who can participate, who has the means to do so, and who may be excluded, as throughput and latency are ultimately restricted by tangible machines and the infrastructure they operate on. If hardware production, distribution, and hosting stay centralized, participation becomes economically gated even when the protocol itself is mathematically neutral.

In high-compute systems, hardware is the pivotal factor. It shapes cost structures and resilience under censorship pressure. A protocol built on concentrated infrastructure may be neutral in theory, but it is constrained in practice.

The emphasis should shift toward cryptography paired with diversified hardware ownership.

Without infrastructure diversity, neutrality becomes vulnerable under pressure. If a limited number of providers can rate-limit workloads, restrict regions, or impose compliance requirements, the system inherits their leverage. Fairness in rules alone does not ensure fairness in participation.

Specialization Beats Generalization in Compute Markets

Competing with AWS is often framed as a question of scale, but this perspective can be misleading.

Hyperscalers focus on flexibility. Their infrastructure is designed to handle thousands of workloads concurrently. Features such as virtualization layers, orchestration systems, enterprise compliance tools, and elasticity guarantees are strengths for general-purpose compute but also represent cost layers.

Zero-knowledge proving and verifiable compute are deterministic, compute-intensive, memory-bandwidth constrained, and sensitive to pipeline performance. In other words, they favor specialization.

A purpose-built proving network competes on proofs per dollar, proofs per watt, and proof latency. When hardware, prover software, circuit design, and aggregation logic are vertically integrated, the efficiency gains compound. Stripping out unnecessary abstraction layers cuts overhead, and sustained throughput on persistent clusters beats elastic scaling for narrow, constant workloads.

In compute markets, specialization consistently outperforms generalization for steady, high-volume tasks. AWS optimizes for optionality, while a dedicated proving network focuses on a specific class of work.

The economic structure also differs. Hyperscalers set prices based on enterprise margins and fluctuating demand. A network aligned with protocol incentives can amortize hardware differently and optimize performance around sustained usage rather than short-term rental models.
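As a back-of-the-envelope illustration of that difference, the sketch below compares proofs per dollar for an owned, amortized cluster against an hourly-rented instance. Every figure in it is a hypothetical placeholder chosen only to show the shape of the calculation, not a claim about real hardware or cloud pricing.

```python
# Back-of-the-envelope comparison of the two cost models described above.
# Every number here is a hypothetical placeholder, chosen only to show the
# structure of the calculation, not a claim about real hardware or pricing.

HOURS_PER_YEAR = 24 * 365


def proofs_per_dollar_owned(capex: float, lifetime_years: float,
                            power_cost_per_hour: float,
                            proofs_per_hour: float) -> float:
    """Owned, amortized cluster running the same workload around the clock."""
    hourly_capex = capex / (lifetime_years * HOURS_PER_YEAR)
    return proofs_per_hour / (hourly_capex + power_cost_per_hour)


def proofs_per_dollar_rented(rental_rate_per_hour: float,
                             proofs_per_hour: float) -> float:
    """General-purpose cloud instance rented by the hour."""
    return proofs_per_hour / rental_rate_per_hour


if __name__ == "__main__":
    # Hypothetical figures: a $20,000 machine amortized over 3 years plus $0.40/h
    # in power, versus a comparable instance rented at $3.00/h; both at 500 proofs/h.
    owned = proofs_per_dollar_owned(20_000, 3, 0.40, 500)
    rented = proofs_per_dollar_rented(3.00, 500)
    print(f"owned:  {owned:.0f} proofs per dollar")
    print(f"rented: {rented:.0f} proofs per dollar")
```

The point is not the specific numbers but the structure: amortized, always-busy hardware and hourly rental priced for elasticity produce very different unit economics for a constant workload.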

The competition revolves around structural efficiency for a defined workload.

Use Hyperscalers, But Do Not Be Dependent on Them

Hyperscalers are not adversaries. They serve as efficient, reliable, and globally distributed infrastructure providers. The challenge lies in dependency.

A resilient architecture leverages major vendors for burst capacity, geographic redundancy, and edge distribution, but does not tie core functions to a single provider or a limited group of providers.

Settlement, final verification, and the availability of crucial artifacts should remain intact even if a cloud region fails, a vendor exits a market, or policy restrictions tighten.

This is where decentralized storage and computing infrastructure provide a viable alternative. Proof artifacts, historical records, and verification inputs should not be withdrawable at a provider’s discretion. Instead, they should reside on infrastructure that is economically aligned with the protocol and structurally challenging to shut down.

Hyperscalers should be employed as optional accelerators rather than foundational elements of the product. Cloud services can still offer reach and capacity for bursts, but the system’s ability to generate proofs and maintain what verification relies on should not be dependent on a single vendor.

In such a scenario, if a hyperscaler were to vanish tomorrow, the network would merely experience a slowdown, as the critical components would be owned and managed by a broader network rather than rented from a significant brand chokepoint.

This approach bolsters the ethos of decentralization in crypto.