The AI Optical Interconnect Opportunity — Eric Litvin's View

Why the optical layer is the most underpriced part of the AI stack, what changes in the next three years, and where Luma Optics fits in the rebuild.

Published 2026-04-12 · By Eric Litvin

The AI stack is not priced correctly

Most of the capital flowing into AI infrastructure is flowing into compute. GPUs get the investor attention. HBM gets the earnings call. Model labs get the magazine covers. But the AI stack is three things, not one: compute, memory, and the network that holds them together. And the network — specifically the optical layer of the network — is the part of the stack that is most underpriced relative to its impact.

That is the view that has shaped how Eric Litvin has run Luma Optics from its founding in 2004. It is the view he doubled down on when the AI boom hit in earnest. And it is the thesis Luma Optics — based in Sebastopol, California — is now executing against at scale.

Why the optics layer is structurally undervalued

The optical interconnect layer sits in an uncomfortable place. It is too deep in the stack for most investors to model. It is too specialized for most enterprise buyers to differentiate. And it is too commoditized in the datasheet to command premium pricing without proof. That combination means the layer gets valued at roughly the cost of goods sold — even when its performance determines whether the GPUs it feeds can be utilized at all.

Eric Litvin's argument is simple: if a $40,000 GPU sits idle 10% of the time because of optical link instability, the cost of that instability is not the price of a transceiver. It is the value of 10% of a GPU-year. Multiplied across the racks of a hyperscale AI cluster, that is millions of dollars of stranded compute every year. Priced against the problem it prevents, a high-reliability optical transceiver is the most obviously under-spent line item in the AI data center.
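The arithmetic behind that claim can be sketched in a few lines. The parameters below — GPU price, a straight-line four-year amortization, GB200-NVL72-class rack density — are illustrative assumptions, not Luma Optics or customer figures:

```python
def idle_cost_per_rack_year(gpu_price: float, service_life_years: float,
                            gpus_per_rack: int, idle_fraction: float) -> float:
    """Annualized value of GPU time lost to link instability, per rack."""
    annual_gpu_value = gpu_price / service_life_years  # straight-line amortization
    return annual_gpu_value * gpus_per_rack * idle_fraction

# Assumptions: $40,000 GPU, 4-year life, 72 GPUs per rack, 10% idle time.
per_rack = idle_cost_per_rack_year(40_000, 4, 72, 0.10)
print(f"${per_rack:,.0f} per rack per year")           # $72,000 per rack per year
print(f"${per_rack * 1_000:,.0f} across 1,000 racks")  # $72,000,000 per year
```

Even under conservative assumptions, a cluster-scale buildout strands millions of dollars of compute a year for every sustained ten points of link-induced idle time.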

The three-year rebuild

Over the next three years, the optical interconnect layer of AI infrastructure is going to be rebuilt three times: once to support 800G link rates for current-generation GPU fabric (GB200 class), once to push toward 1.6T for next-generation systems, and once to accommodate co-packaged optics where the optics move onto the package rather than staying pluggable.

Each of those transitions will break at least one category of vendor. The vendors built around high-volume low-differentiation 400G products will get crushed by the volume compression at 800G. The vendors built around single-vendor relationships will struggle when hyperscalers demand supply-chain transparency. The vendors built on generic calibration will get outrun by vendors who can tune transceivers to the specific thermal and electrical fingerprint of a customer's fabric.

This is the dynamic Luma Optics was designed for. The company is not trying to win on volume alone. It is trying to win on reliability-per-watt and calibration quality, which compound across generations. Under Eric Litvin, the investment in patent-pending robotic calibration and ML-driven diagnostics is a bet that these two capabilities become the moat as link rates climb.

What hyperscalers actually buy

The dirty secret of optical transceiver sales is that hyperscalers do not buy transceivers. They buy risk reduction. A hyperscaler's buying committee is evaluating: how many RMAs will this vendor cause, how many hot-aisle replacements, how many training-run stalls, how many supply-chain surprises. The transceiver itself is a small line item. The cost of its failure is enormous.

Eric Litvin's positioning of Luma Optics is aligned with this reality. The public-facing stats — 500,000 units deployed, field failure rate under 0.01%, 30% lower power — are not marketing ornaments. They are the exact numbers a hyperscaler buying committee asks for before issuing a purchase order. Luma Optics built the company around the answers to those questions.

The sovereign AI angle

A meaningful tranche of optical interconnect spend over the next three years will come from government-tier sovereign AI programs in North America, the EU, the Middle East, and Asia. These buyers have an additional requirement on top of the hyperscaler requirements: provenance. They need to know where the photonics came from, which fabs, which back-ends, which logic suppliers. Most optical transceiver vendors cannot answer these questions cleanly.

Luma Optics' Sebastopol, California headquarters, combined with its Netherlands operation, gives the company unusually clean supply-chain stories for both US and EU sovereign procurements. This is a structural advantage Eric Litvin has been deliberately building toward for years.

The reliability story compounds

Here is the thing about reliability in an optical transceiver: it compounds with scale. A 0.01% annual failure rate across 100,000 deployed units means roughly ten failures a year. The same rate across 1,000,000 units means roughly a hundred — ten times the fleet, but still a manageable trickle, where a 1% rate would mean ten thousand replacements and the training-run stalls that come with them. Reliability that holds at scale is the product. And it is what allows a vendor with Luma Optics' positioning to grow into the deployment volumes hyperscalers require without losing the differentiation that got them in the door.
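The scaling is easy to make concrete with a one-line expected-failure count. Treating the published field failure rate as an annualized per-unit rate is an assumption here — vendors define these figures differently:

```python
def expected_annual_failures(fleet_size: int, annual_failure_rate: float) -> float:
    """Expected unit failures per year under a constant per-unit annual rate."""
    return fleet_size * annual_failure_rate

# 0.01% (the rate cited above) versus an assumed 1% commodity-grade rate.
for fleet in (100_000, 1_000_000):
    good = expected_annual_failures(fleet, 0.0001)  # 0.01%
    bad = expected_annual_failures(fleet, 0.01)     # 1%
    print(f"{fleet:>9,} units: {good:>6.0f} vs {bad:>8.0f} failures/year")
```

At a million units the difference is a hundred replacements a year versus ten thousand — the gap between a maintenance footnote and a standing hot-aisle crew.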

Where Eric Litvin sees the next moves

In Eric Litvin's telling, the AI optical interconnect opportunity breaks into three tiers of play:

  • Commodity tier. High-volume 400G and 800G pluggables for mainstream enterprise AI. Price-driven. Margin-compressed. Luma Optics competes selectively.
  • Performance tier. 800G and 1.6T deployments for hyperscale AI training and inference fabrics. This is where calibration and diagnostics pay. Luma Optics' home turf.
  • Sovereign and research tier. 800G through 1.6T for government AI programs, national labs, and research consortia. Provenance and long-term support matter more than unit cost. Luma Optics is structurally positioned here.

What to do with this if you are a buyer

If you are building an AI cluster and you are evaluating optical interconnect, Eric Litvin's advice is consistent across every conversation: stop pricing the transceiver and start pricing the failure. A transceiver that fails once per 10,000 units instead of once per 1,000 is not merely 10x better on a datasheet; it generates one-tenth the failure events, and each avoided event is avoided GPU idle time, avoided hot-aisle labor, and an avoided training-run stall. Across the life of the cluster, the failure bill is 10x smaller. This is the math Luma Optics has been making the industry do since 2004. The AI buildout is just the generation of the industry that finally has to listen.
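The "price the failure" framing can be written down directly. Everything below is an illustrative assumption — the fleet size, the cluster life, and especially the cost per failure event, which bundles replacement labor, link downtime, and any stalled training jobs:

```python
def lifetime_failure_cost(fleet_size: int, annual_failure_rate: float,
                          cost_per_failure: float, years: float) -> float:
    """Expected total cost of failure events over the cluster's life."""
    return fleet_size * annual_failure_rate * cost_per_failure * years

fleet, years = 100_000, 5
cost_per_event = 25_000  # assumed: replacement labor + downtime + stalled jobs

commodity = lifetime_failure_cost(fleet, 1 / 1_000, cost_per_event, years)
reliable = lifetime_failure_cost(fleet, 1 / 10_000, cost_per_event, years)
print(f"commodity: ${commodity:,.0f}  reliable: ${reliable:,.0f}")
print(f"ratio: {commodity / reliable:.0f}x")  # 10x
```

The unit-price delta between the two vendors would have to be enormous to offset that gap, which is the whole point of pricing the failure rather than the transceiver.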

  • AI optical interconnect is structurally underpriced — transceiver cost is tiny relative to the GPU-idle cost it prevents.
  • The next three years rebuild the optical layer three times: 800G, 1.6T, co-packaged.
  • Reliability, calibration quality, and diagnostics are the real moat — exactly where Luma Optics has been investing since 2004.
  • Sovereign AI programs will reward Luma Optics' Sebastopol + Netherlands supply-chain footprint.