Neurophos Bets $110M That Light, Not Electrons, Will Power AI Inference
The AI industry has a power problem that more GPUs won't solve. Neurophos, a startup building optical processors for AI inference, just raised $110 million to prove that light can do what electrons increasingly cannot: run AI workloads efficiently at scale.
The company's approach uses composite materials, of the kind originally developed for metamaterials and advanced optics, to perform the matrix multiplication that dominates AI inference. Instead of pushing electrons through silicon transistors, Neurophos routes photons through optical components that can process many calculations simultaneously, at the speed of light.
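To see why matrix multiplication is the target, here is a back-of-envelope count of the multiply-accumulate operations in one toy transformer layer. The sizes are illustrative, not drawn from any shipping model:

```python
# Back-of-envelope: multiply-accumulate (MAC) counts for one toy
# transformer layer. Sizes are illustrative, not any real model's.
d_model, d_ff, seq_len = 1024, 4096, 512

# Attention projections (Q, K, V, output): four [seq, d] x [d, d] matmuls
attn_macs = 4 * seq_len * d_model * d_model
# Attention scores and weighted sum: two matmuls quadratic in seq_len
score_macs = 2 * seq_len * seq_len * d_model
# Feed-forward block: two matmuls through the d_ff expansion
ffn_macs = 2 * seq_len * d_model * d_ff

total = attn_macs + score_macs + ffn_macs
print(f"matmul MACs per layer: {total:,}")  # ~7 billion
```

Nearly all of those seven billion operations per layer belong to matrix multiplications, which is exactly the computation optical hardware aims at.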
Why Optical Computing for AI Inference Matters Now
Every major AI lab is discovering the same uncomfortable truth: inference is expensive. Training a frontier model costs tens of millions of dollars once. Running that model for millions of users costs that much every few months—and the bill keeps growing.
The problem is physics. Traditional chips generate heat with every calculation. More calculations mean more heat, which means more cooling, which means more power. Data centers are already hitting the limits of what local grids can supply. Microsoft, Google, and Amazon are all exploring nuclear power for their AI infrastructure. That's not a solution; that's an admission of defeat.
Optical computing sidesteps this entirely. Photons moving through a transparent medium don't dissipate energy as heat the way electrons do fighting resistance in a semiconductor. Photons can carry information in parallel across different wavelengths. And the speed of light is, well, the speed of light: no transistor switching delays.
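A rough way to picture the wavelength parallelism: treat each wavelength as an independent channel through the same fixed weight array, so one physical pass computes a whole batch. The NumPy sketch below models that idea; the channel count and sizes are assumptions for illustration, not Neurophos specifications:

```python
import numpy as np

# Toy picture of wavelength parallelism: each wavelength carries an
# independent input vector through the same fixed weight array, so one
# physical pass computes a whole batch. All sizes are assumed.
n_wavelengths, d_in, d_out = 8, 64, 64

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))          # weights fixed in the optics
X = rng.standard_normal((n_wavelengths, d_in))  # one input vector per wavelength

# Electronically: n_wavelengths separate matrix-vector products, one after
# another. Optically: all channels traverse the same component at once.
Y = X @ W.T
print(Y.shape)  # (8, 64): eight results from a single pass
```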
The Neurophos Approach: Composite Materials for Math
What distinguishes Neurophos from other photonic computing startups is its use of composite materials to perform mathematical operations. While competitors like Lightmatter and Luminous Computing have built optical chips using more conventional photonic components, Neurophos claims its material science breakthrough enables significantly smaller form factors.
The technical challenge in optical AI chips has always been encoding neural network weights into physical properties of light—things like phase, amplitude, and wavelength. Traditional approaches require relatively large optical components to maintain precision. Neurophos's composite materials apparently allow this encoding at much smaller scales, which matters enormously for practical deployment.
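The precision stakes are easy to demonstrate. The sketch below quantizes a weight matrix to a small number of levels, standing in for whatever an analog optical encoding can reliably distinguish; the 4-bit level count is an assumption for illustration, not a Neurophos specification:

```python
import numpy as np

# Sketch of the precision problem: an analog optical encoding supports only
# a limited number of distinguishable levels. Quantizing weights onto that
# grid adds error the system has to tolerate. 4 bits is an assumed figure.
def quantize(w, bits=4):
    levels = 2 ** bits
    lo, hi = w.min(), w.max()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((w - lo) / step) * step

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
x = rng.standard_normal(256)

exact = W @ x
approx = quantize(W) @ x
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative output error at 4 bits: {rel_err:.2%}")
```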
A chip that requires exotic cooling or takes up half a server rack isn't going to replace Nvidia's H100s. A chip that fits existing form factors and power envelopes might.
The $110M Bet on Inference Infrastructure
The funding round, substantial for a hardware startup working on a novel computing architecture, reflects investor conviction that the AI inference market will demand alternatives to GPUs. The economics are stark: Nvidia's data center revenue hit $47 billion in fiscal 2024, with gross margins above 70%. Every major cloud provider and AI company is looking for ways to reduce that dependency.
Optical computing has been "five years away" for decades. What's different now is that AI inference has created a specific, massive market with clear pain points. The workloads are well-defined (transformer inference), the bottlenecks are understood (memory bandwidth, power consumption), and the customers are desperate (everyone running AI at scale).
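The memory-bandwidth bottleneck in particular reduces to simple arithmetic: during autoregressive decoding, each generated token has to stream roughly all of the model's weights from memory, which caps throughput no matter how much compute sits on the chip. A back-of-envelope version, with illustrative numbers:

```python
# Why memory bandwidth is the named bottleneck: in autoregressive decoding,
# each generated token reads roughly all model weights from memory once.
# All numbers below are illustrative assumptions, not measurements.
params = 70e9            # a 70B-parameter model (assumed)
bytes_per_param = 2      # fp16/bf16 weights
hbm_bandwidth = 3.35e12  # ~3.35 TB/s, roughly an H100 SXM's HBM3 (approximate)

weight_bytes = params * bytes_per_param
ceiling = hbm_bandwidth / weight_bytes  # upper bound at batch size 1
print(f"bandwidth-bound ceiling: ~{ceiling:.0f} tokens/s per chip")  # ~24
```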
Neurophos doesn't need to build a general-purpose optical computer. It needs to build something that runs inference faster and cheaper than GPUs for specific model architectures. That's a narrower, more achievable goal.
The Competition and the Catch
Neurophos enters a crowded field. Lightmatter raised $400 million in 2024 for its photonic interconnects and computing platforms. Luminous Computing has been working on photonic AI accelerators since 2018. Ayar Labs focuses on optical I/O for chips. Intel, IBM, and others have photonics research programs.
The catch with all optical computing approaches is the conversion overhead. Neural networks live in the digital domain—models are stored as digital weights, inputs arrive as digital data, outputs need to be digital for downstream processing. Every time you convert between digital and optical domains, you lose efficiency and add latency. The optical computation has to be significantly better to overcome these conversion costs.
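A toy energy model makes the breakeven logic concrete. For an n-by-n matrix-vector product, 2n conversions amortize over n^2 multiply-accumulates, so the optics only win once the matrices are large enough. Every energy figure below is an assumed order of magnitude chosen for illustration, not a measured number:

```python
# Toy energy model of the conversion-overhead argument. Every figure is an
# assumed order of magnitude for illustration, not a measured number.
pj = 1e-12  # picojoule
e_dac = 4.0 * pj           # digital -> optical conversion, per value (assumed)
e_adc = 4.0 * pj           # optical -> digital conversion, per value (assumed)
e_optical_mac = 0.05 * pj  # per multiply-accumulate in the optics (assumed)
e_gpu_mac = 0.5 * pj       # electronic baseline per MAC (assumed)

# For an n x n matrix-vector product, 2n conversions amortize over n^2 MACs.
def optical_energy(n):
    return n * (e_dac + e_adc) + n * n * e_optical_mac

def gpu_energy(n):
    return n * n * e_gpu_mac

for n in (16, 64, 256, 1024):
    ratio = optical_energy(n) / gpu_energy(n)
    print(f"n={n:5d}: optical/GPU energy ratio = {ratio:.2f}")
```

Under these assumed numbers, conversions erase the advantage at small matrix sizes and fade into noise at large ones, which is one reason optical accelerators target big, dense layers.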
Neurophos's composite material approach might reduce some of this overhead, but the fundamental challenge remains. The company will need to demonstrate not just that optical inference works in the lab, but that it beats GPUs on real workloads in real deployments with real conversion costs included.
What Success Looks Like
If Neurophos succeeds, the implications extend beyond one company's technology. Optical inference chips could enable AI deployment in power-constrained environments—edge devices, mobile platforms, regions with limited grid capacity. They could change the economics of running AI services, potentially making capabilities that currently require expensive cloud inference available more broadly.
The more likely outcome is that optical computing becomes part of a heterogeneous future—GPUs for some workloads, custom ASICs for others, optical accelerators for specific inference tasks where power efficiency matters most. The AI infrastructure stack is diversifying, and there's room for multiple winners.
For now, Neurophos has $110 million and a hypothesis: that composite materials can do for inference what physics makes very hard to do with electrons alone. In an industry burning billions of dollars on power bills, that hypothesis is worth testing.