Neurophos and the Case for Computing With Light
For a long time, the story of computing was mostly a story of better silicon.
That model has not disappeared, but AI has made its limits much harder to ignore. The problem is no longer just how to build better models. It is how to power them, cool them, and deploy them at scale without turning AI infrastructure into an even larger industrial project than it already is. Neurophos is interesting because it is aiming directly at that bottleneck. The company says its optical processing units, or OPUs, are designed to be up to 100x faster and 100x more energy efficient than current leading GPUs for AI inference. (Neurophos)
What makes that claim worth taking seriously is that Neurophos is not presenting photonics as a vague future concept. It is describing a specific architecture built around micron-scale metamaterial optical modulators, which it says are 10,000x smaller than previous photonic elements and allow more than one million optical processing elements on a single chip. In January 2026, the company announced a $110 million Series A, bringing total funding to $118 million, with investors including Gates Frontier, M12, Bosch Ventures, Aramco Ventures, Space Capital, and Carbon Direct Capital. (Neurophos)
Why this matters now
The AI industry is increasingly constrained by infrastructure, not just algorithms.
Inference is where that becomes obvious. Training runs get attention, but serving models over and over again at scale is where cost, latency, power delivery, and cooling all start to dominate. Neurophos is effectively making the argument that conventional silicon-only scaling is running into a wall, and that light offers a different path around it. Its public materials frame the OPU as a practical replacement for GPU-heavy inference infrastructure, with evaluations starting in 2026, first systems targeted for early 2028, and a production ramp planned for mid-2028. (Neurophos)
That timeline is important because it keeps the story grounded. This is not a product that has already arrived. It is a company claiming a very large leap, with a roadmap that is specific enough to be judged.
What Neurophos is building
At a high level, the idea is elegant.
Instead of doing all of the heavy math with electrons moving through conventional digital logic, Neurophos uses coherent optical computing. Data is encoded into light, and the chip uses tunable optical elements to manipulate phase and amplitude so that interference performs the matrix operations that dominate neural network inference. The official material describes the core breakthrough as those micron-scale metamaterial modulators, built on standard silicon, while independent reporting describes the broader system as a large optical systolic array operating at 56 GHz. (Neurophos)
This is one of the reasons photonic computing keeps resurfacing in AI. Matrix-heavy workloads are a natural place to look for analog optical acceleration. If the architecture works, the reward is not just speed. It is lower energy per useful unit of compute and lower heat for the same workload. That is the real prize. (Neurophos)
The other detail I think is worth highlighting is that Neurophos is not describing a purely optical computer. This is a hybrid electronic-photonic system. Electronics still matter for configuration and control, while optics handle the high-throughput core computation. That makes the story more believable than a total-replacement narrative, because most real hardware transitions happen as hybrids first. (Neurophos)
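To make that division of labor concrete, here is a toy numerical sketch, written by me rather than taken from Neurophos, of what "interference performs the matrix operations" can mean in a hybrid system: activations are encoded into the amplitude and phase of an optical field, a programmable optical mesh applies the weight matrix as a linear transfer function, and detectors hand a slightly noisy result back to the digital side.

```python
import numpy as np

rng = np.random.default_rng(0)

# Digital side: a small weight matrix and an input activation vector,
# exactly as they would exist in an ordinary inference workload.
W = rng.normal(size=(4, 8))          # weights (kept real here for simplicity)
x = rng.normal(size=8)               # input activations

# "Optical" side, heavily idealized: encode activations as optical field
# amplitudes (phase would let a real system carry signed or complex values)
# and treat the programmable interferometer mesh as a linear transfer
# matrix. For a coherent system the output field is just W @ field, which
# is precisely the matrix-vector product that dominates inference.
field_in = x.astype(complex)
field_out = W @ field_in

# Analog reality: modulators and detectors have finite precision, so model
# that as a small additive noise term on the detected result.
detected = field_out.real + rng.normal(scale=1e-2, size=field_out.shape)

# Digital side again: compare against the exact digital matmul.
exact = W @ x
print("max abs error vs. digital matmul:", np.max(np.abs(detected - exact)))
```

The sketch is not the architecture, but it captures the split the company is describing: the linear algebra lives in the analog optical path, while encoding, calibration, and everything around it stays electronic.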
Why inference is the right first target
One of the more useful technical distinctions here is that photonic compute appears much better suited to inference than to training, at least in the nearer term.
That matches what outside reporting has said about the technology. Reuters noted that one of the historic problems for photonic computing has been precision, especially when very small values get lost or distorted. The Register similarly described Neurophos as aiming at inference acceleration rather than the full precision demands of modern training. That makes sense. Analog noise, drift, and jitter are much easier to tolerate in forward-pass inference than in backpropagation-heavy training workflows. (Reuters)
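A tiny simulation, mine rather than anything from Reuters or The Register, makes the small-values problem tangible: model the analog multiply with a fixed additive noise floor, and the error is negligible for ordinary activation magnitudes but swamps the signal once values sink toward that floor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model an analog multiply-accumulate with a fixed additive noise floor,
# standing in for detector noise, drift, and limited modulator precision.
NOISE_FLOOR = 1e-3

def analog_matvec(W, x):
    exact = W @ x
    return exact + rng.normal(scale=NOISE_FLOOR, size=exact.shape)

W = rng.normal(size=(256, 256))
x_normal = rng.normal(size=256)          # ordinary activation magnitudes
x_small = 1e-4 * rng.normal(size=256)    # values sinking toward the floor

for name, x in [("ordinary values", x_normal), ("small values", x_small)]:
    exact = W @ x
    noisy = analog_matvec(W, x)
    rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
    print(f"{name}: relative error ~ {rel_err:.2%}")

# Typical outcome: a tiny fraction of a percent for ordinary magnitudes,
# versus an answer that is largely noise for the small ones. A single
# forward pass can shrug off the former; training, which repeatedly
# accumulates many small gradient updates, is far less forgiving.
```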
That is actually a strength in the near term, not a weakness.
Inference is where AI becomes an operating cost. It is the part that spreads into products, agents, assistants, search, robotics, and business workflows. If you can make inference radically cheaper and more power-efficient, you do not need to replace the entire training stack on day one to matter.
What makes this more credible than generic photonics hype
Two things.
First, the architecture is concrete enough to discuss in engineering terms. Neurophos is not only saying “we use light.” It is saying it uses metasurface modulators, integrated on standard silicon, with a path that it presents as CMOS-compatible and manufacturable on existing semiconductor lines rather than requiring entirely new fabs. That kind of compatibility claim matters because deep-tech hardware lives or dies on manufacturability. (Neurophos)
Second, the company is not alone in this general direction. Lightmatter has already become one of the most visible photonics players in the AI hardware space. Reuters reported in April 2025 that Lightmatter, valued at $4.4 billion after raising $850 million, had shown a chip that uses light for computation and data movement, though its CEO also said it could be about a decade before the technology goes mainstream. Lightmatter’s own site positions Envise as a photonic computing platform for AI, aimed at reducing datacenter operating cost and carbon footprint. (Reuters)
That comparison is useful because it shows the market structure more clearly. Lightmatter looks more mature and nearer-term. Neurophos looks more like the higher-upside compute moonshot.
What is still uncertain
This is still a company with very large claims and a product roadmap that is not yet in broad deployment.
There are at least four unresolved questions.
The first is real-world precision and reliability. Photonic systems have historically struggled with numerical precision, calibration, and the handling of small values, and Reuters explicitly described that as a longstanding problem in the field. (Reuters)
The second is software and systems integration. Even if the optical core is impressive, a new accelerator architecture still needs memory strategy, compiler support, orchestration, packaging, and developer tooling. Neurophos itself implicitly recognizes this by talking about OPU systems and evaluation timelines rather than just publishing a chip photo and calling it done. (Neurophos)
The third is workload specificity. Some benchmark claims in advanced hardware are very narrow. A design can be astonishing for certain matrix-heavy inference paths and much less transformative across broader, mixed, messy production workloads. The Register's reporting on Neurophos is exciting, but it still reads as an early technical profile rather than a finished deployment story. (The Register)
The fourth is timing. Neurophos is promising evaluations in 2026 and initial systems in 2028. That is ambitious, but it also means the company still has a lot to prove before this becomes ordinary infrastructure. (Neurophos)
If this works, the implications are much bigger than faster chips
This is where the story opens up.
If Neurophos, or anything like it, works at scale, the biggest consequence may be that AI inference stops being so infrastructure-hungry.
That would affect datacenters first. Lower power draw and lower cooling demand would change rack density, facility design, site economics, and operating margins. Neurophos’s own marketing is explicitly aimed at this problem, positioning the OPU as a way to break through the “power wall” of current AI infrastructure. (Neurophos)
It would also affect software economics. If the cost of serving inference drops materially, then a much larger class of AI products becomes viable. More always-on copilots, richer multimodal tools, more real-time assistants, more autonomous workflows, and more background intelligence become easier to justify when the ongoing cost per interaction falls. This is an inference story before it is a superintelligence story. (Neurophos)
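To show the shape of that economics argument, here is a deliberately crude back-of-envelope calculation; every number in it is a hypothetical placeholder I chose for illustration, not a Neurophos figure or a measured benchmark. It simply asks what the electricity bill for serving a million tokens looks like before and after a 100x efficiency gain.

```python
# Purely illustrative arithmetic; every number is a hypothetical placeholder,
# not a measured or vendor-supplied figure.
JOULES_PER_TOKEN_TODAY = 0.5      # assumed energy per generated token on GPUs
ELECTRICITY_USD_PER_KWH = 0.10    # assumed electricity price
EFFICIENCY_GAIN = 100             # the headline 100x efficiency claim

def electricity_cost_per_million_tokens(joules_per_token):
    kwh = joules_per_token * 1_000_000 / 3_600_000   # joules -> kWh
    return kwh * ELECTRICITY_USD_PER_KWH

today = electricity_cost_per_million_tokens(JOULES_PER_TOKEN_TODAY)
after = electricity_cost_per_million_tokens(JOULES_PER_TOKEN_TODAY / EFFICIENCY_GAIN)
print(f"electricity per 1M tokens, assumed today: ${today:.4f}")
print(f"electricity per 1M tokens, at 100x:       ${after:.6f}")
```

Electricity is only one slice of serving cost, and the inputs above are invented, but the structure of the calculation is the point: drop energy per token by two orders of magnitude and the per-interaction cost of always-on AI features starts to look like a rounding error.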
Over time, it could also affect edge systems. The first-order impact is clearly on datacenter and enterprise infrastructure, but if photonic approaches mature, the downstream effect could be AI systems that are smaller, cooler, and easier to deploy closer to where the work actually happens. I would not promise “full local LLMs on phones by 2032” as a factual forecast, but I do think the general direction is plausible: cheaper, cooler inference infrastructure eventually changes what can move out of the cloud. That inference is mine, based on the efficiency claims and the industry direction, rather than something Neurophos has already demonstrated. (Neurophos)
And that is where this article starts connecting to the Donut Lab piece.
If Donut Lab represents one possible future where storing energy becomes lighter, safer, and more flexible, then Neurophos represents one possible future where using energy for compute becomes far more efficient. One reduces the friction of storing power. The other reduces the friction of spending it on intelligence. That is the kind of compounding shift that can reshape whole product categories.
The real promise of photonic compute is not just faster AI. It is lower friction between intelligence and the physical infrastructure required to serve it.
My take right now
I do not think Neurophos is a proven revolution.
I do think it is one of the more interesting AI hardware stories right now.
The company is targeting a real bottleneck. Its technical claims are specific enough to discuss seriously. Its investor base is credible. Its roadmap is concrete enough to judge. And the broader photonics market, especially with players like Lightmatter, makes it clear this is not just one isolated company spinning a fantasy. (Neurophos)
At the same time, this is still early. Precision, tooling, manufacturability, and deployment all matter. The most dramatic parts of the future vision should still be treated as speculation, not settled fact. (Reuters)
But the core idea is strong.
If AI keeps spreading, the world is going to need a more efficient way to run it than just stacking more power-hungry conventional hardware into bigger buildings.
That is why Neurophos is worth watching. (Neurophos)
