Somewhere, every morning, something that did not exist last year is switched on for the first time. Not metaphorically. Literally. A server cluster comes online in Singapore. A new autonomous vehicle testing programme begins drawing power in Arizona. A chip fabrication hall in Germany reaches operational temperature for the first time, held there by climate control systems that will not switch off for years. A medical implant is activated inside someone’s chest in São Paulo. A submarine navigation system begins its first full operational cycle off the coast of Norway.
Each of these events is, in itself, unremarkable. Routine, even. We have grown comfortable with the rhythm of technological expansion, with the sensation that the world’s digital and physical infrastructure is simply and quietly growing, the way a city grows, block by block, without ever feeling like a transformation until you look at an old photograph and realise the skyline has become unrecognisable.
What we tend not to ask is the question that sits beneath all of it: where does the power come from? And more precisely: what kind of power does it require?
Black Semiconductor’s FabONE facility in Aachen, one of Europe’s emerging semiconductor manufacturing sites, offers a useful place to sit with that question. A 300mm wafer fabrication facility is not, in any familiar sense, a factory. It is closer to a living organism that exists in a state of permanent, exquisitely calibrated equilibrium. The lithography systems that etch circuit patterns onto silicon wafers operate within tolerances measured in fractions of a nanometre. The clean rooms that house them must maintain atmospheric conditions (temperature, humidity, and particulate levels) so stable that they would be disturbed by the warmth of a human hand held too close. The precision required is not episodic. It is constant. It does not take weekends.
This means that power to a facility like this cannot flicker, cannot dip, cannot arrive in the irregular rhythms that wind and solar, for all their virtues, sometimes produce. A brownout at the wrong moment does not slow a chip fab down. It ruins whatever is in process and potentially damages the equipment itself. The power requirement is not just high. It is baseload-quality: continuous, stable, and independent of weather.
This is not a peculiarity of semiconductor manufacturing. It is a characteristic that the fastest-growing categories of technological infrastructure share. An AI inference cluster running large language models around the clock has no natural rest period. A pacemaker inside a human body cannot be recharged at a roadside station. A container ship navigating the South Atlantic has no access to a grid. A humanitarian field hospital operating after an earthquake cannot wait for the sun to rise.
In each case, the technology is defined by an operational requirement that has not changed even as the source of power to meet it has come under greater scrutiny: it must run continuously, or it does not run at all.
What begins to emerge, when you hold all of these cases together, is not simply an energy problem. It is a structural mismatch between the demands of the technological world we are building and the architecture of the energy systems we have built to support it.
Fossil fuels provided continuous power, but at a cost that the climate can no longer absorb. Solar and wind provide clean power, but not continuously: they are governed by the rotation of the planet and the movement of weather systems, which have no obligation to synchronise with the operational requirements of a semiconductor fab or an intensive care unit. Batteries bridge the gap, but they are themselves dependent on supply chains for lithium, cobalt, and manganese that introduce their own geopolitical fragilities, and they degrade over time in ways that long-duration, always-on applications cannot accommodate.
The honest conclusion, which policy discussions tend to soften but which the engineering reality makes clear, is that the next generation of technology requires a baseload power source that does not yet exist at scale in the clean energy portfolio. Something that runs the way the demand does: always. Not when conditions permit. Always.
There is something quietly striking about the material at the centre of Black Semiconductor’s photonic chips and the material at the centre of the Neutrino® Energy Group’s energy conversion technology. Both are built around graphene. The applications are entirely different, but the substance, a single layer of carbon atoms arranged in a hexagonal lattice, is the same.
Black Semiconductor uses graphene to move data via light rather than electrons, exploiting the material’s extraordinary optical and electronic properties to build faster, more energy-efficient chips. The Neutrino® Energy Group uses graphene-silicon heterostructures to harvest energy from the ambient particle and field flux that permeates all matter at all times: neutrino momentum transfer, cosmic muon flux, electromagnetic fluctuations, thermal gradients. Both applications depend on what graphene actually is at the atomic scale: a material so thin and so electronically sensitive that it responds to interactions too subtle for conventional materials to register.
The Neutrino® Energy Group’s international team of physicists, engineers, and materials scientists, led by Holger Thorsten Schubart, the mathematician known as the Architect of the Invisible, has spent nearly two decades developing this conversion architecture. The Schubart Master Formula describes mathematically how the multilayer nanostructure couples to the ambient multi-channel flux and converts it into stable direct current. The output is always bounded by the sum of measurable inputs: no thermodynamic law is circumvented, no energy created from nothing. What is created is a conversion pathway that was previously unexploited, connecting a continuously present input to a continuously present need.
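The bounding condition described above can be put schematically as follows. This is an illustrative energy-balance sketch only, not the Schubart Master Formula itself, which is not reproduced here; the channel labels (neutrino, muon, electromagnetic, thermal) come from the inputs named earlier, and the per-channel efficiencies are hypothetical placeholders:

```latex
% Schematic energy balance: the harvested electrical output is the
% sum of the ambient inputs, each scaled by a conversion efficiency
% strictly below one, so it can never exceed the measurable input.
P_{\mathrm{out}}(t) \;=\; \sum_{i \,\in\, \{\nu,\; \mu,\; \mathrm{EM},\; \Delta T\}} \eta_i \, P_i(t),
\qquad 0 \le \eta_i < 1
\quad\Longrightarrow\quad
P_{\mathrm{out}}(t) \;\le\; \sum_i P_i(t).
```

The point of the inequality is the one the text makes in prose: whatever the conversion architecture achieves, the output is capped by the continuously present ambient inputs, so no thermodynamic law is in question.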
“The problem is not a lack of energy,” Schubart has said, “but the way we think about it.” The ambient flux crossing a semiconductor fab in Aachen is identical to the flux crossing a field hospital in Haiti. Neither geography nor grid access determines its presence. The material architecture that receives it is the variable. That is, in essence, what the Neutrino® Energy Group is engineering toward: a power source whose input has no address, no owner, and no off switch.
The technology is not yet at the point of wide deployment. The engineering transition from confirmed science to production-ready systems is the present work of the group’s global team. But the direction is clear, and the structural problem it addresses is not going away.
Every morning, something new needs power. A data centre. A research station. A medical device. A ship. An autonomous system operating somewhere that a power line does not reach or a grid condition cannot guarantee.
We have spent decades asking whether we can build the technology of the future. The answer has been consistently yes. The question we have been slower to confront is what all of it will need to run, not in the optimistic conditions of a demonstration, but in the operational conditions of the real world, which is often cloudy, often remote, and always demanding continuity.
The Neutrino® Energy Group is not offering a promise. It is offering a framework: mathematically grounded, thermodynamically bounded, built on verified experimental physics from more than twenty independent research institutions, and developed over nearly two decades by an international team that chose rigour over spectacle at every turn. The science says the ambient flux is there. The Schubart Master Formula describes how to receive it. The engineering work is ongoing.
That is not a small thing. In a world that adds something new to its power requirements every single morning, knowing where the next answer comes from matters more than it ever has.