The Genesis Mission
On November 24, 2025, President Trump signed an executive order launching the Genesis Mission. The White House framed it as a Manhattan Project for artificial intelligence — a national mobilisation to achieve ‘AI-accelerated innovation and discovery’ across scientific domains.
The order directs the Department of Energy to build a unified AI platform, train foundation models on federal datasets, and deploy AI agents to ‘test new hypotheses, automate research workflows, and accelerate scientific breakthroughs’.
Read in isolation, it sounds like ambitious science policy. Read against the infrastructure quietly assembled over the past year, it completes a circuit.
I’ve spent months on this Substack documenting how three parallel developments — AI compute treated as national infrastructure, payment systems upgraded to carry structured compliance data, and government-wide AI governance frameworks — created the technical substrate for Al Gore and Leon Fuerth’s Anticipatory Governance. Policy encoded into transaction protocols, compliance verified algorithmically at the moment of payment, rules that update through AI rather than legislative debate.
All the pieces were in place, but they lacked the central structure — a ‘global brain’, for lack of a better term. Genesis provides one.
The order establishes the American Science and Security Platform. Michael Kratsios, the administration’s science and technology lead, described the aim as ‘fusing massive federal data sets, advanced supercomputing capabilities and world-leading scientific facilities’. This platform will integrate DOE supercomputers, cloud computing environments, and national laboratory resources into a unified system. It will host ‘domain-specific foundation models’ trained on what the order describes as ‘the world’s largest collection’ of scientific datasets, accumulated through decades of federal investment. These aren’t generic chatbots. They’re specialised AI systems designed to model complex domains — advanced manufacturing, biotechnology, critical materials, nuclear energy, quantum computing, semiconductors.
More significantly, the platform will run AI agents. The order specifies systems that ‘explore design spaces, evaluate experimental outcomes, and automate workflows’. This is the learning loop made explicit. Feed data in, let AI systems generate hypotheses, test them against simulations and experiments, refine the models, repeat. The order even mandates review of ‘robotic laboratories and production facilities with the ability to engage in AI-directed experimentation’ — infrastructure where AI doesn’t just recommend actions but carries them out through physical experimentation.
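The closed loop described above — propose, test, refine, repeat — can be sketched in a few lines. This is a minimal illustration, not anything from the order itself: the parameter names, the stand-in ‘experiment’, and the hill-climbing strategy are all invented for the example.

```python
import random

random.seed(0)  # deterministic run for illustration

def propose(params):
    """Hypothesis generation: perturb the current design parameters."""
    return {k: v + random.uniform(-0.1, 0.1) for k, v in params.items()}

def evaluate(params):
    """Stand-in for a simulation or robotic experiment.
    Score peaks when every parameter is near 1.0 (an arbitrary target)."""
    return -sum((v - 1.0) ** 2 for v in params.values())

def learning_loop(initial, iterations=200):
    """Generate a hypothesis, test it, keep whatever improves the score."""
    best, best_score = initial, evaluate(initial)
    for _ in range(iterations):
        candidate = propose(best)
        score = evaluate(candidate)
        if score > best_score:  # refinement step: retain what worked
            best, best_score = candidate, score
    return best, best_score

params, score = learning_loop({"temperature": 0.0, "pressure": 0.0})
print(params, score)
```

Nothing in the loop deliberates; it only optimises against whatever `evaluate` rewards — which is the point the surrounding argument makes about objectives set by whoever defines the scoring function.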
Within 270 days, the Secretary of Energy must demonstrate ‘initial operating capability’ on at least one national challenge. Annual reports will track progress, and the challenge list will be updated yearly ‘to reflect progress achieved, emerging national needs, and alignment with Administration priorities’. This is iterative governance by design. Not policy debated and passed, but policy evolved through continuous optimisation against targets set by the executive branch.
The institutional architecture deserves attention. The order centralises authority under the Department of Energy, with a single political appointee potentially overseeing day-to-day operations. The platform operates under ‘security requirements consistent with its national security and competitiveness mission’ — classification standards, supply chain controls, federal cybersecurity mandates. Access requires ‘the highest standards of vetting and authorization’. This is not open scientific infrastructure. It’s a secured system where participation is conditional on clearance.
External public-private partnerships for commercialisation are contemplated but controlled. This, alone, sounds very much aligned with Eduard Bernstein’s principles, where the ‘controller’ is the one in charge.
The order directs the Secretary to develop ‘standardised partnership frameworks’ including data-use agreements and model-sharing agreements, with ‘uniform and stringent’ access standards for non-federal collaborators. Intellectual property policies will govern ‘innovations arising from AI-directed experiments’. Universities and private companies can participate, but on terms the platform dictates. Companies like Nvidia, Dell, and AMD are already expanding AI capacity at national laboratories through joint systems and investments — a deepening fusion of federal research infrastructure and corporate AI capability.
Now connect this to what already exists.
Executive Order 14318, signed in July, elevated hyperscale data centers as strategically vital to national security and streamlined federal permitting. That order provided the compute substrate — the raw processing power such a system requires. Genesis now directs that compute toward a specific purpose: training and running the models that will optimise national challenges.
Executive Order 14247, signed in March, mandated electronic payments across federal transactions while explicitly disclaiming any intent to create a central bank digital currency. Combined with Fedwire’s mid-year migration to ISO 20022 messaging standards, this created payment rails capable of carrying rich, structured data — the kind of data against which compliance rules can be checked in real time.
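To make ‘compliance checked in real time against structured payment data’ concrete, here is a hypothetical sketch. The field names loosely echo ISO 20022 concepts (purpose codes, structured remittance information), but the rules and the approved-code list are invented for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    currency: str
    purpose_code: str   # ISO 20022 defines external purpose-code lists (e.g. "SALA" for salary)
    remittance_ref: str # structured remittance reference carried in the message

# Invented example rules: each is a (description, predicate) pair.
RULES = [
    ("currency must be USD",
     lambda p: p.currency == "USD"),
    ("large payments need a remittance reference",
     lambda p: p.amount < 10_000 or bool(p.remittance_ref)),
    ("purpose code must be on the approved list",
     lambda p: p.purpose_code in {"SALA", "TAXS", "SUPP"}),
]

def check_at_payment_time(p: Payment) -> list[str]:
    """Return the rules this payment violates; an empty list means it clears."""
    return [name for name, rule in RULES if not rule(p)]

violations = check_at_payment_time(
    Payment(amount=25_000.0, currency="USD", purpose_code="SALA", remittance_ref=""))
print(violations)  # the large payment lacks a remittance reference
```

Because the rules live in a data structure rather than in statute text, updating them is a code deployment, not a legislative act — which is the asymmetry the argument above turns on.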
OMB Memorandum M-25-21, issued in April, established AI governance requirements across federal agencies — policies, inventories, controls. GSA’s USAi platform, launched in August, gave agencies a standardised environment to evaluate and deploy AI capabilities. These created the governance layer, the institutional framework for AI-assisted implementation across government.
Genesis adds the analytical engine. Foundation models that understand complex domains. AI agents that can evaluate outcomes against objectives, enabling continuous experimentation and refinement. If the payment rails are the nervous system and the governance frameworks are the skeleton, Genesis is the brain — the digital twin — that processes inputs and generates outputs.
The original Manhattan Project operated under extreme secrecy with minimal oversight. Invoking it frames AI development as a national security emergency where speed matters more than deliberation. The order’s compressed timelines — 60 days to identify challenges, 90 days to inventory compute resources, 270 days to initial capability — reflect this urgency. There is no time for extended public debate when we are in a ‘race for global technology dominance’.
It’s much easier to push through controversial matter during a ‘crisis’.
But nuclear weapons were a discrete product. You build the bomb or you don’t. The administration also invokes Apollo — calling Genesis ‘the largest marshaling of federal scientific resources since the Apollo program’. But Apollo had a finish line. Anticipatory Governance infrastructure is continuous. Once operational it learns, adjusts, optimises… forever. The system improves itself against whatever objectives are programmed, and those objectives are set not by legislation answerable to public demand, but by executive discretion, updated annually to reflect ‘Administration priorities’.
When things go wrong, who will accept responsibility?
The order never mentions adaptive governance or algorithmic policy. Then again, it doesn’t have to. When AI agents model national challenges, evaluate experimental outcomes, and automate workflows, and when the challenge list updates annually based on what the models learned — you have policy that evolves through optimisation, not deliberation. The implementation layer becomes the policy-making layer. Perhaps not just yet, but it’s only a matter of time.
Critics will note that this is ‘just an executive order’. But the order doesn’t create the capabilities from nothing. It coordinates and directs capabilities that already exist across national laboratories, federal datasets, and compute infrastructure built over decades — what the administration itself now describes as a ‘whole-of-government’ mobilisation around a unified, closed-loop AI platform. It provides institutional form to latent technical possibilities, and wires them into a single mechanism. Once operational systems demonstrate results, the political logic shifts: much like Musk’s attempts to optimise government spending through AI, it will be ‘sold’ through early success.
Success becomes the argument against restraint, especially when the same architecture can, in principle, be scaled and networked into a wider closed-loop ‘Spaceship Earth’ management system.
The Genesis Mission is presented as science policy, but it actually is governance architecture.
Leon Fuerth’s Anticipatory Governance brain is coming online.