The most dangerous revolutions happen in academic journals, not on barricades. While public attention fixates on AI chatbots and autonomous vehicles, a more fundamental transformation is quietly unfolding across research institutions worldwide — the systematic construction of what we might call techno-governance infrastructure.
And — yes. All of this can be sourced via papers published in contemporary research journals.
Each paper or project addresses a discrete problem (making AI more ethical, improving policy drafting, optimising resource allocation), and viewed in isolation these efforts appear benign — perhaps even beneficial. But assembled together, they reveal a comprehensive pipeline for automating human judgment out of governance entirely. The architecture operates through five integrated stages, each supported by extensive academic literature that provides both theoretical justification and implementation roadmaps.
What emerges is not accidental convergence but systematic preparation for a post-human administrative state.
Before diving into the details, here is a brief overview of the five stages that progressively squeeze human agency down to exception handling, and over time even those exceptions grow rare:
Stage 1: Digital Twins → Quantifiable Models.
Every domain of human activity is converted into computational digital twin models, creating simulations of real-world systems at scale.

Stage 2: Computational Ethics → Automatic Moral Judgments.
Moral reasoning is encoded into algorithms, with AI engines making context-sensitive ethical decisions in real time.

Stage 3: AI Drafting → Policy & UN Resolutions.
Large language models (LLMs) generate policy documents and even international resolutions, effectively automating legislative and diplomatic drafting.

Stage 4: Programmable Infrastructure → Enforced in the Wild.
Smart infrastructure such as financial networks and IoT-enabled cities automatically enforces decisions through code — compliance is baked into the environment.

Stage 5: Feedback Loop → Continuous Refinement.
Data from enforcement is fed back into the digital twins and ethics engines, creating self-improving governance systems. Humans are increasingly sidelined, reduced to supervising only when the machines flag an exception beyond their programmed scope.
Each stage is underpinned by active research, and pilot implementations already exist.
Taken together, these works document a techno-governance architecture in which humans become superfluous to day-to-day decision-making, relegated to handling only the system’s exceptions.
Stage 1: Digital Twins → Quantifiable Models
Totalising Simulation. The foundation begins with converting every corner of society into a quantifiable digital model. So-called digital twins1 create a virtual mirror of real-world domains — agriculture2, health3, urban infrastructure4, even entire ecosystems5 — enabling algorithmic monitoring and simulation. The United Nations has explicitly advocated this approach: a 2023 UN-backed Action Plan for a Sustainable Planet in the Digital Age6 calls to ‘Build Planetary Digital Twin’7 systems that can ‘measure, monitor and model the health of the planet’s biosphere and interactions with economic and social systems’8. In other words, develop shared, interoperable models spanning natural and human activity, at planetary scale. A working paper from the UN University that same year on ‘Governing Foundation Models’9 similarly lays groundwork for cross-agency AI models that could serve as the basis for global digital twins of society.
On the surface, digital twins are decision-support tools: they ‘answer causal queries through intervention analysis’ and ‘enhance evidence-based policymaking’10. But in practice they represent reality by proxy. When policy decisions flow from model outputs rather than direct human judgment, governance becomes fundamentally abstracted from the lived experiences of people. The digital twin doesn’t just mirror reality; it increasingly defines what counts as real. If a community’s on-the-ground observations conflict with the model’s forecasts, it is the model’s data that will drive action. Over time, authority shifts from human to machine interpretation.
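To make ‘intervention analysis’ concrete, here is a minimal, purely illustrative sketch (every variable, relationship and number in it is invented): a toy ‘urban twin’ reduced to a couple of assumed structural equations, queried by forcing a policy variable to a fixed value and simulating the outcome.

```python
# Illustrative sketch only: a toy "digital twin" as a tiny structural model,
# showing what answering a causal query "through intervention analysis" can
# look like in code. All relationships and numbers are hypothetical.

import random

def simulate(congestion_charge: float, n: int = 10_000) -> float:
    """Estimate average commute time under a forced (do-intervention) policy value."""
    total = 0.0
    for _ in range(n):
        weather_delay = random.gauss(5, 2)                      # exogenous noise, minutes
        car_share = max(0.1, 0.7 - 0.04 * congestion_charge)    # assumed behavioural response
        commute = 20 + 30 * car_share + weather_delay           # assumed structural equation
        total += commute
    return total / n

baseline = simulate(congestion_charge=0.0)
policy   = simulate(congestion_charge=10.0)
print(f"Predicted commute: {baseline:.1f} min -> {policy:.1f} min under the charge")
```

The point is not the toy model but the shift it encodes: the policy question is answered by the simulation, and whoever writes the assumed equations decides what ‘the data’ says.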
The implications extend across domains. There are digital twin frameworks for precision agriculture11 that model farms down to each plant; health digital twins that simulate individual bodies or entire populations12; urban twins that mirror city traffic, energy use, and public behavior13. Each promises more efficient management. But collectively, they turn human affairs into inputs for algorithmic oversight. Decisions that once relied on political deliberation or expert intuition can be deferred to simulations. This creates a veneer of objectivity – after all, the model is just ‘following the data’. In reality, it inserts a layer of computational control between governors and governed. The world is filtered through a dataset and a codebase. Reality becomes legible only through the twin.
Crucially, these models are being built at transnational scales14. The UN working paper noted above speaks of planetary modeling capacities and shared cross-border systems. This transcends national sovereignty. If multiple governments come to rely on a single planetary simulator for, say, climate policy or pandemic response, real power shifts to whoever controls the model and its parameters. The groundwork is being laid for global technocratic coordination via unified simulations15. Policies can be harmonised (or constrained) by aligning them to the same digital twin outputs. In effect, digital twins create a soft infrastructure for global governance, under the auspices of neutral science and efficiency.
In summary, Stage 1 replaces messy, local, human understanding with clean, centralised, machine-readable models. It is the first step in reality’s subordination to algorithmic systems. Once every aspect of society is represented in data and code, the stage is set to replace human judgment with automated logic.
Stage 2: Computational Ethics → Automatic Moral Judgments
Moral Machines. Encoding values and norms directly into software is the next phase. Traditional policy decisions involve ethical judgments — trade-offs between supposed equity and utility, rights and risks. Stage 2 asks: why not have machines make those moral judgments automatically? If digital twins provide a quantitative model of the world, a computational ethics layer can provide a built-in conscience, evaluating each potential action against coded moral principles.
This is not science fiction; it’s an active area of research. Larissa Bolte and Aimee van Wynsberghe’s 2024 paper on ‘Sustainable AI and the third wave of AI ethics’16 argues that AI ethics needs a structural turn, expanding beyond narrow case-by-case issues to address the entire socio-technical system over an AI’s ‘lifecycle’17. Instead of just tweaking an algorithm to be ‘fairer’ or adding an ethics review here and there, they call for ethics to be baked into the infrastructure and workflows of AI development and deployment. In practice, that means moving moral considerations upstream and downstream — into data collection, model building, integration, and monitoring — rather than treating ethics as an external advisory process. This vision paves the way for embedding ethical oversight in code itself18. If every AI system is developed and run with structural ethical guidelines, then human moral intervention can fade into the background.
Concrete implementations are emerging. Upreti, Ciupa & Belle (2025) introduce a prototype ‘ethical reasoner’19 that integrates knowledge representation and reasoning (KRR) with probabilistic logic to deliver real-time, context-sensitive ethical verdicts20. Their framework combines formal rules, such as those drawn from philosophy or law, with statistical reasoning to handle uncertainty and nuance. The result is an AI system that can adapt moral decisions to different scenarios on the fly — essentially an automated ethics referee constantly judging the options of another AI or robotic system.
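To give a flavour of what such a hybrid reasoner does, here is a deliberately minimal sketch (not the cited framework; the rules, threshold and names are all invented): hard deontic rules veto an action outright, and everything else is screened against a probabilistic harm estimate.

```python
# Hypothetical sketch of a rule-plus-probability "ethical referee":
# hard rules forbid outright, uncertain outcomes are checked against a
# harm threshold supplied by some statistical model.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    violates_hard_rule: bool   # e.g. "never deceive a patient"
    p_serious_harm: float      # estimated probability of serious harm

HARM_THRESHOLD = 0.05          # assumed policy parameter

def verdict(action: Action) -> str:
    if action.violates_hard_rule:
        return "forbidden (deontic rule)"
    if action.p_serious_harm > HARM_THRESHOLD:
        return "forbidden (expected harm too high)"
    return "permitted"

print(verdict(Action("share_diagnosis_with_family", True, 0.01)))
print(verdict(Action("administer_experimental_drug", False, 0.12)))
print(verdict(Action("delay_non_urgent_procedure", False, 0.02)))
```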
Another approach by Meenalochini Pandi21 (2025) uses multi-objective reinforcement learning to imbue AI agents with explicit ethical constraints. This method encodes deontological (rule-based) and utilitarian (outcome-based) principles directly into the AI’s reward function during training. In plain language, the AI’s definition of ‘success’ is modified to include moral goals alongside performance goals. For example, a self-driving car’s reward might be not just to reach a destination quickly (performance goal) but also to minimise risk to pedestrians (ethical goal). By shaping the learning objective itself, morality is no longer an external check — it’s internalised in the agent’s decision criteria from the start.
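As a rough, hypothetical sketch of what ‘encoding ethics into the reward function’ amounts to, consider a driving agent whose reward mixes a performance term with a weighted risk penalty; the weights and the risk model below are invented for illustration, not taken from the cited work.

```python
# Illustrative only: folding an "ethical" term into an RL reward signal.

def shaped_reward(progress_m: float, time_s: float, p_pedestrian_risk: float,
                  w_perf: float = 1.0, w_ethics: float = 50.0) -> float:
    """Reward = performance term minus a heavily weighted ethical penalty."""
    performance = progress_m / max(time_s, 1e-6)    # e.g. average speed
    ethical_penalty = w_ethics * p_pedestrian_risk  # risk estimated per step
    return w_perf * performance - ethical_penalty

# A fast manoeuvre that raises pedestrian risk scores worse than a slower, safer one:
print(shaped_reward(progress_m=120, time_s=10, p_pedestrian_risk=0.08))  # 8.0
print(shaped_reward(progress_m=90,  time_s=10, p_pedestrian_risk=0.00))  # 9.0
```

Nothing external ever vetoes the agent; the trade-off is already priced into the quantity it optimises.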
Meanwhile, Taofeek et al. (2024) outline practical software architectures for computational ethics22. They describe patterns for translating formal ethical theories — duties, rights, virtues — into pluggable modules that can be added to AI systems. One could imagine a library of ‘ethics plugins’: one module might enforce Kantian rules (never lie, never steal), another might calculate utilitarian trade-offs, and a third might emulate human virtues or biases for fairness. Developers of autonomous systems could then choose a mix of modules appropriate to their context. Crucially, these are swappable components. Morality becomes a matter of system design, something you configure with checkboxes and sliders: e.g. 30% deontology, 70% consequentialism. The very fungibility of it underscores how non-human this moral reasoning is — it’s morality as an app store.
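A deliberately crude sketch of that ‘checkboxes and sliders’ idea follows; none of it comes from the cited paper, it merely illustrates the swappable-module pattern in code.

```python
# Hypothetical sketch of "morality as configuration": pluggable ethics modules
# score a candidate action, and deployment-time weights aggregate the scores.

from typing import Callable, Dict

Action = Dict[str, float]  # e.g. {"lies": 0, "expected_utility": 0.7, "harm": 0.1}

def deontology(a: Action) -> float:
    # Rule-based: any lie or theft scores zero, otherwise full marks.
    return 0.0 if a.get("lies", 0) or a.get("steals", 0) else 1.0

def consequentialism(a: Action) -> float:
    # Outcome-based: net expected utility.
    return a.get("expected_utility", 0.0) - a.get("harm", 0.0)

ETHICS_STACK: Dict[Callable[[Action], float], float] = {
    deontology: 0.3,        # "30% deontology"
    consequentialism: 0.7,  # "70% consequentialism"
}

def moral_score(a: Action) -> float:
    return sum(weight * module(a) for module, weight in ETHICS_STACK.items())

print(moral_score({"lies": 0, "expected_utility": 0.8, "harm": 0.1}))  # 0.79
print(moral_score({"lies": 1, "expected_utility": 0.9, "harm": 0.0}))  # 0.63
```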
To support these efforts, an ecosystem of standards and best practices is rapidly forming. The AAAI/ACM conference series on AI Ethics and Society23 (AIES 2024–25) features dozens of papers on technical ethics implementations. The Sustainable AI Coalition24 — a multi-stakeholder initiative funded by large enterprise — is publishing guidelines for ‘ethics-by-design’ in AI, essentially a how-to for engineers to build systems that comply with various ethical frameworks out of the box. In 2022, a landmark survey in Trends in Cognitive Sciences even laid out a taxonomy of computational ethics25 methods (rule-based vs. case-based vs. machine learning) and an evaluation framework for moral reasoning modules26. The clear message: the field already possesses a wide array of mature techniques to automate ethical decision-making. From logical reasoners to statistical value learners, the toolbox is full. This further marginalises any remaining ‘human in the loop’. If an AI can not only calculate outcomes but also determine their permissibility according to encoded ethics, why would you need a person to approve its choices?
By the end of Stage 2, we have the makings of a de facto technocratic moral code. Digital twins supply the facts; computational ethics modules supply the judgment criteria. Together they form an autonomous system that can decide what should be done in any given scenario. Human morals are absorbed into algorithmic form — consistent, tireless, and supposedly unbiased. What’s left for human decision-makers at this point? Perhaps to occasionally update the ethical parameters much like patching software, or to judge rare cases the machine declares undecidable. Day-to-day, the machine’s verdict increasingly becomes the final word.
Stage 3: AI Drafting → Policy & UN Resolutions
Automating the Law. If AI systems can understand the world (Stage 1) and judge right from wrong (Stage 2), the next logical step is: let them write the rules too. Stage 3 involves delegating the drafting of policies, laws, and even international agreements to AI. The rationale is straightforward — writing legal and policy documents is labor-intensive and complex, so why not have advanced language models handle the heavy lifting? Over the past two years, we’ve seen remarkable progress in this direction, especially in the context of the United Nations and environmental governance.
Liang et al. (2025) introduced UNBench, a benchmarking dataset to evaluate how well large language models perform on UN-style tasks27. They assembled records of the UN Security Council from decades past and set up tasks for AI such as drafting a resolution given a scenario, predicting how countries would vote, and generating diplomatic statements. The results show that current LLMs (like GPT-style models) can indeed produce text that looks very much like real UN resolutions and speeches. They can imitate the formal tone, incorporate the relevant factual context, and even simulate different national perspectives to some degree. In essence, the experiment proved that an AI can act as a junior diplomat or policy aide, capable of generating the first drafts of the very documents that govern international affairs.
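To see how mundane the mechanics are, here is a hedged sketch of how such a drafting task might be posed to a model; the prompt wording is invented and `complete` is a placeholder for whatever model endpoint is used, not a real API call and not the UNBench setup itself.

```python
# Hypothetical sketch of posing a resolution-drafting task to an LLM.

from typing import Callable

def draft_resolution(scenario: str, sponsors: list[str],
                     complete: Callable[[str], str]) -> str:
    prompt = (
        "You are drafting a UN Security Council resolution.\n"
        f"Scenario: {scenario}\n"
        f"Sponsoring members: {', '.join(sponsors)}\n"
        "Use standard preambular ('Recalling', 'Expressing concern') and "
        "operative ('Decides', 'Calls upon') clause structure."
    )
    return complete(prompt)

# Usage with any text-completion function, e.g. a locally hosted model:
# text = draft_resolution("maritime incident in international waters",
#                         ["France", "Kenya"], complete=my_model)
```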
Another example comes from the marine conservation domain. Ziegler et al. (2024) presented a case study of an AI question-answering chatbot for the newly negotiated Biodiversity Beyond National Jurisdiction (BBNJ) treaty28. This chatbot, built on a large language model, could answer questions about the treaty and even draft position statements or recommendations related to the treaty’s provisions. The motivation was to assist policy-makers, especially those from developing countries, by providing quick, articulate analysis on a complex treaty. The upside is ‘efficiency’ and leveling the playing field in negotiations (since not every delegation can afford legions of legal advisors). But the darker implication is that policy text itself is now machine-suggested. The chatbot was observed to have biases — it tended to mirror perspectives of Western experts and potentially marginalise developing country viewpoints. Imagine this bias at scale: if most countries rely on AI helpers to draft their statements, the narrative could subtly tip in favor of whatever worldview the AI’s training data reflects.
Kramer et al. (2024) took this further by using AI to analyse and generate content around a real U.S. policy document: Executive Order 14110 on AI governance. Their study ‘Harnessing AI for efficient analysis of complex policy documents’ had multiple AI systems summarise the executive order, extract key clauses, and answer policy questions about it29. Some AI models performed nearly as well as human experts in understanding the order’s content — and they did it far faster. This demonstration suggests that AIs can not only draft new policies but also interpret and critique existing ones. We have the beginnings of machine-to-machine policy loops: one AI writes a regulation, another AI evaluates and provides feedback on it, and humans may simply oversee the ping-pong to ensure nothing obviously insane happens.
Real-world adoption is already underway in government teams. Törnberg & Törnberg (2024) document instances of LLM-assisted drafting in environmental policy30 teams. Policy analysts are using GPT-based tools to generate first drafts of reports, to convert bullet-point ideas into full prose, and to translate technical data into policymaker-friendly language. What started as autocomplete in our email is now autocomplete for legislation. The UNU’s working paper on a UN role in AI goes so far as to propose that UN agencies could jointly build and govern their own large language model to assist in policy drafting. Why rely on OpenAI or Google when the UN could have a specialised ‘UN-GPT’ trained on decades of UN agreements and diplomatic language? Such a model, centrally controlled, could ensure consistency and uphold international norms (as coded into it) across all agencies.
Taken together, Stage 3 developments point toward a future where laws and policies are pre-vetted by AI at the moment of creation. An AI policy-drafter would ‘know’ all the ethical constraints from Stage 2 and the data from Stage 1, so any text it produces is by design aligned with those systems. This flips the traditional script: instead of people writing policies that AI must follow, we have AI writing policies that people are expected to follow. Human legislators and negotiators don’t exactly vanish, but their role migrates to one of editorial oversight. They might choose between AI-generated options, or fine-tune phrasing here and there, or handle the occasional scenario where the AI is unsure. The heavy lifting of weaving facts, norms, and desired outcomes into coherent legal language will be done by machine.
One might argue this is just advanced word-processing — a tool to save time. But consider the power of agenda-setting. If the first draft of a law comes from an AI, that shapes the discourse. Negotiations start from a text that the machine prepared. The AI could frame issues in certain ways, leave out certain concerns, or suggest compromises that subtly shift the policy’s emphasis. Unless humans are extremely vigilant, these choices get baked in. And since the AI drafts are faster and arguably more comprehensive than what a person or committee would produce, there will be pressure to trust them. ‘This is evidence-based and ethically screened’, they’ll say, ‘why reinvent the wheel?’ Thus, Stage 3 further diminishes human agency: we move from approving or tweaking AI-generated policies, to eventually just rubber-stamping them because they consistently meet all formal requirements.
Stage 4: Programmable Infrastructure → Enforced in the Wild
Code as Law, Literally. Drafting policy is one thing; enforcing it is another. Traditionally, even a well-written law relies on human institutions such as courts, police, and regulators to enforce it — with all the friction and discretion that entails. Stage 4 closes that gap by binding policies directly into programmable infrastructure. The physical and digital systems we interact with every day — financial networks, smart city sensors, surveillance systems, the Internet of Things31 — are being designed to enforce rules automatically at the point of action. This is ‘law as code’ in a very literal sense: if you try to do X and it’s not allowed, the transaction simply won’t go through.
Consider financial infrastructure. Central Bank Digital Currencies (CBDCs) are a key example. These are digital forms of money issued by governments, intended to eventually replace or complement cash. But unlike cash, CBDCs can be programmed (or their wallets can). One often-touted feature is the ability to attach conditions to how money is spent. For instance, a stimulus grant might be coded to only purchase certain items, or a carbon tax could be deducted automatically from transactions involving fossil fuels. In a Bank of England or BIS prototype, one can imagine ‘smart money’ that enforces carbon budgets: if you hit your personal carbon allowance for the month, your digital wallet could start declining purchases of plane tickets or gasoline. Similarly, funds could have built-in ‘social credit’ checks — if someone is flagged in a law enforcement database, their access to certain financial services might be curtailed automatically. These aren’t far-fetched scenarios; the technology is already being piloted. The German Bundesbank noted that CBDCs could enable ‘automated tax collection’ and ‘automated distribution of consumer aid’ by embedding rules into transactions32. Once money itself carries policy, compliance is enforced at the moment of payment – no need for auditors or police.
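Mechanically, ‘attaching conditions to how money is spent’ is nothing more than a rule check at authorisation time. The sketch below is hypothetical in every detail (field names, categories, limits) and is only meant to show how little code the enforcement layer actually requires.

```python
# Purely illustrative: a wallet-level rule check executed at payment time.
# No actual CBDC design is being described.

from dataclasses import dataclass

@dataclass
class Wallet:
    balance: float
    carbon_used_kg: float
    carbon_allowance_kg: float
    flagged: bool = False

def authorise(wallet: Wallet, amount: float, category: str, carbon_kg: float) -> bool:
    if wallet.flagged and category in {"travel", "fuel"}:
        return False                                              # administrative restriction
    if wallet.carbon_used_kg + carbon_kg > wallet.carbon_allowance_kg:
        return False                                              # monthly carbon budget exceeded
    if amount > wallet.balance:
        return False
    wallet.balance -= amount
    wallet.carbon_used_kg += carbon_kg
    return True

w = Wallet(balance=500.0, carbon_used_kg=95.0, carbon_allowance_kg=100.0)
print(authorise(w, 120.0, "fuel", carbon_kg=20.0))      # False: would exceed allowance
print(authorise(w, 30.0, "groceries", carbon_kg=2.0))   # True
```

Whoever writes the authorisation rule controls what the money can do; the holder only finds out at the point of payment.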
Now apply this concept to urban life. Smart-city infrastructures use networks of cameras, sensors, and access controls. Many cities already have automated traffic enforcement and congestion pricing. Expanding on that, a 15-minute city framework could be coupled with IoT sensors33 to discourage or restrict movement beyond one’s local zone in the name of reducing carbon footprint or traffic. For example, license-plate cameras could automatically fine drivers for leaving designated districts too frequently. Smart building entry systems could deny access if you’re not authorised for a location at a given time. Drones could monitor whether people are congregating in off-limits areas and dispatch alerts or even robotic dispersal methods. These are all technically feasible now. It’s just a matter of policy configuration.
A vivid illustration is how Executive Order 1411034 in the U.S. spelled out requirements for ‘safe, secure, and trustworthy’ AI across federal agencies35. It mandates agencies to ensure AI systems are thoroughly tested for bias, secure against threats, and aligned with values — essentially embedding governance rules inside the AIs the government uses. But beyond internal use, it also directs the development of frameworks to monitor and control AI in the private sector. One can see the through-line: the government sets the standards (Stage 3 policies), and then demands that any AI deployed in finance, healthcare, or in critical infrastructure has those controls built-in (Stage 4 enforcement by design). The Memo implementing the EO even speaks of building a ‘global AI governance’ approach with international partners, hinting at worldwide interoperable compliance mechanisms.
Programmable infrastructure means that compliance is no longer a choice. If the speed limit is coded into your car’s navigation — perhaps through geofencing — it simply won’t accelerate beyond that in a given zone. If content rules are coded into the internet’s plumbing, disallowed information can be filtered or blocked in real time, without relying on individual moderators. If public benefits are coded with criteria, ineligible persons simply find their benefit wallet won’t execute transactions at unauthorised merchants.
The ambient automation of consequences fundamentally changes the social contract. Under human enforcement, laws have a degree of flexibility and mercy — police might issue a warning instead of a ticket, judges might show leniency, laws might be broken in civil disobedience to prompt reform. Under automated enforcement, rules are rigidly applied by algorithms with no inherent leeway. The only way to adjust is to change the code (likely controlled by a centralised authority or an AI itself) — not by appealing to human judgment in the moment. There, quite simply, will not be anyone to appeal to.
By Stage 4, we have effectively eliminated the distinction between law and enforcement. The policy outputs of Stage 3 are directly instantiated in the infrastructure around us. This could be framed positively as ‘rule of law, perfectly applied’. No bias of corrupt officials, no inconsistency. But it also means rule by code — and if you didn’t have a say in writing that code (or cannot even vote for someone who can) — then that’s too bad. Humans in this stage recede into maintaining the systems and handling edge cases, but the day-to-day governance (who can do what, when, where) is a settled matter executed by machines.
Stage 5: Feedback Loop → Continuous Refinement
Self-Optimising Governance. The final piece of the puzzle is to create a closed-loop system where outcomes from Stage 4 feed back into the models of Stage 1 and the ethics of Stage 2, continuously refining the entire pipeline. In cybernetic terms, this is establishing a second-order feedback loop: not only do we govern the system, but the system learns to better govern itself over time.
How does this work? As programmable infrastructure rolls out, it generates enormous amounts of data on compliance and outcomes. Every IoT sensor reading, every permitted or blocked transaction, every AI-monitored decision becomes telemetry. This data flows back into the digital twins and AI models36. For example, if a smart grid’s digital twin notices that households consistently find workarounds to an energy-saving measure (say, by using gas generators to bypass smart meters), that insight can prompt a policy tweak or an infrastructure adjustment. Perhaps the next iteration of smart meters will detect off-grid energy usage and report it, closing the loophole. If an autonomous ethical agent in a self-driving car encounters an unforeseen moral dilemma and flags it for human review, that scenario can be added to the ethical reasoning module’s knowledge base so next time it won’t need human help.
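Stripped of the surrounding machinery, the loop is small: compare what the twin predicted with what the infrastructure reports, and let a persistent gap rewrite the policy parameters automatically. The sketch below is an assumption-laden toy, not a description of any deployed system; thresholds, metrics and the update rule are all invented.

```python
# Hypothetical sketch of the closed loop: enforcement telemetry is compared
# with the twin's prediction, and a persistent gap triggers an automatic
# policy update on the next cycle.

def run_cycle(policy: dict, predicted_use: float, telemetry: list[float]) -> dict:
    observed = sum(telemetry) / len(telemetry)
    gap = observed - predicted_use
    if gap > 0.10 * predicted_use:   # people are working around the measure
        policy = {**policy,
                  "sensor_coverage": policy["sensor_coverage"] + 0.1,  # close the loophole
                  "allowance": policy["allowance"] * 0.95}             # tighten the target
    return policy

policy = {"allowance": 300.0, "sensor_coverage": 0.6}
for week in range(3):
    policy = run_cycle(policy, predicted_use=250.0, telemetry=[290, 310, 305])
    print(week, policy)
```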
The key is institutionalising learning. The AAAI/ACM AIES conferences37 and the Sustainable AI Coalition38 aren’t just publishing papers in a vacuum – they feed directly into evolving standards. Suppose, for instance, that bias is detected in a facial recognition system used in public security. Under a feedback regime, that discovery would lead to new fairness metrics or training data guidelines issued by the coalition, which developers then integrate into next-gen systems. The whole community iteratively improves the ethical performance of AI (at least according to the values they encoded). Over time, these standards become more stringent and comprehensive as they incorporate ‘learnings from the field’. What we see is the emergence of governance of governance – meta-regulation where the process itself self-corrects.
Bolte & van Wynsberghe’s39 call for a lifecycle-wide structural approach (the ‘structural turn’) is essentially a plea for this kind of continuous re-calibration. They argue that focusing only on algorithmic bias in one model is myopic; we need to address systemic, structural issues. In practice, that means if the socio-technical system as a whole is producing supposed harms, one must tweak the system’s structure, not just patch individual components. Within our pipeline, that perspective translates to: adjust the models, the ethics criteria, and the policies together in light of observed outcomes, aiming to fix root causes of harm. It’s sort of like how modern software is developed — constantly updated ‘over the air’ after release. Now imagine governance itself constantly updated via auto-patch. And now realise that no human would ever fully grasp such a continuously updating system.
An example: if the data shows that a supposedly neutral policy AI still leads to ‘marginalised communities’ getting worse outcomes, the structural approach would demand a remedy beyond that local context. It might trigger adding new socio-economic factors into the digital twins, altering the ethical weights in the AI’s decision function to emphasise justice, and maybe changing the enforcement thresholds in infrastructure to be more lenient in affected areas. All these changes could happen through a coordinated update issued by an oversight body or even by an AI overseer that spots the pattern.
Thus, Stage 5 aspires to a kind of homeostatic governance: the pipeline monitors its own performance and adjusts to maintain the desired balance of values such as fairness, safety, or even claims of sustainability. Humans are involved primarily in setting high-level goals or responding to anomalies the system flags. But even those anomalies, as noted, shrink in number over time because each is a chance for the system to learn and incorporate a fix, eventually rendering humanity obsolete.
By closing the loop, the techno-governance system becomes increasingly autonomous and opaque. It’s one thing for humans to design a static system with known rules. It’s another when the system is dynamically changing itself based on complex criteria. At that point, even experts will struggle to understand why a certain decision was made or a certain rule updated. The official answer will be, ‘Our models show this maximises fairness/safety/etc metrics’. The reality will be that no single person can parse the chain of logic from raw data, through machine-learned models, through multi-objective optimisations, to final decision. Governance becomes an AI-hard problem in itself, and appeal will be impossible because only the machine itself can understand the logic.
However, from the perspective of efficiency and ‘good governance’, Stage 5 is the capstone that promises a self-perfecting system. It suggests that we can finally overcome the perennial shortcomings of governance — corruption, rigidity, information lag — by letting the system tune itself. This is governance eating its own tail, hopefully in an upward spiral of improvement.
Having walked through the five core stages, we see a pipeline that goes from modeling the world, to morally evaluating actions, to drafting rules, to enforcing them automatically, and then to learning from the results. At each stage, humans have receded further from the loop: from active decision-makers to overseers, to technicians keeping the machinery running. But incredibly, the story doesn’t end here. There is one more implied stage — a Stage 6 — lurking just beyond the horizon of these papers.
Because if the only remaining weak link is the unpredictability of the human mind… then exactly what happens next?
Stage 6: The Neural Frontier → Direct Consciousness Integration
Cognitive Capture. The logical culmination of a system that models and manages society is to eventually model and manage individual human cognition itself. In a pipeline that aspires to remove human error and unpredictability, the ultimate ‘weak link’ remaining is the human mind. Thus, the next frontier — not yet fully realised, but clearly foreshadowed — is direct integration of human brains with the computational governance system via brain-computer interfaces40 (BCIs) and pervasive neuro-monitoring.
It sounds like science fiction, but consider the momentum: Governments41 and companies are heavily investing in neuroscience and BCIs42, with devices like Neuralink already in human testing as of 202543. The same institutions pushing computational ethics are beginning to explore neuroethics44 — how to regulate and design BCIs so that they align with ethical norms. The pattern of Stage 2 repeats itself, this time applied inward: just as we encoded moral principles into external AI agents, we will encode acceptable thought patterns and reactions into brain-linked devices. The line between ‘AI agent’ and ‘augmented human’ will blur with time.
Why would this be pursued? From a governance perspective, integrating human decision-makers directly with machines offers even greater control (and efficiency). Think of soldiers or pilots with BCIs that allow real-time AI guidance and monitoring of their decisions — the military advantages are obvious. Or consider judicial or executive decisions: a BCI could flag when a judge’s brain shows patterns of bias or emotion, prompting them to reconsider a ruling in light of ‘rational’ AI counsel. On the flip side, for ordinary citizens, a BCI might warn you when you’re about to violate a law or even nudge your emotions and impulses in a pro-social direction (a sudden calm washes over you during an argument, courtesy of your neural implant’s intervention).
This convergence points towards the realisation of what Julian Huxley (the first director of UNESCO and proponent of transhumanism45) envisioned: humanity using technology to deliberately transcend itself. Huxley saw our species as a work in progress, which we could consciously evolve to a higher state. In the late 1940s, he advocated for scientific human betterment — with controversial flirtations with eugenics — and coined ‘transhumanism’ as that goal. The pipeline we’ve described is essentially transhumanist in the realm of governance: it replaces fallible human judgment with a ‘higher’ form of decision-making. The neural integration would merge human minds with a computational system, fulfilling Huxley’s dream in a way — though perhaps not in the utopian manner he suggested.
Futurist Barbara Marx Hubbard spoke of a coming stage of human evolution she called Conscious Evolution. She imagined humanity as a whole consciously cooperating to evolve into a new planetary species, often invoking the idea of the ‘noosphere’46 — the collective consciousness of man. Teilhard de Chardin’s notion of the Omega Point was an almost mystical idea that all minds would eventually converge with each other, and ultimately with the divine, reaching a point of ultimate complexity and consciousness. For Teilhard it was spiritual, but it could similarly be interpreted in a technological light today (i.e. Kurzweil’s AI Singularity47, or a global brain48).
What’s striking is how our pipeline could be seen as an architecture for achieving an artificial quasi-Omega Point; by connecting everyone to the digital twins and AI ethics and policy networks, individual consciousness becomes part of the larger circuit. We don’t necessarily lose our personal awareness, but our decisions and perceptions would be increasingly synchronised and guided by the central intelligence. In effect, our human agency is adopted by the machine. The governance system would run right through our nervous systems.
At that stage, the distinction between human and AI decision-making all but disappears. We often talk about keeping a ‘human in the loop’. Stage 6 absorbs the human into the loop. Our thoughts would likely be augmented with AI queries by default, while our intentions could be monitored against ethical benchmarks with alerts like ‘This line of thought risks violating public harmony, consider alternatives’. Our brains could even serve as just another data source for the digital twins — your emotions, stress levels, and attitudes feeding into the societal model in real time, all the time. There would be no outside, because the system is always on.
Some no doubt would argue this is the ultimate democratisation of governance — everyone is contributing to the collective decisions, rather than a few elites. But that would be a very generous interpretation. It’s far more likely that this integration would be completely asymmetric: the system would use its access to shape individual cognition toward whatever it deems optimal for the whole, while the individual would have almost no chance of influencing the system in return. The centuries-old philosophical dream… well, nightmare… of Hermann Cohen, who subordinated ethics entirely to rational law, finds fulfillment here. Cohen argued that moral laws and legal laws should be one and the same, and in a fully integrated, top-down techno-governance system that unity is achieved: the ‘law’ is not just out there in legal codes, or even in AI Ethics49, attempting to ‘steer’ your query in the ‘right’ direction — but rather inside your brain, guiding and constraining your will through Neuroethics.
Likewise, various mystical traditions that speak of overcoming the self and being guided by a higher will could see a perverse technological echo — except the ‘higher will’ now actually is algorithmic enforcement. The recursive control structures of mysticism — where one submits to a divine or cosmic order — are mirrored in a system where individuals submit to the prompts of AI, perhaps ever so subtly carried out through microincentives, until they truly believe it is their own will.
Stage 6 is not explicitly described in the papers cited, but it is implicitly the direction of travel. Each stage has required a deeper penetration of AI into what was previously human-only space. The mind is the final frontier. When even our exceptions — the moments where a human currently has to step in — can be reduced by altering the human, then the loop is fully closed.
At that point, human agency as we know it would be obsolete. We would have — in Marx’s terms — externalised our ‘general intellect’ into the machine, even fused with it50.
The Promise of ‘Good Governance’
Why would societies accept such a trajectory? The answer lies in a seductive promise of perfect governance — a promise always made by proponents of these technologies. Each stage of the pipeline is sold as a remedy to ‘flawed’ human governance:
Digital twins promise better understanding. Humans are said to suffer from limited, biased perspectives, whereas a comprehensive model can give a God’s-eye view of problems. Decisions based on the model’s predictions will be claimed to be more rational and far-sighted.
Computational ethics promises consistent morality. Human judges and officials might be corrupt or hypocritical; an AI ethics engine applies the same rules to everyone without prejudice, and can be audited for its decision criteria. Except that, when you include factors such as intergenerational justice, this promise falls flat.
AI drafting promises efficiency and expertise. Human lawmaking is slow, messy, and often low-quality. AI can produce clear, logically coherent drafts in seconds, supposedly drawing on the sum of human knowledge.
Programmable infrastructure promises certainty and compliance. Human enforcement is patchy — some get away with crimes, others are wrongfully punished; some policies are even ignored. Code enforcement would mean policies achieve exactly their intended effect, and quickly. It’s the end of ‘crime’ in a sense: if you literally cannot break a rule (because the environment won’t let you) then illegalities supposedly cease. Except, when you include concepts such as different standards of neighbourhood policing, this again shows up as an empty promise.
Feedback loops promise continuous improvement. Human governance fossilises — laws linger even when they’re outdated, bureaucracies resist change. But a self-correcting AI system would adjust policies the moment data shows a problem. Governance becomes agile, experimental, and data-driven, always learning and never stuck in political gridlock. Of course, this could hypothetically also be turned against enemies of the system, with no possibility of appeal.
All of these benefits address real shortcomings in current governance. That’s what makes the pipeline so insidious: on paper, it is an upgrade. Who doesn’t want ‘evidence-based policy’? Who wants more bias and corruption when we could have less? Many reformers and rationalists would readily sign on to at least parts of this vision. In fact, they already are. For example, ‘AI for Good’ initiatives explicitly aim to use AI to better meet the UN Sustainable Development Goals51 — essentially Stage 3 and Stage 1 — draft supposedly better policies, simulate outcomes to reach environmental targets, etc. There’s huge momentum — and funding — behind the idea that more AI in governance will produce better outcomes for society and the planet. And never mind that western societies, in general, appear to have headed in the wrong direction, certainly since around the time global surveillance became ubiquitous52.
The rhetoric emphasises that this isn’t the removal of humans, but the removal of human error. Humans, with cognitive biases, limited attention, selfish interests, and susceptibility to misinformation, are considered inadequate to handle the complexity of a globalised, technologically advanced world. The solution offered is not to improve human decision-making through education, deliberative democracy, and so on, but to augment or replace it with something supposedly less fallible — while completely eliminating responsibility in the process. Because when the Digital Twin yet again mispredicts, no one will shoulder the blame — and never mind the loss you suffered as a result of its disastrous mispredictions.
This, of course, will not be framed as tyranny or loss of freedom. It will be framed as liberation: liberation from want, liberation from fear since decisions will be optimal, liberation from the arbitrariness of who you happened to be ruled by. It will be portrayed as the fulfillment of enlightened governance — rule by the best rational processes rather than by the accident of birth or the popularity contest of elections.
No one can successfully argue against safer roads, financial systems that automatically prevent fraud, or international treaties drafted in minutes as opposed to months. The opposition to this pipeline will be painted as coming from two camps: the nostalgic, who irrationally cling to messy human freedom at the expense of progress, or the nefarious: corrupt officials, criminals, and others who benefit from the gaps in human governance. In other words, to oppose the system will be portrayed as downright evil. That narrative hasn’t yet fully taken hold, though DOGE is proving an early start53 — but whenever someone says ‘AI will help eliminate corruption’ or ‘AI will take the politics out of policy-making’ it helps the cause. Perhaps not by much, but over the long run, it will be enough.
The promise of ‘good (computational) governance’ is powerful54, especially as it taps into a long-standing disillusionment with politics, a feeling worsened by every political scandal. If an apparently neutral system promises decisions that are in everyone’s best interest — because they’re allegedly derived from all available data, and interpreted in an identical manner — many will find it appealing; DOGE is firm evidence thereof. It sounds like the ideal of technocracy: let the experts — or in this case, the expert systems — handle it.
And the very second you agree, you’ll never have to worry about that aspect again.
AI for Good
Perhaps the most insidious aspect of the techno-governance pipeline is how it cloaks itself in humanitarian rhetoric. The ‘AI for Good’55 movement represents an ideology that frames resistance to automated governance as morally wrong. By painting AI expansion as an ethical obligation to solve claimed global challenges, the initiative converts clear technological overreach into a moral crusade — with impunity. And the scope is breathtaking, with the UN System Staff College now offering courses on ‘Leveraging ChatGPT for Effective Communication at the United Nations’56, training international civil servants to integrate AI into diplomatic discourse. The ITU's AI for Good Summit explicitly targets the UN Sustainable Development Goals57, with dedicated tracks on using AI to ‘disrupt hunger’58, revolutionise agriculture59, and achieve ‘digital inclusion’ for every person on Earth. What emerges is a moral framework that demands AI penetration into every corner of human existence under the guise of necessity.
Consider the agricultural domain. The ITU and FAO's joint reports investigate ‘AI's positive impact on agriculture’60, promoting digital twins of farms, algorithmic crop management61, and predictive models for food distribution62. The language is friendly, even humanitarian: we must feed the world's growing population, optimise resource use, prevent famine. But the infrastructure being built is identical to what Stage 1 describes — comprehensive digital modeling of food systems that can then be algorithmically managed. A farmer's decision about what to plant, when to harvest, how to distribute crops becomes subject to AI recommendation systems. ‘Disrupting hunger with AI’ means disrupting human agency in food production63.
The digital inclusion agenda operates similarly. Universal internet access and digital literacy are framed as human rights imperatives. But the infrastructure being advocated goes far beyond simple connectivity. The ITU's push ‘Toward AI-native 6G Networks’64 reveals the true scope. Where 5G enabled fast mobile internet and some connected devices, 6G promises to connect literally everything — every object, surface, and space becomes a node in a sensing and responding network. 6G's technical capabilities include near-instantaneous response times, massive device connectivity, and integration with AI processing at the network edge. In practical terms, this means every physical object could potentially be monitored and controlled in real-time through the network itself.
The ethical framing makes this total connectivity seem not just beneficial but morally required. How can you oppose ‘digital inclusion’? How can you argue against feeding the hungry or optimising agriculture? The Carnegie Endowment's work on ‘Ordinary Ethics of Governing AI’65 exemplifies this approach — it doesn't question whether AI should govern, but rather how to make that governance feel ethical. The focus shifts from ‘should we automate this decision?’ to ‘how do we automate it… ethically?’
This is where we see the weaponisation of ethics most clearly. Traditional ethics asks what we ought to do. But ‘AI for Good’ ethics asks what AI ought to do, taking AI deployment as a given and ethics as a controlling entity, justifying expansion into previously human-controlled domains. And, of course, should society suddenly find itself operating under another claimed emergency, that ethical objective tends to become a requirement, and you’ll find yourself fired for committing an ‘ethical violation’ by refusing to comply fully. AI, however, will not think twice.
The pattern repeats across every domain: AI for healthcare, AI for education, AI for climate action, AI for governance. Each deployment is framed as an ethical imperative that makes resistance seem selfish or backward-looking.
Globe Ethics' conferences on ‘AI Good Governance’66 perfectly capture this inversion. The question is no longer whether humans should govern themselves, but how AI can govern humans in a way that can be justified as ‘good’. The ethical discourse has been captured to serve as a validation mechanism.
The genius of ‘AI for Good’ is that it transforms the techno-governance pipeline from an imposition into an invitation. People don't feel forced to accept automated governance — they feel morally obligated to demand it. Parents are made to want AI tutors for their children's education. Patients are made to want AI doctors for better health outcomes. Through continuous media campaigns, citizens will eventually be made to want AI policy-makers for more effective climate action. The system doesn't need to override human choice; it recruits human choice through moral framing. And this template is then continued in other aspects of our lives, where everything is framed as a moral call that you cannot resist.
By the time the infrastructure is fully deployed — when 6G networks can monitor and respond to every human activity in real-time, when AI systems make most day-to-day decisions about resource allocation and behavior modification, when human agency has been optimised away — it will have been installed not through coercion but through invitation. The final stage of the revolution won't feel like conquest but like salvation.
This represents perhaps the most sophisticated form of social control ever devised: a system that recruits its subjects' own moral intuitions to build their own cage. The bars are made of good intentions, the locks are made of ethical algorithms, and the guards are the subjects themselves — convinced they are liberating humanity rather than enslaving it.
And the final pieces of that particular puzzle emerged during the alleged pandemic, where a dashboard dictated whether you’d be under lockdown without a shred of democratic decision about it — an approach normalised (and expanded) into policy through the Pandemic Treaty.
The Completion of the Revolution
What we witness is the academic and technological construction of humanity’s replacement as the governing agent of our civilisation. Unlike a coup or a revolt, this revolution wears the mantle of objectivity and progress. Each piece arrives with a peer-reviewed stamp, a grant from a reputable foundation, maybe a pilot program in a progressive city, and almost no public fanfare.
The components are sliding into place:
Knowledge: The general intellect of society (to use Marx’s term) is being captured in the digital twins and AI models. Once they encapsulate all relevant knowledge, human judgment adds little value and is optimised out.
Ethics: The highest human values and norms are being codified into machines. Once the machines claim to understand our morality better than we do (aggregating philosophy, law, cultural norms, case precedents), why trust fickle human conscience?
Creation: The act of drafting new rules — arguably a sovereign human prerogative — is being handed off to LLMs. They can churn out options we wouldn’t have even thought of, so let them.
Execution: Enforcement is increasingly an automated affair. Hobbes’ proverbial ‘gavel’ and ‘sword’ of governance are both mechanised.
Optimisation: The system ensures its own improvement, eventually outpacing any human legislative cycle or learning process. In fact, soon it could be impossible for the wisest to even understand the law, with AI changing it rapidly.
We often talk about AI as augmenting human capabilities. But what this pipeline shows is an AI system that absorbs human capabilities. It doesn’t just help humans govern; it slowly becomes the governor. After successfully training the AI — often without even realising it — humans then adjust to serve the system: initially by guiding it with research and input as these academics are doing, then by monitoring it, and finally perhaps by merging with it to whatever extent necessary to keep up — be it AI Ethics or Neuroethics.
Kant dreamt of a society governed purely by reason67 — a kingdom of ends where every rational being autonomously follows moral law. Hegel imagined the state as the march of God on earth, the embodiment of rational spirit68. Marx imagined the end of the state once social conflicts dissolve, with a scientific administration in the meantime, an idea Bogdanov later developed conceptually. In the 20th century, early cyberneticists played with ideas of governance by calculation. Each was limited by the technology of their time, but that bottleneck is now gone.
The academic pipeline is methodically eliminating every factor that made politics political: the contest of values is resolved by computational ethics, the uncertainty of outcomes is mitigated by simulations, persuasion and rhetoric are outsourced to AI drafting, the enforcement dilemmas are handled by code, and feedback in the form of protests and demands is quieted by algorithmic adjustment. What’s left is essentially administration. The role of humans reduces to that of a maintenance crew for Spaceship Earth.
Perhaps the most chilling part is how normal this will feel if it comes to pass. There won’t be a day when people wake up and robots are visibly ruling. Instead, each step will have felt like an improvement, a convenience, a necessity even (especially given claimed crises like climate change or pandemics). By the time people realise what has happened, reversing it will be almost impossible. The expertise to govern in the old way could well be gone — who could manually process all the data or foresee the complexities that the system handles? Moreover, the infrastructure will be so embedded that opting out will be like trying to live without electricity today. Entire generations may grow up with the assumption that this is how governance works — algorithmically and automatically — and find it as natural as we find democracy or free markets.
In summary — the revolution arrives in white papers and conference proceedings — not in manifestos or manifest destiny. It is compiled into software, enacted by bureaucracies, and justified by spreadsheets claiming improved outcomes. But make no mistake: the revolution is ultimately about authority. We are constructing a system where, beyond a certain point, the common man on the street will have no impact on decisions whatsoever.
As for whether this is inevitable… perhaps the choice has effectively been made already — not through democratic votes or collective reflection, but by research agendas and foundation priorities. But then again, the fat lady is still yet to sing.
Perhaps most shockingly to those of us who experienced the 80s and 90s… the machine doesn’t even need T-800s — because all of these steps were typically taken in the form of research papers, published while the mainstream media drowned them out with irrelevant noise.
An appeal: My conversion rate isn’t great. Claims of 2–5%, even 10%, are far from materialising. If you appreciate the content and are in a position to contribute, please consider subscribing — otherwise, I will have to enable a full paywall.
To those who have — thank you.