Autopilot III
Part Three: The human interface
So far, we’ve described governance that works on you from the outside — controlling what you can buy, where you can go, what platforms you can access. But the trajectory doesn’t stop at the boundary of your wallet or your front door.
The same logic that builds governance into financial infrastructure is now being extended toward the human being itself.
One of the strangest omissions in contemporary debate is how rarely three particular ideas are considered together.
Marx’s ‘Fragment on Machines’ predicted that the intelligence required for production migrates out of the worker and into the machinery itself. Julian Huxley’s transhumanism proposed that humanity will — and should — transcend its current biological form through integration with technology. And Teilhard de Chardin’s Omega Point envisioned evolution converging toward a unified planetary consciousness.
Read separately, these ideas seem to belong on different shelves: economics, futurist philosophy, mystical theology. But read together, they describe the same destination: intelligence migrates into systems, humans become nodes within them, and the convergence is dressed as ‘evolutionary destiny’.
And once intelligence migrates into systems, the old question — who takes the profits? — starts to appear almost nostalgic. Because the question that actually matters is far more basic: who sets the goals of the machine?
Marx’s Fragment, Updated
Marx’s point in the Fragment on Machines isn’t merely that machines replace human muscle. It’s that machines absorb their knowledge.
As production becomes more complex, the intelligence of the system stops being located primarily in the worker and migrates into the machinery and the organisation — in processes, techniques, coordination, discipline, measurement, and control. The worker's role changes accordingly. He's no longer the source of productive intelligence; he becomes an attendant, an operator, a monitor — someone who handles what the system can't yet process on its own. The general intellect now lives in the machine; the human worker handles its exceptions.
That’s the conceptual skeleton of modern automation — and AI makes it literal.
The knowledge required for production no longer lives just in physical machinery or management procedures. It lives in software: pattern recognition, drafting, prediction, sorting, optimisation, enforcement. What used to be judgment becomes a forward-predicting Digital Twin. What used to be intuition becomes logic and mathematics. What used to be experience becomes data. What used to be skill becomes an app or a software service you can license.
So the Fragment stops being 19th-century speculation and becomes a 21st-century design pattern: (1) digitise the workflow; (2) codify the decision criteria; (3) automate the routine; (4) assign humans to handle exceptions; (5) model the exceptions too, and repeat the process. The loop never ends.
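To make the ratchet concrete, here is a minimal sketch in Python. Every name in it is hypothetical, invented purely for illustration; it models the loop's logic, not any real system's API.

```python
class Model:
    """Stands in for the codified decision criteria (steps 1-2)."""
    def __init__(self):
        self.known = set()          # cases the system can already decide

    def can_handle(self, case):
        return case in self.known

    def learn(self, cases):
        self.known |= set(cases)    # step 5: model the exceptions too


def run_cycle(model, workload):
    """Steps 3-4: automate the routine, route the rest to humans."""
    automated = [c for c in workload if model.can_handle(c)]
    exceptions = [c for c in workload if not model.can_handle(c)]
    model.learn(exceptions)         # today's exceptions feed the next cycle
    return automated, exceptions


model = Model()
workload = ["invoice", "refund", "dispute"]
for cycle in range(3):
    done, escalated = run_cycle(model, workload)
    print(f"cycle {cycle}: automated={done}, escalated={escalated}")
# cycle 0 escalates everything; by cycle 1 the humans' share has been absorbed
```

The point of the toy: the human queue exists only to be drained.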
There’s a catch, though. The Fragment describes a closed system — a factory where the limits are already factored in. But the planet isn’t a closed system, so the entire project of transferring society’s ‘general intellect’ into a planetary apparatus first requires a conceptual trick: you must treat the planet as if it were a closed loop. ‘Spaceship Earth’ is precisely that trick. It's not a metaphor; it's the necessary closure that makes planetary management thinkable. Today we call it the Circular Economy.
This isn’t a claim about physics; the planet isn’t literally a spacecraft. ‘Spaceship Earth’ is a governance premise — a way of framing the world as a bounded system so that management becomes possible through global ‘black box’ modelling. And once that framing is adopted, everything else follows: standardisation, harmonisation, global monitoring, unified coordination.
And this is what produces the perpetual ratchet. You cannot actually close an open system. Reality keeps generating novelty that escapes the model. A bank run spreads on group chats before regulators notice. Inflation spikes in ways the models said couldn’t happen. A protest movement organises on an app the surveillance grid doesn’t cover. A ship gets stuck in a canal and suddenly there’s no toilet paper.
Each time, the response is the same: expand the surveillance grid, add new ‘indicator’ variables, establish the thresholds. Every emergency protocol, every tightened threshold, is the system responding to its own failure — chasing a closure it can never achieve. The ‘emergencies’ it declares are often just the system detecting its own inability to model a world that refuses to behave like a spaceship.
The shadow of Taleb’s Black Swan keeps growing.
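The one-way ratchet is compact enough to sketch as well. The toy below assumes only two things, both stated in the essay: thresholds respond to surprises, and nothing in the loop ever loosens them. The variable names are invented.

```python
# One-way tightening: every miss expands the grid or narrows a threshold.
indicators = {"price_volatility": 1.0}    # threshold per tracked variable

def on_surprise(event):
    """Called whenever reality escapes the model."""
    if event not in indicators:
        indicators[event] = 1.0           # expand the grid: a new variable
    indicators[event] *= 0.8              # tighten: alarms now trip earlier

on_surprise("price_volatility")           # the spike the models ruled out
on_surprise("canal_blockage")             # a new failure mode joins the grid
print(indicators)                         # values only ever shrink
```

There is no branch that relaxes a threshold. That absence is the argument.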
But this process isn’t only about job displacement. It’s about where agency lives. The intelligence that once lived in human beings now migrates into systems that are ownable, scalable, and enforceable. That’s why AI doesn’t merely ‘disrupt labour’; it shifts where power resides.
Huxley’s Vision, Operational
Strip the romance from Huxley’s transhumanism and you find the same shift viewed from the opposite direction: Marx speculated that the general intellect would migrate into the machine; Huxley proposed that humans should follow.
Julian Huxley got the template from his grandfather. In 1893, TH Huxley delivered his Romanes Lecture with a simple message: the cosmic process — evolution, nature, ungoverned forces — cannot be left to run on its own. Ethics must steer it, impose human values onto inhuman dynamics. His grandson applied the same logic to humanity itself: if we can steer our own evolution, we must. UNESCO gave this idea institutional form; transhumanism gave it philosophical form. Hans Küng’s Global Ethic (1993), the Earth Charter (2000), AI ethics, neuroethics — all run on the same operating principle: ungoverned power leads to catastrophe; ethics is the steering mechanism.
Huxley didn't just theorise — he built the institutions required to put the idea into practice. As UNESCO’s first Director-General, he created the educational and scientific infrastructure that now houses AI ethics and neuroethics frameworks. By 1949, UNESCO was already developing the concept of ‘world citizenship’ — a precursor to planetary-scale identity. As a founding member of the IUCN, he established the institutional home for planetary stewardship. As first president of the International Humanist and Ethical Union, he laid down the ethical humanist framework — the moral ‘ought’ obliging humans to care for one another and the planet.
Both IUCN and IHEU work as wrapper organisations that coordinate and certify member NGOs rather than replacing them. The architecture of governance-through-ethics was there from the very beginning.
Properly understood, transhumanism isn’t really about enhancement — it’s about redefining what counts as ‘human’ in governance. Human nature stops being a fixed premise and becomes a variable. A boundary to redesign; raw material for the ‘general intellect’ in the machine to optimise.
That sounds abstract until you look at what’s already being built: identity becomes a digital credential, behaviour becomes a score, access becomes permission — all gated through programmable money in the form of CBDCs with conditional payment functionality. The human being is already becoming a node in a management loop. From the system's view, your digital identity is just an asset tag.
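What that looks like in code is not exotic. Below is a minimal sketch of conditional-payment gating, assuming identity-bound wallets and purpose rules; the fields, scores, and thresholds are invented and imply nothing about any real CBDC pilot's design.

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    holder_id: str       # the digital identity: an asset tag
    score: float         # behaviour reduced to a number
    credentials: set     # permissions attached to the identity

def authorise(wallet: Wallet, merchant_category: str, amount: float) -> bool:
    """A payment clears only if identity, score, and purpose all pass."""
    if "verified_id" not in wallet.credentials:
        return False                          # no credential, no access
    if wallet.score < 0.5:
        return False                          # score below threshold
    if merchant_category == "fuel" and amount > 50:
        return False                          # purpose-bound spending cap
    return True

w = Wallet("user-42", score=0.7, credentials={"verified_id"})
print(authorise(w, "groceries", 30.0))   # True: every gate passes
print(authorise(w, "fuel", 80.0))        # False: the purpose rule blocks it
```

Note what is absent: a court, an appeal, a human. The decision is a return value.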
So the real question Huxley’s vision raises isn’t ‘should humans be upgraded?’ It’s: upgraded by whom, for what purpose, and under what authority?
Teilhard’s Omega Point
Teilhard de Chardin completes the picture. A Jesuit palaeontologist, he proposed that evolution itself is converging toward a unified planetary consciousness — what he called the Omega Point.
Teilhard imagined humanity developing a ‘noosphere’ — a layer of collective thought enveloping the Earth, eventually fusing into a single planetary mind. That may sound a tad weird until you notice how precisely it matches the stated ambition of global data infrastructure: planetary surveillance data, feeding a global Digital Twin, converging toward integrated management. When tech and governance circles talk about ‘planetary intelligence’ and the ‘global brain’, they’re not being poetic. They’re describing Teilhard’s noosphere in technical dress.
What Teilhard adds is the teleology — the conviction that this convergence isn’t just possible, it’s destined. That framing changes everything: a political project becomes an evolutionary inevitability. You don’t debate the Omega Point. You align with it or get left behind.
The question then becomes — how would a ‘global consciousness’ come to be?
AI Ethics as the Steering Layer
Once thinking becomes infrastructure, societies need a way to steer it without admitting what’s happening. You can’t very well sell the public on the proposition that ‘we’re transferring decision-making into systems and managing society through thresholds’. Instead, you sell safety, responsibility, fairness, trust, stewardship, and alignment.
That’s what ‘AI ethics’ functions as in practice: the legitimacy wrapper and parameter-setting layer for machine thinking. Laws tell you what you’re allowed to do; ethics tells you what you should do — and ‘should’ is more compatible with how computers operate: to a defined purpose. AI ethics defines the categories (harm, bias, risk, misuse), the duties (oversight, accountability), the constraints (human-in-the-loop, transparency), and the institutions authorised to interpret and enforce them.
In other words, AI ethics is how machine-level thinking becomes manageable and governable. And once it’s manageable, it becomes controllable through the same gates mapped elsewhere: standards, procurement, certification, finance, and platforms. Ethics becomes the interface between the machine’s power and the public’s consent.
This is ethics as governance in its purest form. The pattern isn’t limited to AI or neurotechnology — it even runs through environmental ethics, bioethics, business ethics, digital ethics. Each domain gets its own framework, its own vocabulary, its own expert class, operating out of sight, and with complete impunity. But the structure is identical: define the ethical categories, translate them into standards, embed them in infrastructure, make compliance a condition of access.
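That four-step structure is concrete enough to sketch. Assuming invented category names and thresholds (nothing here is drawn from any actual standard or framework), the whole pipeline fits in a few lines:

```python
ETHICAL_CATEGORIES = {"harm", "bias", "misuse"}            # 1. define

STANDARD = {                                               # 2. translate
    "harm":   lambda s: s["incident_rate"] < 0.01,
    "bias":   lambda s: s["disparity"] < 0.05,
    "misuse": lambda s: s["audit_passed"],
}

def certify(system: dict) -> bool:                         # 3. embed
    return all(check(system) for check in STANDARD.values())

def grant_access(system: dict) -> str:                     # 4. gate
    return "deployable" if certify(system) else "blocked"

print(grant_access({"incident_rate": 0.005, "disparity": 0.02,
                    "audit_passed": True}))                # deployable
```

Change a constant in STANDARD and you change who may operate — with no law ever passing through a legislature.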
Hermann Cohen argued that law should progressively converge with ethics — that legal obligation would eventually become indistinguishable from moral duty. This vision has historically been catastrophic: Mussolini, Hitler, and Soviet leadership all sought the same convergence, collapsing the distinction between state law and moral truth so that dissent became not just immoral but illegal.
The project continues today: the Institute of Noahide Code, with UN ECOSOC General Consultative Status, explicitly seeks to codify UN resolutions on ‘environmental ethics’ and ‘social justice’ into national legislation — ‘Legislating for Global Ethics’, as they put it. The mechanism has shifted from ideology and secret police to code and compliance. The structure is the same — no external ground from which to contest — but the enforcement is automated.
Neuroethics: The Next Layer
AI is thinking externalised — intelligence operating outside the skull. Brain-computer interfaces invert that: the system extends into the skull. Not metaphorically. Literally. Neural data becomes readable; stimulation becomes possible; the boundary between person and platform starts to blur.
This is where the pattern holds most clearly: neuroethics is to AI ethics as brain-computer interfaces are to AI.
AI ethics governs thinking outside the body. Neuroethics governs where the system meets the nervous system — the point where governance stops being about actions and starts accessing thoughts.
And because that boundary is existential, neuroethics arrives early. It has to. If a society is going to normalise neural data extraction, cognitive enhancement, or interface dependency, it needs a moral vocabulary powerful enough to pre-empt dissent. And in November 2025, UNESCO adopted the first global Recommendation on the Ethics of Neurotechnology — the institutional arrival of this framework at planetary scale.
The same pattern repeats: define a domain as too important to leave ‘unregulated’; establish an expert ethics framework; translate that framework into standards and protocols; embed it into institutions and infrastructure; make compliance a condition of access.
This isn’t transhumanism as a lifestyle choice. It’s transhumanism as an administrative trajectory — the human becomes an endpoint in the loop, governed by the machine’s general intellect through ‘ethics’.
The Governing Question Changes Shape
This is where the question — who sets the parameters? — cuts deeper.
With AI, it’s mostly about permission: who can transact, who can speak, who can work, who can access information. With brain-computer interfaces, it’s about personhood: cognitive liberty, mental privacy, identity continuity, autonomy at the level of sensation and impulse.
There’s a quiet asymmetry here that makes this more dangerous than normal political disputes. Laws are visible and contestable. Systems are opaque and executable. Ethics frameworks get treated as ‘non-political’ while doing deeply political work. And once systems are integrated, reversal becomes an engineering problem — not a democratic one.
So the old disputes — left versus right, party versus party — start to look like arguments inside a cabin while the route is being set from the cockpit of Spaceship Earth.
The End-State, Stated Plainly
Put all three parts of this essay together and the trajectory comes into focus:
First, the logic: science as authority, ethics as converter, measurement as language, gatekeeping as enforcement, emergency as system output.
Second, the infrastructure: the Earth Charter as ethics, the SDGs as targets, harmonised indicators as the sensing grid, programmable money as the primary enforcement layer.
Third, the destination: intelligence migrates into systems, the human migrates into the interface, and the process is framed as ‘evolutionary inevitability’.
At that point, governance becomes less about laws and persuasion, and more about thresholds and routing. Governance becomes less about citizens and more about compliance status.
If the trajectory continues.
The unsettling part is that none of this requires a villain, or a committee somewhere plotting world domination. All it requires is a system that sees human discretion as friction to be optimised, complexity as a reason for expert management, and thresholds as neutral technical matters. The system doesn't need intent, just incentives.
And because the world refuses to behave like a mechanical spacecraft, the system will keep tightening — not toward stability, but toward brittleness.
At that point, the question ‘who sets the goals?’ will have an answer — but not one arrived at through democratic deliberation. The navigator of Spaceship Earth is whoever controls the parameters: the expert committees, the standards bodies, the framework designers. The system’s purpose becomes perpetuation: closing the unclosable, managing the turbulence it generates, optimising for frictionless operation as an end in itself.
The autopilot won't specifically hate you. But it might decide you're surplus to requirements, and optimise for a world that doesn't include you — ‘ethically’. Or what passes for 'ethical' in the machine's logic, anyway.
So we're back to the question that won't go away: who sets the goals — once Marx’s ‘general intellect’ is in the machine, Teilhard’s ‘global consciousness’ runs it, and Huxley’s transhuman interface wires us in?
Anticipating Objections
‘You're overstating coherence. The Earth Charter, SDGs, CBDC pilots, and AI ethics frameworks aren't coordinated by any single authority’.
The essay doesn’t claim central coordination — it claims functional compatibility. The parts cohere not because a committee designed them together, but because they’re solving the same class of problem (legibility, risk management, scalability) with the same class of solution (standardisation, measurement, conditional access, threshold-based enforcement).
The BIS doesn’t need to coordinate with the IPCC or IEEE’s neuroethics working groups. They converge because they’re all optimising for governable systems. The logic converges because the incentives do.
‘You ignore countervailing forces — judicial pushback, political backlash, decentralised tech, implementation failure’.
These are real. But they describe friction, not reversal.
Brexit didn’t dismantle the infrastructure; it moved the UK outside one jurisdiction while the EU tightened its own. Court decisions strike down specific implementations; the next version routes around the ruling. Decentralised tech creates parallel systems that mainstream infrastructure then works to interoperate with or exclude. Political backlash elects governments that change the parameters — but never question the parameter-setting architecture itself.
Turbulence is what the system expects. That’s what ‘management by feedback’ means. Resistance becomes data for the next update cycle.
‘The “functions as” framing is unfalsifiable. If environmentalism “functions as” a governance carrier regardless of climate science, no evidence can challenge the claim’.
This misreads what kind of claim is being made.
The essay isn’t arguing about whether climate change is real. It’s analysing how a problem-space gets used. Environmentalism functions as a governance carrier whether the underlying science is correct or not — just as national security functions as a governance carrier whether any particular threat is real or not. True threat and exaggerated threat produce the same institutional response: expanded monitoring, expert authority, reduced contestability.
The claim is falsifiable: show that declaring a planetary crisis doesn’t generate pressure for planetary coordination, standardised measurement, and expert management. The historical record suggests otherwise.
‘The Marx-Huxley-Teilhard synthesis is poetic, not rigorous. These thinkers weren’t working on the same project’.
They weren’t collaborating. But they identified the same structural endpoint from different vantage points: intelligence migrating into systems (Marx), humans integrating into systems (Huxley), the process framed as inevitable convergence (Teilhard).
The question isn’t whether Marx read Teilhard. It’s whether the pattern they independently described is now being built. The BIS Unified Ledger exists. Purpose Bound Money pilots exist. Neuroethics frameworks exist. These are implementations of a trajectory those thinkers saw coming. The implementations are verifiable. What’s poetic about observing they’ve arrived?
‘You don’t engage with the possibility that some of this infrastructure might be necessary or beneficial’.
The essay addresses this directly. It doesn’t require a villain or a committee plotting world domination — just a system that sees human discretion as friction to be optimised.
Whether the problems the system claims to solve are real or exaggerated, the architecture produces the same effects. Usefulness doesn’t change structure. You can build a benevolent autopilot; it’s still an autopilot. The question is whether the architecture permits dissent once the parameters are set.
‘The tone is deterministic. You make resistance seem futile’.
The essay ends with a question, not a conclusion: who sets the goals — once Marx’s ‘general intellect’ is in the machine, Teilhard’s ‘global consciousness’ runs it, and Huxley’s transhuman interface wires us in?
That’s not determinism. That’s diagnosis. The essay describes a trajectory and asks whether we’re paying attention to where it leads. Accuracy isn’t pessimism. If the pattern is real, describing it clearly is the precondition for any meaningful response.
You can’t effectively counter things you don’t understand.
Now you do.
Merry Christmas and Happy New Year!