The Price of Permission
When government succeeds in anticipating citizens’ needs, it earns currency in the form of trust. The price of failure is a loss of legitimacy.
— OECD, Government of the Future (2000)1
Find me on Telegram: https://t.me/escapekey
Bitcoin 33ZTTSBND1Pv3YCFUk2NpkCEQmNFopxj5C
Ethereum 0x1fe599E8b580bab6DDD9Fa502CcE3330d033c63c
Executive Summary
This essay traces a ‘great transition’ currently underway in global governance: the quiet inversion of price from an expression of preference to a mechanism of permission.
Drawing on a wealth of top-tier primary documents published by the BIS, UNDP, OECD, European Commission, Rockefeller Foundation, Stimson Center, WHO, and WEF — among others — it maps an emerging architecture where foresight models generate outputs, those outputs become coefficients attached to transactions, the coefficients propagate through input-output matrices across entire economies, programmable settlement infrastructure gates transactions based on compliance, and physical enforcement systems operate at the boundary. The ratchet — tightening carbon budgets, escalating ‘true costs’, compounding planetary boundaries — ensures the squeeze intensifies without overt decree.
Simply put: prices are no longer discovered. Prices discover whether the system permits you at all. A command economy pretending to be a market.
The infrastructure is not hypothetical. Central banks representing 98% of global GDP are developing programmable payment systems. The EU’s Carbon Border Adjustment Mechanism attaches coefficients to imports. Digital Product Passports will track goods from production to disposal.
The UN’s ‘Quintet of Change’ embeds strategic foresight and behavioral science across the entire UN system. Upstream, bodies like NGFS, Basel, FSB, and FATF determine who receives financing and on which terms. Downstream, Europol documents the convergence of financial surveillance with physical enforcement through autonomous systems operating above human-speed accountability.
Modelled ‘planetary stewardship’ is framed as an ‘ethical imperative’; the language throughout is care, resilience, sustainability, protection. The mathematics throughout is: model, coefficient, propagate, condition, enforce. This essay does not prophesy; rather, it maps the default trajectory — the path that obtains if these systems operate as their architects intend.
Each layer is documented by the institutions building it. Each institution frames itself as doing good. The question is what happens when anticipatory governance couples to conditional settlement couples to autonomous enforcement.
The secondary question is why we were never honestly asked whether we wanted this future for the next generation.
I. The Inversion
There is a moment, not marked by any announcement or declaration, when the substance of a market economy is replaced while its forms remain intact. This transformation requires no revolution, no seizure of the means of production, no politburo issuing five-year plans. It requires only that the meaning of price be quietly inverted — from preference to permission.
This is not the familiar story of regulation shaping markets. Mixed economies have always featured prices influenced by policy — taxes, subsidies, tariffs, environmental standards. The transformation described here is different. It occurs when price stops being shaped by policy and starts functioning as clearance or denial at the point of transaction. When the transaction itself becomes conditional on compliance verified in real-time. When ‘how much does this cost?’ becomes ‘may you have this at all?’
The distinction matters greatly. A carbon tax raises the price of fuel; you pay more but the transaction clears. A conditional payment system checks your carbon allowance before the transaction clears; you may not transact at all if the condition fails. The first is price adjustment. The second is permission architecture.
II. What Anticipatory Governance Means
The United Nations Development Programme published a policy document in May 2025 titled ‘UNDP’s Policy Response’2 to its 2025 Human Development Report, ‘A matter of choice: people and possibilities in the age of AI’3. In the section on crisis prediction and analytics, the document states:
Deploy AI to enhance its ability to anticipate, manage, and mitigate crises effectively: UNDP is deploying AI-powered analytics to enhance crisis anticipation and prevention. The Crisis Risk Dashboard (CRD) employs machine learning for conflict forecasts and prevention impact estimates, combining multiple data sources for risk analysis in fragile contexts.
The three verbs form a sequence: anticipate, manage, mitigate.
This is the grammar of anticipatory governance — a concept that emerged from Leon Fuerth’s work on ‘Forward Engagement’4. Fuerth, who served as National Security Advisor to Vice President Al Gore5 throughout the Clinton administration6, founded the Project on Forward Engagement in 2001 to explore how governments could think systematically about long-range issues7. He described anticipatory governance as ‘a mode of decision-making that perpetually scans the horizon’ — a systems-based approach for enabling governance to cope with accelerating, complex forms of change.
Traditional governance operates on a familiar sequence: an observable problem emerges, public deliberation occurs, political decisions are made, implementation follows, and accountability is assessed based on outcomes. The legitimacy of intervention derives from the manifest reality of the problem being addressed. Citizens can evaluate whether the problem exists and whether the response is proportionate.
Anticipatory governance8 inverts this sequence9. A model predicts a problem10. A technical threshold is crossed within the model. Institutional response follows.
COVID-19 demonstrated the mechanism at scale: epidemiological models crossed case thresholds, lockdowns followed automatically, and interventions were justified by modelled futures no one could verify.
There is no accountability in the traditional sense because the intervention is justified by a counterfactual — a future that did not occur precisely because the system intervened, or so the logic runs. This is governmentality in Foucault’s sense11: not rule by command but the conduct of conduct, where compliance with modeled futures becomes rational self-governance. It is instrumentarianism12 — Zuboff’s term for governance through prediction13 — migrated from commercial extraction to administrative management: the same technical stack, different principals, ostensibly different ends.
The epistemological problem is severe. How do you verify a prediction about a crisis that was ‘prevented’? If the model predicted conflict and intervention occurred and no conflict materialized, was the model correct or did it generate a false positive which prompted unnecessary intervention? The system trends toward unfalsifiability: every non-event can confirm its predictive accuracy.
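The asymmetry can be illustrated with a toy simulation (every rate here is an invented assumption): if intervention follows every flag and each intervention is credited as a prevention, observers can count claimed preventions, but the fraction of flags that actually sat over a brewing crisis is precisely the counterfactual they cannot measure.

```python
import random

random.seed(0)

def claimed_vs_actual(hit_rate: float, false_alarm_rate: float, n: int = 100_000):
    """Toy predict-and-intervene loop. Assumptions (all illustrative): a real
    crisis brews in 5% of periods; intervention always follows a flag and
    always works; observers can count flags but never verify counterfactuals."""
    claimed = actual = 0
    for _ in range(n):
        brewing = random.random() < 0.05
        flagged = random.random() < (hit_rate if brewing else false_alarm_rate)
        if flagged:
            claimed += 1       # every intervention is scored as a 'prevention'
            actual += brewing  # ...but only these flags sat over a real crisis
    return claimed, actual

# A sharp model and a chance-level model both report thousands of
# 'prevented crises'; the share that were real (actual / claimed) is the
# one quantity no outside observer can check.
print(claimed_vs_actual(hit_rate=0.95, false_alarm_rate=0.05))
print(claimed_vs_actual(hit_rate=0.05, false_alarm_rate=0.05))
```

Under these assumptions, both models announce a large count of preventions; only the unobservable second number distinguishes foresight from noise.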
The UNDP document describes a Crisis Risk Dashboard14 using machine learning for ‘conflict forecasts’, a Risk Anticipation Data Hub15 providing ‘natural language querying’ of risk data, and a Digital Social Vulnerability Index16 providing ‘high-resolution mapping of social vulnerability... to identify marginalized populations with unprecedented precision’.
UNDP claims operational presence across 170+ countries. Its ‘AI Sprint’ launching in 202517 targets ‘accelerated AI enablement across 50+ countries’18. The gap between presence and full deployment matters — but the trajectory is legible.
What might ‘activation’ look like institutionally? The documents suggest adjacent capabilities rather than a unified command structure. DARPA’s ASKEM1920 system automates construction of epidemiological models. DoD-GEIS21 operates biosurveillance networks. The One Health22 framework integrates human, animal, and environmental health data. WHO maintains emergency declaration authority. These are separate institutions with separate governance, but they share data standards, interoperable systems, and overlapping personnel networks. The chain from model output to institutional response is not a single pipeline — it is a set of coupling points where prediction can trigger action across organizational boundaries. No single entity owns the threshold. That distributed quality makes the system harder to see and harder to contest.
Anticipatory governance is no longer merely a concept — it now appears as a repeatable civic technology, with its own process discipline. A Rockefeller Brothers Fund–backed ‘Project on Foresight and Democracy’ from 202023 trialed a four-part model: a Round Table ‘selected to represent the polity’, a Standing Advisory Group of foresight experts who set the agenda and define ‘drivers of change’, a panel of briefers, and a communications team tasked with converting discussion into structured outputs.
The key mechanism is not deliberation but iterated synthesis: verbatim notes are processed into ‘thematic minutes’, circulated back to participants, and used as ‘connecting links between meetings’, serving ‘as a system for learning, not just for remembering’. This is a governance feedback loop: inputs (expert drivers + tools), processing (facilitated discussion), outputs (themes), and feedback (minutes that condition the next round). The report explicitly frames the next step as scaling through ‘Round Table processes that are networked’.
And the justification logic matches the modern ‘polycrisis’ frame24: the Advisory Group’s drivers include advanced AI, synthetic biology, extreme climate change, and a ‘Pan-opticon’ of surveillance and behavior control — arguing these trends will cross a transition point where ‘standard measures of governance will lose their effectiveness’ within ‘ten to twenty years’. The premise is simple: the model must lead, because the public will always arrive late.
The Rockefeller Foundation’s $100 million Precision Public Health initiative25 (2019) operationalizes this logic in health systems. The explicit goal: ‘leverage data’s full potential to not only improve health outcomes, but also to prevent disease, illness, and pandemics before they occur’. The method: ‘predictive analytics can identify communities at high risk’ so that ‘preventive interventions can be more precisely targeted’ — informed by ‘data about the social determinants of health: income, education level, living conditions, and access to roads and transportation’. The WHO Director-General endorsed the framework26: ‘Precision Public Health that draws on the power of big data, predictive analytics, and granular and timely surveillance is essential for delivering the right interventions to the right people at the right time’.
Intervention justified by model output, not observable crisis. Targeting based on social determinants mapped with precision. ‘The right interventions to the right people’ — as determined by the model. The language is care, but the infrastructure is anticipatory selection.
The UNDP’s January 2023 policy brief, titled with perfect clarity — ’Choosing Your Tomorrows: Using Foresight and Anticipatory Governance to Explore Multiple Futures in Support of Risk-Informed Development’27 — frames the methodology explicitly: ‘Foresight and the concept of working with alternative futures grants policymakers and decision-makers the ability to become anticipatory’. The document describes anticipatory governance as requiring ‘shifts necessitated at the level of institutional processes, infrastructure, operational agility, culture, relationships and mindsets’. The goal: ‘To move from reactive risk management to anticipatory governance’. The method: combine historical data with ‘imagination of the multiple futures that are probable, plausible and preferable’. And the tool: the ‘iceberg model’ — where visible crisis events are merely the surface; below lie ‘patterns, trends, underlying structures and mental models’ that only systems thinking can reveal.
This is not merely rhetoric. The OECD’s official Anticipatory Governance page states the logic explicitly: ‘Governments equipped with the capacity to continuously use anticipatory intelligence and strategic foresight has a critical advantage in navigating the complexities of the modern world... These governments are not merely reacting to the challenges as they arise but are proactively preparing for a range of future scenarios. This approach enables them to stress-test their policies and systems... As a result, these governments can adapt more swiftly and effectively’28.
The OECD’s May 2025 report, ‘Building Anticipatory Capacity with Strategic Foresight in Government’29, documents a multi-country project to embed anticipatory governance in national government structures across Italy, Lithuania, and Malta. The report introduces an ‘Anticipatory Innovation Governance (AIG) framework’ with two dimensions: ‘Agency’ — the tools, methods, and knowledge for strategic foresight — and ‘Authorising environment’ — the institutional structures and leadership support ‘to legitimise and embed anticipatory practices into the policy cycle’. The report provides curricula, training modules, and organizational diagrams. The future is being proceduralized.
The European Commission has been operating this way since 2019, when strategic foresight was added to the portfolio of Vice-President Maroš Šefčovič. The Commission now publishes annual Strategic Foresight Reports30 — six editions from 2020 to 2025 — which ‘inform the Commission’s Work Programmes and multi-annual programming’. An EU-wide Foresight Network31 includes ‘Ministers for the Future’ from all 27 Member States32. Nine EU institutions coordinate through ESPAS (European Strategy and Policy Analysis System), holding annual conferences and publishing five-year Global Trends Reports33. The 2025 Strategic Foresight Report is titled ‘Resilience 2.0: Empowering the EU to thrive amid turbulence and uncertainty’34. Foresight is not an advisory function; it is embedded in regulatory process.
In November 2025, the WEF and OECD published ‘AI in Strategic Foresight: Reshaping Anticipatory Governance’35, surveying 167 foresight practitioners across 55 countries. The findings: most are already using AI to ‘scan for signals of change, analyse large datasets and support scenario development’. A growing group experiments with ‘AI as a creative or analytical partner in systems mapping and scenario design’. The report frames AI as transforming how governments ‘detect, interpret and act on signals of change’. The coupling of anticipatory governance to AI acceleration is documented in a joint WEF/OECD publication, not speculation.
The Stimson Center’s Global Governance Innovation Report 202436 provides the architectural blueprint. The report advocates for a UN Emergency Platform for ‘complex global shocks’ — a vehicle ‘directed by the Secretary-General that releases existing resources and coordinates a division of labor beyond Member States and UN agencies, incorporating a wider range of actors and capabilities’.
It proposes an ‘Earth Trusteeship Council for Global Commons Stewardship’37 to replace the defunct Trusteeship Council, with oversight authority over planetary boundaries. It calls for a ‘Declaration on Future Generations’ (adopted at the September 2024 Summit of the Future)38 enforced through an annual ‘Future Generations Review’ modeled on the Human Rights Council’s Universal Periodic Review. And it recommends ‘networked secretariats’ coordinating UN, World Bank, IMF, and WTO — with an International AI Agency39 maintaining a ‘chip registry’ to monitor global AI compute infrastructure.
The Stimson Center’s July 2025 follow-up report documents ‘slow yet visible headway’ in implementing these structures40. The Pact for the Future41, adopted September 2024, now has a monitoring toolkit, a high-level review scheduled for September 2028, and implementation tracking across its sixty Actions. Secretary-General Guterres launched the UN80 Initiative42 in March 2025 to ‘modernize the UN’s structure, priorities, and operations’.
UN80 is presented as crisis-driven bureaucratic reform — a response to liquidity crisis, US arrears, cost-cutting pressure. But it is the operational implementation vehicle for UN 2.043, which is explicitly anticipatory governance infrastructure. The UN 2.0 policy brief44 (Our Common Agenda Policy Brief 11) introduces the ‘Quintet of Change’ — five capabilities to be embedded across the entire UN system: data (‘improving how we collect, handle, govern and use data for better insights and action’), digital transformation (‘digitally enabled solutions’), innovation (‘viewing challenges as opportunities’), strategic foresight (‘learning structured methods to navigate change, imagine better futures and make better decisions today’), and behavioral science (‘applying knowledge of human behaviour to design evidence-based strategies and interventions that encourage positive change... to create better choices that work with, not against, the grain of human nature’).
The UN’s own language: ‘UN 2.0 lays the groundwork for a UN that not only adapts to change but anticipates and leads it’45.
The chain is explicit: Our Common Agenda46 (2021) → UN 2.0 ‘Quintet of Change’47 (2023) → Pact for the Future48 (2024) → UN80 Initiative49 (2025). The financial pressure is real. But it accelerates structural changes — unified data layer, cross-pillar integration, behavioral science deployment, foresight institutionalization — that survive after any budget crisis ends. The Data Commons50 and Technology Accelerator Platform51 become permanent substrate. The behavioral science element embeds nudge architecture at global institutional level52.
Each document is available. Each is published by institutions that consider themselves ‘defenders of human welfare’. Each is advancing — in careful, consultative, procedural steps — a governance architecture that operates on model outputs rather than democratic inputs. The language throughout is care, resilience, sustainability, protection. The mathematics throughout is: predict, target, intervene, before the population can evaluate whether the prediction was accurate or the intervention proportionate.
III. The Pivot Point
Central banks occupy a peculiar position in this architecture. They are perceived as technical institutions managing monetary policy through interest rates and money supply. But a G20 report published in October 2025, ‘The use of artificial intelligence for policy purposes’53, reveals capabilities that extend well beyond traditional monetary functions.
The Bank for International Settlements surveyed 83 central banks on AI deployment. The four primary areas: information collection (anomaly detection, data quality control), macroeconomic and financial analysis (nowcasting, sentiment analysis, inflation prediction), payment system oversight (suspicious transaction detection, AML monitoring), and supervision and financial stability (risk assessment, market surveillance).
Project Aurora54, conducted by the BIS Innovation Hub, pools transaction data across institutions and borders using privacy-preserving technology to detect money laundering patterns. The reported results: three times the detection rate, an 80% reduction in false positives, and ‘network-centric behavioral analysis’ using graph neural networks to map transaction flows.
But surveillance is only half the architecture. The other half is programmable settlement.
The BIS Annual Economic Report 2023, Chapter III (‘Blueprint for the future monetary system’)55, proposed a unified ledger: a programmable financial infrastructure combining central bank digital currencies, tokenised deposits, and tokenised real-world assets.
The 2025 report56 extended this to a ‘trilogy’ — tokenised central bank reserves, tokenised commercial bank money, and tokenised government bonds on a single programmable platform. The BIS explicitly positioned this against decentralised alternatives: stablecoins ‘fall short’ as sound money and ‘without regulation pose a risk to financial stability and monetary sovereignty’. The unified ledger maintains central bank primacy while capturing the functionality of programmable money. The document is explicit about what this enables:
Tokenisation... introduces two important capabilities. First, by dispensing with messaging and the reliance on account managers to update records, it provides greater scope for composability, whereby several actions are bundled into one executable package. Second, it enables the contingent performance of actions through smart contracts, ie logical statements such as ‘if, then, or else’.
Project Rosalind57, conducted by the BIS Innovation Hub with the Bank of England, demonstrated the retail implementation: a three-party clearing mechanism where Party A (buyer), Party B (seller), and Party C (validator) interact through a central bank API. The payment doesn’t complete until the validator confirms conditions are met.
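The pattern the Rosalind report describes can be sketched schematically. This is not the project’s actual API, only the three-party shape it demonstrates: funds move only if a validator confirms an external condition first.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Payment:
    buyer: str
    seller: str
    amount: float

# A 'validator' is any third party that must sign off before settlement.
# The validators below are purely illustrative stand-ins.
Validator = Callable[[Payment], bool]

def settle(payment: Payment, validator: Validator, ledger: dict) -> bool:
    """Three-party clearing: the payment completes only if the validator
    confirms the condition. A schematic, not Project Rosalind's real API."""
    if not validator(payment):
        return False  # condition failed: the transaction never occurs
    if ledger.get(payment.buyer, 0) < payment.amount:
        return False  # ordinary insufficient-funds check
    ledger[payment.buyer] -= payment.amount
    ledger[payment.seller] = ledger.get(payment.seller, 0) + payment.amount
    return True

ledger = {"alice": 100.0, "bob": 0.0}
approve_all: Validator = lambda p: True
deny_all: Validator = lambda p: False

print(settle(Payment("alice", "bob", 40.0), approve_all, ledger))  # True
print(settle(Payment("alice", "bob", 40.0), deny_all, ledger))     # False
print(ledger)  # {'alice': 60.0, 'bob': 40.0}
```

The structural point is that the validator sits inside the settlement path, not alongside it: there is no code path in which money moves and compliance is checked afterwards.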
Project Agorá58, launched in 2024 and now in the prototype building phase with findings expected in the first half of 2026, moves the unified ledger from blueprint to implementation. Seven central banks — including the Federal Reserve Bank of New York, Bank of England, and Bank of France — partnered with 41 major financial institutions (JPMorgan, Visa, Mastercard, Swift, Deutsche Bank, UBS) to build what the BIS describes as ‘a public-private programmable core financial platform’ integrating tokenised deposits with wholesale central bank money. The infrastructure for conditional settlement is not hypothetical. It is under construction by the institutions that would operate it.
This is the technical mechanism for price-as-permission. The architecture doesn’t just monitor transactions — it gates them. The payment system can automatically check a product’s carbon passport, verify compliance credentials, assess the buyer’s standing, and complete or block the transaction based on programmed conditions. Not as theoretical capability — as demonstrated design. As Langdon Winner observed, artifacts have politics59: the architecture makes permission-based access structurally default while making unconditional exchange structurally exceptional.
The Swiss National Bank’s Project Helvetia III60 (December 2023 to June 2024) piloted wholesale CBDC conditional settlement. China’s Digital Yuan introduced programmable payment features in 202561. Singapore’s Project Orchid tested ‘purpose-bound money’ where funds redeem only if conditions are met62. Sweden’s Riksbank continues e-Krona pilots exploring ‘payments that are conditional and dependent on factors such as external information’63. The ECB’s digital euro preparation phase tested conditional payments64 — transactions triggered only under specific conditions — with potential issuance by 202965.
The ECB’s framing is revealing: the digital euro ‘would never be programmable money, but it could facilitate conditional payments’. The distinction is semantic, because the programmability is placed in the wallet66. The substance is conditionality at the transaction layer. As of late 2025, 137 countries representing 98% of global GDP are exploring CBDCs67, with 49 pilot projects active worldwide68, cross-border wholesale projects more than doubling since 2022, and half of central banks in recent surveys developing programmable features.
Central banks do not autonomously determine social policy. But they sit at the settlement layer — the point where transactions either clear or fail. If legislatures or regulators mandate conditions, central bank infrastructure is where those conditions become operational. The choke point is technical; the decision to use it is political. But the technical capacity precedes and enables the political choice.
Upstream of the settlement layer sit the standard-setters who determine what receives financing, and on what terms. The Network for Greening the Financial System (NGFS) — a coalition of 134 central banks and supervisors — publishes climate scenarios that increasingly shape capital allocation across the global financial system69. The Basel Committee on Banking Supervision sets capital requirements70; climate risk weightings in these requirements determine which activities banks can profitably finance. The Financial Stability Board (FSB) coordinates regulatory policy across jurisdictions71, driving mandatory climate disclosure through the Task Force on Climate-related Financial Disclosures (TCFD) framework now embedded in regulation across major economies72. And the Financial Action Task Force (FATF) sets anti-money laundering standards through an open mandate73; these standards determine who can access the financial system at all. Its grey lists and blacklists function as de facto economic sanctions, excluding entire jurisdictions from correspondent banking relationships.
These bodies do not execute transactions. They define the parameters within which transactions become possible. The settlement layer is downstream; the standards layer is upstream. Together they constitute a governance architecture that operates through finance rather than legislation.
The policy proposals to activate this capacity are already circulating. The Fabian Society’s ‘In Tandem’74 (2023) proposes an Economic Policy Coordination Committee bringing together Treasury, Bank of England, Climate Change Committee75, and ‘social justice’ bodies to ‘align’ macroeconomic policy — explicitly positioning the central bank as root node in a cross-domain governance architecture rather than independent monetary authority.
Agustín Carstens, General Manager of the BIS, has been explicit. Across speeches in 2023-2024, he described a stack of ‘smart cloud’ (data layer), ‘smart ledger’ (programmable money and assets), and ‘smart regulator’ (controller) — arguing that policy rules can be encoded at the ledger layer of tokenised money. Not enforce rules after transactions. Embed rules into the transaction layer itself.
The shift this enables is fundamental: from ‘permitted unless prohibited’ to ‘allowed only if compliant’. The default flips from ex post enforcement (penalties after purchase) to ex ante clearance (permission before purchase).
IV. The Planetary Perspective
Moses Hess, writing in the 1840s, envisioned economic reorganization for justice — a transformation of property relations that would overcome alienation through collective ownership76. His vision required a perspective from which the totality of economic activity could be seen and rationally administered.
The planetary perspective provides this totality, but with different content.
The Stockholm Resilience Centre’s planetary boundaries framework77 identifies nine Earth-system processes that together define a ‘safe operating space’ for humanity. Their 2025 assessment concluded that seven of nine boundaries have been transgressed78. The framework originates in earth systems science; it describes biophysical thresholds.
The World Economic Forum’s November 2025 framework79 adopts this language and converts it to governance prescription:
The next frontier is to make natural assets visible in economic terms—embedding their worth in accounting, finance and policy.
This conversion of values into institutional requirements has a policy genealogy. Leonard Woolf’s 1916 blueprint for international government80 proposed that essential functions handled internationally would render national governments increasingly irrelevant — functional siphons of sovereignty operating through technical coordination rather than democratic mandate. The model was taken up by Alfred Zimmern and implemented through the League of Nations and, later, the United Nations.
The Fabian tradition refined this into ‘mission-oriented governance’: binding targets enforced through indicator metrics, where demonstrable alignment determines access to money, permissions, and institutional recognition. What began as Guild Socialist functionalism8182 — local guilds coordinating with national bodies feeding into international administration — has become the template for cross-domain authority operating through accredited intermediaries rather than electoral accountability.
The Stimson Center’s Global Governance Innovation Report 202583 elaborates the policy implications of what they term the ‘triple planetary crisis’ of climate change, biodiversity loss, and pollution. This becomes a ‘polycrisis’ connecting ‘socioeconomic, security, humanitarian, environmental, legal, and governance dimensions’.
The empirical observations underlying these frameworks may well be accurate; in this context, their accuracy is beside the point. What I examine is how the planetary boundaries framework performs a dual function: it describes earth systems and it constitutes jurisdiction over human activity within those systems. The same gesture that says ‘this is how the planet works’ implies ‘this is who may govern what you do’. Description and authority arrive together — without debate.
Notice the function of polycrisis framing. If problems are separate, they require separate governance responses. Democratic institutions can address them through deliberation. But if problems are polycrisis — interconnected, cascading, mutually reinforcing — then integrated response appears necessary. Cross-domain authority becomes justified. Siloed institutions appear inadequate. Only planetary-scale coordination seems sufficient. The framing creates pressure toward the governance form it describes as necessary.
The UN Emergency Platform84 proposal makes this activation mechanism explicit. The Secretary-General’s 2023 policy brief defines ‘complex global shocks’ as events characterised by cascading consequences across multiple domains — exactly the polycrisis logic that justifies cross-domain authority. Once activated, any domain claiming determinants over others can intervene everywhere: health determinants reframe housing, work, and education as medical jurisdiction; environmental determinants absorb economic policy and consumer behavior into ecological governance; security determinants capture inequality and health into Security Council authority.
The legal foundation already exists: Resolution 47/6085 (1992) formally expanded ‘peace and security’ to encompass ‘socio-economic factors as well as political and military elements’. The Stimson Center’s 2024 analysis of the Platform explicitly explores veto bypass mechanisms and automatic activation protocols. The polycrisis frame does not merely describe interconnected problems — it justifies the governance form required to address them.
The Harare Declaration86 (October 2024) demonstrates this domain expansion in practice. At the inaugural Climate and Health Africa Conference, Zimbabwe’s President Mnangagwa stated: ‘Climate change is not merely an environmental disaster. It is a public health emergency’.
Health ministers from 20 African countries endorsed the declaration, which ‘aligns with the newly adopted WHO framework for building climate-resilient and sustainable health systems’ and calls for ‘integrating climate change considerations into national health policies’. The semantic move is precise: by declaring climate a health emergency, health authorities claim jurisdiction over environmental policy, thus aligning with One Health87. WHO’s Regional Director for Africa endorsed the framework. The declaration explicitly calls for ‘embedding climate adaptation and mitigation strategies into national health plans’. Environment becomes health. Health institutions absorb climate governance. The domain expansion operates through declaration — the same mechanism the polycrisis frame enables at global scale.
The enforcement mechanism is stated explicitly:
Correcting price signals that encourage depletion means phasing out harmful subsidies, expanding nature-positive investment standards, widening access to green finance... Pricing water accurately can reduce waste...
Price signals as governance. Financial incentives as behavioral steering. Not command in the old sense — no commissar dictating production — but price, adjusted through coefficients, making certain activities expensive and others cheap, certain behaviors affordable and others prohibitive.
V. The Moment of Inversion
Here is the transformation that must be understood precisely:
In a market economy, price emerges from exchange between parties. You have something I want. I have something you want. We negotiate. The price expresses the intersection of our preferences, constrained by scarcity and shaped by our respective alternatives. Price is discovered through the transaction88.
When price is determined from a ‘planetary perspective’, something shifts. The price no longer expresses what you and I negotiated. It expresses what the system has determined the activity costs — not to us, but to ‘the planet’, to ‘future generations’, to ‘the safe operating space for humanity’.
But the planet does not have preferences. The planet does not negotiate. The planet is represented by ‘black box’ models — models built by those who control modeling infrastructure, running on compute concentrated in few hands, trained on data collected through sensing systems they operate, outputting assessments according to objective functions they have defined.
The Spaceship Earth metaphor89 clarifies the logic. A spaceship is a closed system90. Resources are finite. Passengers become variables in a closed-system model — resource demand, throughput, waste. The question shifts from ‘what do the passengers want?’ to ‘what can the vessel sustain?’ And who determines what the vessel can sustain? Those who control environmental monitoring, life support calculations, resource allocation algorithms.
When price expresses the system’s determination of cost to the vessel rather than the parties’ negotiation of value to each other, price stops being discovered and starts being administered. The market form remains — transactions, prices, apparent choice — but the substance changes.
Consider bread. Today you buy bread and the price reflects production costs, supply chains, competition, your willingness to pay. Now imagine systems converging: carbon coefficients attached to agricultural inputs (CBAM is operational at EU borders91), water footprint calculated for grain production (water pricing mechanisms exist)92, biodiversity impact assessed for land use93 (natural capital accounting is advancing94), supply chain risk scored for logistics95 (ESG frameworks are widespread).
This is not hypothetical. The Rockefeller Foundation’s 2021 report True Cost of Food96 provides the explicit methodology. Americans spend $1.1 trillion annually on food. The report calculates an additional $2.1 trillion in alleged ‘hidden costs’ — environmental, health, biodiversity, equity — bringing the ‘true cost’ to $3.2 trillion. Fourteen metrics across five impact areas: GHG emissions, water use, soil erosion, land use, pollution, diet-related disease, underpayment of wages, lack of benefits, occupational hazards. The report states: ‘We must accurately calculate the full cost we pay for food today to successfully shape economic and regulatory incentives tomorrow’. The recommendation: ‘formal integration of a true cost accounting framework into decision-making processes in public policy, private and public investments, and systems design’.
Note what the $3.2 trillion figure represents. It is not what food costs. It is what the model calculates food should cost when you price in the planetary perspective. The ‘true’ in ‘True Cost’ asserts that the modeled figure is more real than the negotiated price — that market discovery produces a false price, and only coefficient calculation reveals the true one. The methodology creates the coefficients. The policy recommendations create the enforcement pathway: ‘shape economic and regulatory incentives’, integrate into ‘public policy, private and public investments, and systems design’.
This is the conversion of food from market-priced commodity to coefficient-administered throughput. You are not paying for bread based on what you and the baker negotiated. You are paying what the model determines bread costs the vessel. Forward pricing97, administered from above, with no possibility of appeal. Based on ‘objective’ functions set by those who control the models.
And the language is care: ‘nourishing’, ‘equitable’, ‘sustainable’, ‘true’. The mathematics is: attach coefficients, propagate through supply chain, embed in procurement and policy. Connect this to the BIS architecture for conditional settlement and you have: scan item, read its passport, check coefficient against budget, clear or deny.
Formally, these systems exist separately. In practice, they are being connected through data integration and shared standards. The EU’s Carbon Border Adjustment Mechanism98 enters its definitive regime on January 1, 202699, following the transitional reporting period — carbon coefficients operational at scale, with certificate surrender requirements for importers.
The EU’s Digital Product Passport100 regulation under the Ecodesign for Sustainable Products Regulation101 (ESPR, entered into force July 2024) creates item-level tracking infrastructure — every product gets a machine-readable credential with embedded data on origin, materials, carbon footprint, compliance status. Textiles are first priority under the 2025-2030 working plan; batteries and electronics follow.
Once products have digital credentials and payments are programmable, connecting them is straightforward.
Even before hard permissioning at point of transaction, coefficient regimes change prices in a way that behaves like a ratchet.
Cap-and-trade102 and carbon-budget regimes103 — operating under the umbrella framework established by the UNFCCC — create administered scarcity in emissions allowances. Every economic activity that emits must offset against a finite and shrinking carbon budget. The coefficients attach at each production stage: the farmer’s fuel, the fertilizer, the transport, the processing, the packaging, the retail electricity. Each coefficient small in isolation. The sum substantial. The consumer pays the aggregate, laundered through ‘supply chain costs’.
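The per-stage arithmetic can be sketched directly. A minimal illustration in Python — every stage coefficient and the carbon price are invented for the example, not drawn from any cited source; the point is only that many small coefficients aggregate into a substantial consumer-facing surcharge:

```python
# Hypothetical per-stage emission coefficients for one loaf of bread
# (kg CO2e per loaf) and a hypothetical carbon price. Illustrative only.
STAGE_COEFFICIENTS = {
    "farm_fuel":    0.05,
    "fertiliser":   0.12,
    "transport":    0.08,
    "processing":   0.10,
    "packaging":    0.04,
    "retail_power": 0.03,
}
CARBON_PRICE = 0.25  # currency units per kg CO2e (assumed)

def consumer_surcharge() -> float:
    """Sum the small per-stage coefficients, price them, pass them downstream."""
    total_kg = sum(STAGE_COEFFICIENTS.values())
    return round(total_kg * CARBON_PRICE, 3)
```

No single stage exceeds 0.12 kg, yet the consumer pays the priced aggregate of all of them, invisible inside ‘supply chain costs’.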
The infrastructure for this is being deployed now. UNDP’s National Carbon Registry104 — recently accredited as a Digital Public Good105 — provides open-source software for countries to ‘manage national data and processes for trading carbon credits’. Designed to integrate with the World Bank’s Climate Action Data Trust106 and national measurement systems, the registry creates standardised digital infrastructure across NDC signatories for tracking, issuing, and trading carbon allocations. Tokenisation of carbon credits107 — blockchain-based traceability, real-time reporting, interoperability across registries — is moving from concept to mainstream adoption in 2025. The plumbing for the coefficient regime is being wired into the same digital infrastructure as payments.
This is inflation that monetary policy struggles to address, because it is structurally embedded as a policy-cost stack rather than a cyclical demand phenomenon. It is coefficient-push — administered scarcity progressively built into the cost structure of everything.
And here is the ratchet: the carbon budget tightens annually toward net zero by 2050108. As the budget shrinks, credits become scarcer. Scarcer credits command higher prices. Higher offset costs cascade through every input-output chain. Consumer prices rise. And the budget tightens again next year. In practice, the mechanism tends to ratchet tighter over time, because targets are set as declining budgets and compliance regimes are path-dependent.
The endpoint is visible in the mathematics. As the budget approaches zero, the price of emission approaches infinity. Economic activity itself becomes unaffordable for those without carbon allocation or the means to acquire it. The squeeze doesn’t stop until net zero is achieved — which means, for those priced out, net zero activity.
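The ratchet is easy to render as a toy model. All parameters here are assumptions: fixed emissions demand set against a budget that declines by a fixed fraction each year, with price proxied as demand over remaining budget:

```python
def allowance_price_path(demand: float, budget: float,
                         annual_cut: float, years: int) -> list:
    """Toy allowance price path: price ~ demand / budget, budget shrinking yearly.

    As the budget approaches zero, the price grows without bound --
    the mathematical endpoint described in the text.
    """
    prices = []
    for _ in range(years):
        prices.append(round(demand / budget, 2))
        budget *= (1.0 - annual_cut)  # the ratchet: next year's budget is smaller
    return prices
```

With demand 100 against a budget of 100 declining 10% a year, the five-year path is [1.0, 1.11, 1.23, 1.37, 1.52] — strictly rising, with no actor ever issuing a price decree.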
This is their grand plan.
And who absorbs the squeeze? Not those setting the coefficients. Those for whom food is already a significant portion of income. Those the UNDP’s Digital Social Vulnerability Index identifies with ‘unprecedented precision’109.
The Rockefeller report frames this as revealing ‘hidden costs’. But the costs aren’t hidden — they’re created by the methodology. The methodology decides what counts as cost, how to calculate it, what discount rate to apply to ‘future generations’, what monetary value to assign to biodiversity loss or soil erosion. Those are choices, and they’re made by those who control the models. Then the ‘true cost’ is presented as if it were discovered rather than constructed. As if the planet had preferences and the model simply read them.
The circuit completes when coefficient methodology connects to anticipatory governance. You don’t wait for food insecurity to manifest. You model it forward. You identify vulnerable populations with ‘unprecedented precision’. You intervene pre-emptively. You relocate populations from areas ‘deemed unsafe for habitation’. For their protection. Even if they oppose.
The language: nourishing, equitable, sustainable, resilient.
The mathematics: coefficient accumulation → price increase → selection pressure → vulnerability identification → pre-emptive intervention.
The BIS describes exactly this integration at the transaction level110:
A payment between two individuals, executed via a smart contract, would bring together the users’ banks (as providers of tokenised deposits) and the central bank (as provider of CBDC). Should the payment be conditional on some real-world contingency, that information would also be included.
You scan an item at checkout. The system reads its passport. Checks compliance against current thresholds. If compliant: payment completes. If non-compliant: payment blocked or surcharged automatically. Protocol-level enforcement: no human intervention.
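The checkout logic just described can be sketched in a few lines. The passport fields, thresholds, and surcharge band below are all assumptions for illustration — no published schema is being reproduced:

```python
# Assumed compliance thresholds per passport field (hypothetical units).
THRESHOLDS = {"carbon_kg": 1.0, "water_litres": 200.0}
SURCHARGE_RATE = 0.25  # assumed penalty when a limit is exceeded but not doubled

def settle(passport: dict, price: float) -> tuple:
    """Protocol-level gate: returns ('clear'|'surcharge'|'deny', amount payable)."""
    # The worst ratio of declared coefficient to its threshold decides the outcome.
    worst = max(passport.get(field, 0.0) / limit
                for field, limit in THRESHOLDS.items())
    if worst <= 1.0:
        return ("clear", price)            # payment completes
    if worst <= 2.0:
        return ("surcharge", round(price * (1 + SURCHARGE_RATE), 2))
    return ("deny", 0.0)                   # payment blocked, no human in the loop
```

The point is architectural rather than numerical: the decision executes at settlement, before any human review is possible.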
This is the threshold where market economy and command economy become difficult to distinguish by observing transactions alone. The forms persist, but the substance inverts.
VI. The Ghost Becomes Machine
In ‘The Ghost of Wassily Leontief’, we saw how the input-output matrix111 creates a position from which total economic activity becomes visible and, in principle, administrable. Leontief’s framework was descriptive — an analytical tool for understanding interdependencies. The same framework becomes prescriptive when coefficients are attached to activities and enforcement mechanisms exist to adjust behavior in response.
The transformation happens in three steps.
Leontief’s input-output matrix112 begins as a map of dependence: a table of technical coefficients showing how much of each input sector is required to produce one unit of output in another. In its original use it is descriptive — an accounting snapshot of the economy’s internal plumbing.
The first inversion happens when external coefficients are attached to each input — carbon intensity per tonne of steel, water footprint per kilogram of grain, biodiversity impact per hectare. The same accounting structure then becomes a propagation mechanism. It does not merely show interdependence; it transmits constraints across the entire production graph. A coefficient added at one node becomes a cost that ripples through every downstream sector that depends on it.
The deeper flip is from price to price-as-risk-score. In a market frame, the matrix helps estimate how shocks affect prices. In a Spaceship Earth frame, the ‘shock’ is not market scarcity but budgeted limits — a finite emissions envelope, a capped nutrient load, a boundary condition. The matrix then functions as a feasibility test: it estimates whether a proposed pattern of production and consumption remains inside the permitted operating space. Risk, here, means risk of exceeding a budgeted boundary condition — carbon, water, land, biodiversity — rather than risk of insolvency or ordinary market volatility. When the output of that test is wired to settlement — through conditional payments, product passports, and validator architectures — the result is not a higher price but a gate: clearance or denial.
In other words: the matrix becomes a planetary spreadsheet of allowed throughput. Price ceases to express exchange between parties and becomes the system’s computed allowance for activity under constraint.
Input-output accounting is the computational missing link. It is the only widely institutionalized method for converting planetary constraint into economy-wide coefficients that can be propagated through the production graph, compared across sectors, and eventually enforced. It makes planetary constraints operationally administrable rather than merely rhetorical. Without it, you can measure emissions at smokestacks — but you cannot consistently allocate embodied impacts through complex supply chains. With it, you gain three capabilities essential to planetary administration: attribution (allocating responsibility through the production graph), propagation (transmitting coefficients through every downstream dependency), and comparability at scale (a common accounting grammar that lets you compare apples to steel to bread to shipping in the same language). That combination is what turns planetary limits into administrable constraint.
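Attribution and propagation can be shown numerically. A two-sector toy model in Python, with invented coefficients: direct emission intensities are propagated through the Leontief inverse (I - A)^-1 to give embodied intensities per unit of final demand, and a feasibility gate then checks a demand pattern against a budgeted envelope:

```python
# Hypothetical technical coefficient matrix A: A[i][j] = units of input
# from sector i required per unit of sector j's output (two-sector economy).
A = [[0.2, 0.3],
     [0.1, 0.4]]
DIRECT = [0.5, 0.8]  # assumed direct kg CO2e per unit of output, by sector

def leontief_inverse(a):
    """Invert (I - A) for a 2x2 matrix via the adjugate formula."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def embodied_intensities():
    """Total intensities e^T (I - A)^-1: emissions embodied per unit of final demand."""
    linv = leontief_inverse(A)
    return [sum(DIRECT[i] * linv[i][j] for i in range(2)) for j in range(2)]

def feasible(final_demand, budget):
    """The gate: does this final-demand pattern stay inside the emissions budget?"""
    e = embodied_intensities()
    return sum(e[j] * final_demand[j] for j in range(2)) <= budget
```

Even in two sectors, the embodied intensity of each sector exceeds its direct intensity, because upstream dependencies are attributed through the graph — the same computation, scaled to hundreds of sectors, is what makes economy-wide coefficient propagation administrable.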
But calculation is not control. Input-output accounting is necessary but not sufficient. For the computed constraint to become enforceable, the matrix must be wired to the rest of the stack: digital identity to tag assets and actors, sensing systems to capture and verify data, standards to make coefficients interoperable across jurisdictions, accreditation to determine who may participate, audits to verify ongoing compliance, and settlement architecture to gate transactions based on the result.
Accreditation and audit deserve particular attention. They are the soft enforcement layer that precedes the hard one. Before you reach the transaction gate, you must be recognized as a legitimate participant — accredited by bodies that certify your compliance with standards you did not set. ESG ratings determine corporate access to capital. Carbon certification bodies verify offset validity. Professional accreditation gates access to regulated industries. The ‘validator’ in the BIS three-party clearing mechanism is an accreditation function embedded at the transaction layer itself.
Audits verify ongoing compliance. They are not neutral assessments but enforcement mechanisms — the threat of failing an audit, losing accreditation, being excluded from markets or institutional recognition. The audit doesn’t punish directly. It withdraws the permission to participate. The Fabian model113 of ‘mission-oriented governance’114 operates precisely through this logic: binding targets enforced through indicator metrics, where demonstrable alignment — verified by audit, certified by accredited bodies — determines access to money, permissions, and institutional legitimacy.
This creates a compliance gradient before physical enforcement becomes relevant. You don’t need drones to exclude non-compliant actors from the economy. You withdraw their accreditation. You fail their audit. You downgrade their rating. They become invisible to the systems that allocate capital, clear transactions, grant permissions. The soft exclusion precedes the hard one — and for most purposes, the soft exclusion is sufficient, even superior because it’s less visible.
The Rockefeller Foundation’s 2022 report ‘What Gets Measured Gets Financed’115 makes explicit the intermediate step between planetary constraint and transaction-level permission: measurement is not merely descriptive; it is the precondition for capital allocation. The report frames climate finance as fundamentally a measurement and traceability problem: ‘observers often cite the lack of reliable and transparent tracking as a major barrier to increased investment’. There is ‘no consensus on how to measure and report climate finance’; ‘data is poorly measured and tracked’. The solution: taxonomic standards, mandatory disclosure regimes, and what the report calls ‘hard policy mandates’ that make reporting consistent and complete. TCFD-style disclosure requirements for banks, investors, and corporations116 — measurement infrastructure imposed by regulatory force.
The logic is explicit: ‘improved data can drive climate finance’. You cannot finance what you cannot measure. Therefore you must measure everything. Where the OECD (2000) framed legitimacy as a function of anticipatory performance and managed trust, Rockefeller/BCG frames financing capacity as a function of traceability, taxonomy, and mandated disclosure — turning reporting standards into a prerequisite for investment flows. This is the path that becomes conditional settlement in the BIS architecture: coefficient → measurement → disclosure → capital access → transaction clearance.
Leontief is the CPU. Programmable settlement is the actuator. Digital IDs and product passports are the labels and sensors that tell the actuator what it is touching. The input-output matrix sits at the center of the architecture, mediating between the abstract (models, frameworks, planetary perspective) and the manifest (transactions, enforcement, the gate that opens or closes) — Tipheret in a system that presents administrative harmony as cosmic balance.
What the 2025 documents reveal is infrastructure that operationalizes this:
Sensing layer: Digital Earth systems117, satellite constellations, IoT networks, biosurveillance systems, financial transaction monitoring — continuous data collection across domains. The 50-in-5 campaign, backed by UNDP and the Gates Foundation118, is deploying digital public infrastructure (digital ID, payments, data exchange) across 30 countries toward a target of 50 by 2028 — building the sensing and identity layer that connects individuals to the systems that would track them. The World Bank estimates 800 million people still lack official ID and 2.9 billion lack digital ID systems119; even where systems exist, only 32% of adults can access them and 23% use them. These are the populations being enrolled.
Modeling layer: AI systems processing data at scale. UNDP’s Crisis Risk Dashboard for development contexts120, BIS cross-border transaction analysis for financial systems, DARPA’s ASKEM for biosurveillance121. The pattern that connects them: modeling tools developed in one domain migrate to others.
Coefficient layer: Carbon coefficients attached to products (CBAM operational), ESG scores attached to corporations (shaping capital access), risk scoring systems attached to individuals (credit scoring long-established; AML risk scoring emerging via Project Aurora122; health risk scoring shaping insurance; UNDP’s Digital Social Vulnerability Index mapping populations). The ECB’s November 2025 climate indicators now track carbon emissions across euro area bank portfolios123 — coefficient surveillance at the central bank level, monitoring ‘transition risk’ in the financial system. Connected by the logic of coefficient attachment, translating identity into calculable risk. Input-output accounting is the computational substrate that makes coefficient propagation possible across the entire economy.
Enforcement layer: Central banks at the settlement choke point, programmable payment architecture demonstrated, conditional transaction clearing technically enabled124. The unified ledger architecture transforms the economy into something resembling a programmable operating system — policy layer, settlement layer, validation logic, smart contracts executing rules125.
Governance layer: Global frameworks defining what counts as compliant, risk, sustainable. The Stimson Center maps 17 intergovernmental processes feeding into Pact implementation126. The Pact for the Future now has a high-level steering committee chaired by the UN Secretary-General127, with six working groups — including Digital Technologies involving 35+ UN agencies128 — translating commitments into action. The Seoul Statement on AI Standards129 (December 2025), issued jointly by ISO, IEC, and ITU, commits to ‘actively incorporate socio-technical dimensions in standards development’ and ‘deepen the understanding of the interplay between international standards and human rights’ — technical standards explicitly linked to behavioral governance. At national level, the same architecture replicates: mission frameworks define goals, indicator metrics determine compliance, accredited intermediaries mediate access, conditional finance steers behavior — clearinghouse democracy in institutional form, where alignment with technocratic missions replaces electoral mandate as the basis for legitimacy.
The UNDP document describes monitoring of information itself: eMonitor+130, ‘an AI-powered analysis platform that expands national stakeholders’ capacity to identify and address information environment challenges... hate speech, disinformation, and tech-facilitated gender-based violence. Now deployed in over 15 countries and used daily by more than 150 monitors’.
VII. The Ghost Gains a Body
Everything described so far operates at the transaction layer. The coefficient determines permission. The ledger gates the exchange. But what enforces the gate in physical space?
The infrastructure described in Sections III through VI is computational. It calculates, monitors, conditions, clears or denies. It operates on data flows, transaction records, digital identities. What it lacks is a body.
That is changing.
Europol’s ‘The Unmanned Future(s)’ report131 (December 2025) does not describe clerks with rubber stamps. It describes the transition of military-grade surveillance and control infrastructure into the civilian sphere. The language is explicit: ‘digital becomes physical’. The report envisions a ‘phygital’ society where autonomous systems — drones, ground robots, underwater vehicles, surveillance platforms — operate continuously across what they term ‘volumetric jurisdiction’. Not two-dimensional patrol routes but three-dimensional control of space itself.
But Europol is not alone. The physical enforcement layer is being built simultaneously across every major security institution.
Frontex (EU Border Agency) announced in November 2025 its framework for ‘Emerging Technologies for Border Management’132: UAVs, AI/ML for predictive analytics, IoT sensors, advanced biometrics, autonomous systems, blockchain-based platforms. Their February 2025 Industry Day focused on ‘AI Tools for Seamless Border Checks’133. The specification: ‘AI-enabled surveillance installations (surveillance towers) and autonomous systems (e.g., drones and networked heterogeneous robotic systems)’. The capabilities described: ‘cognitive robotics’, ‘sensory computing’, biometric scanning at scale.
INTERPOL’s Global Biometric Hub went live in October 2023134 and is now deployed to 196 member countries — effectively global coverage. Capacity: up to one million searches per day against fingerprint, palm, and face biometrics. Their NEXUS system135 became operational in Q1 2025 with ‘increased use of AI, enhanced use of entity data’. The INSIGHT platform enables ‘predictive policing’136. Their First Future of Policing Congress convened in October 2024 to coordinate the trajectory137.
NATO’s AI Strategy and Rapid Adoption Action Plan138 (2025) describes ‘blurred borders with the civilian sector’ and the ‘weaponisation of civilian life’. The explicit goal: ‘meaningful steps towards autonomy’. Academic analysis of the strategy concludes that ‘autonomy is now more a matter of choice than of means’ — the technology exists; deployment is policy decision. The Rapid Adoption Action Plan mandates integration of new technologies ‘within a maximum of 24 months’. Funding: €1 billion NATO Innovation Fund139, plus DIANA140 (Defence Innovation Accelerator for the North Atlantic) to fast-track dual-use systems.
The market confirms the build. Border security technologies: $36.21 billion in 2025, projected to reach $60.89 billion by 2033141. Surveillance systems hold 34.53% market share. Unmanned systems are the fastest growing segment at 9.80% CAGR. The commercial specification: ‘autonomous monitoring capabilities’, ‘real-time threat detection’, ‘advanced radar integration’.
This is not one agency’s pilot project. This is convergent institutional commitment across law enforcement (Europol and local American police departments142), the FBI143, the DHS144, EU border control (Frontex), global policing (INTERPOL), and military alliance (NATO) — all building toward the same capability set in the same timeframe, with commercial markets scaling to meet demand.
In the United States, the transition is already operational. Flock Safety’s AI-powered surveillance network145 — deployed across thousands of police departments — now ‘actively evaluate[s] each of us to make a decision about whether we should be reported to law enforcement as potential participants in organized crime’. The system does not respond to observed violations; it generates suspicion algorithmically, flagging vehicles and individuals for investigation based on pattern-matching against behavioral models. This is the shift from ‘permitted unless prohibited’ to ‘allowed only if compliant’ — applied to physical movement, already at scale.
As for the Europol report, it states:
Robots and drones bring the digital world to the physical world, requiring adaptation to deal with automated crime and crime conducted in the public by actors out of physical reach for law enforcement… As unmanned systems become increasingly autonomous, equipped with AI and task-based controls, they are gaining agency to act on their own... exacerbating the jump of cybercrime from the digital to the physical world.
Agency to act on their own in physical space, based on data inputs they process faster than humans can audit.
The sensing layer this enables is described without euphemism:
Unmanned systems, navigating our world and interacting with us and each other will be observing the world around them, with us in it. When these become ubiquitous in society, this will mean that there is the possibility to be observed almost everywhere, anytime.
Europol calls this the transition from ‘transparent battlefield’ to ‘transparent society’146. The military logic of total situational awareness migrating to civilian context. Social robots collecting ‘intimate data’ in private spaces. Surveillance ‘extending to people’s privacy spaces’. The report acknowledges this ‘significant threat to personal privacy’ while treating it as operational reality to be managed, not prevented.
The future operating environment for law enforcement will likely see a shift from monitoring two dimensional surfaces to three dimensional volumes… With an increase in unmanned systems, under water and in the air, the planes in which humans operate will become increasingly multi-dimensional.
The command architecture proposed is telling: ‘Command, Control, Collaboration, and Autonomy (C3A)’147 — frameworks where systems ‘operate more autonomously, making decisions and taking actions in real-time, and that human operators must be able to collaborate with these systems to achieve shared goals within necessary timeframes’. Human operators collaborating with autonomous systems. Within timeframes the systems determine.
The report notes that as autonomy increases, accountability diffuses: ‘When autonomous systems act... [this] may blur the lines of responsibility’. No one is responsible when the system acts.
Two ‘Genesis’ programs announced in 2025 reveal the trajectory.
Genesis AI148 (private, $105 million funding, July 2025) describes itself as ‘a scalable data engine... to train a universal robotics foundation model — capable of controlling any robot, for any task, anywhere’. The company claims their simulation environment can process up to 43 million frames per second on optimized hardware — orders of magnitude faster than real-time observation. The development pipeline: train in controlled simulation, then deploy in physical environments. The explicit goal: systems ‘thriving in messy, unconstrained environments’.
Controlling any robot for any task… anywhere… at speeds that exceed human observation.
The White House Genesis Mission149 (Executive Order, November 24, 2025) calls for ‘robotic laboratories and production facilities with the ability to engage in AI-directed experimentation and manufacturing, including automated and AI-augmented workflows’. Government facilities where AI directs robotic systems in experimental and manufacturing operations.
The stack now extends beyond the monetary layer:
Monetary layer: Conditional settlement (BIS unified ledger, CBDCs, programmable payments)
Coefficient layer: Behavioral scoring (carbon, ESG, identity, vulnerability indices)
Accreditation layer: Gatekeeping mechanisms (certification bodies, ESG ratings, audit regimes, validator nodes)
Sensing layer: Ubiquitous surveillance (Europol’s ‘transparent society’, 50-in-5 digital ID)
Physical enforcement layer: Autonomous systems that can act on coefficient outputs in real-time
The critical observation is this: when model outputs are wired to systems that can restrict movement and access in physical space, governance stops being advisory. It becomes enforceable at machine speed, with accountability diffused across vendors, operators, and protocols. The C3A command architecture anticipates exactly this: autonomous decisions and actions executed in real time, within timeframes the systems themselves determine.
No one is responsible when the system acts.
The accountability gap is real.
The juridical foundation for physical enforcement of coefficient compliance is already being laid. The UN Special Rapporteur on Environment’s 2024 statement on the ‘Right to a Healthy Environment’150 contains a formulation worth noting: the right ‘guarantees environments that are ecologically healthy, regardless of direct impacts on people’.
The environment gains rights independent of — and potentially against — human welfare. If humans can be characterized as threats to environmental health, enforcement of environmental rights could logically require managing that threat.
The policy framework for such management already exists. UNHCR’s guidance on planned relocation establishes it as a legitimate disaster risk reduction measure151:
When other options have been exhausted, planned relocation may be the most effective way to save lives and reduce displacement risk. It may be required after disaster displacement has occurred if a place of origin has been deemed unsafe for habitation, or it may be a pre-emptive measure to reduce the vulnerability of people living in areas exposed to high levels of disaster risk.
Pre-emptive relocation, based on ‘black box’ risk assessment.
The guidance acknowledges the coercive dimension directly:
In some cases, Planned Relocation will be initiated by persons or groups of persons and will reflect their level of risk tolerance. In other cases, States will decide that people must be moved for their safety and protection, even though they may oppose Planned Relocation.
States will decide — even though the people may oppose. The framing is humanitarian — protection, safety, reducing vulnerability. The substance is forced movement based on model outputs that determine which areas are ‘deemed unsafe for habitation’. The people have no influence on the models.
The operational infrastructure follows. Related guidance calls for ‘interoperable information management systems to identify and follow the movements of displaced people’152 and ‘land banking’153 — pre-allocated land for ‘potential permanent relocation in the event that places of origin are no longer habitable’154.
Now connect the documented pieces:
Environmental rights ‘regardless of direct impacts on people’ (UN Special Rapporteur 2024)
Planned relocation as pre-emptive measure, States deciding people ‘must be moved’ even if they oppose (UNHCR guidance)
Autonomous enforcement with ‘agency to act’ in ‘volumetric jurisdiction’ (Europol 2025)
Coefficient infrastructure determining compliance status in real-time (BIS, CBAM, ESG ratings)
Each element is documented separately; each is real. The question this essay raises is what happens when they connect. The coercive coupling is a governance choice — most likely activated under emergency authorities — not a default operating mode.
Under emergency activation conditions — the ‘complex global shock’ that the UN Emergency Platform proposal defines as justification for cross-domain authority155 — the coupling becomes operational. The coefficient determines who is compliant. The ledger determines who may transact. The enforcement layer determines who may move. All operating at speeds that make meaningful human oversight impossible.
This is not administration in the traditional sense. Administration implements rules made by others through deliberative process. What this architecture enables, in the limit, is something closer to automated sovereignty: infrastructure that generates rules through model outputs, embeds them at the settlement layer, and enforces them through systems operating above human-speed accountability.
A sovereign makes rules and enforces them with a monopoly on legitimate force. The documents describe a system that:
Makes the rules through model-defined coefficients and thresholds — the planetary perspective, the polycrisis logic, the risk scores. Not democratically deliberated but computationally determined.
Embeds the rules at the monetary settlement layer — the ‘if-then-else’ of the unified ledger, the conditional payment that clears or denies.
Enforces the rules through autonomous systems that control physical space — acting on coefficient outputs in real-time, with accountability diffused across the architecture.
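The 'if-then-else' of conditional settlement can be sketched in a few lines. This is a toy illustration only, not any actual BIS or unified-ledger design; the function names, score categories, weights, and threshold are all invented:

```python
# Hypothetical sketch of settlement-layer conditionality: a transaction
# clears only if the payer's composite coefficient stays under a
# model-supplied threshold. Permission, not price.

def composite_coefficient(scores: dict[str, float],
                          weights: dict[str, float]) -> float:
    """Weighted sum of per-domain risk scores (carbon, social, etc.)."""
    return sum(weights[k] * scores[k] for k in weights)

def clears(amount: float, scores: dict[str, float],
           weights: dict[str, float], threshold: float) -> bool:
    """The 'if-then-else' of a programmable ledger."""
    return composite_coefficient(scores, weights) <= threshold

# The same payer, the same purchase: only the threshold moves.
scores = {"carbon": 0.4, "social": 0.3}
weights = {"carbon": 0.6, "social": 0.4}
print(clears(100.0, scores, weights, threshold=0.5))  # transaction clears
print(clears(100.0, scores, weights, threshold=0.3))  # transaction denied
```

Note what the sketch makes visible: nothing in the transaction itself changes between the two calls. Tightening the threshold silently converts a permitted payer into an excluded one.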
When coefficients determine transaction permission and autonomous systems control physical movement, the architecture is not advisory. The failure mode is not merely that you cannot transact, or move, or even be present — it is that you cannot exist in surveilled space without triggering intervention.
Koestler’s ghost in the machine156 gains a body. This is an implementation of Zev Naveh’s Total Human Ecosystem157.
VIII. What Price Discovery Determines
In a market economy, price discovery determines what things cost — what you must give up to obtain them. The price of bread tells you how much labor you exchange for nourishment. The interest rate tells you the price of time. The wage tells you what your capacities command158.
When price becomes permission administered from a planetary perspective, price discovery determines something else. The systems are at different stages:
Already operating:
Whether you eat — Coefficient systems attach to food production now159. Carbon pricing affects agricultural inputs in some jurisdictions160. These currently affect price levels, not transaction clearance. The infrastructure exists; the full integration does not.
Whether you travel — Mobility restrictions operated during COVID through digital pass systems161. Those were temporary, justified by emergency. The architecture persists. Congestion pricing is implemented162. The pressure is toward coefficient-adjusted mobility.
Whether you work — ESG ratings affect institutional employment in some sectors163. Risk scoring shapes hiring164. Platform work is mediated by algorithmic assessment.
Whether you speak — Content moderation operates at platform level165. Payment processors have deplatformed individuals for speech-related reasons166. UNDP’s eMonitor+ suggests the model is being exported as development assistance.
Logical endpoint:
Whether you continue existing — This is where the architecture points if its logic extends to conclusion. Net contribution calculations. Care cost projections. Remaining productive value estimates.
The components exist separately: UNDP’s Digital Social Vulnerability Index maps populations with ‘unprecedented precision’167. Health systems model lifetime cost trajectories168. Insurance prices mortality risk. Central bank infrastructure could enforce conditions at transaction level169. The question is whether the logic that connects them, if followed, trends toward optimization of human populations through financial mechanisms.
IX. The Market Logic of Managed Decline
The progression from ‘what you may buy’ to ‘whether you exist’ requires recognizing that financial instruments linking returns to population outcomes already exist in normalized forms.
The UNDP’s Digital Social Vulnerability Index identifies ‘marginalized populations with unprecedented precision’. The stated purpose is development assistance — targeting aid to those who need it. The same precision that enables assistance enables optimization.
The carbon ratchet described in Section V is itself a mechanism of managed decline. As the budget tightens and prices rise, those on the margins are progressively priced out — not by decree, but by the mathematics of coefficient accumulation. You don’t need to decide who may not eat, travel, or heat their homes. You tighten the budget and let the prices do the work. The squeeze is the selection mechanism.
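A minimal sketch of that arithmetic, with invented numbers: as the permitted budget tightens against constant demand, the effective price rises and the lowest incomes drop out. Note that no line of the code decides who is excluded — the squeeze does the selecting:

```python
# Illustrative only: scarcity pricing under a shrinking permitted budget.

def price(budget: float, demand: float, base_price: float) -> float:
    """As the budget falls below demand, the effective price scales up."""
    return base_price * max(1.0, demand / budget)

def can_afford(income: float, budget: float,
               demand: float, base_price: float) -> bool:
    return income >= price(budget, demand, base_price)

incomes = [10, 20, 40, 80]
for budget in (100, 50, 25):  # the ratchet: same demand, tighter budget
    served = [i for i in incomes
              if can_afford(i, budget, demand=100, base_price=10)]
    print(budget, served)  # each tightening drops the lowest income
```

Halving the budget doubles the price and removes the poorest cohort; halving it again removes the next. The selection is an emergent property of the parameters, exactly as described above.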
And when, under emergency activation conditions, the squeeze extends from financial to physical — when enforcement systems operating on coefficient logic control not just transaction clearance but movement through surveilled space — the selection mechanism gains physical enforcement. The system doesn’t merely price you out. It can physically exclude you from zones, restrict movement, interdict access. At speeds no human auditor can review.
Consider existing financial instruments that formalize this logic:
Social impact bonds170: Investors fund interventions; returns depend on measured outcomes. These contracts typically target specific improvements — reduced recidivism, improved educational attainment. But if the metric is cost-reduction and the system treats certain cohorts as structurally high-cost, incentive pressure can drift toward managed decline without explicit design intent. The instrument structure is agnostic to what outcome it optimizes.
Mortality-linked securities171: Life settlements constitute a market in which returns improve when insured individuals die sooner than actuarial projections. Longevity swaps between pension funds and reinsurers transfer mortality risk based on population survival rates. These instruments exist and trade; they create financial positions that profit from mortality outcomes.
Pay-for-success contracts172: Governments contract with private providers; payment depends on achieving defined metrics. If metrics include cost reduction in healthcare or social services, the incentive favors outcomes that reduce costs.
Sustainability-linked sovereign debt173: Nations borrow at rates tied to meeting environmental or social targets. If population is framed as environmental cost, demographic outcomes affect borrowing terms.
I am not claiming anyone designed these instruments for eugenic purposes. I am observing that the instrument structures create financial interests in population outcomes. When ‘expensive’ populations decline, certain positions profit. The market discovers this; no one needs to intend it.
The old eugenics was state-directed. It required explicit laws, visible institutions, public justification. It was politically vulnerable because it was legible as what it was. Nuremberg made it unspeakable.
The new eugenics — if this term applies — requires no such legibility. It operates through the price mechanism and normalized financial engineering.
If certain genetic profiles correlate with higher health costs, insurance prices them accordingly. If certain conditions require expensive interventions, financial systems score them as risks. If certain populations have higher ‘social vulnerability indices’, resources allocated to them become targets for efficiency optimization. If the carbon budget tightens and coefficients cascade, those least able to afford the accumulating costs fall first.
No one decides to eliminate anyone. The market discovers prices. The prices discover that certain lives cost more than they return. Optimization pressure emerges. Outcomes follow.
The language of care. The mathematics of optimization. Whether the outcome constitutes eugenics depends on definition. The mechanism operates regardless.
Classical ‘survival of the fittest’ implied no designer — differential reproduction based on environmental pressures that no one set. The cruelty was impersonal because there was no person.
This architecture defines fitness. The coefficients determine what counts as ‘fit’. The models define the objective function. The ‘planetary perspective’ is someone’s perspective — operationalized as if it were neutral, as if it were physics.
So it is not survival of the fittest. It is survival of the compliant. Survival of the low-cost. Survival of those who return more than they consume according to metrics set by those who control the modeling infrastructure.
And because it operates through price rather than decree, it inherits the moral camouflage of market outcomes. ‘No one decided this. The market discovered it. We just set the parameters’.
Eugenics that doesn’t require anyone to intend it. Selection that operates through infrastructure rather than policy. Outcomes that ‘follow’ without anyone being responsible.
The old eugenics was legible and could be opposed. This version is deniable by design. The language is care, sustainability, protection. The mathematics is selection. And the gap between language and mathematics is where the thing hides.
X. The Fork in the Road
There is a fork that the documents reveal but do not acknowledge.
One path: AI and planetary sensing and integrated data systems used to enhance human knowledge, inform democratic deliberation, support distributed decision-making, expand the range of choices available to individuals and communities. Technology in service of agency.
The other path: AI and planetary sensing and integrated data systems used to replace deliberation with computation, substitute administration for politics, narrow choices to compliance gradients, optimize human activity toward objective functions set by those who control models. Technology as substrate of administration.
Yuval Noah Harari, writing from within the WEF network, has provided the ideological articulation for this second path. His concept of ‘Dataism’174 — the worldview that ‘the universe consists of data flows, and the value of any phenomenon or entity is determined by its contribution to data processing’ — is the philosophical substrate on which the technical systems rest. His prediction of a ‘useless class’ rendered economically irrelevant by automation names explicitly what the coefficient ratchet produces mathematically. His assertion that ‘we are now hackable animals’ whose algorithms ‘will soon know us better than we know ourselves’ provides the justification for why model outputs should replace democratic deliberation.
The infrastructure documented in this essay is what makes Harari’s vision operational. He provides the ideology; the BIS, UNDP, and EU Commission provide the implementation. Harari presents dataism as inevitable evolution. The documents reveal it as constructed infrastructure. The language is philosophy. The substance is: build the systems that make the philosophy self-fulfilling.
What is striking is how early the legitimacy script for this second path was formalized. In 2000, the OECD argued175 that government must move ‘away from opportunistic reform towards more strategic reform’, built around a ‘clear vision’, constituency-building, and communications — explicitly recommending that government ‘consult with stakeholders’ to ‘bring together their many, varied visions’. Consultation is treated less as democratic contestation than as an implementation technique for coherence and buy-in.
The OECD also states the core exchange rate of this new regime in blunt terms: ‘When government succeeds in anticipating citizens’ needs, it earns currency in the form of trust. The price of failure is a loss of legitimacy’. Trust becomes a managed output variable — earned through results-reporting, transparency, and performance regimes that ‘cultivat[e] a broader constituency for the information [they] produce’. Legitimacy becomes something you engineer through metrics, messaging, and stakeholder choreography, rather than something that primarily flows from electoral contestability.
Finally, the OECD describes the structural shift away from vertical hierarchies toward ‘webbed government’, with concentric clusters and knowledge networks designed to integrate local, regional, and global inputs — while redefining the state as mediator/co-ordinator ‘in concert with other centres of power, including international’ bodies and NGOs. That is the administrative form in which ‘price-as-permission’ becomes politically sellable: the system is presented not as command, but as coordination — networked, consultative, performance-based, and therefore ‘trustworthy’.
The same infrastructure can serve either path. The documents examined here reveal which path is being built.
The UNDP speaks of ‘human agency’ as a value to be considered in AI design — while deploying systems that anticipate, manage, and mitigate before humans deliberate. The WEF speaks of ‘stewardship’ as success — while defining boundaries within which activity must be contained. The BIS speaks of ‘stability’ — while building the rails for transaction-level conditionality.
The words point one direction. The architecture points another.
XI. Conclusion: The Moment of Hollowing
The trick is simple once you see it.
Pricing stops being a matter of exchange between you and me and comes to be seen from a ‘planetary perspective’. But the planet is considered through the closed-loop model prism of Spaceship Earth, and the price is inverted — from what you are willing to pay to the risk your activity poses to the vessel. Price stops expressing your preference and starts expressing the system’s permission.
This is the exact moment where the market economy is hollowed out and replaced by a command economy, its forms persisting while its substance inverts.
What does price discovery determine about your life? In a market economy: what things cost you. In the architecture traced here: whether you eat, whether you travel, whether you work, whether you speak… whether you continue existing. As the coefficients accumulate and the ratchet tightens and the budget approaches zero, those who cost more than they return are discovered by the prices — and the prices do the work that policy cannot speak aloud.
Eugenics as a ‘market-based solution’.
And now the ghost gains a body. The computational infrastructure that determines permission connects to physical systems that can enforce it. Each layer alone might be contained. Together they create an architecture where transaction denial leads to access restriction leads to physical interdiction — without anyone intending that progression, without anyone responsible for it, without anyone able to stop it once the logic is in motion.
No commissar, no decree — but there is a sovereign. It operates through coefficients and smart contracts, model outputs and autonomous enforcement. The language remains the language of ‘ethics’ and care: sustainability, resilience, well-being, protection. The mathematics is the mathematics of selection. The enforcement operates at speeds that make oversight ceremonial.
Herbert Marcuse warned of the administered society176. What is emerging is something beyond administration: governance through infrastructure rather than authority, compliance rather than consent, conditional access rather than democratic choice. The clearing house logic that began coordinating transactions between London banks has scaled to coordinate human activity — financial and physical — through planetary infrastructure operating above democratic accountability.
Price was once the information system of the market — the signal through which distributed knowledge coordinated without central direction. Price is becoming the control system of this new architecture — the mechanism through which models define permission and systems enforce it without apparent command.
The market remains. As does the appearance of freedom. Yet prices are no longer discovered. Prices are discovering what the system permits you to do.
The inversion is complete when three substitutions become operational:
You were innocent until proven guilty. Now you are a risk until proven compliant.
You could buy what you could afford. Now you can buy only what the model permits.
You were a citizen with rights. Now you are a variable with coefficients.
Democratic agency replaced by algorithmic administration. The forms may persist, but the substance inverts.
It is not a takeover by force, but a takeover by definition.
Define the problem as ‘polycrisis’ (requiring centralization).
Define the solution as ‘anticipatory governance’ (pre-crime/pre-crisis intervention).
Define the mechanism as ‘price-as-permission’ (programmable money).
Define the enforcement as ‘autonomous’ (removing the human conscience from the loop).
What this essay has traced is not a conspiracy. It is a stack.
Foresight models (UNDP, OECD, EU Commission) generate outputs. Those outputs become coefficients (Rockefeller’s ‘true cost’ methodology, CBAM carbon pricing). The coefficients propagate through input-output matrices (Leontief’s framework, now computational). The propagated prices clear through programmable settlement infrastructure (BIS unified ledger, conditional transactions). The settlement layer couples to physical enforcement (Europol’s ‘phygital’ convergence, Frontex autonomous systems, INTERPOL biometric networks). And the ratchet — the tightening carbon budget, the escalating planetary boundaries, the compounding ‘hidden costs’ — ensures the squeeze intensifies without overt decree.
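The propagation step in that stack can be illustrated with a toy Leontief cost-push calculation. The matrix values below are invented; the point is structural: a surcharge levied on one sector alone raises prices in every sector that draws on its output, directly or indirectly:

```python
# Toy Leontief cost-push: how a coefficient attached to one sector
# propagates economy-wide through the input-output structure.
import numpy as np

# A[i][j]: input from sector i needed per unit output of sector j
A = np.array([[0.1, 0.3],
              [0.2, 0.1]])

# A 'true cost' surcharge levied directly on sector 0 only
direct = np.array([0.05, 0.0])

# Cost-push prices solve p = A^T p + direct, i.e. p = (I - A^T)^{-1} direct:
# every sector using sector 0's output inherits a share of the charge.
p = np.linalg.solve(np.eye(2) - A.T, direct)
print(p.round(4))  # sector 1 was never charged directly, yet its price rises
```

This is the mechanical sense in which a single coefficient 'propagates through input-output matrices across entire economies': the inverse term captures not just direct purchases but the full chain of indirect dependencies.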
Karl Polanyi described the First Great Transformation177: how land, labor, and money were converted into ‘fictitious commodities’ — treated as if produced for sale when they manifestly were not — and how society eventually mobilized to protect itself from the self-regulating market that resulted. What this essay documents is a Second Great Transformation in reverse: planetary boundaries, carbon budgets, and social vulnerability are being converted into fictitious commodities (coefficients, risk scores, tradeable permits), but this time the architecture is designed to prevent the counter-movement.
The polycrisis frame presents the squeeze as physics rather than politics — self-regulating necessity rather than contested choice. The administered society pre-empts democratic contestation by encoding the parameters before the public can evaluate them.
Each layer is documented by the institutions building it. Each institution claims it is doing good. Each document is available, published, consultative. The language throughout is care, resilience, sustainability, protection, even ‘ethical’. The mathematics throughout is: model, coefficient, propagate, condition, enforce. The gap between language and mathematics is where the thing operates. The failure to communicate this honestly is itself proof that claimed ‘intent’ should never be taken at face value.
This essay does not prophesy. It maps the default trajectory — the path that obtains if these systems operate as their architects intend. If that trajectory is dystopian, it is because the documents themselves whisper the endpoint in the language of care. The guardrails being installed by the bodies claiming to protect us are the very infrastructure that makes the squeeze autonomous. The same technology being built to patrol borders could be turned against civilian populations with ease.
The debate over whether we could control this system misses the point: once built, its automated rules — tying together surveillance, scores, and payments — will set the hard limits of what is allowed, long before any vote or law could change them. It will act as a magnet, attracting precisely those who would seek to abuse it. The concentration of power itself is the problem, because once built, there will always be a reason to set aside safety protocols — not least during an alleged ‘emergency’, whether genuine or modelled by the ‘black box’ at the IIASA.
But default is not destiny. The architecture is being built, but it is not yet complete. The fat lady is yet to sing. The couplings can be contested, accountability can be demanded, and the coefficients can be made honest and transparent. The settlement conditions can be subjected to democratic override, the enforcement layer can be kept under human-speed governance, and politicians can be held responsible for keeping voters in the dark for decades.
No fate but what we make.
Best of luck dismissing 100+ sources from top-tier institutions.
Suggested reading over Christmas.
Murder on the Orient Express outlines why JFK had to go.
The Missing Link explains (with sources) why the story of environmentalism as commonly told cannot possibly be true.
D(eception)-Day digs into the archives, putting a date on Western betrayal: May 23, 1972.
The Architects trilogy gives you names, details why, when, and how. This is the summary.
Why communism is an exceedingly clever scam.
And finally, how all of this will be internalised.
Merry Christmas (and a happy new year).