How did global governance evolve from managing factories to managing minds?
Over the past century, a single vision — systems control — has quietly shaped every major domain of life. From early Soviet theories of organisation to today's AI and neuroethics treaties, the project has remained the same: engineer reality itself through information, models, and management.
The roots of this century-long narrative lie in early systems thinking. In 1912–17, Russian polymath Aleksandr Bogdanov outlined Tektology, a ‘universal science’ of organisation uniting all fields into a coherent system. His contemporaneous philosophy, Empiriomonism (1904–06), merged Marxism with Machian empiricism, promising to interpret objective data through a collectively-subjective lens, thus leading to a global ethic. These ideas underpinned efforts like the Proletkult movement, which from 1917 onward sought to reshape culture and art along the lines of an industrialised proletarian science and aesthetics. In effect, from Bogdanov onward there was a programmatic belief that society could be constructed by universally applicable organisational principles, leading to the concept of a human super-organism, living within a Total Human Ecosystem.
As World War II unfolded, this vision resurfaced in calls to harness science for global planning. In 1941, British scientists convened a Science and World Order conference to align wartime research with postwar reconstruction, prompting committees to examine how science could serve national planning. After the war, such technocratic ambition found institutional form: UNESCO’s first director Julian Huxley spearheaded the 1948 founding of the International Union for Conservation of Nature (IUCN), aiming to coordinate biodiversity data globally. And in 1949, the UN created the Expanded Programme of Technical Assistance (UN-EPTA) to channel expert knowledge to developing nations. In each case, nations embraced the idea that world problems could be solved by organising knowledge and resources into international systems and instruments.
While the direct influence of Bogdanov’s Tektology was obscured in the West by ideological divides, its core principles resurfaced quietly through postwar systems theorists. Ludwig von Bertalanffy formalised General Systems Theory in the 1940s, proposing that biological, social, and technological systems all obeyed universal organisational laws. Kenneth Boulding, Erich Jantsch, and C. West Churchman expanded the model into economics, education, and management science, creating new languages for describing interdependence, feedback loops, and complex adaptivity. Simultaneously, Wassily Leontief’s input-output analysis, derived from Bogdanov’s earlier work on supply chain analysis, modelled economies as flows of resources through structured networks. The cumulative effect was to reframe human society, the economy, and even natural ecosystems as coherent, analysable systems — a view that seamlessly merged with the cybernetic planning ethos emerging across both sides of the Cold War divide. Although rarely acknowledged, the lineage of Tektology had been rebuilt in a new form: the total systems management worldview.
At each stage, the systems governance project was framed not in the cold language of engineering but in the warm rhetoric of humanitarianism. Appeals to care, global solidarity, social justice, and human rights consistently accompanied the construction of increasingly centralised structures. The promise was that only by pooling sovereignty, merging information systems, and standardising governance could inequality be addressed, poverty alleviated, or planetary crises averted. This emotional framing served to mask the reality that the architecture being built was cybernetic at its core: designed for predictability, control, and feedback optimisation, not for local autonomy or true pluralism. By anchoring technocratic expansion to the moral aspirations of the public, resistance could be neutralised long before the deeper implications became visible.
The Cold War saw this trend intensify with a focus on information itself. In 1958, the U.S.-sponsored International Conference on Scientific Information (ICSI) in Washington brought together scientists and librarians to standardise and control scientific data. Within a few years, U.S. policymakers debated creating a National Information Center to centralise scientific data storage, a concept President Kennedy had resisted. In 1961, Robert McNamara applied the Planning-Programming-Budgeting System (PPBS) to the Department of Defense, and Robert Amory Jr sought to introduce the same approach at the CIA; after Kennedy’s assassination, President Johnson rolled PPBS out across the federal government in 1965. This approach applied systems analysis and cost-benefit calculations to federal budgeting and agency planning, and its scope soon broadened into human health, environmentalism, and land-use planning. In short order, both the content and process of knowledge were brought under formal management. These moves reflected a conviction that information was not a neutral byproduct but a resource to be captured, categorised, and channelled through global networks of expertise and bureaucracy.
By the late 1960s, the natural environment itself was viewed through this same cybernetic lens. UNESCO hosted the first Biosphere Conference in Paris in 1968, which launched the ‘Man and the Biosphere’ program to scientifically map and protect the planet’s key ecosystems. U.S. officials like Daniel Moynihan, thinking ahead to climate concerns, urged turning NATO into an environmental monitoring alliance: in a 1969 memo he recommended establishing a ‘worldwide monitoring system’ for carbon dioxide via NATO, noting that only the U.S. had even rudimentary measurements at that time. The rhetoric was stark: the biosphere had to be surveilled and managed across borders. Nature was no longer an abstract backdrop but a complex system demanding global data feeds and political coordination.
The early 1970s crystallised these impulses into concrete institutions. On May 23, 1972, the US and USSR signed a bilateral Agreement on Environmental Protection, committing even Cold War rivals to collaborate on pollution and natural resource management. Five months later they helped found the International Institute for Applied Systems Analysis (IIASA) at Laxenburg, Austria, an interdisciplinary think-tank chartered to use systems modelling for shared problems. In parallel, the Club of Rome published The Limits to Growth (1972), a computer simulation warning that unchecked population and industrial growth would collapse Earth’s systems. These milestones showed that mainstream science and policy now treated the whole-Earth economy as a single system in need of global control; indeed, the very variables investigated in The Limits to Growth were soon among those modelled at IIASA.
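The Limits to Growth ran on the World3 system-dynamics model: stocks such as population and nonrenewable resources, coupled by feedback loops. A minimal sketch of that stock-and-flow logic can convey the mechanism; note that every coefficient below is invented for illustration, not taken from the actual calibrated World3 equations.

```python
# Toy stock-and-flow model in the spirit of World3 (illustrative only:
# the parameters and feedback strengths here are invented, not the
# model's real calibrated values).

def simulate(years=200, population=1.0, resources=100.0,
             birth_rate=0.03, death_rate=0.01, use_per_capita=0.05):
    initial_resources = resources
    history = []
    for year in range(years):
        # Feedback: as the resource stock depletes relative to its
        # initial level, the effective death rate rises.
        scarcity = 1.0 - resources / initial_resources
        deaths = population * (death_rate + 0.08 * scarcity)
        births = population * birth_rate
        consumption = min(resources, population * use_per_capita)

        population = max(0.0, population + births - deaths)
        resources -= consumption
        history.append((year, population, resources))
    return history

trajectory = simulate()
peak_year, peak_pop, _ = max(trajectory, key=lambda t: t[1])
# Growth overshoots the finite resource base, peaks, then declines.
```

Even this toy version reproduces the report's qualitative claim: exponential growth against a finite stock produces overshoot and decline, not a smooth plateau.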
By the mid-1970s, the United Nations began wiring up an environmental feedback loop. The Scientific Committee on Problems of the Environment (SCOPE) was created in 1969 by the International Council of Scientific Unions to synthesise research across countries. Then in 1973, the newly formed UNEP launched Earthwatch, meant to coordinate every UN agency’s monitoring projects. One of Earthwatch’s pillars was GEMS (Global Environment Monitoring System), designed to provide early warning of planetary-scale changes. Thus by decade’s end a world-encompassing sensing apparatus was envisioned: a cybernetic network of satellites, sensors, and databanks to keep continual tabs on air, water, forests, and more. In this worldview, governance meant measurement plus modelling: ecological data would be fed to technocrats, who would then adjust policies in a continuous control loop.
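The control loop described here is, in engineering terms, simple negative feedback: measure an indicator, compare it with a target, apply a proportional correction. A schematic sketch makes the structure explicit; all names and numbers below are invented for illustration and do not correspond to any real monitoring programme.

```python
# Toy negative-feedback loop of the kind the cybernetic worldview imagines:
# monitor an indicator, compare it with a target, apply a proportional
# corrective "policy". Purely illustrative; not any real model.

def governance_loop(target=350.0, reading=400.0, gain=0.4, steps=30):
    readings = [reading]
    for _ in range(steps):
        error = reading - target     # monitoring: measured deviation
        adjustment = gain * error    # proportional policy response
        reading -= adjustment        # assumed system response to the policy
        readings.append(reading)
    return readings

series = governance_loop()
# Each pass shrinks the deviation by a factor of (1 - gain), so the
# indicator converges toward the target.
```

The point of the sketch is structural rather than empirical: whoever chooses `target` and `gain` governs the system, whatever the indicator happens to measure.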
After the Cold War, these efforts were formalised into the architecture we know today. In 1991, countries set up the Global Environment Facility (GEF) as a multilateral fund to finance biodiversity, climate, and other global environmental projects. This replaced earlier ad-hoc funding with a stable institution (conceptually born from the 1980s idea of a ‘World Conservation Bank’). The 1992 Earth Summit in Rio then codified the global system: the UN Framework Convention on Climate Change (UNFCCC) and the Convention on Biological Diversity (CBD) were opened for signature. These ‘Rio Conventions’ made it explicit that climate, species, and ecosystems were to be managed under collective treaties. Through GEF funding and binding agreements, the planetary system was now under a new kind of multilateral stewardship—one that translated the environmental feedback loop into legal and financial instruments.
In the 21st century, the target of ‘systems governance’ has shifted inward toward knowledge and cognition. The rise of computers and AI prompted analogous debates: if ecosystems need oversight, so too might algorithms that shape information. Thus the last two decades have seen waves of AI ethics frameworks. From industry principles to academic guidelines, thinkers sought rules to control AI’s impact on society. This effort culminated in UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence, a first-ever global standard urging that AI align with human rights and sustainability. In effect, the impulse is the same: intelligence of any kind—natural or artificial—must be governed. Data flows and neural nets have joined carbon in the purview of these cross-border management regimes.
Finally, in the 2020s the focus has zoomed into our own brains. Neurotechnology (brain implants, neural interfaces, and the like) promises to revolutionise medicine but also, potentially, to manipulate thought and identity. Recognising the stakes, UNESCO has launched a full-scale neuroethics initiative: in 2024 it appointed an international expert group to draft a global framework on neurotechnology ethics, aiming for adoption of a global standard by 2025. The language is striking: terms like ‘neurorights’ and ‘mental privacy’ now appear in policy papers. The arc that began with organising society and nature has thus arrived at organising the mind itself.
Beneath the surface, the ethical turn itself has become a governance tool. Modern ethics frameworks — especially in AI — do not defend objective truth but reinterpret information flows to align with systemic goals. They provide flexible justifications for bending, filtering, or suppressing knowledge, particularly during declared emergencies, where centralised interpretations become mandatory. COVID-19 demonstrated how ethical rhetoric could override empirical variance; AI ethics aims to automate this reflex. Meanwhile, the environmental architecture followed a parallel path. Surveillance grids like GEMS were established before meaningful understanding of key systems, such as the oceanic carbon cycle, even existed. The First World Climate Conference (1979) did not focus primarily on understanding climate mechanisms, but on outlining the need to plan the future structure of human society itself. By the Second World Climate Conference (1990), the call for comprehensive global satellite surveillance was made explicit, formalising the shift from environmental observation to planetary management. Within a few years, UNCTAD’s Combating Global Warming reports (1992 and 1994) reframed atmospheric gases as financial commodities, preparing the mechanisms for global emissions trading. Surveillance, restructuring, and monetisation were fused into a single planetary governance trajectory.
What emerges, in the end, is not a scattered series of technical improvements but a single philosophical arc. AI Ethics today is, in essence, a cybernetic form of Empiriomonism: not the pursuit of objective truth, but the modulation of knowledge to sustain systemic stability, and tailored to the individual. Global citizenship education and cultural engineering programs extend the original logic of Proletkult, aiming to reformat identities and collective perceptions toward an administrable planetary society. And all of it traces back to Bogdanov, who not only envisioned systems theory through Tektology but also co-founded the Bolshevik Party with Lenin in 1903, carrying forward the conviction that social reality itself could and must be scientifically constructed. The modern governance structure is not an accidental accretion of innovations. It is the quiet realisation of a revolutionary architecture over more than a century.
Across the century, these developments reveal a coherent trajectory: from Bogdanov’s abstract Tektology and Soviet cultural experiments to today’s AI and neuroethics debates, there has been an expanding project of treating all domains—art, science, nature, information, even human consciousness—as systems to be quantified and managed. Each step reinforced the next, building an ever-larger architecture of global governance through data, modelling, and ‘systems theory.’ The tone of policy history is remarkably consistent: calls for new world orders, global monitoring systems, or transnational science programs. What once sounded utopian or technical now functions as the practical machinery of our era. This arc suggests that the modern global order was consciously engineered as a unified feedback system—and it raises urgent questions about who controls the switches and to what ends.
Beneath all the humanitarian language around technology transfer, one finds a colder logic at work. The purpose was never simply to make middlemen wealthy or to guarantee lucrative export orders for select corporations—though both outcomes were convenient bonuses. The deeper goal was to ensure that every technological infrastructure, from energy grids to water systems to agricultural supply chains, would be built according to globally standardised, interoperable, and externally manageable systems. ‘Reliable’ technology did not mean the best or most appropriate for local conditions; it meant predictable, monitorable, and harmonised with the emerging architecture of planetary management. Wealth creation for intermediaries oiled the gears, but the true prize was the silent construction of a global lattice of dependent nodes, each slotted neatly into an increasingly cybernetic control grid. The technology transferred was never neutral. It was—and remains—the primary instrument of Earth Systems Governance.
Great piece.
I vehemently reject all forms of planning. I also reject all forms of "predictability, control, and feedback optimisation".
The Taleb/Portesi view of leverage, optimization, and fragility is correct.
What was the seed that grew this on-steroids arrogance and hatred of all humans? I swear, if they get their way eventually, we will be no better off than chickens waiting to go to slaughter. I take it that when the limits of wealth are reached, the supposed elites turn their sights on us peons for sport, and when we are all gone except for the transhuman crowd, they will turn on each other, as they really are mad as hatters. Dimming the sun is the next step in the throwing of anything at the wall. Will we ever say enough is enough, or will our cell phones win in the end, and entertainment be our ticket to paradise, or parasite?