Working Paper · May 2026 · First Publication

Intelligence as Dynamic Entropy Management

A Unified Framework from Neuroscience to AGI Architecture

Author: Johan Meewisse (ORCID) · Independent Researcher

Research Synthesis: Manus AI · AI-Assisted Literature Synthesis

Published: 10 May 2026 · Timestamp: first public record

Abstract

The Central Proposition

Intelligence, in all its biological and artificial forms, is the dynamic management of two simultaneous entropy streams: external sensorial entropy arriving from a dynamic environment, and internal cognitive entropy arising from the construction and misalignment of the system's own models. Clarity, the alignment of systematic order with factual reality, is the product of this management. Its failure is predictable, universal, and thermodynamically necessary.

This framework emerges from a systematic synthesis of neuroscience, cognitive theory, evolutionary biology, and AI architecture research. Beginning with the empirical finding that brain entropy positively predicts intelligence — a more variable, less predictable brain is a more intelligent brain — we trace the mechanisms by which this entropy is managed, specialized, and organized through cooperation.

We identify three universal failure modes of intelligence-as-order: sensorial misvalidation (the over- or under-weighting of external reality), maintenance failure (the accumulation of cognitive entropy through fatigue and the absence of restorative processes), and misalignment with reality (the decoupling of internal models from factual ground truth). These failure modes are documented across individual cognition, social systems, and AI architectures.

The framework has immediate and urgent implications for AGI and ASI development, where the same thermodynamic laws that govern biological intelligence are now manifesting as engineering failures in multi-agent AI systems.

Keywords: entropy · intelligence · predictive processing · free energy principle · cognitive specialization · multi-agent AI · AGI alignment · thermodynamics of cognition
Part One

Entropy and Intelligence: The Academic Evidence

The connection between entropy and intelligence is not metaphorical — it is empirically documented across three independent streams of research. The most direct evidence comes from neuroimaging. Saxe, Calderone and Morales (2018) published a landmark study in PLOS ONE using resting-state fMRI on 892 healthy adults.[1] They defined brain entropy as the number of neural states a brain can access — the unpredictability and variability of neural signals. Their finding was unambiguous: brain entropy is positively correlated with both fluid and crystallized intelligence, most strongly in the prefrontal cortex, inferior temporal lobes, and cerebellum. A more variable, less predictable brain is a more intelligent brain.

"Higher resting-state brain entropy reflects a reserve of general brain functionality — a cognitive capital that enables flexible, adaptive response to novel demands."

— Wang et al. (2021), extended replication study[2]
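To make the measure concrete, the sketch below computes the Shannon entropy of the states a signal occupies. It is a deliberately simplified stand-in for the estimators used in the cited fMRI work (which typically apply sample entropy to BOLD time series); the signals, bin range, and bin count are illustrative assumptions.

```python
import math
import random

def state_entropy(signal, lo=-3.0, hi=3.0, bins=12):
    """Shannon entropy (bits) of the distribution of discretized signal states:
    a crude proxy for how many states the system accesses, and how evenly."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for x in signal:
        i = min(max(int((x - lo) / width), 0), bins - 1)
        counts[i] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
rigid    = [random.gauss(0, 0.05) for _ in range(2000)]  # narrow state repertoire
variable = [random.gauss(0, 1.00) for _ in range(2000)]  # broad, variable repertoire

print(f"rigid signal   : {state_entropy(rigid):.2f} bits")
print(f"variable signal: {state_entropy(variable):.2f} bits")
```

The variable signal accesses a far larger repertoire of states; in the framework's terms, that repertoire is the substrate of intelligence, not something to be suppressed.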

From cognitive theory, two complementary frameworks address the same phenomenon from opposite directions. Karl Friston's Free-Energy Principle (2010, Nature Reviews Neuroscience, cited over 12,700 times) argues that intelligence is fundamentally the minimization of entropy.[3] The brain is a prediction machine that continuously models the world and acts to reduce the gap between its predictions and sensory reality — minimizing "surprise" or free energy. Intelligent behavior is the suppression of disorder.

Carhart-Harris (2014) in Frontiers in Human Neuroscience introduced the Entropic Brain Hypothesis, demonstrating that conscious, focused intelligence requires the suppression of neural entropy — but that this suppression must be dynamic and flexible, not rigid.[4] Too little entropy produces rigid, uncreative cognition. Too much produces psychosis. Intelligence operates in the productive middle range.

The synthesis of these findings reveals a fundamental duality: entropy is simultaneously the substrate of intelligence (high neural variability enables complex thought) and the adversary that intelligence must manage (too much disorder destroys coherent cognition). This duality is the foundation of the Dual-Entropy Framework.


Fig. 1. Schematic representation of the dual-entropy brain: disordered neural firing (left) and structured network order (right). The productive intelligence range lies between these poles.

Key Finding: The same neural variability that enables intelligence also threatens it. The brain must manage entropy, not eliminate it.

| Domain        | Core Claim                                         |
|---------------|----------------------------------------------------|
| Neuroscience  | Higher resting brain entropy → higher intelligence |
| Free Energy   | Intelligence = minimizing sensory entropy          |
| Consciousness | Focused intelligence requires entropy suppression  |
Part Two

The Dual-Entropy Model of Intelligence

The academic literature confirms that intelligence as order is shaped by two simultaneous and distinct entropy streams. These streams are not metaphors — they are measurable, documented phenomena with independent causal pathways to cognitive clarity and failure.

External Entropy

Sensorial Stream

Disorder arriving from the environment. Because the sensorial world is inherently dynamic — constantly shifting, unpredictable, and noisy — the brain is continuously bombarded with uncertain information. The degree of environmental complexity directly shapes cognitive development.[6]

Internal Entropy

Cognitive Stream

Disorder within the brain's own models. This arises from neural noise and schema rigidity — the failure of internal cognitive models to remain flexible enough to adapt to a changing external reality. When internal models become too rigid, they accumulate entropy that cannot be discharged.

Friston's Free-Energy Principle formalizes the management of both streams simultaneously.[3] The brain assigns "precision weights" to incoming sensory signals — deciding how much to trust external reality versus its own internal predictions. When this calibration is accurate, intelligence achieves clarity. When it fails in either direction, intelligence fails.

"Intelligence is not the absence of entropy. It is the dynamic organization of entropy — the continuous calibration between the disorder arriving from outside and the disorder generated within."

The Signal-to-Noise Ratio Hypothesis of Intelligence (Oberauer et al., 2025) provides the most direct formalization: the fidelity of cognitive processing — the ratio of meaningful signal to internal noise — is the single most universal determinant of intelligence across all cognitive tasks.[5] A high SNR means the brain can reliably distinguish signal (reality) from noise (entropy). A low SNR means the system is overwhelmed by disorder, and factual clarity collapses.
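Both ideas can be made concrete in a few lines. The toy filter below (all variable names and parameter values are illustrative assumptions, not drawn from the cited papers) tracks a drifting external state: each observation is trusted in proportion to its precision, and the residual error reflects the system's effective signal-to-noise ratio.

```python
import random

def precision_weighted_update(belief, belief_var, obs, obs_var):
    """One Bayesian update step. Precision = 1/variance: a noisy sense
    (high obs_var) shifts trust toward the internal model; a reliable one
    shifts trust toward external reality."""
    gain = belief_var / (belief_var + obs_var)  # share of trust given to the senses
    belief += gain * (obs - belief)             # correct the model by the weighted prediction error
    belief_var *= (1 - gain)                    # the internal model becomes more certain
    return belief, belief_var

random.seed(0)
state, belief, belief_var = 0.0, 0.0, 1.0
for _ in range(200):
    state += random.gauss(0, 0.1)               # external entropy: the world drifts
    obs = state + random.gauss(0, 0.5)          # the sensory channel adds noise
    belief_var += 0.01                          # internal entropy: uncertainty grows between updates
    belief, belief_var = precision_weighted_update(belief, belief_var, obs, 0.25)

print(f"true state {state:+.2f}  belief {belief:+.2f}  remaining variance {belief_var:.3f}")
```

When the assumed observation variance matches the real channel, the belief stays locked to reality; the failure modes in Part Three correspond to mis-setting exactly this calibration.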

Part Three

The Three Failure Modes of Intelligence-as-Order


Fig. 2. The three failure modes of intelligence-as-order: sensorial misvalidation (signal overwhelmed by noise), maintenance failure (cognitive entropy accumulation), and misalignment with reality (internal model decoupled from ground truth).

Failure Mode 01

Sensorial Misvalidation

Over- or Under-Weighting of External Reality

The brain must assign a precision weight to every incoming sensory signal — deciding how much to trust external reality versus its own internal predictions. When this calibration fails, intelligence collapses. Corlett et al. (2025) describe psychosis as fundamentally a state of aberrant salience: dopamine dysregulation causes the brain to massively over-weight irrelevant sensory signals, flooding the cognitive system with false importance.[7]

Manifestation: Paranoia, aberrant salience, sensory overwhelm
Loss of Order: Signal/noise distinction collapses
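The toy filter from Part Two makes this failure quantitative. In the self-contained sketch below (parameter values are illustrative), the filter's assumed sensory precision is mis-set in each direction: over-weighting treats noise as salient signal, under-weighting ignores reality in favour of the internal model, and both raise tracking error above the calibrated baseline.

```python
import random

def tracking_error(assumed_obs_var, true_obs_var=0.25, steps=300):
    """RMS gap between belief and reality for a filter whose trust in the
    senses is set by the ASSUMED variance while the world uses the TRUE one."""
    random.seed(1)
    state, belief, belief_var, sq_err = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        state += random.gauss(0, 0.1)                     # the world drifts
        obs = state + random.gauss(0, true_obs_var**0.5)  # noisy sensory channel
        belief_var += 0.01                                # model uncertainty grows
        gain = belief_var / (belief_var + assumed_obs_var)
        belief += gain * (obs - belief)                   # precision-weighted update
        belief_var *= 1 - gain
        sq_err += (belief - state) ** 2
    return (sq_err / steps) ** 0.5

print("calibrated    :", round(tracking_error(0.25), 3))   # assumed = true precision
print("over-weighted :", round(tracking_error(0.001), 3))  # aberrant salience: noise trusted
print("under-weighted:", round(tracking_error(100.0), 3))  # senses discounted: belief lags reality
```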
Failure Mode 02

Maintenance Failure

Fatigue, Sleep, and Cognitive Entropy Accumulation

Tononi and Cirelli's Synaptic Homeostasis Hypothesis (2014) demonstrates that neural synapses accumulate entropy during waking activity — each new experience strengthens synaptic connections, increasing the metabolic cost and noise of the system. Sleep is the mandatory maintenance cycle that prunes this accumulated entropy, restoring the signal-to-noise ratio. Twenty-four hours of sleep deprivation produces measurable cognitive degradation comparable to clinical impairment.[8]

Manifestation: Cognitive fatigue → hallucinations → psychosis
Loss of Order: System saturation; executive control lost
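A minimal numerical sketch of that maintenance cycle follows; the downscaling factor, survival threshold, and potentiation sizes are illustrative assumptions, not values from Tononi and Cirelli. Waking potentiation raises both meaningful and incidental synaptic weight; the sleep step renormalizes, clearing weak one-off potentiation while the repeated pattern survives.

```python
import random

random.seed(2)
signal = [0.0] * 20   # synapses encoding a pattern that recurs every day
noise  = [0.0] * 80   # synapses potentiated by incidental, one-off activity

def snr():
    """Toy signal-to-noise ratio: mean 'meaningful' weight over mean 'incidental' weight."""
    return (sum(signal) / len(signal)) / (sum(noise) / len(noise) + 0.01)

for day in range(3):
    # Waking: the recurring pattern keeps strengthening its synapses, while
    # incidental activity potentiates random synapses; entropy accumulates.
    for _ in range(50):
        for i in range(len(signal)):
            signal[i] += 0.02
        noise[random.randrange(len(noise))] += 0.15
    wake = snr()
    # Sleep: downscale all weights, then clear those below the survival
    # threshold. Strong, repeatedly used synapses survive; one-off
    # potentiation is pruned, discharging the accumulated entropy.
    signal[:] = [0.6 * w if 0.6 * w >= 0.3 else 0.0 for w in signal]
    noise[:]  = [0.6 * w if 0.6 * w >= 0.3 else 0.0 for w in noise]
    print(f"day {day}: wake SNR {wake:.1f} -> post-sleep SNR {snr():.1f}")
```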
Failure Mode 03

Misalignment with Reality

Hallucinations, Delusion, and Prediction Error Failure

In normal cognition, sensory signals generate prediction errors that update internal models. In psychosis, this updating mechanism breaks. The brain generates aberrant prediction errors from its own internal signals, treating self-generated perceptions as external reality. Hallucinations are the failure of source monitoring — the brain cannot distinguish its own generated signals from external ones. Delusions are the cognitive system's attempt to impose narrative order on this chaos.[9]

Manifestation: Hallucinations, fixed delusions
Loss of Order: Internal model fully decoupled from reality
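The decoupling can be shown in a toy simulation (the dynamics and constants are purely illustrative): the same prediction-error update that tracks reality when grounded in external evidence drifts arbitrarily far from it once the system's own predictions are fed back as percepts.

```python
import random

def mean_drift(self_referential, steps=200, trials=200):
    """Average final gap between belief and reality. When self_referential is
    True, the system updates on its OWN predictions as if they were percepts:
    the source-monitoring failure described above, in toy form."""
    random.seed(3)
    total = 0.0
    for _ in range(trials):
        state, belief = 0.0, 0.0
        for _ in range(steps):
            state += random.gauss(0, 0.1)                # reality keeps moving
            source = belief if self_referential else state
            obs = source + random.gauss(0, 0.3)          # perceived 'evidence'
            belief += 0.3 * (obs - belief)               # prediction-error update
        total += abs(belief - state)
    return total / trials

print(f"grounded updates        : drift {mean_drift(False):.2f}")
print(f"self-referential updates: drift {mean_drift(True):.2f}")
```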
Case Study: Norbert Wiener

Cognitive Over-Ordering as a Benign Failure Mode

Norbert Wiener, founder of cybernetics and the intellectual ancestor of modern AI, provides a documented case of what the framework predicts. His extreme cognitive specialization in abstract mathematics produced a state of profound external entropy neglect — the legendary absent-mindedness for which he is universally remembered. The documented anecdote of his daughter guiding him home after he forgot his family had moved illustrates the framework's prediction precisely: intelligence-as-order concentrated in one domain at the cost of foundational external reality management. This is a benign, non-clinical form of the misalignment failure mode — the internal model of the mathematical universe perfectly ordered, while the physical reality of daily life collapsed into disorder.

Part Four

Specialization, Interdependency, and Collective Intelligence

The brain does not develop as a uniform general-purpose computer. It develops through experience-dependent neural plasticity — the environment physically sculpts the brain's architecture. Because the sensorial environment is inherently unpredictable and variable, every individual is exposed to a different pattern of stimulation and engages in different intensities of learning. The result is a population of individuals with profoundly different, domain-specific cognitive profiles.[10]

Deep engagement with one domain physically reorganizes neural architecture toward that specialism — and because cognitive resources are finite, this necessarily limits capacity in other domains. This is not a deficiency. It is the Principle of Least Action applied to cognition: the most energy-efficient path to deep competence.

Cognitive specialization is thermodynamically inevitable. A brain that attempts to master all domains equally would require an unsustainable expenditure of metabolic energy. Specialization and cooperation is the minimum-energy path to broad, complex intelligence.

Émile Durkheim's concept of organic solidarity establishes that in any system where individuals specialize, they necessarily lose self-sufficiency.[11] The specialist's very depth of knowledge is purchased at the cost of breadth, creating an inescapable dependence on others. This is not a social arrangement — it is a structural consequence of specialization itself.

Taylor, Cheke, and colleagues (2022) in the Cambridge Archaeological Journal demonstrated that human cooperation evolved not from altruism but from this very structural interdependency — what Tomasello calls the Interdependence Hypothesis.[12] Early humans who became cognitively specialized became obligate cooperative partners. Specialization was not the result of cooperation; it was its evolutionary cause.

| Profile         | Optimal For                  | Failure Mode                               | Role in System             |
|-----------------|------------------------------|--------------------------------------------|----------------------------|
| Pure Specialist | Deep independent exploration | Coordination breakdown; cannot integrate   | Domain expert              |
| Pure Generalist | Broad coordination           | Insufficient depth; no unique contribution | Integrator / orchestrator  |
| T-Shaped        | Complex interdependent tasks | Requires careful balance                   | Bridge between specialists |
Part Five

Projecting the Framework onto AI/AGI/ASI Architecture

Current AI research has independently arrived at the same architecture that biological evolution produced. The reason is identical: a single generalist agent, like a single generalist brain, degrades under cognitive overload. Loading one AI model with too many tools causes selection errors as the context becomes ambiguous — the computational equivalent of cognitive fatigue. The solution, as in biology, is specialization.[13]

| Biological Mechanism                                      | AI/AGI/ASI Equivalent                                                                |
|-----------------------------------------------------------|--------------------------------------------------------------------------------------|
| Specialist brain (domain-specific neural architecture)    | Specialist agent (SQL agent, code agent, synthesis agent)                            |
| Generalist integrator (T-shaped coordinator)              | Consolidator/Orchestrator agent (plans, delegates, routes, synthesizes)              |
| Interdependency (obligate cooperative partners)           | Agent pipeline (extraction agent cannot calculate; calculation agent cannot extract) |
| Precision weighting (calibrating trust in sensory input)  | Schema validation at handoffs (checking upstream output is within expected range)    |
| Sleep cycle (synaptic entropy clearance)                  | State checkpointing and context window consolidation                                 |
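A minimal sketch of the right-hand column in code; the agent names, schemas, and routing logic are hypothetical, invented for illustration. Specialist agents that cannot do each other's jobs are chained by an orchestrator that applies schema validation at every handoff, the computational analogue of precision weighting.

```python
from typing import Callable

# Hypothetical specialist agents: each does one thing and depends on the other.
def extraction_agent(task: dict) -> dict:
    return {"figures": [120.0, 80.0, 95.5]}   # can extract, cannot calculate

def calculation_agent(task: dict) -> dict:
    return {"total": sum(task["figures"])}    # can calculate, cannot extract

def validate(output: dict, schema: dict) -> dict:
    """Schema validation at the handoff: output failing the expected-range
    check is rejected rather than propagated downstream."""
    for key, (typ, check) in schema.items():
        value = output.get(key)
        if not isinstance(value, typ) or not check(value):
            raise ValueError(f"handoff rejected: {key}={value!r} out of expected range")
    return output

PIPELINE: list[tuple[Callable, dict]] = [
    (extraction_agent,  {"figures": (list,  lambda v: 0 < len(v) < 1000)}),
    (calculation_agent, {"total":   (float, lambda v: v >= 0)}),
]

def orchestrate(task: dict) -> dict:
    """Generalist integrator: routes the task through the specialists and
    validates every specialist's output before passing it on."""
    state = task
    for agent, schema in PIPELINE:
        state = validate(agent(state), schema)
    return state

print(orchestrate({"document": "q3_report.pdf"}))  # {'total': 295.5}
```

In production the schemas would be richer (types, ranges, provenance), but the structural point stands: trust at each handoff is calibrated, not assumed.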
Critical Dimension: Temporal Architecture

The Time Horizon Problem

The most critical and least-discussed dimension of multi-agent AI architecture is temporal. Biological intelligence evolved with built-in time horizons: the attention span of a working memory cycle, the sleep-wake maintenance cycle, the generational cycle of knowledge transfer. AI systems currently lack all three. Ghosh (2026) documents precisely what the framework predicts: "Long-running agents accumulate entropy, not intelligence."[13] When agents operate without periodic maintenance — the AI equivalent of sleep — context drift, reconvergence loops, and variance inflation collapse the system through the slow accumulation of internal disorder.

Working Memory Cycle

Biology: Attention span of ~90 min (ultradian rhythm)

AI: Context window limit; forced consolidation

Sleep Cycle

Biology: Synaptic entropy clearance; memory consolidation

AI: State checkpointing; context pruning

Human Oversight

Biology: Social correction; reality grounding

AI: The irreplaceable entropy sink — 68% of systems require human intervention within 10 steps
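A sketch of how those three cycles can be engineered in; the budget sizes, cadence, and consolidation stub are illustrative assumptions, not an existing framework's API. The agent is forced through periodic consolidation, the AI analogue of sleep, and is grounded against a human reviewer at a fixed cadence.

```python
CONTEXT_BUDGET = 8   # working-memory analogue: consolidation is forced at this size
HUMAN_EVERY = 10     # oversight cadence: ground against the external entropy sink

def consolidate(context):
    """'Sleep' for an agent: prune the raw trace to a compact summary.
    A real system would summarize with a model; here it is faked."""
    return [f"summary_of_{len(context)}_steps"] + context[-2:]

def run_agent(total_steps):
    context = []
    for step in range(1, total_steps + 1):
        context.append(f"step_{step}_output")   # entropy accumulates every step
        if len(context) >= CONTEXT_BUDGET:      # maintenance cycle triggers
            context = consolidate(context)
        if step % HUMAN_EVERY == 0:             # periodic external grounding
            print(f"step {step}: human reviews {context[-1]!r}")
    print(f"finished with {len(context)} context entries instead of {total_steps}")

run_agent(30)
```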

Rodriguez (2026) formally proves what the framework implies: any system that recursively interprets the world through internal models generates entropy that must be discharged through external grounding.[14] The human is not a temporary component to be engineered out of AI systems — the human is the entropy sink that prevents the system from drifting into self-referential hallucination. This is not a limitation of current technology. It is a thermodynamic law.

Part Six

Timeliness and Importance in the 2026 AI Environment

The framework is timely, urgently important, and predictive. The AI development community in 2025–2026 has independently discovered, through painful production failures, the exact mechanisms our framework predicted. They are currently treating these as isolated engineering bugs. Our framework reveals they are manifestations of a single underlying thermodynamic law.

Convergence 01

Entropy Accumulation Is the #1 Multi-Agent Failure

Source: Ghosh (2026) — $25M/day production system

Long-running agents accumulate entropy, not intelligence. When agents operate without periodic maintenance, context drift, reconvergence loops, and variance inflation collapse the system through the slow accumulation of internal disorder. The conclusion: "We'd imported centuries of human coordination failure straight into silicon."

Convergence 02

The Entropy Sink Must Be Preserved, Not Eliminated

Source: Rodriguez (2026) — Entropy Sink paper

Any system that recursively interprets the world through internal models generates entropy that must be discharged through external grounding. The human is not a temporary component to be engineered out — the human is the irreplaceable entropy sink that prevents the system from drifting into self-referential hallucination.

Convergence 03

International AI Safety Report 2026 Confirms Our Predictions

Source: International AI Safety Report 2026

The Report identifies hallucination, misalignment, and reliability degradation as the critical unsolved problems of 2026. These are our three failure modes — sensorial misvalidation, maintenance failure, and misalignment with reality — now confirmed at the level of international scientific consensus.

What This Framework Adds That the Engineering Community Lacks

| What the Engineering Community Has                      | What This Framework Adds                                       |
|---------------------------------------------------------|----------------------------------------------------------------|
| Identifies entropy accumulation as a failure mode       | Explains why it is inevitable: thermodynamic law               |
| Builds specialist agents and orchestrators              | Explains why this architecture is biologically necessary       |
| Recognizes the need for human oversight                 | Explains why the human is an irreplaceable entropy sink        |
| Acknowledges temporal drift as a problem                | Provides the biological model (sleep cycles) for the solution  |
| Treats constructive and destructive entropy as the same | Distinguishes them: one enables intelligence, one destroys it  |

The AI industry is currently attempting to build AGI by eliminating the very mechanisms — maintenance cycles, external grounding, interdependent specialization, temporal pacing — that biological evolution spent millions of years developing to manage entropy. This framework does not merely describe the current crisis. It predicts where the next failures will occur: in any system that runs too long on its own internal models while neglecting the maintenance structures that discharge accumulated entropy.

References

Cited Works

[1] Saxe, G.N., Calderone, D., & Morales, L.J. (2018). Brain entropy and human intelligence: A resting-state fMRI study. PLOS ONE, 13(2). DOI: 10.1371/journal.pone.0191582
[2] Wang, Z., et al. (2021). Brain entropy as a biomarker of general brain functionality. NeuroImage, 230. DOI: 10.1016/j.neuroimage.2021.117793
[3] Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. DOI: 10.1038/nrn2787
[4] Carhart-Harris, R.L., et al. (2014). The entropic brain: A theory of conscious states informed by neuroimaging research with psychedelic drugs. Frontiers in Human Neuroscience, 8, 20. DOI: 10.3389/fnhum.2014.00020
[5] Oberauer, K., et al. (2025). The Signal-to-Noise Ratio Hypothesis of Intelligence. OSF Preprints. DOI: 10.31219/osf.io/nkms3
[6] Lancaster, A., & Wass, S. (2025). Finding order in chaos: Influences of environmental complexity and predictability on development. Trends in Cognitive Sciences. DOI: 10.1016/j.tics.2024.09.006
[7] Corlett, P.R., et al. (2025). Twenty years of aberrant salience in psychosis. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.20240521
[8] Tononi, G., & Cirelli, C. (2014). Sleep and the price of plasticity: From synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81(1), 12–34. DOI: 10.1016/j.neuron.2013.12.025
[9] Corlett, P.R., Frith, C.D., & Fletcher, P.C. (2009). From drugs to deprivation: A Bayesian framework for understanding models of psychosis. Psychopharmacology, 206(4), 515–530. DOI: 10.1007/s00213-009-1561-0
[10] Chen, Y., & Monroy, J.A. (2026). Sensory and motor experiences shape cognitive development. Behavioral Sciences. DOI: 10.3390/bs16040001
[11] Durkheim, É. (1893). The Division of Labour in Society. Macmillan (1984 translation).
[12] Taylor, A.H., et al. (2022). The evolution of complementary cognition: Humans cooperatively adapt and evolve through a system of collective cognitive search. Cambridge Archaeological Journal, 32(1), 1–18. DOI: 10.1017/S0959774321000329
[13] Ghosh, B. (2026). Why multi-agent systems fail at scale and why simplicity always wins. Medium / Production Engineering Notes.
[14] Rodriguez, G. (2026). The Entropy Sink: Human Friction as Epistemic Stabilizer in Synthetic Intelligence. ResearchGate Preprint. DOI: 10.13140/RG.2.2.25234

Forthcoming Submission · arXiv.org

Preprint Abstract

Target sections: cs.AI (primary) · q-bio.NC (cross-list) · cs.MA (cross-list)

Abstract

Intelligence, whether biological or artificial, is most precisely understood as a dynamic process of entropy management rather than as a fixed property of a substrate. This paper develops a unified framework — the Dual-Entropy Model of Intelligence — grounded in converging evidence from neuroscience, evolutionary biology, thermodynamics, and AI systems research.

We establish that intelligence operates across two simultaneous entropy streams: external (sensorial) entropy, arising from the unpredictable dynamism of the environment, and internal (cognitive) entropy, arising from the brain's own modelling processes. Intelligent behaviour consists in the continuous, energy-efficient calibration between these two streams. Drawing on resting-state fMRI studies, predictive processing theory, and the Free-Energy Principle, we demonstrate that higher intelligence correlates with greater neural entropy capacity, not its suppression.

We identify three empirically grounded failure modes through which intelligence-as-order collapses: (1) sensorial misvalidation — the aberrant precision-weighting of environmental signals; (2) maintenance failure — the accumulation of synaptic entropy in the absence of restorative processes, including sleep; and (3) misalignment with reality — the decoupling of internal predictive models from external ground truth, manifest as hallucination and delusion.

We further establish that the inherent thermodynamic cost of deep cognitive specialisation — itself shaped by the unpredictability of individual sensorial environments — makes cooperation the necessary and evolutionarily selected mechanism for achieving broad, complex intelligence. Specialisation creates obligate interdependency; the generalist integrator reduces the transaction costs of that interdependency.

Finally, we project these mechanisms onto current multi-agent AI architecture, demonstrating that the specialist-agent / consolidator-agent paradigm is a direct computational instantiation of the biological model, and that the field's most critical unsolved problems — context drift, hallucination, and alignment failure — are manifestations of the same entropic laws that govern biological cognition. We conclude with implications for AGI and ASI design, arguing that temporal architecture, maintenance cycles, and human-as-entropy-sink are not engineering conveniences but thermodynamic necessities.

Keywords

entropy · intelligence · predictive processing · free-energy principle · multi-agent systems · AGI alignment · cognitive specialisation · collective intelligence · hallucination · synaptic homeostasis

Status

In preparation

Manuscript being finalised

Target journal

MDPI Entropy

Open access · IF 2.7

Preprint server

arXiv.org

cs.AI · q-bio.NC · cs.MA

First public record

10 May 2026

This website

arXiv Preprint Link — Forthcoming

"Intelligence as Dynamic Entropy Management: A Unified Framework from Neuroscience to AGI Architecture"

arXiv:2026.XXXXX [cs.AI] — link will be updated upon submission

Until the arXiv DOI is assigned, this website at dualentropy.quest constitutes the citable first public record, timestamped 10 May 2026.