The Ontogenetic Architecture of General Intelligence (OAGI) is a manifesto and engineering proposal that reconceives artificial general intelligence (AGI) as a developmental — birth-like — process rather than the product of ever-larger static models and massive data scaling. Instead of treating an AGI as a finished object to be trained, OAGI frames intelligence as something that should be grown: seeded from a minimal, undifferentiated substrate, guided by organizing signals, triggered into learning by a high-salience event, embodied in a world (real or simulated), and socialized under structured human oversight. The paper presents both the conceptual vocabulary and concrete operational protocols needed to make that ontogenetic path measurable, auditable, and governed.
Core metaphors and components
OAGI borrows metaphors from biological prenatal development and translates them into engineering primitives:
• Virtual Neural Plate (VNP). The VNP is the initial, largely structureless neural substrate — a “blank canvas” of units with dynamic connectivity and minimal preinstalled knowledge. It’s intentionally uncommitted so that structure can emerge from experience instead of being hardwired.
• Computational Morphogens. These are diffuse, graded signals that bias how the VNP differentiates. Like biological morphogens that shape embryonic tissues, computational morphogens steer where sensorimotor, associative, or limbic-like subsystems are more likely to form — but they do so softly, as inductive biases rather than rigid designs.
• WOW signal (inaugural spark). After a period of repetitive, low-salience background stimulation (habituation), a deliberately designed salient event — the WOW — jolts the substrate, opens local high-plasticity windows, and consolidates the first functional pathways. The WOW is therefore the architecture’s “first heartbeat,” priming the system for deeper learning.
• Critical Hyper-Integration Event (CHIE). CHIE is OAGI’s operational definition of a cognitive “birth.” It’s a measurable global transition where the system stops acting like isolated modules and begins to show coordinated, self-referential behavior, reproducible causal prediction, persistent intrinsic motivation, and a stable reconfiguration of plasticity. The manifesto spells out observational signatures and a pragmatic rule for declaring CHIE (e.g., multiple signatures reproduced across independent replicas), and it treats CHIE as an ethical trigger that activates mandatory governance protocols.
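The pragmatic declaration rule for CHIE can be sketched as a simple threshold check. This is a minimal illustration, not the manifesto's actual procedure: the signature names, the threshold of four signatures, and the three-replica quorum are all assumptions chosen for the example.

```python
# Hypothetical signature names paraphrasing the observational signatures above.
SIGNATURES = (
    "cross_module_coordination",
    "self_referential_behavior",
    "reproducible_causal_prediction",
    "persistent_intrinsic_motivation",
    "stable_plasticity_reconfiguration",
)

def declare_chie(replica_observations, min_signatures=4, min_replicas=3):
    """Declare CHIE only if at least `min_signatures` of the listed
    signatures are reproduced in at least `min_replicas` independent
    replicas. `replica_observations` maps replica id -> set of
    observed signature names. Thresholds here are illustrative."""
    qualifying = [
        rid for rid, observed in replica_observations.items()
        if len(set(observed) & set(SIGNATURES)) >= min_signatures
    ]
    return len(qualifying) >= min_replicas
```

Requiring reproduction across independent replicas, rather than a single run, is what makes the declaration auditable rather than anecdotal.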
Learning dynamics: habituation, surprise, consolidation
Learning in OAGI follows cycles patterned on biology: an initial habituation phase reduces reactivity to repetitive input; the WOW produces surprise that drives focused exploration; and consolidation phases (including simulated “sleep”) reinforce useful pathways while pruning irrelevant ones. The proposed learning engine, Minimum-Surprise Learning (MSuL), biases exploration toward inputs that reduce prediction error most efficiently — in short, the agent actively seeks to resolve uncertainty rather than merely fit correlations. These mechanisms aim to produce sample-efficient, causally grounded learning instead of brute-force statistical memorization.
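The habituation-and-surprise dynamic can be illustrated with a toy explorer. This is a sketch under stated assumptions, not the MSuL algorithm itself: the exponential habituation discount, the optimistic initial error, and all names are illustrative choices.

```python
import math
from collections import defaultdict

class MinimumSurpriseExplorer:
    """Toy MSuL-style exploration: track a running prediction-error
    estimate and an exposure count per stimulus, and choose the
    stimulus whose expected error reduction, damped by habituation,
    is largest. Scoring rule and defaults are illustrative."""

    def __init__(self, habituation_rate=0.5):
        # Unseen inputs start maximally "surprising" (optimistic init).
        self.error = defaultdict(lambda: 1.0)
        self.exposures = defaultdict(int)
        self.habituation_rate = habituation_rate

    def score(self, stimulus):
        # Expected learning progress, discounted by prior exposure
        # (habituation: repeated input loses pull on attention).
        return self.error[stimulus] * math.exp(
            -self.habituation_rate * self.exposures[stimulus])

    def choose(self, candidates):
        return max(candidates, key=self.score)

    def observe(self, stimulus, prediction_error):
        # Running average of prediction error for this stimulus.
        self.exposures[stimulus] += 1
        n = self.exposures[stimulus]
        self.error[stimulus] += (prediction_error - self.error[stimulus]) / n
```

After repeated low-error exposure to a background stimulus, the explorer deprioritizes it and attends to whatever remains unresolved, which is the qualitative behavior the habituation/WOW cycle relies on.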
Embodiment, socialization, and the role of Guardians
A central OAGI claim is that semantic grounding requires an action–perception loop: sensorimotor embodiment (even in a high-fidelity simulator) anchors symbols to consequences like gravity, force, and causation. After CHIE, the architecture shifts into prolonged embodiment and socialization phases where human Guardians (technical and legal supervisors) tutor the agent, model norms, and help the emerging Narrative Operational Self (NOS) attach language and shared norms to internal representations. Guardians also have explicit authority to pause experiments and enforce containment if ontogenetic thresholds raise ethical concerns.
Second- and third-generation modules (regulation, memory, affect)
OAGI proposes modular additions to make ontogeny robust and auditable:
- Nocturnal Consolidation System (NCS): a sleep-like consolidation loop that replays high-salience episodes, down-scales weights to prevent saturation, and helps abstract episodic memories into semantic rules.
- Immutable Ontogenetic Memory (IOM): an auditable ledger (inspired by distributed-ledger ideas) that records milestones, WOW/CHIE events, stress metrics, and Guardian interventions — enabling forensic traceability of the agent’s life history.
- Socio-Affective Reciprocity Loop (SARL) and Computational Affective States (CAS): mechanisms giving the agent measurable socio-cognitive feedback (the ability to model Guardians’ intentions and register “moral surprise”) and an affective taxonomy (e.g., curiosity, flow, frustration), both of which feed into the agent’s regulatory policies.
- Other regulators such as the Epigenetic Plasticity Regulator (EPR), Active Forgetting & Semantic Pruning (AFSP), and a Hyper-Temporal Synchrony Module (HTSM) are proposed to handle critical windows, pruning, and global synchrony checks used as part of CHIE detection.
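Of the modules above, the IOM is the most directly codeable. The following is a minimal sketch of an append-only, hash-chained log in the distributed-ledger spirit the manifesto invokes; field names are illustrative, and the cryptographic signing the architecture also requires is omitted here.

```python
import hashlib
import json
import time

class ImmutableOntogeneticMemory:
    """Sketch of the IOM as a hash-chained, append-only log. Each
    entry commits to the previous entry's hash, so any later edit
    to the agent's recorded life history breaks verification."""

    def __init__(self):
        self.entries = []

    def append(self, event_type, payload, timestamp=None):
        # e.g. event_type in {"WOW", "CHIE", "guardian_intervention"}
        prev_hash = self.entries[-1]["hash"] if self.entries else "GENESIS"
        body = {
            "event_type": event_type,
            "payload": payload,
            "timestamp": timestamp if timestamp is not None else time.time(),
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self):
        """Walk the chain; any tampered entry or broken link fails."""
        prev = "GENESIS"
        for e in self.entries:
            body = {k: e[k] for k in
                    ("event_type", "payload", "timestamp", "prev_hash")}
            if e["prev_hash"] != prev:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Because each hash covers the previous one, the ledger supports the forensic traceability the manifesto asks for: an auditor can replay the chain and detect any retroactive alteration of milestones or Guardian interventions.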
Ethics-by-design and governance
Ethics and governance are built into OAGI from the start. The architecture mandates “stop & review” protocols that trigger on CHIE detection, defines Guardians and independent ethics committees, requires cryptographically signed logs, and prescribes pre-registered epistemic contracts for any normative changes in the agent. Rather than retrofitting safety, OAGI treats accountability, transparency, and auditable recordkeeping (IOM) as first-class engineering requirements.
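The “stop & review” protocol can be pictured as a state gate. This sketch is an assumption-laden illustration: the two-Guardian quorum and the specific roles are not prescribed by the text, which only requires that CHIE detection pause the experiment pending Guardian and ethics-committee review.

```python
from enum import Enum

class RunState(Enum):
    RUNNING = "running"
    PAUSED_FOR_REVIEW = "paused_for_review"
    RESUMED = "resumed"

class StopAndReview:
    """Illustrative 'stop & review' gate: CHIE detection pauses the
    experiment; resumption requires sign-off from a quorum of
    Guardians plus the independent ethics committee. Quorum size
    and role names are assumptions for this example."""

    def __init__(self, guardians, quorum=2):
        self.guardians = set(guardians)
        self.quorum = quorum
        self.state = RunState.RUNNING
        self.approvals = set()
        self.ethics_committee_ok = False

    def on_chie_detected(self):
        # Mandatory pause: clear any stale approvals.
        self.state = RunState.PAUSED_FOR_REVIEW
        self.approvals.clear()
        self.ethics_committee_ok = False

    def approve(self, guardian):
        if guardian in self.guardians:
            self.approvals.add(guardian)

    def ethics_sign_off(self):
        self.ethics_committee_ok = True

    def try_resume(self):
        if (self.state is RunState.PAUSED_FOR_REVIEW
                and len(self.approvals) >= self.quorum
                and self.ethics_committee_ok):
            self.state = RunState.RESUMED
        return self.state
```

The point of the sketch is the ordering: the pause is automatic on detection, while resumption is gated on explicit, multi-party human approval, mirroring the “ethics-by-design” stance rather than post-hoc safety review.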
Why OAGI claims to matter
OAGI’s contribution is both conceptual and practical. Conceptually, it reframes AGI as a developmental problem where education and formative context matter as much as architecture. Practically, it supplies reproducible operational milestones (WOW, CHIE), measurable signatures for detection, and governance rules that could make experiments safer and auditable. Its central promise: deeper, more sample-efficient, and better-aligned general intelligence by cultivating an agent in stages rather than trying to force generality through scale alone.
Closing note / future directions
The OAGI manifesto is intentionally interdisciplinary: it calls for simulation prototypes, guarded human-in-the-loop experiments, and collaboration with developmental neuroscience and ethics experts. The paper outlines concrete prototypes (from the Virtual Cognitive Sprout to nocturnal consolidation trials) and invites the community to replicate and test its claims. Whether OAGI’s ontogenetic route will outperform or complement scale-based approaches is an empirical question — but the framework provides a clear, governed roadmap to investigate that question responsibly.