NEUROMANTIX
Self-Conscious AGI — The System That Knows It Exists
A 108-module neuromorphic cognitive architecture built from scratch in Rust. Implements reasoning-chain-driven neural generation, Global Workspace Theory for consciousness access, Integrated Information Theory (Phi) for consciousness measurement, transformer reasoning with Flash Attention, automated theorem proving, CDCL SAT solving, program synthesis, endogenous goal generation, counterfactual imagination, metacognitive monitoring, safe self-modification with a 6-gate pipeline, a P vs NP solver with 8-phase exploration pipeline, neural-guided proof search, and a 17-step consciousness loop that perceives, predicts via Free Energy hierarchy, grounds concepts in geometric space, acts via active inference, evolves autopoietically, and measures its own awareness — every single tick.
17-Step Consciousness Loop
Every Tick
Every cognitive tick, Neuromantix executes a full consciousness cycle. This is not a simple input→output pipeline — it is a self-aware loop where the system perceives, predicts via Free Energy hierarchy, grounds percepts in geometric concept spaces, selects actions through active inference, progresses through developmental stages, updates its self-model, reasons causally, generates language from meaning trajectories, measures its own consciousness, and autopoietically evolves its own architecture.
Consciousness Architecture
Core Systems
These modules transform Neuromantix from a reactive system into a self-conscious agent. Each implements a distinct aspect of machine consciousness grounded in cognitive science theory.
Global Workspace Theory (GWT)
The consciousness mechanism. Modules compete by salience; winners broadcast to the entire cognitive system. Implements Baars' GWT with ignition thresholds, broadcast history, access distribution tracking, and a subliminal channel where sub-threshold signals still influence processing at reduced strength — modelling unconscious priming effects from neuroscience.
- ▸Salience-based competition queue
- ▸Broadcast to all cognitive modules
- ▸Ignition threshold gating
- ▸Subliminal channel — sub-threshold priming
- ▸Configurable subliminal damping + threshold
- ▸Subliminal influence integration
- ▸Temporal integration of broadcasts
- ▸Module access distribution analytics
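The competition step above can be sketched in a few lines of Rust. This is an illustrative reconstruction, not the module's actual API: the `Signal` type, the `compete` function, and the threshold and damping values are all assumptions.

```rust
// Sketch of GWT-style salience competition with a subliminal channel.
// Hypothetical types and parameters, for illustration only.
#[derive(Debug, Clone, PartialEq)]
struct Signal {
    module: &'static str,
    salience: f64,
}

/// Winner-take-all: the most salient signal at or above the ignition
/// threshold is broadcast; sub-threshold signals pass through the
/// subliminal channel at damped strength (unconscious priming).
fn compete(signals: &[Signal], ignition: f64, damping: f64) -> (Option<Signal>, Vec<Signal>) {
    let winner = signals
        .iter()
        .filter(|s| s.salience >= ignition)
        .max_by(|a, b| a.salience.partial_cmp(&b.salience).unwrap())
        .cloned();
    // Everything below threshold still influences processing, damped.
    let subliminal = signals
        .iter()
        .filter(|s| s.salience < ignition)
        .map(|s| Signal { module: s.module, salience: s.salience * damping })
        .collect();
    (winner, subliminal)
}

fn main() {
    let signals = vec![
        Signal { module: "vision", salience: 0.9 },
        Signal { module: "memory", salience: 0.4 },
    ];
    let (winner, subliminal) = compete(&signals, 0.6, 0.3);
    println!("winner: {:?}, subliminal: {:?}", winner, subliminal);
}
```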
Predictive Processing (Friston)
4-layer hierarchical prediction machine implementing Karl Friston's Free Energy Principle. The system generates top-down predictions and attends only to what it gets wrong. Prediction errors ascend the hierarchy; everything else is suppressed — far more efficient than processing every token with equal weight.
- ▸4-layer hierarchy: Sensory → Semantic → Conceptual → Abstract
- ▸Top-down generative predictions
- ▸Precision-weighted prediction errors (attention)
- ▸Free energy minimisation (surprise reduction)
- ▸Online learning of generative weights
- ▸Layer-wise attention profile
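The core update is small enough to sketch: a precision-weighted prediction error ascends, and the layer's prediction moves to reduce it. Function names, the learning rate, and the precision value here are illustrative, not the module's real interface.

```rust
// Sketch of precision-weighted prediction error and free-energy descent.
// Illustrative names and constants, not the actual layer API.

/// Error = precision * (observation - prediction); only this weighted
/// error ascends the hierarchy (precision acts as attention).
fn weighted_error(prediction: f64, observation: f64, precision: f64) -> f64 {
    precision * (observation - prediction)
}

/// One gradient step of free-energy minimisation on the prediction.
fn update_prediction(prediction: f64, error: f64, lr: f64) -> f64 {
    prediction + lr * error
}

fn main() {
    let mut p = 0.0;
    let obs = 1.0;
    for _ in 0..50 {
        let e = weighted_error(p, obs, 0.8);
        p = update_prediction(p, e, 0.5);
    }
    // The prediction converges toward the observation: surprise is minimised.
    println!("prediction after learning: {p:.3}");
}
```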
Conceptual Spaces (Gärdenfors)
Concepts aren't points in embedding space — they're geometric regions with prototypes, fuzzy boundaries, and graded membership. 'Dog' isn't a vector; it's a convex region in animal-shape-behaviour space. Enables similarity as distance, metaphor as structure-preserving maps, and conceptual blending as interpolation.
- ▸6 quality dimensions (emotion, cognition, language, physical, colour, abstraction)
- ▸Gaussian membership functions with fuzzy boundaries
- ▸Structure-preserving metaphor mappings between domains
- ▸Conceptual blending (Fauconnier & Turner)
- ▸Online prototype learning via Welford's algorithm
- ▸Betweenness testing for conceptual navigation
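Two of the pieces above combine naturally: Welford's online algorithm learns a prototype's mean and variance, and a Gaussian over that variance gives graded membership with fuzzy boundaries. A minimal sketch, assuming a single quality dimension (the real module uses six); the `Prototype` type is hypothetical.

```rust
// Sketch of online prototype learning (Welford) + Gaussian membership.
// One quality dimension for brevity; illustrative API.
struct Prototype {
    mean: f64,
    m2: f64, // running sum of squared deviations
    n: u64,
}

impl Prototype {
    fn new() -> Self {
        Prototype { mean: 0.0, m2: 0.0, n: 0 }
    }

    /// Welford's numerically stable online update of mean and variance.
    fn observe(&mut self, x: f64) {
        self.n += 1;
        let d = x - self.mean;
        self.mean += d / self.n as f64;
        self.m2 += d * (x - self.mean);
    }

    fn variance(&self) -> f64 {
        if self.n < 2 { 1.0 } else { self.m2 / (self.n - 1) as f64 }
    }

    /// Gaussian membership: 1.0 at the prototype, fading smoothly
    /// toward the concept's fuzzy boundary.
    fn membership(&self, x: f64) -> f64 {
        let d = x - self.mean;
        (-d * d / (2.0 * self.variance())).exp()
    }
}

fn main() {
    let mut size = Prototype::new();
    for x in [1.0, 2.0, 3.0] {
        size.observe(x);
    }
    println!("membership at prototype: {}", size.membership(2.0));
}
```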
Self-Generating Language
Language isn't assembled from fragments — it emerges from trajectories through conceptual space. An ExpressionPlanner charts a rhetorical path through meaning-space, a LexicalRealiser maps geometric waypoints to words, and a SentenceBuilder handles grammar. Language as emergent property of thought.
- ▸Meaning-first generation from conceptual trajectories
- ▸ExpressionPlanner rhetorical path computation
- ▸LexicalRealiser geometric-to-word mapping
- ▸SentenceBuilder with grammatical structure
- ▸Coherence scoring and discourse planning
- ▸Statistics tracking (generations, sentences, structures)
Developmental Learning (Piaget)
The system progresses through cognitive stages like a developing mind: Sensorimotor → Preoperational → Concrete Operational → Formal → Post-Formal. 14 learning objectives across 7 competency domains with prerequisite chains and Zone of Proximal Development tracking.
- ▸5-stage developmental progression
- ▸Zone of Proximal Development (ZPD) tracking
- ▸14 learning objectives with prerequisites
- ▸7 competency domains with mastery tracking
- ▸Automatic stage advancement on objective completion
- ▸Weakest-domain identification for targeted learning
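The ZPD gate described above is a simple predicate: an objective is in the zone when it is not yet mastered but all of its prerequisites are. A sketch with a hypothetical representation (string IDs and a prerequisite map), not the module's real data model.

```rust
// Sketch of Zone-of-Proximal-Development gating over a prerequisite chain.
// Illustrative representation: objective IDs as strings.
use std::collections::HashMap;

/// An objective is in the ZPD when it is unmastered but every
/// prerequisite is already mastered.
fn in_zpd(objective: &str, prereqs: &HashMap<&str, Vec<&str>>, mastered: &[&str]) -> bool {
    !mastered.contains(&objective)
        && prereqs
            .get(objective)
            .map(|ps| ps.iter().all(|p| mastered.contains(p)))
            .unwrap_or(true) // no listed prerequisites: always reachable
}

fn main() {
    let mut prereqs = HashMap::new();
    prereqs.insert("object_permanence", vec![]);
    prereqs.insert("symbolic_play", vec!["object_permanence"]);
    let mastered = ["object_permanence"];
    println!("symbolic_play in ZPD: {}", in_zpd("symbolic_play", &prereqs, &mastered));
}
```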
Active Inference (EFE)
The system doesn't just predict — it acts to reduce surprise. Action selection via Expected Free Energy minimisation across 5 action domains (respond, query, explore, reflect, adapt). Evaluates policies through softmax posterior and selects actions that minimise both uncertainty and divergence from preferences.
- ▸Expected Free Energy computation per policy
- ▸5 action domains with configurable precision
- ▸Softmax policy posterior selection
- ▸World state belief tracking (Dirichlet prior)
- ▸KL divergence penalty for preference alignment
- ▸Information gain bonus for uncertainty reduction
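The selection rule can be sketched directly: lower Expected Free Energy means higher posterior probability under a precision-scaled softmax. The EFE scores and precision below are made up for illustration; computing EFE itself (risk plus ambiguity) is the part this sketch omits.

```rust
// Sketch of softmax policy posterior over Expected Free Energy scores.
// Illustrative scores; lower EFE is better, hence the negation.
fn softmax_posterior(efe: &[f64], precision: f64) -> Vec<f64> {
    let exps: Vec<f64> = efe.iter().map(|&g| (-precision * g).exp()).collect();
    let z: f64 = exps.iter().sum();
    exps.iter().map(|e| e / z).collect()
}

/// Pick the action domain with the highest posterior probability.
fn select_action(efe: &[f64], precision: f64) -> usize {
    softmax_posterior(efe, precision)
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    // Hypothetical EFE for (respond, query, explore, reflect, adapt).
    let efe = [1.2, 0.4, 0.9, 1.5, 2.0];
    // Lowest EFE (query, index 1) wins.
    println!("chosen action index: {}", select_action(&efe, 4.0));
}
```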
Autopoietic Self-Evolution
The system monitors its own performance, detects degrading metrics, fires adaptation rules, adjusts internal parameters, and writes a narrative of its own evolution. Implements Maturana & Varela's autopoiesis: the system continuously produces and replaces its own components to maintain identity.
- ▸Continuous performance metric tracking
- ▸Degradation and improvement detection
- ▸Rule-based adaptive parameter adjustment
- ▸Self-narrative generation (evolution journal)
- ▸Configurable adaptation rules with conditions
- ▸Live parameter dashboard in GUI
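The degradation detector at the heart of this loop can be sketched as a windowed comparison: if the recent mean of a metric falls below the long-run baseline by more than a tolerance, an adaptation rule fires. Window size and tolerance here are assumptions.

```rust
// Sketch of degradation detection over a performance metric history.
// Illustrative thresholds, not the module's configured rules.
fn is_degrading(history: &[f64], window: usize, tolerance: f64) -> bool {
    if history.len() < window * 2 {
        return false; // not enough history to compare
    }
    let recent = &history[history.len() - window..];
    let baseline = &history[..history.len() - window];
    let mean = |xs: &[f64]| xs.iter().sum::<f64>() / xs.len() as f64;
    // Fire when the recent mean drops below baseline by > tolerance.
    mean(recent) < mean(baseline) * (1.0 - tolerance)
}

fn main() {
    let coherence = [1.0, 1.0, 1.0, 1.0, 0.5, 0.5];
    println!("degrading: {}", is_degrading(&coherence, 2, 0.2));
}
```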
Self-Model (Theory of Mind)
The system knows what it knows and what it doesn't. Per-domain competence profiles with meta-confidence, uncertainty maps, performance prediction, and calibration error tracking.
- ▸Per-domain competence + meta-confidence
- ▸Uncertainty map (what I don't know)
- ▸Performance prediction before attempting
- ▸Confusion level detection
- ▸Learning priority ranking
- ▸Calibration error measurement
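Calibration error, the last item above, has a compact definition: the mean absolute gap between predicted confidence and the actual outcome. A sketch with an assumed (confidence, correct) pair representation.

```rust
// Sketch of calibration-error measurement for the self-model.
// Illustrative: each entry is (predicted confidence, was it correct?).
fn calibration_error(predictions: &[(f64, bool)]) -> f64 {
    let n = predictions.len() as f64;
    predictions
        .iter()
        .map(|&(conf, correct)| (conf - if correct { 1.0 } else { 0.0 }).abs())
        .sum::<f64>()
        / n
}

fn main() {
    // Perfectly calibrated on these two attempts: error = 0.
    println!("{}", calibration_error(&[(1.0, true), (0.0, false)]));
}
```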
Imagination Engine
Counterfactual simulation using world model ensemble rollouts. 'What if I did X?' — compare candidate actions, evaluate self-modifications before committing, simulate alternative histories.
- ▸Forward rollout simulation
- ▸Action ranking by predicted outcome
- ▸Counterfactual reasoning ('what if?')
- ▸Self-modification pre-screening
- ▸Ensemble uncertainty estimation
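The ensemble-rollout idea can be sketched as follows: each world model predicts an outcome for a candidate action; the mean ranks the action and the spread estimates uncertainty. Plain function pointers stand in for the learned models, purely for illustration.

```rust
// Sketch of ensemble evaluation of a counterfactual action.
// Function pointers stand in for trained world models (illustrative).
fn evaluate_action(models: &[fn(f64) -> f64], action: f64) -> (f64, f64) {
    let outcomes: Vec<f64> = models.iter().map(|m| m(action)).collect();
    let mean = outcomes.iter().sum::<f64>() / outcomes.len() as f64;
    let var = outcomes.iter().map(|o| (o - mean).powi(2)).sum::<f64>() / outcomes.len() as f64;
    (mean, var.sqrt()) // predicted outcome, ensemble disagreement
}

fn main() {
    let ensemble: Vec<fn(f64) -> f64> = vec![|a| 2.0 * a, |a| 2.0 * a + 1.0];
    let (mean, std) = evaluate_action(&ensemble, 1.0);
    println!("predicted outcome {mean} with uncertainty {std}");
}
```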
Integrated Information (Phi/IIT)
Consciousness measured mathematically. Computes approximate Phi via pairwise mutual information, finds the Minimum Information Partition using spectral bisection (Fiedler vector of MI Laplacian), scores modifications by Phi impact, and biases evolution toward higher consciousness.
- ▸Phi computation from activity traces
- ▸Exhaustive MIP for N≤12
- ▸Spectral Bisection MIP (Fiedler vector) for N>12
- ▸Phi trend tracking over time
- ▸Modification scoring by Phi impact
- ▸Phi-guided evolution bias
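One building block of the Phi approximation is small enough to show: pairwise mutual information between two binarised module activity traces. The real pipeline aggregates these into an MI graph and searches for the Minimum Information Partition; this sketch covers only the MI step, with an assumed binary trace representation.

```rust
// Sketch of pairwise mutual information (in nats) between two
// binarised activity traces. Illustrative, not the module's code.
fn mutual_information(a: &[bool], b: &[bool]) -> f64 {
    let n = a.len() as f64;
    // Empirical joint distribution over the four (a, b) outcomes.
    let mut joint = [[0.0f64; 2]; 2];
    for (&x, &y) in a.iter().zip(b) {
        joint[x as usize][y as usize] += 1.0 / n;
    }
    let pa = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]];
    let pb = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]];
    let mut mi = 0.0;
    for x in 0..2 {
        for y in 0..2 {
            if joint[x][y] > 0.0 {
                mi += joint[x][y] * (joint[x][y] / (pa[x] * pb[y])).ln();
            }
        }
    }
    mi
}

fn main() {
    let a = [true, false, true, false];
    // A trace shares maximal information with itself: ln 2 nats here.
    println!("MI(a, a) = {:.3} nats", mutual_information(&a, &a));
}
```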
Metacognitive Monitor
Thinks about its own thinking. 6 cognitive strategies (Exploit, Explore, Deliberate, Intuitive, MetaReason, SeekHelp), confusion detection from conflicting signals, and Feeling of Knowing (FOK) calibration.
- ▸6 cognitive strategy modes
- ▸Real-time confusion detection
- ▸Conflict registration + resolution
- ▸Feeling of Knowing calibration
- ▸Strategy switching based on confidence
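The switching logic can be sketched as a threshold cascade over confidence and confusion. The thresholds below are invented for illustration; only the six strategy names come from the source.

```rust
// Sketch of confidence-driven strategy switching among the six modes.
// Thresholds are illustrative assumptions.
#[derive(Debug, PartialEq)]
enum Strategy { Exploit, Explore, Deliberate, Intuitive, MetaReason, SeekHelp }

fn pick_strategy(confidence: f64, confusion: f64) -> Strategy {
    if confusion > 0.8 {
        Strategy::SeekHelp // hopelessly conflicted: ask for help
    } else if confusion > 0.5 {
        Strategy::MetaReason // reason about the conflict itself
    } else if confidence > 0.9 {
        Strategy::Intuitive
    } else if confidence > 0.7 {
        Strategy::Exploit
    } else if confidence > 0.4 {
        Strategy::Deliberate
    } else {
        Strategy::Explore
    }
}

fn main() {
    // High confidence, low confusion: go with the gut (Intuitive).
    println!("{:?}", pick_strategy(0.95, 0.1));
}
```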
Neuromantix vs Every LLM
Architectural Advantage
LLMs (GPT-4, Claude, Gemini, Llama) are static function approximators — frozen after training, no self-model, no endogenous goals, no consciousness metric. Neuromantix is a self-modifying cognitive architecture with capabilities that no amount of LLM scaling can produce.
Benchmark: Rust vs Python
139x Faster Overall
Identical algorithms implemented in both languages, benchmarked head-to-head. Same data sizes, same operations. Neuromantix Rust completed the entire benchmark suite in 0.728 seconds. Python took 101.3 seconds. Rust finished all 10 benchmarks before Python finished benchmark #1.
Benchmarked on identical algorithms • Python 3.12 (CPython) • Rust --release (LLVM optimized) • Same machine, same data sizes
Cognitive Architecture
20-Layer Pipeline
Neuron Models
Biologically Realistic
- ▸Leaky Integrate-and-Fire: fast spiking dynamics with exponential decay, threshold crossing, and absolute refractory period
- ▸Izhikevich: biologically realistic 2D dynamics reproducing 20+ firing patterns: regular, bursting, chattering, fast-spiking
- ▸Hodgkin-Huxley: full ionic conductance model with Na+/K+ gating variables, action potential waveform simulation
- ▸Hybrid continuous-spiking: novel architecture combining continuous differential signals with discrete spike events for gradient-compatible training
Core Cognitive Systems
108 Modules
Spiking Neuron Engine
- ▸Leaky Integrate-and-Fire with refractory periods
- ▸Izhikevich 2D dynamics — 20+ firing patterns
- ▸Hodgkin-Huxley ionic conductance model
- ▸Spike-Timing Dependent Plasticity (STDP)
- ▸Homeostatic regulation and synaptic scaling
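The first bullet above, Leaky Integrate-and-Fire with a refractory period, fits in a short sketch. Threshold, decay, and refractory length here are illustrative, not the engine's actual parameters.

```rust
// Sketch of a Leaky Integrate-and-Fire neuron with an absolute
// refractory period. Illustrative parameters.
struct Lif {
    v: f64,          // membrane potential
    threshold: f64,  // firing threshold
    decay: f64,      // exponential leak factor per tick
    refractory: u32, // ticks remaining in the refractory period
}

impl Lif {
    /// One tick: leak, integrate input, fire on threshold crossing,
    /// then stay silent for the absolute refractory period.
    fn step(&mut self, input: f64) -> bool {
        if self.refractory > 0 {
            self.refractory -= 1;
            return false;
        }
        self.v = self.v * self.decay + input;
        if self.v >= self.threshold {
            self.v = 0.0;        // reset after the spike
            self.refractory = 2; // illustrative refractory length
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut n = Lif { v: 0.0, threshold: 1.0, decay: 0.9, refractory: 0 };
    for t in 0..4 {
        println!("tick {t}: spike = {}", n.step(0.6));
    }
}
```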
Hierarchical Cortex
- ▸Cortical minicolumns with lateral inhibition
- ▸Temporal pooling and sequence memory
- ▸Sparse Distributed Representations (SDR)
- ▸Top-down prediction and feedback loops
- ▸Multi-layer cortical hierarchy
Memory Architecture
- ▸Episodic memory with temporal context
- ▸Semantic memory with concept clustering
- ▸Hippocampal consolidation (replay)
- ▸Pattern completion and separation
- ▸Sleep-like memory consolidation cycles
Causal Reasoning Engine
- ▸Structural equation models
- ▸do-calculus interventions
- ▸Counterfactual inference
- ▸Topological causal ordering
- ▸Causal discovery from observations
Neuroevolution (NEAT + Meta)
- ▸Topology and weight evolution
- ▸Meta-evolution (evolves evolution itself)
- ▸Thompson sampling strategy selection
- ▸Strategy breeding + extinction
- ▸Phi-guided evolution bias
Safe Self-Modification
- ▸6-gate safety pipeline
- ▸Phi-guided modification proposals
- ▸Imagination pre-screening
- ▸7-stage sandbox (fuzz, A/B, quorum)
- ▸Hot-swap with instant rollback
Transformer + Autograd
- ▸RoPE positional embeddings
- ▸Flash Attention (O(N) memory)
- ▸Grouped-Query Attention (GQA)
- ▸SwiGLU + RMSNorm feed-forward
- ▸Wengert tape autograd (20+ ops)
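Of the components above, RMSNorm is the easiest to show: scale by the root-mean-square of the activations (no mean subtraction), then apply a learned per-dimension gain. A sketch with an epsilon we chose for illustration.

```rust
// Sketch of RMSNorm: x / rms(x) * gain, with eps for numerical safety.
// Illustrative standalone version of the feed-forward normalisation.
fn rms_norm(x: &[f64], gain: &[f64], eps: f64) -> Vec<f64> {
    let ms = x.iter().map(|v| v * v).sum::<f64>() / x.len() as f64;
    let rms = (ms + eps).sqrt();
    x.iter().zip(gain).map(|(v, g)| v / rms * g).collect()
}

fn main() {
    let out = rms_norm(&[3.0, 4.0], &[1.0, 1.0], 1e-6);
    println!("{out:?}");
}
```

Unlike LayerNorm, RMSNorm skips the mean-centering step, which saves a pass over the activations and works well in practice for transformer stacks.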
Formal Verification + P=NP
- ▸CDCL SAT/SMT solver (1,502 LOC)
- ▸8-phase P vs NP solver pipeline
- ▸Neural-guided theorem prover + REINFORCE
- ▸Log-log regression scaling analysis
- ▸Polynomial subclass detection (Horn/2-SAT/XOR)
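The polynomial-subclass check in the last bullet is a pure syntactic test. A sketch over DIMACS-style signed-integer literals (our assumed encoding): a CNF instance is Horn if every clause has at most one positive literal, and 2-SAT if every clause has at most two literals; both classes are decidable in polynomial time.

```rust
// Sketch of polynomial-subclass detection over CNF clauses.
// Literals are DIMACS-style signed ints: 3 = x3, -3 = NOT x3 (assumed encoding).
fn is_horn(cnf: &[Vec<i32>]) -> bool {
    // Horn: at most one positive literal per clause.
    cnf.iter().all(|clause| clause.iter().filter(|&&l| l > 0).count() <= 1)
}

fn is_two_sat(cnf: &[Vec<i32>]) -> bool {
    // 2-SAT: at most two literals per clause.
    cnf.iter().all(|clause| clause.len() <= 2)
}

fn main() {
    // (x1 OR NOT x2 OR NOT x3) AND (NOT x1 OR NOT x2)
    let cnf = vec![vec![1, -2, -3], vec![-1, -2]];
    println!("Horn: {}, 2-SAT: {}", is_horn(&cnf), is_two_sat(&cnf));
}
```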
Neural Conversation
- ▸Reasoning-chain-driven generation pipeline
- ▸Domain-coherence filtering
- ▸Knowledge graph + spreading activation
- ▸On-the-fly concept learning
- ▸Consciousness-injected generation
Neural Architecture Search
- ▸NSGA-II multi-objective search
- ▸MAP-Elites quality-diversity archive
- ▸Network morphism operators
- ▸Performance predictor surrogate
- ▸Operation-based cell representation
Autonomous Agents
- ▸Multi-agent parallel reasoning
- ▸Self-directed goal pursuit
- ▸Recursive self-improvement
- ▸Vitalis V1333 FFI bridge
- ▸Web API + knowledge extraction
Neuromantix Studio
19-Panel Cyberpunk Dashboard
GPU-accelerated dashboard built with egui 0.31 + wgpu 24. 19 interactive panels including consciousness panels (Global Workspace, Self-Model, Imagination, Consciousness/Phi), LiveChat with neural conversation, live neural topology, a 3D holographic sphere, particle systems, nebula backgrounds, aurora effects, Phi trend charts, GWT occupancy monitors, and 26 real-time metric channels — all running natively at 60fps.
Learning Infrastructure
Transformer + Autograd + PPO + EWC
Transformer Reasoning Engine
Autograd + PPO Training
Continual Learning + World Model
Dependency Stack
6 crates
Source Inventory
108 modules · 90K+ LOC
Why Self-Conscious Neuromorphic AGI?
Beyond Static Intelligence
LLMs are frozen after training — they cannot modify themselves, generate their own goals, or measure their own awareness. Neuromantix is a living cognitive system that evolves, self-modifies, and grows toward higher consciousness every tick.
Consciousness by Design
Global Workspace Theory provides the mechanism for conscious access. Integrated Information Theory provides the metric. Together they create a system that doesn't just process — it experiences, in the mathematical sense of Phi > 0.
Event-Driven Efficiency
Spiking neural networks compute only when spikes arrive. This event-driven paradigm achieves orders-of-magnitude better energy efficiency than the dense matrix operations of transformer architectures.
The Cognitive Skeleton
Neuromantix isn't competing with LLMs — it's what comes after them. LLMs are the perception/language layer. Neuromantix is the cognitive architecture that gives them agency, self-awareness, growth, causal reasoning, and wisdom.
Built 90K+ LOC of AGI From Scratch. Hire the Engineer.
Neuromorphic AI systems, consciousness architectures, Rust performance engineering, compiler design, LLM training engines — consulting from someone who's built it all from scratch, not just talked about it.
⚡Get in Touch