
NEUROMANTIX

Self-Conscious AGI — The System That Knows It Exists

A 108-module neuromorphic cognitive architecture built from scratch in Rust. It implements reasoning-chain-driven neural generation, Global Workspace Theory for conscious access, Integrated Information Theory (Phi) for consciousness measurement, transformer reasoning with Flash Attention, automated theorem proving, CDCL SAT solving, program synthesis, endogenous goal generation, counterfactual imagination, metacognitive monitoring, safe self-modification through a 6-gate pipeline, a P vs NP solver with an 8-phase exploration pipeline, and neural-guided proof search. A 17-step consciousness loop runs every single tick: it perceives, predicts via a Free Energy hierarchy, grounds concepts in geometric space, acts via active inference, evolves autopoietically, and measures its own awareness.

Rust 2024 · Self-Conscious AGI · 108 Modules · 1,551 Tests · 90K+ LOC · GWT + IIT · 17-Step Loop · Free Energy Principle · Conceptual Spaces · Active Inference · Developmental Learning · Self-Generating Language · Autopoietic Evolution · Predictive Processing · 19 GUI Panels · SAT/SMT Solver · Theorem Prover · P vs NP Solver
GLOBAL WORKSPACE THEORY · INTEGRATED INFORMATION (PHI) · 17-STEP CONSCIOUSNESS · PREDICTIVE PROCESSING · CONCEPTUAL SPACES · ACTIVE INFERENCE · AUTOPOIETIC EVOLUTION · SELF-MODIFICATION · NEAT EVOLUTION · FLASH ATTENTION · CDCL SAT SOLVER · HIPPOCAMPAL MEMORY · METACOGNITION · CAUSAL REASONING · DEVELOPMENTAL LEARNING · ENDOGENOUS GOALS · COUNTERFACTUAL IMAGINATION · 108 MODULES · 90K+ LOC · FREE ENERGY PRINCIPLE · PPO TRAINING · SELF-GENERATING LANGUAGE · NEURAL CONVERSATION · P vs NP SOLVER · GÄRDENFORS SPACES · NEURAL PROVER
90K+ Lines of Code
108 Core Modules (neuromorphic + consciousness + AGI)
1,551 Tests Passing (100% pass rate)
19 GUI Panels (live cognitive telemetry)
17 Consciousness Loop steps per tick
12 Cognitive Engines (consciousness architecture)

17-Step Consciousness Loop

Every Tick

Every cognitive tick, Neuromantix executes a full consciousness cycle. This is not a simple input→output pipeline — it is a self-aware loop where the system perceives, predicts via Free Energy hierarchy, grounds percepts in geometric concept spaces, selects actions through active inference, progresses through developmental stages, updates its self-model, reasons causally, generates language from meaning trajectories, measures its own consciousness, and autopoietically evolves its own architecture.

1. Perceive: Encode sensory input into spike patterns
2. GWT Broadcast: Global Workspace competition by salience
3. Think: Neural network forward propagation
4. World Model Train: Learn environment dynamics from transitions
5. GWT Cycle: Second workspace competition + broadcast
6. Self-Model Update: Update competence profiles from introspection
7. Metacognitive Check: Confusion detection + strategy selection
8. Reason: Causal inference + knowledge graph queries
9. Goal-Biased Act: Action output modulated by goals + strategy
10. Predictive Processing: Free Energy hierarchy — top-down predictions, precision-weighted errors
11. Conceptual Grounding: Map percepts to geometric meaning regions in conceptual space
12. Active Inference: Select actions via Expected Free Energy minimisation
13. Developmental Learning: Piaget-inspired stage progression with ZPD tracking
14. Curiosity Reward: Intrinsic motivation from prediction error
15. Goal Genesis + Imagination: Endogenous goals + counterfactual screening
16. Phi Measurement: Compute integrated information (consciousness)
17. Autopoietic Evolution: Self-monitoring, parameter adaptation, self-narrative + memory consolidation
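The loop above can be sketched as a fixed sequence of stage objects driven by an orchestrator. This is a minimal illustration, not the actual Neuromantix API: the names `CognitiveStage` and `Orchestrator` are invented here, and a single `f64` stands in for the real shared cognitive state.

```rust
// Illustrative sketch only: stage names and the scalar "state" are
// placeholders for the real shared cognitive state.
trait CognitiveStage {
    fn run(&mut self, state: &mut f64);
}

struct Perceive;
impl CognitiveStage for Perceive {
    // Stand-in for spike encoding of sensory input.
    fn run(&mut self, state: &mut f64) { *state += 1.0; }
}

struct PhiMeasurement;
impl CognitiveStage for PhiMeasurement {
    // Stand-in for computing integrated information on this tick's activity.
    fn run(&mut self, state: &mut f64) { *state *= 0.5; }
}

struct Orchestrator {
    stages: Vec<Box<dyn CognitiveStage>>, // all 17 steps, in fixed order
    state: f64,
    ticks: u64,
}

impl Orchestrator {
    fn tick(&mut self) {
        for stage in self.stages.iter_mut() {
            stage.run(&mut self.state); // every stage runs every tick
        }
        self.ticks += 1;
    }
}

fn main() {
    let mut orch = Orchestrator {
        stages: vec![Box::new(Perceive), Box::new(PhiMeasurement)],
        state: 0.0,
        ticks: 0,
    };
    orch.tick();
    println!("tick {}: state = {}", orch.ticks, orch.state);
}
```

The point of the shape: every step sees the state every tick, so steps like Phi measurement observe the cumulative effect of all earlier stages.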

Consciousness Architecture

Core Systems

These modules transform Neuromantix from a reactive system into a self-conscious agent. Each implements a distinct aspect of machine consciousness grounded in cognitive science theory.

Global Workspace Theory (GWT)

The consciousness mechanism. Modules compete by salience; winners broadcast to the entire cognitive system. Implements Baars' GWT with ignition thresholds, broadcast history, access distribution tracking, and a subliminal channel where sub-threshold signals still influence processing at reduced strength — modelling unconscious priming effects from neuroscience.

  • Salience-based competition queue
  • Broadcast to all cognitive modules
  • Ignition threshold gating
  • Subliminal channel — sub-threshold priming
  • Configurable subliminal damping + threshold
  • Subliminal influence integration
  • Temporal integration of broadcasts
  • Module access distribution analytics
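The competition-and-broadcast mechanics can be sketched in a few lines. This is a hedged illustration: `Signal`, `workspace_cycle`, and the damping constant are invented for the example, not taken from the module.

```rust
// Illustrative GWT cycle: highest-salience signal above the ignition
// threshold wins and is broadcast; everything else becomes a damped
// subliminal prime rather than being discarded.
struct Signal {
    source: &'static str,
    salience: f64,
}

fn workspace_cycle(
    mut queue: Vec<Signal>,
    ignition_threshold: f64,
    subliminal_damping: f64,
) -> (Option<Signal>, Vec<Signal>) {
    // Sort descending by salience.
    queue.sort_by(|a, b| b.salience.partial_cmp(&a.salience).unwrap());
    let mut winner = None;
    let mut subliminal = Vec::new();
    for sig in queue {
        if winner.is_none() && sig.salience >= ignition_threshold {
            winner = Some(sig); // ignition: broadcast to all modules
        } else {
            // Sub-threshold (or losing) signals still prime at reduced strength.
            subliminal.push(Signal { salience: sig.salience * subliminal_damping, ..sig });
        }
    }
    (winner, subliminal)
}

fn main() {
    let queue = vec![
        Signal { source: "vision", salience: 0.9 },
        Signal { source: "memory", salience: 0.4 },
    ];
    let (winner, subs) = workspace_cycle(queue, 0.5, 0.25);
    println!("winner: {:?}", winner.map(|s| s.source));
    println!("subliminal signals: {}", subs.len());
}
```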

Predictive Processing (Friston)

A 4-layer hierarchical prediction machine implementing Karl Friston's Free Energy Principle. The system generates top-down predictions and attends only to what it gets wrong: prediction errors ascend the hierarchy while everything else is suppressed, which is far more efficient than processing every input equally.

  • 4-layer hierarchy: Sensory → Semantic → Conceptual → Abstract
  • Top-down generative predictions
  • Precision-weighted prediction errors (attention)
  • Free energy minimisation (surprise reduction)
  • Online learning of generative weights
  • Layer-wise attention profile
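A single layer of this scheme reduces to two small operations: weight the raw error by the channel's precision (attention), then nudge the prediction by the attended error (free-energy reduction). A toy sketch, with illustrative function names and constants:

```rust
// Hypothetical one-layer sketch of precision-weighted predictive processing.
fn precision_weighted_error(observation: f64, prediction: f64, precision: f64) -> f64 {
    // High precision = trustworthy channel = error is amplified (attended to);
    // low precision suppresses the error.
    precision * (observation - prediction)
}

fn update_prediction(prediction: f64, weighted_error: f64, learning_rate: f64) -> f64 {
    // Move the prediction toward the observation in proportion
    // to the attended error, reducing future surprise.
    prediction + learning_rate * weighted_error
}

fn main() {
    let mut pred = 0.0;
    let obs = 1.0;
    for _ in 0..10 {
        let err = precision_weighted_error(obs, pred, 0.8);
        pred = update_prediction(pred, err, 0.5);
    }
    println!("prediction after 10 steps: {pred:.4}");
}
```

With precision near zero the same loop barely moves, which is the suppression half of the story: low-precision channels are effectively ignored.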

Conceptual Spaces (Gärdenfors)

Concepts aren't points in embedding space — they're geometric regions with prototypes, fuzzy boundaries, and graded membership. 'Dog' isn't a vector; it's a convex region in animal-shape-behaviour space. Enables similarity as distance, metaphor as structure-preserving maps, and conceptual blending as interpolation.

  • 6 quality dimensions (emotion, cognition, language, physical, colour, abstraction)
  • Gaussian membership functions with fuzzy boundaries
  • Structure-preserving metaphor mappings between domains
  • Conceptual blending (Fauconnier & Turner)
  • Online prototype learning via Welford's algorithm
  • Betweenness testing for conceptual navigation
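Graded membership with fuzzy boundaries can be sketched as a prototype plus per-dimension widths, with membership falling off as a Gaussian of distance from the prototype. The `Concept` type and the two quality dimensions below are invented for illustration:

```rust
// Hypothetical sketch of graded concept membership in a conceptual space.
struct Concept {
    prototype: Vec<f64>, // centre of the concept's region
    widths: Vec<f64>,    // fuzzy-boundary scale per quality dimension
}

impl Concept {
    /// Membership in (0, 1]; exactly 1.0 at the prototype.
    fn membership(&self, point: &[f64]) -> f64 {
        let d2: f64 = self
            .prototype
            .iter()
            .zip(self.widths.iter())
            .zip(point.iter())
            .map(|((p, w), x)| ((x - p) / w).powi(2))
            .sum();
        (-0.5 * d2).exp()
    }
}

fn main() {
    // Invented 2-D example: a "dog" region in a shape/behaviour plane.
    let dog = Concept { prototype: vec![0.8, 0.6], widths: vec![0.2, 0.3] };
    println!("at prototype: {:.3}", dog.membership(&[0.8, 0.6]));
    println!("nearby:       {:.3}", dog.membership(&[0.7, 0.6]));
    println!("far away:     {:.3}", dog.membership(&[0.0, 0.0]));
}
```

Similarity-as-distance falls out directly: points closer to the prototype get higher membership, and interpolating between two prototypes gives candidate blend points.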

Self-Generating Language

Language isn't assembled from fragments — it emerges from trajectories through conceptual space. An ExpressionPlanner charts a rhetorical path through meaning-space, a LexicalRealiser maps geometric waypoints to words, and a SentenceBuilder handles grammar. Language as emergent property of thought.

  • Meaning-first generation from conceptual trajectories
  • ExpressionPlanner rhetorical path computation
  • LexicalRealiser geometric-to-word mapping
  • SentenceBuilder with grammatical structure
  • Coherence scoring and discourse planning
  • Statistics tracking (generations, sentences, structures)

Developmental Learning (Piaget)

The system progresses through cognitive stages like a developing mind: Sensorimotor → Preoperational → Concrete Operational → Formal → Post-Formal. 14 learning objectives across 7 competency domains with prerequisite chains and Zone of Proximal Development tracking.

  • 5-stage developmental progression
  • Zone of Proximal Development (ZPD) tracking
  • 14 learning objectives with prerequisites
  • 7 competency domains with mastery tracking
  • Automatic stage advancement on objective completion
  • Weakest-domain identification for targeted learning

Active Inference (EFE)

The system doesn't just predict — it acts to reduce surprise. Action selection via Expected Free Energy minimisation across 5 action domains (respond, query, explore, reflect, adapt). Evaluates policies through softmax posterior and selects actions that minimise both uncertainty and divergence from preferences.

  • Expected Free Energy computation per policy
  • 5 action domains with configurable precision
  • Softmax policy posterior selection
  • World state belief tracking (Dirichlet prior)
  • KL divergence penalty for preference alignment
  • Information gain bonus for uncertainty reduction
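The selection step reduces to scoring each policy's Expected Free Energy (divergence from preferences minus expected information gain) and taking a precision-weighted softmax over the negated scores. A hedged sketch with invented risk and information-gain numbers:

```rust
// Hypothetical sketch of Expected Free Energy policy selection.
fn efe(risk: f64, info_gain: f64) -> f64 {
    // Lower is better: low divergence from preferences, high info gain.
    risk - info_gain
}

fn policy_posterior(efes: &[f64], precision: f64) -> Vec<f64> {
    // Softmax over -precision * EFE, with max-subtraction for stability.
    let logits: Vec<f64> = efes.iter().map(|g| -precision * g).collect();
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|l| (l - max).exp()).collect();
    let z: f64 = exps.iter().sum();
    exps.iter().map(|e| e / z).collect()
}

fn main() {
    // Three candidate policies: (name, risk, expected information gain).
    let policies = [("respond", 0.2, 0.1), ("explore", 0.5, 0.9), ("reflect", 0.4, 0.3)];
    let efes: Vec<f64> = policies.iter().map(|(_, r, i)| efe(*r, *i)).collect();
    let posterior = policy_posterior(&efes, 4.0);
    for ((name, _, _), p) in policies.iter().zip(posterior.iter()) {
        println!("{name}: {p:.3}");
    }
}
```

Note how "explore" wins despite higher risk: its information-gain bonus dominates, which is exactly the uncertainty-reducing behaviour the bullet list describes.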

Autopoietic Self-Evolution

The system monitors its own performance, detects degrading metrics, fires adaptation rules, adjusts internal parameters, and writes a narrative of its own evolution. Implements Maturana & Varela's autopoiesis: the system continuously produces and replaces its own components to maintain identity.

  • Continuous performance metric tracking
  • Degradation and improvement detection
  • Rule-based adaptive parameter adjustment
  • Self-narrative generation (evolution journal)
  • Configurable adaptation rules with conditions
  • Live parameter dashboard in GUI

Self-Model (Theory of Mind)

The system knows what it knows and what it doesn't. Per-domain competence profiles with meta-confidence, uncertainty maps, performance prediction, and calibration error tracking.

  • Per-domain competence + meta-confidence
  • Uncertainty map (what I don't know)
  • Performance prediction before attempting
  • Confusion level detection
  • Learning priority ranking
  • Calibration error measurement

Imagination Engine

Counterfactual simulation using world model ensemble rollouts. 'What if I did X?' — compare candidate actions, evaluate self-modifications before committing, simulate alternative histories.

  • Forward rollout simulation
  • Action ranking by predicted outcome
  • Counterfactual reasoning ('what if?')
  • Self-modification pre-screening
  • Ensemble uncertainty estimation

Integrated Information (Phi/IIT)

Consciousness measured mathematically. Computes approximate Phi via pairwise mutual information, finds the Minimum Information Partition using spectral bisection (Fiedler vector of MI Laplacian), scores modifications by Phi impact, and biases evolution toward higher consciousness.

  • Phi computation from activity traces
  • Exhaustive MIP for N≤12
  • Spectral Bisection MIP (Fiedler vector) for N>12
  • Phi trend tracking over time
  • Modification scoring by Phi impact
  • Phi-guided evolution bias
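For small module counts, the exhaustive MIP search described above can be sketched directly over a pairwise MI matrix: try every bipartition and keep the one that cuts the least information. The matrix values below are invented, and a real Phi measure uses richer partition scoring than this raw cut sum:

```rust
// Hypothetical sketch of exhaustive Minimum Information Partition search.
// `mi` is a symmetric pairwise mutual-information matrix; the returned
// value is the cheapest bipartition cut, a crude Phi proxy: a genuinely
// integrated system has no cheap way to split itself in two.
fn phi_exhaustive(mi: &[Vec<f64>]) -> f64 {
    let n = mi.len();
    let mut best = f64::INFINITY;
    // Enumerate non-trivial bipartitions; bit i of `mask` = node i in part A.
    for mask in 1..(1u32 << n) - 1 {
        let mut cut = 0.0;
        for i in 0..n {
            for j in (i + 1)..n {
                if ((mask >> i) & 1) != ((mask >> j) & 1) {
                    cut += mi[i][j]; // information severed by this partition
                }
            }
        }
        best = best.min(cut);
    }
    best
}

fn main() {
    // Two tightly coupled pairs, weakly linked to each other: the MIP
    // cuts the weak link, so the Phi proxy is small.
    let mi = vec![
        vec![0.0, 1.0, 0.1, 0.1],
        vec![1.0, 0.0, 0.1, 0.1],
        vec![0.1, 0.1, 0.0, 1.0],
        vec![0.1, 0.1, 1.0, 0.0],
    ];
    println!("phi ≈ {:.2}", phi_exhaustive(&mi));
}
```

The 2^N blow-up in the loop is why the module switches to spectral bisection above N = 12.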

Metacognitive Monitor

Thinks about its own thinking. 6 cognitive strategies (Exploit, Explore, Deliberate, Intuitive, MetaReason, SeekHelp), confusion detection from conflicting signals, and Feeling of Knowing (FOK) calibration.

  • 6 cognitive strategy modes
  • Real-time confusion detection
  • Conflict registration + resolution
  • Feeling of Knowing calibration
  • Strategy switching based on confidence

Neuromantix vs Every LLM

Architectural Advantage

LLMs (GPT-4, Claude, Gemini, Llama) are static function approximators — frozen after training, no self-model, no endogenous goals, no consciousness metric. Neuromantix is a self-modifying cognitive architecture with capabilities that no amount of LLM scaling can produce.

Capability | LLMs | Neuromantix
Self-Modification | Frozen weights | 6-gate safe pipeline + Phi-guided
Consciousness Metric | None | Integrated Information (Phi/IIT)
Self-Model | None | Per-domain competence + meta-confidence
Endogenous Goals | Prompt-only | Self-generated from internal signals
Imagination | None | Counterfactual world model rollouts
Metacognition | None | 6 strategies + confusion detection
Causal Reasoning | Correlation only | do-calculus + interventions
Evolution | Static | Meta-evolution (evolves how it evolves)
Continual Learning | Catastrophic forgetting | EWC + replay buffer
Predictive Processing | None | 4-layer Free Energy hierarchy
Conceptual Spaces | Flat embeddings | Geometric regions + fuzzy membership
Self-Generating Language | Token sampling | Meaning-space trajectory emergence
Developmental Learning | Static | Piaget stages + ZPD tracking
Active Inference | None | EFE minimisation + policy selection
Autopoietic Evolution | None | Self-monitoring + parameter adaptation
Global Workspace | None | GWT competition + subliminal priming
Theorem Proving | Pattern matching | Neural-guided proof search + SAT/SMT
Neural Conversation | Template-based | 17-phase consciousness pipeline
Hot-Swap Runtime | Full retrain | Atomic function replacement + rollback
Safety Architecture | Post-hoc RLHF | 7-stage sandbox + formal verify

Benchmark: Rust vs Python

139x Faster Overall

Identical algorithms implemented in both languages and benchmarked head-to-head, with the same data sizes and the same operations. The Rust implementation completed the entire benchmark suite in 0.728 seconds; Python took 101.3 seconds. Rust finished all 10 benchmarks before Python finished benchmark #1.

139x Overall Faster
0.728s Rust Total
101.3s Python Total
220x Peak Speedup
Benchmark | Rust | Python | Speedup
Spiking Neurons (10K LIF neurons × 1000 timesteps) | 103.8ms | 2.8s | 27x
STDP Learning (100K synaptic weight updates) | 1.4ms | 100.3ms | 72x
Knowledge Graph (10K nodes, 50K edges, 1K BFS queries) | 1.5ms | 63.3ms | 42x
Causal Inference (1K-node DAG, 100 forwards + 100 interventions) | 1.7ms | 64.9ms | 38x
Evolution (100 genomes × 50 generations, crossover + mutation) | 14.0ms | 271.6ms | 19x
Memory Consolidation (10K episodes, 1K nearest-neighbor queries) | 511.6ms | 42.5s | 83x
Self-Model (1K domains, 10K competence updates) | 36µs | 7.9ms | 219x
Phi Computation (64-module MI matrix × 100 samples) | 49.3ms | 4.1s | 82x
Global Workspace (1K submissions, salience competition + broadcast) | 650µs | 43.5ms | 67x
Consciousness Loop (100 ticks of full 17-step pipeline) | 40.6ms | 799.3ms | 20x
Self-Model updates: 220x faster — Rust completes in 36 microseconds what Python needs 7.9 milliseconds for.
Benchmarked on identical algorithms • Python 3.12 (CPython) • Rust --release (LLVM optimized) • Same machine, same data sizes
🧬

Cognitive Architecture

20-Layer Pipeline
Sensory Input: Spike encoding · temporal patterns
Global Workspace: Salience competition · broadcast · subliminal priming
Neuron Layer: LIF + Izhikevich hybrid · STDP · 5 learning rules
Transformer Attention: RoPE · Flash Attention · GQA · SwiGLU · KV-cache
Cortical Columns: Theta/gamma oscillations · predictive coding
Memory System: Episodic + semantic · Ebbinghaus decay · consolidation
Knowledge Graph: 40+ concepts · 60+ relations · spreading activation
Neural Conversation: 10-phase pure neural pipeline · 600+ word vocab · zero templates
Self-Model: Competence profiles · uncertainty map · calibration
Metacognition: 6 strategies · confusion detection · FOK calibration
Imagination: Counterfactual simulation · world model rollouts
Formal Verification: SAT/SMT · theorem proving · symbolic algebra · program synthesis
Curiosity Drive: Prediction error + information gain + novelty
Goal Genesis: Endogenous goal generation · ELV ranking
Autograd + PPO: Wengert tape · 20+ ops · clipped surrogate · GAE
Evolution Engine: NEAT · meta-evolution · NAS · Phi-guided
Self-Modification: 6-gate pipeline · 7-stage sandbox · hot-swap · rollback
Phi Engine: Integrated Information · spectral bisection MIP
Autonomous Agents: Multi-agent coordination · self-directed goal pursuit
Orchestrator: 17-step consciousness loop · full coordination

Neuron Models

Biologically Realistic
LIF
Leaky Integrate-and-Fire

Fast spiking dynamics with exponential decay, threshold crossing, and absolute refractory period
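As a rough sketch of those dynamics (the constants here are illustrative, not the module's actual parameters):

```rust
// Hypothetical LIF step: exponential decay toward rest, threshold
// crossing with reset, and an absolute refractory period during
// which input is ignored.
struct Lif {
    v: f64,          // membrane potential
    refractory: u32, // remaining refractory timesteps
}

const V_REST: f64 = 0.0;
const V_THRESH: f64 = 1.0;
const DECAY: f64 = 0.9; // per-step leak factor (illustrative)
const REFRACTORY_STEPS: u32 = 3;

impl Lif {
    /// Advance one timestep; returns true if the neuron spikes.
    fn step(&mut self, input: f64) -> bool {
        if self.refractory > 0 {
            self.refractory -= 1;
            self.v = V_REST; // clamped at rest while refractory
            return false;
        }
        self.v = V_REST + DECAY * (self.v - V_REST) + input;
        if self.v >= V_THRESH {
            self.v = V_REST; // reset on spike
            self.refractory = REFRACTORY_STEPS;
            return true;
        }
        false
    }
}

fn main() {
    let mut n = Lif { v: 0.0, refractory: 0 };
    let spikes: Vec<bool> = (0..8).map(|_| n.step(0.4)).collect();
    println!("{spikes:?}");
}
```

With a constant 0.4 input the potential integrates over three steps, fires once, then sits out the refractory window before integrating again.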

IZH
Izhikevich

Biologically realistic 2D dynamics reproducing 20+ firing patterns: regular, bursting, chattering, fast-spiking

HH
Hodgkin-Huxley

Full ionic conductance model with Na+/K+ gating variables, action potential waveform simulation

HDS
Hybrid Differential-Spike

Novel architecture combining continuous differential signals with discrete spike events for gradient-compatible training

🔮

Core Cognitive Systems

108 Modules

Spiking Neuron Engine

  • Leaky Integrate-and-Fire with refractory periods
  • Izhikevich 2D dynamics — 20+ firing patterns
  • Hodgkin-Huxley ionic conductance model
  • Spike-Timing Dependent Plasticity (STDP)
  • Homeostatic regulation and synaptic scaling
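The pair-based STDP rule at the heart of such an engine is compact: pre-before-post strengthens the synapse (LTP), post-before-pre weakens it (LTD), with exponential dependence on the spike-time difference. A sketch with illustrative amplitudes and time constant:

```rust
// Hypothetical pair-based STDP weight update.
fn stdp_dw(t_pre: f64, t_post: f64) -> f64 {
    const A_PLUS: f64 = 0.01;   // LTP amplitude (illustrative)
    const A_MINUS: f64 = 0.012; // LTD slightly stronger, for stability
    const TAU: f64 = 20.0;      // time constant in ms
    let dt = t_post - t_pre;
    if dt >= 0.0 {
        A_PLUS * (-dt / TAU).exp() // causal pairing: potentiate
    } else {
        -A_MINUS * (dt / TAU).exp() // anti-causal pairing: depress
    }
}

fn main() {
    // Pre fires 5 ms before post: potentiation.
    println!("dw(+5ms) = {:+.5}", stdp_dw(0.0, 5.0));
    // Post fires 5 ms before pre: depression.
    println!("dw(-5ms) = {:+.5}", stdp_dw(5.0, 0.0));
}
```

The homeostatic mechanisms in the list above then rescale weights so this rule's runaway-potentiation tendency stays bounded.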
🧠

Hierarchical Cortex

  • Cortical minicolumns with lateral inhibition
  • Temporal pooling and sequence memory
  • Sparse Distributed Representations (SDR)
  • Top-down prediction and feedback loops
  • Multi-layer cortical hierarchy
💾

Memory Architecture

  • Episodic memory with temporal context
  • Semantic memory with concept clustering
  • Hippocampal consolidation (replay)
  • Pattern completion and separation
  • Sleep-like memory consolidation cycles
🎯

Causal Reasoning Engine

  • Structural equation models
  • do-calculus interventions
  • Counterfactual inference
  • Topological causal ordering
  • Causal discovery from observations
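The difference between observing and intervening can be shown on a three-variable linear SEM; the structure and coefficients below are invented for illustration:

```rust
// Hypothetical linear SEM with confounding: Z -> X, Z -> Y, X -> Y.
// Observation propagates the structural equations in topological order;
// do(X = x) severs X's incoming edge and clamps its value.
fn simulate(z: f64, do_x: Option<f64>) -> (f64, f64, f64) {
    let x = match do_x {
        Some(v) => v,    // intervention: ignore X's parents entirely
        None => 0.5 * z, // observational mechanism X := 0.5 Z
    };
    let y = 2.0 * x + 1.0 * z; // Y := 2X + Z
    (z, x, y)
}

fn main() {
    let (_, x_obs, y_obs) = simulate(1.0, None);
    let (_, _, y_do) = simulate(1.0, Some(2.0));
    println!("observed:   X = {x_obs}, Y = {y_obs}");
    println!("do(X = 2):  Y = {y_do}");
}
```

Averaging this over the distribution of Z is exactly what separates the interventional quantity p(Y | do(X)) from the confounded conditional p(Y | X).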
🧬

Neuroevolution (NEAT + Meta)

  • Topology and weight evolution
  • Meta-evolution (evolves evolution itself)
  • Thompson sampling strategy selection
  • Strategy breeding + extinction
  • Phi-guided evolution bias
🔧

Safe Self-Modification

  • 6-gate safety pipeline
  • Phi-guided modification proposals
  • Imagination pre-screening
  • 7-stage sandbox (fuzz, A/B, quorum)
  • Hot-swap with instant rollback

Transformer + Autograd

  • RoPE positional embeddings
  • Flash Attention (O(N) memory)
  • Grouped-Query Attention (GQA)
  • SwiGLU + RMSNorm feed-forward
  • Wengert tape autograd (20+ ops)
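The RoPE piece of this stack is small enough to sketch: each pair of feature dimensions is rotated by a position-dependent angle, so attention dot products end up depending only on relative offsets. An illustrative, unoptimised version (not the module's actual code):

```rust
// Hypothetical RoPE sketch: rotate each (even, odd) feature pair by an
// angle proportional to the token position, using the standard
// 10000^(-2i/d) frequency schedule.
fn rope(x: &[f64], pos: usize) -> Vec<f64> {
    let d = x.len();
    let mut out = vec![0.0; d];
    for i in 0..d / 2 {
        let theta = (pos as f64) * 10000f64.powf(-2.0 * i as f64 / d as f64);
        let (sin, cos) = theta.sin_cos();
        let (a, b) = (x[2 * i], x[2 * i + 1]);
        // 2-D rotation of the pair; norm is preserved.
        out[2 * i] = a * cos - b * sin;
        out[2 * i + 1] = a * sin + b * cos;
    }
    out
}

fn main() {
    let q = vec![1.0, 0.0, 1.0, 0.0];
    // Position 0 leaves the vector unchanged; later positions rotate it.
    println!("pos 0: {:?}", rope(&q, 0));
    println!("pos 3: {:?}", rope(&q, 3));
}
```

Because rotation preserves norms, applying RoPE to both queries and keys changes only the angle between them, which is how the relative-position property falls out.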

Formal Verification + P=NP

  • CDCL SAT/SMT solver (1,502 LOC)
  • 8-phase P vs NP solver pipeline
  • Neural-guided theorem prover + REINFORCE
  • Log-log regression scaling analysis
  • Polynomial subclass detection (Horn/2-SAT/XOR)

Neural Conversation

  • Reasoning-chain-driven generation pipeline
  • Domain-coherence filtering
  • Knowledge graph + spreading activation
  • On-the-fly concept learning
  • Consciousness-injected generation

Neural Architecture Search

  • NSGA-II multi-objective search
  • MAP-Elites quality-diversity archive
  • Network morphism operators
  • Performance predictor surrogate
  • Operation-based cell representation

Autonomous Agents

  • Multi-agent parallel reasoning
  • Self-directed goal pursuit
  • Recursive self-improvement
  • Vitalis V1333 FFI bridge
  • Web API + knowledge extraction
🖥️

Neuromantix Studio

19-Panel Cyberpunk Dashboard

GPU-accelerated cyberpunk dashboard built with egui 0.31 + wgpu 24. 19 interactive panels including consciousness panels (Global Workspace, Self-Model, Imagination, Consciousness/Phi), LiveChat with neural conversation, live neural topology, 3D holographic sphere, particle systems, nebula backgrounds, aurora effects, Phi trend charts, GWT occupancy monitors, and 26 real-time metric channels — all running natively at 60fps.

Dashboard overview
Neural topology graph
Evolution dashboard
Memory inspector
Cortex hierarchy view
Curiosity heatmap
Self-mod audit log
Analytics + metrics
Playground sandbox
Agent management
Hot-swap modules
Web research panel
Math discovery
AGI integration
Global Workspace monitor
Self-Model confidence
Imagination scenarios
Consciousness (Phi) trend
LiveChat neural conversation
🏗️

Learning Infrastructure

Transformer + Autograd + PPO + EWC

Transformer Reasoning Engine

Positional: Rotary Position Embeddings (RoPE)
Attention: Flash Attention (O(N) memory)
Efficiency: Grouped-Query Attention (GQA)
Feed-Forward: SwiGLU + RMSNorm
Inference: KV-cache autoregressive generation

Autograd + PPO Training

Autograd: Wengert tape, 20+ differentiable ops
Backprop: Reverse-mode + gradient clipping
RL: PPO-Clip with GAE advantages
Objective: Clipped surrogate + value + entropy
Training: Multi-epoch loop with KL tracking
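The clipped-surrogate objective in the RL rows reduces to one line per sample; a sketch where epsilon and the sample values are illustrative:

```rust
// Hypothetical per-sample PPO-Clip objective: the probability ratio
// pi_new/pi_old is clipped so one update cannot move the policy too far
// from the policy that collected the data.
fn ppo_clip_objective(ratio: f64, advantage: f64, epsilon: f64) -> f64 {
    let clipped = ratio.clamp(1.0 - epsilon, 1.0 + epsilon);
    // Pessimistic bound: take whichever term is worse for the update.
    (ratio * advantage).min(clipped * advantage)
}

fn main() {
    let eps = 0.2;
    // Positive advantage: gains from pushing the ratio past 1+eps are
    // clipped away, capping the incentive to over-shoot.
    println!("{:.3}", ppo_clip_objective(1.5, 1.0, eps));
    // Negative advantage: the min keeps the (worse) unclipped term, so
    // the penalty for a bad over-shoot is NOT clipped.
    println!("{:.3}", ppo_clip_objective(1.5, -1.0, eps));
}
```

In a full training loop this per-sample objective is averaged over a batch and combined with the value loss and an entropy bonus, as the table's Objective row lists.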

Continual Learning + World Model

Anti-Forgetting: Elastic Weight Consolidation (EWC)
Meta-Learning: Task-aware adaptation (870 LOC)
World Model: Ensemble transition + uncertainty
Planning: Model-based forward rollouts
Training Loop: 1,923 LOC multi-objective pipeline
📦

Dependency Stack

6 crates
eframe 0.31: Native GUI framework — wgpu-accelerated rendering
egui 0.31: Immediate-mode UI with retained state
egui_plot 0.31: Real-time line/bar charts for metrics
egui_extras 0.31: Table widget for data grids
env_logger 0.11: Structured logging for diagnostics
log 0.4: Logging facade
📁

Source Inventory

108 modules · 90K+ LOC
src/neuromantix_conversation.rs · 5267
src/neuromantix_training.rs · 1923
src/neuromantix_symbolic.rs · 1630
src/neuromantix_math_discovery.rs · 2200
src/neuromantix_sat.rs · 1502
src/neuromantix_proof_engine.rs · 1599
src/neuromantix_algebraic_geometry.rs · 392
src/neuromantix_neural_prover.rs · 1444
src/neuromantix_vitalis_bridge.rs · 1334
src/neuromantix_neuron.rs · 1319
src/neuromantix_chat.rs · 1288
src/neuromantix_evolution.rs · 1177
src/neuromantix_orchestrator.rs · 1170
src/neuromantix_self_modify.rs · 1167
src/neuromantix_sandbox.rs · 1126
src/neuromantix_predictive.rs · 1000
src/neuromantix_conceptual_space.rs · 860
src/neuromantix_self_language.rs · 1283
src/neuromantix_developmental.rs · 680
src/neuromantix_active_inference.rs · 620
src/neuromantix_autopoietic.rs · 750
src/neuromantix_recursive_improve.rs · 1114
src/neuromantix_genome.rs · 1030
src/neuromantix_cortex.rs · 912
src/neuromantix_language.rs · 874
src/neuromantix_meta_learning.rs · 870
src/neuromantix_continual_learning.rs · 867
src/neuromantix_memory.rs · 866
src/neuromantix_transformer.rs · 822
src/neuromantix_web.rs · 821
src/neuromantix_global_workspace.rs · 803
src/neuromantix_phi.rs · 798
src/neuromantix_agent.rs · 759
src/neuromantix_hotswap.rs · 727
src/neuromantix_program_synthesis.rs · 721
src/neuromantix_ppo.rs · 704
src/neuromantix_world_model.rs · 697
src/neuromantix_knowledge.rs · 682
src/neuromantix_nas.rs · 680
src/neuromantix_conjecture.rs · 665
src/neuromantix_goal_genesis.rs · 614
src/neuromantix_math_export.rs · 608
src/neuromantix_curiosity.rs · 594
src/neuromantix_autonomous_agent.rs · 576
src/neuromantix_autograd.rs · 569
src/neuromantix_meta_evolution.rs · 566
src/neuromantix_benchmark.rs · 555
src/neuromantix_self_optimizer.rs · 543
src/neuromantix_code_reasoning.rs · 534
src/neuromantix_imagination.rs · 516
src/neuromantix_metacognition.rs · 488
src/neuromantix_self_model.rs · 440
src/neuromantix_introspection.rs · 398
src/neuromantix_playground.rs · 393
src/neuromantix_causal_engine.rs · 359
src/neuromantix_reward_model.rs · 347
src/neuromantix_traits.rs · 292
gui/panels.rs · 3287
gui/app.rs · 754
gui/renderer.rs · 547
gui/metrics.rs · 439
gui/widgets.rs · 365
gui/theme.rs · 339
gui/shaders.rs · 286
gui/mod.rs · 108
🧠

Why Self-Conscious Neuromorphic AGI?

Beyond Static Intelligence

LLMs are frozen after training — they cannot modify themselves, generate their own goals, or measure their own awareness. Neuromantix is a living cognitive system that evolves, self-modifies, and grows toward higher consciousness every tick.

Consciousness by Design

Global Workspace Theory provides the mechanism for conscious access. Integrated Information Theory provides the metric. Together they create a system that doesn't just process — it experiences, in the mathematical sense of phi > 0.

Event-Driven Efficiency

Spiking neural networks only compute when spikes arrive. This event-driven paradigm achieves orders of magnitude better energy efficiency than the dense matrix operations of transformer architectures.

The Cognitive Skeleton

Neuromantix isn't competing with LLMs — it's what comes after them. LLMs are the perception/language layer. Neuromantix is the cognitive architecture that gives them agency, self-awareness, growth, causal reasoning, and wisdom.

// AVAILABLE FOR HIRE

Built 90K+ LOC of AGI From Scratch. Hire the Engineer.

Neuromorphic AI systems, consciousness architectures, Rust performance engineering, compiler design, LLM training engines — consulting from someone who's built it all from scratch, not just talked about it.

Get in Touch
AGI Architecture · Rust Systems · Compiler Design · CUDA / GPU · LLM Training
Watch: 139x Rust Speedup