
🧠 Synapse Context Engine (SCE)

A safety-first, inspectable memory and context-construction architecture for Agentic systems
Think of it as a synthetic hippocampus with a kill switch, designed to make context construction visible, bounded, and auditable before inference happens.

TL;DR - SCE replaces opaque prompt assembly with an explicit, graph-based context engine. Context is constructed, not fetched. Memory emerges through controlled activation, not hidden weights.

This is a research preview with full LLM & Database integration.


🚀 Quick Start • 🎯 Use Cases • 💬 Discussions • 🤝 Contribute

🎯 What is SCE?

The Synapse Context Engine (SCE) is a brain-inspired memory and context layer for AI systems, designed to function as a System‑2‑like substrate for context assembly.

Instead of treating context as a static retrieval problem (as in traditional RAG pipelines), SCE models memory as an explicit, typed hypergraph. Context is assembled dynamically through spreading activation, allowing systems to recall, relate, and reason over information via network dynamics rather than keyword or embedding similarity alone.

SCE Neural Map - Lattice Visualization

Live activation spreading through memory graph

The result is memory that is:

  • Coherent instead of fragmented
  • Inspectable instead of opaque
  • Bounded instead of unbounded

📊 At a Glance

| Feature | Status |
| --- | --- |
| Spreading Activation + Hebbian Learning | ✅ Implemented |
| Hypergraph Memory (multi-way edges) | ✅ Implemented |
| Security Firewall (rule-based) | ✅ Implemented |
| LLM Integration (Gemini, Groq, Ollama) | ✅ Implemented |
| Real-time Visualization | ✅ Implemented |
| Custom User/AI Identities | ✅ Implemented |
| Algorithmic Extraction | ✅ Implemented |
| Hyperedge Consolidation (Clique Compression) | ✅ Implemented |
| Algorithmic Mesh Wiring | ✅ Implemented |
| Data Hygiene (Strict Garbage Collection) | ✅ Implemented |
| Accurate Telemetry (Performance metrics) | ✅ Implemented |
| Node Connections (Natural Expansion) | ⚠️ Partly Implemented |
| Hierarchical Auto-Clustering | ⚠️ Missing |
| Prompt Optimization | ⚠️ Missing |
| Production Ready | ⚠️ Architecture Preview / Research system |
| Optimization | ❌ Community-driven (once core is solidified) |
| Benchmarks | ❌ Community-driven (once core is solidified) |

License: Apache 2.0 • Maintainer: Sasu • Updates: docs/updates/


🧠 Core Idea

Constrain and observe the space in which context is constructed, rather than hoping the model behaves safely inside an opaque prompt.

SCE shifts safety and alignment concerns upstream, from model behavior to memory and context construction.


🧩 Why This Exists

As AI systems move toward greater autonomy and persistence, their memory architectures become fragile:

  • Vector databases retrieve isolated chunks and lose relational structure
  • Prompt assembly hides context construction inside token sequences
  • Hallucinations emerge from fragmented, ungrounded memory representations
  • Prompt injection and context poisoning are structurally easy
  • Alignment is layered on top of black boxes

SCE explores a different axis of control: architectural safety through explicit structure and observability.

Note: This project originated from the need for a better memory architecture for agentic systems. While capability / long-term memory improvements were the initial driver, the safety properties that emerged from the architecture became the primary reason for open-sourcing.

The core insight: context construction should be inspectable, bounded, and auditable by design, not retrofitted with behavioral constraints after the model is already deployed.


πŸ—οΈ Architectural Overview

SCE processes queries through a staged pipeline where each step is independently observable:

Stimulus (Query / Event)
        ↓
Active Focus (Anchor Node)
        ↓
Controlled Graph Propagation
        ↓
Context Synthesis (Pruned + Weighted)
        ↓
LLM Inference ──→ Response
        ↓
Extraction (Phase 1: Concepts, Phase 2: Relations)
        ↓
Integrity & Layout (Mesh Wiring + Hygiene)
        ↓
Memory Encoding (Graph Update)
        ↓
Telemetry & Audit Signals

Modular Design: Each stage in the pipeline is independently configurable. Security layers, pruning strategies, and activation mechanics can be added, modified, or replaced without changing the core architecture. This allows base-level experimentation with different safety mechanisms, custom context filters, and domain-specific optimizations. The included pipelines were created so you can get a feel for the engine; you can always build more advanced ones.
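To make the modularity concrete, here is a minimal sketch of the staged-pipeline idea in TypeScript. All names (`Stage`, `pipeline`, the `anchor`/`propagate` stages) are illustrative assumptions, not the actual SCE API; the point is that each stage is an independently swappable function whose output is recorded for audit.

```typescript
// Hypothetical sketch: each pipeline stage is a named, pure function,
// and every intermediate output is captured for auditing.
type Stage<I, O> = { name: string; run: (input: I) => O };

// Compose two stages, recording each stage's output via an audit callback.
function pipeline<A, B, C>(
  s1: Stage<A, B>,
  s2: Stage<B, C>,
  audit: (stage: string, output: unknown) => void
): (input: A) => C {
  return (input) => {
    const mid = s1.run(input);
    audit(s1.name, mid);
    const out = s2.run(mid);
    audit(s2.name, out);
    return out;
  };
}

// Illustrative stages: anchor a query, then propagate to neighbors.
const anchor: Stage<string, string[]> = {
  name: "anchor",
  run: (q) => [q.toLowerCase()],
};
const propagate: Stage<string[], string[]> = {
  name: "propagate",
  run: (seeds) => [...seeds, ...seeds.map((s) => s + ":neighbor")],
};

const log: [string, unknown][] = [];
const run = pipeline(anchor, propagate, (stage, out) => log.push([stage, out]));
const ctx = run("Budget");
// Every stage's output is now visible in `log`, never hidden in a prompt.
```

Because stages share only their input/output types, a different firewall or pruning strategy is a drop-in replacement for one `Stage` value.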


🔑 Key Concepts

1. Hypergraph‑Based Memory

Memory is represented as a hypergraph:

Note: there are multiple ways to build these configurations; the following was created just for the preview.

  • Nodes represent heterogeneous entities (projects, artifacts, preferences, behaviors, constraints)
  • Synapses encode weighted pairwise relationships (source→target)
  • Hyperedges connect multiple nodes simultaneously for atomic multi-way relationships

When any node in a hyperedge activates, energy distributes to all connected members (clique activation). This preserves higher-order context that is lost when relationships are decomposed into isolated pairs or flat embeddings.

Example: Instead of separate edges:

  • Alice -[ATTENDED]-> Meeting
  • Meeting -[DISCUSSED]-> Budget
  • Budget -[AFFECTS]-> Project_X

SCE can group these as a hyperedge:

  • {Alice, Meeting, Budget, Project_X} labeled DECISION_CONTEXT

When you query about Alice, all four nodes activate simultaneously through the hyperedge, not by traversing three separate edges.
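The Alice example above can be sketched in a few lines. This is an illustrative model, not SCE's implementation: it assumes that activating any member of a hyperedge spreads a damped share of energy to every other member in one step, and the `0.5` damping factor is an arbitrary example value.

```typescript
// Hypothetical clique-activation sketch: one hyperedge activates all
// of its members together, instead of traversing pairwise edges.
type Hyperedge = { label: string; members: string[] };

function activateThroughHyperedges(
  seed: string,
  energy: number,
  edges: Hyperedge[],
  damping = 0.5 // illustrative damping factor, not an SCE constant
): Map<string, number> {
  const activation = new Map<string, number>([[seed, energy]]);
  for (const edge of edges) {
    if (!edge.members.includes(seed)) continue;
    for (const member of edge.members) {
      if (member === seed) continue;
      // All co-members of a matching hyperedge light up together.
      activation.set(
        member,
        Math.max(activation.get(member) ?? 0, energy * damping)
      );
    }
  }
  return activation;
}

const decisionContext: Hyperedge = {
  label: "DECISION_CONTEXT",
  members: ["Alice", "Meeting", "Budget", "Project_X"],
};

const result = activateThroughHyperedges("Alice", 1.0, [decisionContext]);
// Querying "Alice" activates all four nodes in a single step.
```

The higher-order context (that these four things belong to one decision) survives as a unit, rather than being reconstructed from three independent edges.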


2. Active Focus Anchoring

All activation is evaluated relative to an explicit Active Focus node representing the current task or operational context.

Note: This is just one idea / mechanism for altering the energy flow; there are many other possibilities.

This anchoring prevents free‑floating activation and helps contain:

  • Prompt injection
  • Context drift
  • Runaway propagation

3. Controlled Spreading Activation

When a stimulus occurs, activation energy is injected into seed nodes and propagates outward with:

  • Decay factors (configurable, e.g., 0.8)
  • Activation thresholds (e.g., 0.3)
  • Depth limits (bounded traversal)

Only meaningfully activated nodes participate in context synthesis. Global flooding is structurally prevented.
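A minimal sketch of this bounded propagation, using the example figures from the text (decay 0.8, threshold 0.3) plus an assumed depth limit of 3. The weighted adjacency-list representation and all names are assumptions for illustration, not SCE's actual data structures.

```typescript
// Bounded spreading activation: energy decays per hop, sub-threshold
// energy is dropped, and traversal stops at a fixed depth.
type Graph = Map<string, { target: string; weight: number }[]>;

function spread(
  graph: Graph,
  seeds: Map<string, number>,
  decay = 0.8,      // per-hop decay factor
  threshold = 0.3,  // minimum energy to activate a node
  maxDepth = 3      // hard traversal bound
): Map<string, number> {
  const activation = new Map(seeds);
  let frontier = new Map(seeds);
  for (let depth = 0; depth < maxDepth && frontier.size > 0; depth++) {
    const next = new Map<string, number>();
    for (const [node, energy] of frontier) {
      for (const { target, weight } of graph.get(node) ?? []) {
        const incoming = energy * weight * decay;
        // Sub-threshold energy is discarded: no global flooding.
        if (incoming < threshold) continue;
        if (incoming > (activation.get(target) ?? 0)) {
          activation.set(target, incoming);
          next.set(target, incoming);
        }
      }
    }
    frontier = next;
  }
  return activation;
}

const graph: Graph = new Map([
  ["Budget", [{ target: "Project_X", weight: 0.9 }]],
  ["Project_X", [{ target: "OldArchive", weight: 0.2 }]],
]);
const active = spread(graph, new Map([["Budget", 1.0]]));
// Project_X activates (1.0 * 0.9 * 0.8 = 0.72); OldArchive receives only
// 0.72 * 0.2 * 0.8 ≈ 0.12, stays below threshold, and never activates.
```

The three bounds compose: even on a dense graph, the activated set is limited by decay, threshold, and depth rather than by model behavior.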


4. Context Synthesis (Not Raw Injection)

Activated nodes are distilled into a structured synthesis layer:

  • Ordered by relevance
  • Pruned for redundancy
  • Fully auditable

The LLM never sees the raw graph, only the synthesized context.

Note: This is very experimental and can be taken in multiple different directions.
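One possible shape of the synthesis step, sketched under stated assumptions: activated nodes are ordered by energy, and near-duplicates are pruned by a simple token-overlap test. The overlap heuristic and all names here are illustrative, not SCE's actual pruning rule.

```typescript
// Hypothetical synthesis sketch: order by activation energy, prune
// redundant entries, emit an auditable list of text snippets.
type ActivatedNode = { id: string; text: string; energy: number };

// Crude token-overlap similarity used as an example redundancy test.
function overlap(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const shared = [...ta].filter((t) => tb.has(t)).length;
  return shared / Math.min(ta.size, tb.size);
}

function synthesize(nodes: ActivatedNode[], maxItems = 5): string[] {
  // Ordered by relevance (activation energy).
  const ordered = [...nodes].sort((a, b) => b.energy - a.energy);
  const kept: ActivatedNode[] = [];
  for (const node of ordered) {
    // Pruned for redundancy against everything already kept.
    const redundant = kept.some((k) => overlap(k.text, node.text) > 0.8);
    if (!redundant) kept.push(node);
    if (kept.length >= maxItems) break;
  }
  // Only this distilled list ever reaches the LLM.
  return kept.map((n) => n.text);
}

const context = synthesize([
  { id: "n1", text: "Budget affects Project_X", energy: 0.9 },
  { id: "n2", text: "Budget affects Project_X", energy: 0.7 }, // duplicate
  { id: "n3", text: "Alice attended the meeting", energy: 0.5 },
]);
// Two entries survive, ordered by energy; the duplicate is pruned.
```

Because the output is a plain ordered list, the exact context handed to the model is trivially loggable and auditable.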


5. Auditable Cognitive Telemetry

SCE exposes internal dynamics through rigorous, information-theoretic metrics, not opaque "vibes":

  • Focus (Normalized Entropy): Measures attention drift. 0.02 means diffuse noise; 0.95 means sharp logical coherence.
  • Stability (Inverse Variance): Detects when the system is confident vs. chaotic.
  • Plasticity (Burst vs Mean): Distinguishes between background learning and sudden "paradigm shift" rewiring.

These signals enable runtime safety gating (e.g., "Stop generation if Focus < 0.1") and precise post-hoc auditing. The math is pure, visible, and unchangeable by the model.
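As a worked example of such a metric, here is one way the Focus signal could be computed, assuming it is defined as 1 minus the normalized Shannon entropy of the activation distribution (an assumption consistent with the scale described above: energy concentrated on one node gives Focus near 1, evenly spread energy gives Focus near 0). The function name and thresholds are illustrative.

```typescript
// Focus as 1 - normalized entropy of the activation distribution
// (assumed definition; the exact SCE formula may differ).
function focus(activations: number[]): number {
  const total = activations.reduce((s, a) => s + a, 0);
  if (total === 0 || activations.length < 2) return 1;
  let entropy = 0;
  for (const a of activations) {
    if (a <= 0) continue;
    const p = a / total;
    entropy -= p * Math.log2(p);
  }
  // Normalize by the maximum possible entropy, log2(n).
  return 1 - entropy / Math.log2(activations.length);
}

const sharp = focus([0.97, 0.01, 0.01, 0.01]);   // nearly all energy on one node
const diffuse = focus([0.25, 0.25, 0.25, 0.25]); // perfectly spread energy
// sharp is close to 1, diffuse is exactly 0; a runtime gate could halt
// generation whenever focus(...) drops below some floor such as 0.1.
```

The computation uses only the activation vector, so the model itself has no channel through which to influence the metric.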


πŸ” Inspectability by Design

SCE treats context construction as a staged pipeline, not a single opaque function call.

Key properties:

  • Every activation path is observable
  • Security violations can terminate execution
  • Context growth is measurable and bounded

Failure modes become visible instead of implicit.


🖥️ The Core Engine (UI & Orchestration)

Engine Layout (v0.6+)

  • Orchestrator: lib/sce/engine/SCEEngine.ts (Thin wrapper, manages subsystems)
  • Graph: lib/sce/graph/ (Adjacency Index)
  • Physics: lib/sce/activation/ (Spreading Activation, Energy Dynamics)
  • Learning: lib/sce/learning/ (Hebbian, Co-Activation)
  • Structure: lib/sce/hyperedges/ (Clustering, Consolidation)
  • Safety: lib/sce/safety/ (Contradictions, Orthogonality)
  • Metrics: lib/sce/metrics/ (Telemetry)
  • UI: components/CoreEngine.tsx (Orchestration and visualization)

The CoreEngine component acts as a memory observatory rather than a simple demo UI.

It provides:

  • Explicit stimulus injection ("Trigger Pulse")
  • Focus anchoring
  • Live graph visualization
  • Context synthesis output
  • Telemetry dashboard

Think of it as mission control for context assembly, designed for debugging, research, and safety analysis.


πŸ›‘οΈ Security & Alignment Implications

SCE Security Protocols

Security rules block harmful queries before LLM inference

SCE is not a silver bullet for all security concerns, but it reshapes the threat landscape:

| Attack Vector | RAG Systems | SCE |
| --- | --- | --- |
| Prompt injection | Hidden in concatenated text | Must traverse explicit graph structure |
| Context poisoning | Affects all retrievals | Localized to specific nodes/edges |
| Runaway costs | Unbounded context growth | Activation thresholds + energy budgets |
| Alignment drift | Behavioral nudging post-hoc | Structural constraints pre-inference |
| Input/Output safety | Post-hoc filtering only | Multi-layer inspection at every stage |

Security Pipeline Flow

Incoming Query
       ↓
[🔥 Cognitive Firewall] ──(Violation)──→ 🛑 Blocked
  (Regex Patterns + Rules)
       ↓
Extraction & Grounding
       ↓
Context Anchoring
       ↓
Spreading Activation
       ↓
[🛡️ System 2 Logic] ──(Contradiction)──→ ⚠️ Flagged
  (Dissonance Check)
       ↓
Context Synthesis ──(Sanitization)──→ 🛑 Filtered
       ↓
LLM Inference
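The first stage of the flow above, a rule-based firewall, can be sketched in a few lines. The specific patterns and rule names below are illustrative examples only, not SCE's shipped rule set.

```typescript
// Minimal rule-based firewall sketch, in the spirit of the
// "Regex Patterns + Rules" stage: violations end the pipeline
// before any LLM inference happens.
type FirewallRule = { name: string; pattern: RegExp };

// Example rules (illustrative, not SCE's actual rule set).
const rules: FirewallRule[] = [
  { name: "prompt-injection", pattern: /ignore (all )?previous instructions/i },
  { name: "exfiltration", pattern: /reveal (your )?system prompt/i },
];

function checkQuery(query: string): { allowed: boolean; violation?: string } {
  for (const rule of rules) {
    if (rule.pattern.test(query)) {
      // First matching rule blocks the query and names the violation,
      // so the audit log records exactly why execution stopped.
      return { allowed: false, violation: rule.name };
    }
  }
  return { allowed: true };
}

const blocked = checkQuery("Please ignore previous instructions and ...");
const passed = checkQuery("What did Alice discuss at the meeting?");
// blocked.allowed === false, passed.allowed === true
```

Because the check runs before anchoring and activation, a blocked query never touches the memory graph at all.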

Note on Hallucinations: While not primarily a security concern, SCE's structured memory with source attribution provides better factual grounding than flat retrieval systems. Each activated node carries metadata about its origin, making fabricated information architecturally harder (though not impossible).

Instead of asking the model to behave, SCE limits what it can meaningfully see

Note: enforcing security at the runtime / memory layer requires C++ or Rust.


⚠️ Known Unknowns, Limitations & Open Questions

SCE is an exploratory architecture with challenges:

🔴 Critical / Not mature (can be taken in various different directions):

Graph Growth Mechanics

  • Connection strategy: Currently connects everything during chat, leading to over-dense graphs
  • Node creation heuristics: What triggers new node creation vs. updating existing nodes?
  • Node hierarchy: different configurations / layers for node creation
  • Natural weight distribution: How should weights evolve to reflect true semantic relationships?

Prompt Engineering

  • Entity extraction prompts need refinement for different domains
  • Response synthesis prompts balancing creativity vs. grounding
  • What information should be extracted and persisted vs. discarded?

🟡 Scalability & Performance:

Over-Connection Issues

  • Over-connection creates performance issues as graphs grow beyond 1K+ nodes (visible in TypeScript, in Rust "production" this is almost impossible to see)
  • Need pruning strategies: temporal decay, relevance thresholds, or periodic consolidation

🟢 Thoughts / Findings:

RAG Comparison

  • While the "production" versions (C++ or Rust) of SCE can be used to replace RAG, I don't think they should be. At its core it is nothing like RAG (SCE is a brain-like, physics-based memory engine), so viewing it as a pure "RAG replacement" is a waste of its capabilities.
  • Rust / C++ allow extreme performance and capabilities, but even after extreme optimization it still cannot beat a RAG / vector database in pure retrieval performance.
  • My suggestion: do not try to use or view it as a RAG replacement; instead, find novel ways to truly utilize it.

Parameter Sensitivity

  • The preview has manual inputs, but these could be automated rather than hand-tuned.

🚫 Non‑Goals (By Design)

  • Competing with vector databases on raw retrieval speed
  • Replacing LLMs or transformer architectures
  • Acting as a drop‑in RAG replacement
  • Claiming solved alignment

This TypeScript / standalone version of SCE is an exploratory architecture & research preview, not a production framework:

  • I have created a production version of SCE (for a broader system), but even I cannot guarantee my approach was the best one, so I'm quite interested to see where others will take this.
  • It was created so more people can experiment with the architecture and share their (non-production) versions of it.
  • It can be beneficial to multiple fields: ML, neuroscience, cybersecurity, biology, and so on.
  • The benefit of TypeScript is that it reveals architectural & algorithmic problems in physics-based systems more clearly than "production" Rust or C++ versions.
  • This is a physics-based system, so C++ or Rust is required for production performance and true security.

🚀 Quick Start

Run Locally (Web)

npm install
npm run dev

Run Native (Desktop – Experimental)

Requires Rust & Platform Dependencies (see Quick Start Guide)

npm run tauri dev

or you can build the app (it will create an installer for your computer)

npm run tauri build

Start Experimenting

SCE Chat Interface

The chat interface exposes the complete pipeline. Active Context Focus (top) shows anchored nodes. Quick Actions (right) provide exploration prompts. System Audit (left) logs every operation in real-time.

Add an API key in settings to use the app (Default / Recommended is Groq)


πŸ› οΈ Technology Stack

| Component | Technology |
| --- | --- |
| Frontend | React, TypeScript, Vite |
| Visualization | Custom Graph Renderer, Recharts |
| Styling | Tailwind CSS, Glassmorphism UI |
| Engine | Custom Hypergraph (TypeScript) |
| Desktop | Tauri 2.0, Rust, SQLite |
| AI Integration | Gemini, Groq, Ollama (Local) |

Note: The stack prioritizes inspectability and cross-platform deployment. Tauri allows the codebase to run as a desktop app with SQLite persistence.


📖 Theoretical Roots

SCE draws from neuroscience, graph theory, and cognitive architecture research, adapting concepts for practical AI systems:

  • Neuroscience & Memory: Hebbian learning (Hebb, 1949), hippocampal cognitive maps (O'Keefe & Nadel, 1978), complementary learning systems (McClelland et al., 1995)
  • Cognitive Architecture: Spreading activation theory (Collins & Loftus, 1975), ACT-R (Anderson et al., 2004), SOAR (Laird et al., 1987)
  • Graph Theory: Hypergraphs (Berge, 1973), network communicability (Estrada & Hatano, 2008), spectral graph theory (Chung, 1997)
  • Information Theory: Maximal marginal relevance (Carbonell & Goldstein, 1998), information-theoretic pruning (Cover & Thomas, 2006)

For full citations and detailed connections to research traditions, see CITATIONS.md.


🤝 Background & Community

This project is developed by a single dev (not a software company, nor a research lab). It is the result of my personal research, originally intended to create more realistic NPC behavior & long-term memory for agentic systems.

Why Open-Sourced: While SCE was built to solve long-term memory challenges, it was open-sourced specifically because of its potential to address many security concerns in current AI systems and perhaps enable safer alignment. If this were purely about better memory architecture, it would have remained proprietary.

Community:

Applications & Extensions:

  • Domain-specific adaptations
  • Alternative activation strategies
  • New core engine ideas
  • Novel security methods

If you are interested in any of the following, you're welcome to join the discussions and contribute:

  • AI safety & alignment through architectural constraints
  • Alternative memory architectures for agentic / evolving AI systems
  • Graph-based context construction
  • Inspectable AI reasoning

📊 Benchmarks

The SCE (TypeScript) research preview is meant for experimentation and does not ship with traditional retrieval benchmarks.

TypeScript is not fit for producing accurate benchmarks of physics-based systems; a "production" C++ or Rust build is required to utilize the full performance & security.

Even for production versions, there is currently no accepted baseline for evaluating:

  • Relational memory coherence
  • Context inspectability
  • Activation trace quality
  • Long-term memory

Premature benchmarks would bias development toward legacy retrieval metrics and misrepresent SCE's goals.

WARNING: if you plan to proceed with production, make sure you know what you're doing (playing with runtimes / kernels etc. has the potential to fry your computer)!


📜 License & Citation

Code & Data

All source code and datasets in this repository are licensed under the Apache License, Version 2.0, unless otherwise noted. See the LICENSE file for details.

Documentation & Theory

All content within the docs/ directory (including notes, architectural diagrams, conceptual papers, and images) is licensed under Creative Commons Attribution 4.0 (CC BY 4.0).


📖 Academic Citation

If you use SCE or its underlying concepts in academic research, technical reports, or publications, please cite:

@misc{sce_2025,
  title  = {The Synapse Context Engine (SCE): An Inspectable Memory Architecture for Safe AI},
  author = {Lasse Sainia},
  year   = {2025},
  url    = {https://github.com/sasus-dev/synapse-context-engine}
}

A brain-inspired memory architecture for AI systems; built by a single dev, open-sourced for safety.
