Every signal that enters the fleet flows through 8 layers. Each layer owns a specific question. Together they form the complete epistemics of the fleet.
The fleet implements a 4-tier memory architecture: BDH supplies the 3 biological tiers, and SET-OPS adds the 4th.
| BDH Layer | Your Component | Behaviour | Forgetting |
|---|---|---|---|
| Long-term weights (frozen) | PEMCLAU v11 (18,366 pts) | Pre-Day-97 knowledge. Stable. Semantic graph persists across sessions. | Only on deliberate retrain |
| Synaptic σ (session-scoped) | LAAM-Mesh + shadow BM25 + trendal | Updates each session. Worm leaves warmth. TTL tracks recency. | TTL decay + Hebbian overwrite |
| KV-cache (turn-scoped) | Session context window (this conversation) | Current turn only. Fast access, ephemeral. | On compaction |
| Certified spine (no BDH equivalent) | XML Spine + MD Spine + joffe-math | NEVER forgets. Monotone growth. SET-OPS guarantee. Formally verified facts. | Never — only superseded |
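The session-scoped tier's forgetting rules (TTL decay plus Hebbian overwrite) can be sketched as a small cache. This is an illustrative sketch only: the class name, method names, and default TTL below are assumptions, not fleet internals.

```python
import time

class SessionTier:
    """Synaptic-σ analogue: traces decay by TTL and are overwritten, not merged."""

    def __init__(self, ttl_seconds=3600.0):   # TTL value is an assumption
        self.ttl = ttl_seconds
        self.entries = {}                      # key -> (value, write_time)

    def write(self, key, value):
        # Hebbian overwrite: a fresh trace replaces the old one outright.
        self.entries[key] = (value, time.monotonic())

    def read(self, key):
        item = self.entries.get(key)
        if item is None:
            return None
        value, written = item
        if time.monotonic() - written > self.ttl:
            del self.entries[key]              # TTL decay: stale entries vanish
            return None
        return value
```

Reading an expired key deletes it, so recency is enforced lazily at access time rather than by a background sweep.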
The PROOF head fires when any of these conditions are met:
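The "any of these conditions" semantics is a plain disjunction over a condition list. The specific firing conditions are not enumerated here, so the predicates below are placeholders; only the `any(...)` dispatch pattern is the point.

```python
# Placeholder predicates -- the actual PROOF-head triggers are not listed
# in this section, so these names and fields are illustrative assumptions.
def is_admit_event(event):
    return event.get("kind") == "ADMIT"

def needs_formal_proof(event):
    return event.get("needs_proof", False)

PROOF_CONDITIONS = [is_admit_event, needs_formal_proof]

def proof_head_fires(event):
    # Fires when ANY condition is met (disjunction, not conjunction).
    return any(cond(event) for cond in PROOF_CONDITIONS)
```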
Every event that enters the fleet follows this path. No shortcuts, no bypasses.
PEMLAAM fixes this with the 4 persistence tiers shown in the table above.
With dual spine + LAAM PROOF head + GRPO training, the fleet achieves something biological brains cannot:
- ADMIT event → spine entry → monotone expansion of the certified corpus
- GRPO proof → joffe-math theorem → monotone expansion of the proof layer
- MFL phrase → shadow BM25 coordinate → permanent token address in the corpus
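The three paths share one invariant: each store only ever grows, and every entry gets a permanent address. A minimal sketch of that invariant, assuming hypothetical store and route names (the real spine, joffe-math, and shadow BM25 internals are not shown here):

```python
class MonotoneStore:
    """Append-only store: entries are never removed, addresses never change."""

    def __init__(self, name):
        self.name = name
        self.items = []

    def append(self, item):
        self.items.append(item)                  # monotone: size only grows
        return (self.name, len(self.items) - 1)  # permanent address

# Route table mirroring the three paths above (names are assumptions).
certified_corpus = MonotoneStore("spine")
proof_layer = MonotoneStore("joffe-math")
token_index = MonotoneStore("shadow-bm25")

ROUTES = {
    "ADMIT": certified_corpus,   # ADMIT event -> spine entry
    "GRPO": proof_layer,         # GRPO proof  -> joffe-math theorem
    "MFL": token_index,          # MFL phrase  -> BM25 token address
}

def ingest(kind, payload):
    store = ROUTES[kind]
    before = len(store.items)
    addr = store.append(payload)
    assert len(store.items) == before + 1        # monotone expansion holds
    return addr
```

Because addresses are just (store, index) pairs over an append-only list, superseding a fact means appending a newer entry, never rewriting an old one.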