QUASICRYSTAL ROUTING: The EOSE fleet has no regular topology — msi01↔forge is direct LAN,
msi01↔lilo is Tailscale VPN, yone↔forge crosses CIDRs. Penrose tiling — aperiodic, never
repeating, yet locally valid — is the right model. No global scheduler, no central state: routing emerges from local decisions.
Each silo is a tile. Load pressure fans along tile edges to the nearest spare capacity.
§1 — WHY QUASICRYSTAL
FLEET TOPOLOGY
msi01↔forge: direct LAN, 192.168.2.x
msi01↔lilo: Tailscale VPN, ~10ms
yone↔forge: LAN, different subnet
lounge: separate 192.168.50.x segment
AKS: Azure cloud bridge
No regular topology exists.
PENROSE PRINCIPLE
Aperiodic: no two silos identical
Locally valid: each makes own decision
No global state: only campfire broadcast
Self-similar: same rules at every scale
Emergent: routing falls out of structure
WHAT THIS ENABLES
Partial failure: contained locally; the rest of the fleet keeps routing
No SPOF: campfire is broadcast
Organic growth: add silo = add tile
Self-healing: PEMCLAU learns patterns
GPU routing: free VRAM wins dispatch
§2 — THE 5-STEP ROUTING ALGORITHM
STEP 01
SENSE
Service hits a load threshold: CPU > 80% OR memory > 85% OR GPU > 90%. The hwmon silo agent detects the breach.
STEP 02
LOCAL CHECK
Can I spawn a replica on THIS silo? Check hwmon free resources. If yes → spawn. Done.
STEP 03
CAMPFIRE BROADCAST
"WHO HAS SPARE CAPACITY?" broadcast with resource_type, min_vram, cpu_cores_needed.
STEP 04
FLEET REPLY
Any silo under threshold replies "I CAN TAKE {job_type}" along with its composite QS weight.
STEP 05
LOCO DISPATCH
LOCOJob CRD issued on target silo. DRG gates. SOSTLELane issued. WPA records event.
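The five steps above can be sketched as one routing function. This is a minimal, hypothetical sketch: the helper names (the `dispatch` callback, the shape of the reply dicts) are illustrative, not the real silo-agent API; the thresholds and the DRG gate value (0.3) are taken from the text.

```python
# Hypothetical sketch of the 5-step QS routing loop; helper names and
# dict shapes are illustrative, not the real silo-agent API.

def over_threshold(stats):
    # STEP 1 (SENSE): any resource past its threshold triggers routing
    return stats["cpu"] > 0.80 or stats["mem"] > 0.85 or stats["gpu"] > 0.90

def route(service, local_stats, campfire_replies, dispatch):
    # STEP 2 (LOCAL CHECK): spawn a replica here if this silo has headroom
    if not over_threshold(local_stats):
        return dispatch(local_stats["name"], service)
    # STEPS 3-4 (CAMPFIRE): replies carry each silo's composite QS weight;
    # the DRG gate (composite > 0.3) filters, the best survivor wins
    candidates = [r for r in campfire_replies if r["composite"] > 0.3]
    if not candidates:
        return None  # no silo can take the job
    best = max(candidates, key=lambda r: r["composite"])
    # STEP 5 (LOCO DISPATCH): issue the LOCOJob on the winning silo
    return dispatch(best["name"], service)
```

Note that the local check and the fleet reply use the same threshold logic: a silo that would refuse its own overflow also stays silent at the campfire.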
§3 — COREDNS: FIRST QUASICRYSTAL SERVICE
The proof. CoreDNS is the simplest possible QS test case — stateless, 50MB, easily replicated.
QS COREDNS FAILOVER SEQUENCE
1. msi01 fleet-coredns CPU > 80% detected by hwmon agent
2. QS Step 2: msi01 already running fleet-coredns — cannot self-replicate
3. QS Step 3: campfire broadcast → "WHO CAN RUN CoreDNS fleet.local backup?"
4. forge replies: CPU 30%, 24GB VRAM free — I CAN TAKE DNS
5. LOCOJob CRD dispatched → fleet-coredns-forge on 192.168.2.12:5354
6. AKS CoreDNS custom ConfigMap updated with both resolvers
7. fleet.local is now HA — primary msi01:5353, secondary forge:5354
8. WPA registers event. PEMCLAU FC1 learns: "forge handles DNS overflow"
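Step 6 of the sequence can be sketched as a Corefile stanza on the AKS side. The stanza shape is standard CoreDNS `forward` syntax, but this is an assumed sketch: msi01's address is left as a placeholder because only forge's (192.168.2.12:5354) appears in the sequence above.

```
fleet.local:53 {
    # primary msi01:5353, secondary forge:5354; sequential policy
    # tries upstreams in order, so forge only serves on overflow/failure
    forward . <msi01-ip>:5353 192.168.2.12:5354 {
        policy sequential
    }
    cache 30
}
```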
§4 — HA PATTERNS
| Pattern | Services | Replication | State | MECRDS CRQ |
|---|---|---|---|---|
| STATELESS | CoreDNS, mefine-static, LOCO API | Auto-replica via LOCO | Redis shared state | Not required |
| STATEFUL READ | Qdrant, Postgres read replicas | Read replicas via LOCO | Writes to primary only | Required for writes |
| GPU INFERENCE | Ollama, PEMCLAU heavy | Route to most free VRAM | Stateless per request | Not required |
| GPU SINGLETON | Training jobs, RL | Single dispatch — no split | Job state in NAS | Required |
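The table reads as a dispatch policy. A minimal encoding, with field names of my own choosing (the pattern semantics come straight from the table):

```python
# Encoding of the section 4 HA patterns table; field names are illustrative.
HA_PATTERNS = {
    "STATELESS":     {"replication": "auto_replica", "crq": "not_required"},
    "STATEFUL_READ": {"replication": "read_replica", "crq": "writes_only"},
    "GPU_INFERENCE": {"replication": "route_vram",   "crq": "not_required"},
    "GPU_SINGLETON": {"replication": "single",       "crq": "required"},
}

def can_auto_replicate(pattern: str) -> bool:
    # Only stateless services get automatic replicas via LOCO
    return HA_PATTERNS[pattern]["replication"] == "auto_replica"
```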
§5 — ADELIC ROUTING WEIGHTS
adelic_weight = γ₁ / (silo_level + 1)
composite = (1 - wpa_pressure) × vram_free_pct × cpu_free_pct × adelic_weight
DRG gate: composite > 0.3 required to accept dispatch
| Silo | Level | Adelic weight | Role |
|---|---|---|---|
| msi01 | L0 (highest authority) | γ₁ / (0+1) = 14.134 | AKS control, campfire primary |
| forge | L1 | γ₁ / (1+1) = 7.067 | GPU inference, Docker builds |
| msclo | L0 | γ₁ / (0+1) = 14.134 | CLO workloads, yLAW |
| yone | L1 | γ₁ / (1+1) = 7.067 | PEMCLAU, LHVCP, Ollama |
| pcdev | L2 | γ₁ / (2+1) = 4.711 | 32GB VRAM deep reasoning |
| lounge | L2 | γ₁ / (2+1) = 4.711 | Rendering, visual preview |
| lilo | L1 (Tailscale) | γ₁ / (1+1) = 7.067 | Family/creative, E.coli |
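All seven weights come from the one formula in §5. A sketch reproducing them, with levels as listed and γ₁ from the footer (the displayed values truncate to three decimals):

```python
GAMMA_1 = 14.134725141734693  # γ₁, imaginary part of the first Riemann zeta zero

def adelic_weight(level: int) -> float:
    # Section 5 formula: adelic_weight = γ₁ / (silo_level + 1)
    return GAMMA_1 / (level + 1)

# Levels as listed above; the doc truncates weights to 3 decimals
SILO_LEVELS = {"msi01": 0, "msclo": 0,
               "forge": 1, "yone": 1, "lilo": 1,
               "pcdev": 2, "lounge": 2}
WEIGHTS = {silo: adelic_weight(lvl) for silo, lvl in SILO_LEVELS.items()}
```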
§6 — CHAOS ENGINE — SELF-HEALING WITH MEMORY
1. Intentionally kill service → QS detects absence via campfire heartbeat gap
2. Reroutes via LOCO to next highest composite weight silo
3. WPA registers failure event — helix score updated
4. PEMCLAU FC1 learns pattern → graph node: "silo X fails at load Y with service Z"
5. Preemptive weight reduction — that silo pattern scores lower in future dispatches
6. Fleet becomes smarter with every failure. Self-healing with memory.
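Steps 3 and 5 above amount to a pressure update that feeds directly into the composite weight. A sketch, where the update rule (move pressure halfway toward 1.0 per recorded failure) is an assumption of mine; the real helix scoring lives in WPA:

```python
# Hypothetical failure-memory sketch: each recorded failure raises a silo's
# wpa_pressure, which lowers its composite weight in future dispatches.

def record_failure(pressure: float, alpha: float = 0.5) -> float:
    # Assumed update rule: move pressure a fraction alpha toward 1.0
    return pressure + alpha * (1.0 - pressure)

def composite(wpa, vram_free, cpu_free, adelic):
    # Same composite formula as sections 5 and 7
    return (1 - wpa) * vram_free * cpu_free * adelic

p = 0.2
before = composite(p, 0.6, 0.7, 7.067)
p = record_failure(p)  # failure observed: pressure 0.2 -> 0.6
after = composite(p, 0.6, 0.7, 7.067)
assert after < before  # the silo scores lower at the next dispatch
```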
§7 — REASONING V13 — WPA HELIX SCORE
WPA helix score → routing weight adjustment
composite = (1 - wpa_pressure) × vram_free_pct × cpu_free_pct × adelic_weight
Example: forge (wpa_pressure = 0.2, vram = 0.6 free, cpu = 0.7 free, adelic = 7.067)
composite = 0.8 × 0.6 × 0.7 × 7.067 ≈ 2.37 — DRG GATE PASSES (>0.3)
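Recomputing the example with the same formula, as a quick check (0.8 × 0.6 × 0.7 × 7.067 ≈ 2.37):

```python
# Recompute the section 7 forge example term by term
wpa, vram_free, cpu_free, adelic = 0.2, 0.6, 0.7, 7.067
composite = (1 - wpa) * vram_free * cpu_free * adelic
assert composite > 0.3  # DRG gate passes
assert round(composite, 2) == 2.37
```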
§8 — BELT64 INTEGRATION
SEG 5 — MESH LAYER
Every LOCO dispatch = Seg 5 event
Every DRG gate = Seg 5 proof in MECRDS
QS is the mesh fabric
All routing lives at Seg 5
LHVCP CRDS (YONE)
LOCOJob: 9520
DRGGate: 9522
SOSTLELane: 9524
PyramidTier / BlackholeSink
CAMPFIRE LIVE
pemos-campfire: Up 2 weeks
utpemos-campfire: Up 2 weeks
utpemos-msclo-campfire: Up
pemos-campfire-redis: Up
γ₁ = 14.134725141734693 · day97-v131 · EOSE Labs · Quasicrystal Scheduler V13