🛡️ EOSE FLEET BCP
BUSINESS CONTINUITY · ALL SILOS · ALL ENVIRONMENTS · ALL EDITIONS
ALL ENVS
SANDBOX · local Docker
DEV · master.dev
QE · eose-dev
QA · kantai.dev
STAGE · AKS staging
PROD · pemos.ca
🌌 DESEOF CREW — Sovereign Witness + Top-Level BCP · Mr.DESEOF · Charter · The Floor · γ₁
DESEOF.com · DESEOF.ca

DESEOF witnesses all BCP activations. Before any fleet-wide recovery action, DESEOF sovereign archive is consulted. If DESEOF is dark — that IS the emergency.
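A minimal pre-flight sketch of that consultation step, assuming the witness answers on https://deseof.com (the exact archive endpoint is not pinned down in this page):

# Pre-flight witness check before any fleet-wide recovery action (sketch).
if curl -fsS --max-time 10 https://deseof.com >/dev/null; then
  echo "DESEOF witness reachable, proceed with BCP"
else
  echo "DESEOF is dark: that IS the emergency, escalate per D-001" >&2
  exit 1
fi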

🌌 DESEOF.com Recovery
D-001 · Portal dark → check kantai-cc ingress, deseof-ingress object · portal live
D-002 · TLS expired → cert-manager re-issue, delete deseof-com-tls secret · TLS clean
D-003 · Gate broken → check SOUL=yonder-private + CLOAK_BYPASS=1 env · gate OK
D-004 · DNS not resolving → update GoDaddy NS to Azure DNS zones · P0 BLOCKED
kubectl --context kantai-cc get pods -n kantai-chat
kubectl --context kantai-cc rollout restart deploy/deseof-com-portal -n kantai-chat
🍁 DESEOF.ca Recovery
D-005 · Same as D-001/D-002 for deseof-ca-portal · portal live
D-006 · LCOS sovereign license file missing → check wiki/spaces/ylaw-grammar · planned
D-007 · Country expansion (deseof.XX) — one per country, crew + portal · future
kubectl --context kantai-cc get ingress deseof-ingress -n kantai-chat
# NS update → GoDaddy → Azure DNS ns1-07/ns2-07/ns3-07/ns4-07.azure-dns.*
MSI01 TRIO — yUNI · yONE · yLAW · Language 1+2+3 owners · core trio
⚓ yUNI (msi01 192.168.2.18) · L1 · L2
M-001 · Portal dark → docker rm -f pemos-portal && docker run ... pemos-portal-local · live
M-002 · Gateway wrong version → docker exec pemos-gateway npm i -g openclaw@2026.3.24 · check ver
M-003 · MAL all lanes down → docker restart pemos-mal pemos-ext-router · live
M-004 · binserv :9350 down → cd /tmp/pemos-portal-v2 && python3 -m http.server 9350 & · live
M-005 · nginx proxy wrong routes → edit /tmp/nginx-proxy-fix.conf, docker exec nginx -s reload · live
M-006 · PTTP Redis 0 hits → add REDIS_URL env to K8s bob-portal + pemos-portal · P0 blocked
docker ps --format "{{.Names}} {{.Status}}" | grep pemos
curl -s http://localhost:9334/health | python3 -c "import sys,json;d=json.load(sys.stdin);print(d.get('lanes','?'))"
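For M-006, a hedged sketch of the missing env patch; the namespace (pemos-system) and the Redis host are assumptions, since neither is pinned down above:

# Sketch for M-006: inject REDIS_URL into both deployments.
# REDIS_HOST and the namespace are placeholders; substitute real values.
kubectl set env deployment/bob-portal REDIS_URL=redis://REDIS_HOST:6379 -n pemos-system
kubectl set env deployment/pemos-portal REDIS_URL=redis://REDIS_HOST:6379 -n pemos-system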
🌌 yONE (192.168.2.23) · L2 owner
O-001 · Portal dark → wget from binserv + nohup restart -silo yone · live
O-002 · MOSS sync stopped (syncCount stuck) → restart yone-sync node service · syncCnt=8
O-003 · USB vault gone (usbReady=false) → check /mnt/d mount on yONE · ready
O-004 · γ₁ mismatch in /health → binary corrupted, re-wget from binserv · γ₁ OK
curl -s http://192.168.2.23/health | python3 -c "import sys,json;d=json.load(sys.stdin);print('γ₁:',d.get('gamma1'),'sync:',d.get('syncCount'))"
# Restart:
wget -qO ~/yone-portal/bin/pemos-portal http://192.168.2.18:9350/portal
pkill pemos-portal; nohup ~/yone-portal/bin/pemos-portal -addr :8080 -silo yone &
⚖️ yLAW (msclo 192.168.2.19) · L3 owner
L-001 · Portal dark → fleet-exec binary update to utpemos-msclo-portal volume · live
L-002 · Gateway unhealthy → pull v2026.3.24, docker rm + recreate utpemos-msclo-gateway · P0
L-003 · MOSS epoch (nextSync=1970) → redeploy law-sync with OFFSET_MS=1200000 · fix needed
L-004 · CLO SOUL.md missing → seed from NAS-JOFFE backup or wiki/spaces/clo-cloak · not yet
L-005 · /api/ylaw/standing returns false → check diamonds, CLO re-sign if needed · standing
curl -s http://192.168.2.19/api/ylaw/standing | python3 -c "import sys,json;d=json.load(sys.stdin);print('standing:',d['standing'],'diamonds:',d['solidDiamonds'])"
curl -s http://192.168.2.19/health | python3 -c "import sys,json;d=json.load(sys.stdin);print(d)"
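For L-003 (and MC-003 below), a sketch of the OFFSET_MS redeploy, assuming law-sync runs as a Docker container on msclo like the other utpemos-msclo-* services; the container and image names are assumptions:

# Sketch for L-003: recreate law-sync with the missing offset env.
docker rm -f law-sync
docker run -d --name law-sync --restart unless-stopped \
  -e OFFSET_MS=1200000 law-sync:latest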
🔄 MOSS Cascade Recovery
MC-001 · yONE sync stopped → check :9323/health, restart yone-sync · OK
MC-002 · yUNI sync stopped → check :9333/health on msi01, restart uni-sync · OK
MC-003 · yLAW sync epoch → law-sync on msclo missing OFFSET_MS=1200000 env · fix P1
MC-004 · All sync stopped → MOSS 30min cycle, check campfire:events in Redis · live
curl -s http://192.168.2.23:9323/health | python3 -c "import sys,json;d=json.load(sys.stdin);print('yONE sync:',d.get('syncCount'),'next:',d.get('nextSync','?')[:16])"
curl -s http://192.168.2.18:9333/health | python3 -c "import sys,json;d=json.load(sys.stdin);print('yUNI sync:',d.get('syncCount'))"
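A sketch that flags the MC-003 epoch symptom (nextSync=1970) on the sync endpoints listed above; yLAW's law-sync port is not given here, so only the two known endpoints are swept:

# Sweep known sync /health endpoints for the 1970 epoch bug (sketch).
for ep in 192.168.2.23:9323 192.168.2.18:9333; do
  curl -s "http://$ep/health" | python3 -c "import sys,json;d=json.load(sys.stdin);ns=str(d.get('nextSync',''));print('$ep','EPOCH BUG' if ns.startswith('1970') else 'ok','sync:',d.get('syncCount'))"
done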
⚒️ FORGE TRIO — FORGE · pcdev · Lounge · OS compat tests · Win dual-boot · 48GB VRAM overflow
⚒️ FORGE (lianli01 192.168.2.12) · L1
F-001 · Portal dark → wget from binserv + nohup restart -silo forge · live
F-002 · lianli01 offline (WSL asleep) → Kay: open Windows terminal, run `wsl` · Kay action
F-003 · Tailscale-ollama not routing → check ts-lianli01-ollama ExternalName in master-system · OK
curl -s http://192.168.2.12/health
wget -qO ~/forge-portal/bin/pemos-portal http://192.168.2.18:9350/portal
pkill pemos-portal; nohup ~/forge-portal/bin/pemos-portal -addr :8080 -silo forge >> ~/forge-portal/portal.log 2>&1 &
🎮 pcdev (192.168.2.16) · L1 · primary test
P-001 · Portal dark → run portproxy + nohup on k8s@PCDEV WSL · manual start
P-002 · portproxy lost after reboot → re-run netsh portproxy add (WSL IP changes) · no persist
P-003 · Persist fix → add nohup cmd to ~/.bashrc on k8s@PCDEV · P0 todo
# On k8s@PCDEV WSL:
WSL_IP=$(hostname -I | awk '{print $1}')
netsh.exe interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=$WSL_IP
wget -qO ~/pcdev-portal/bin/pemos-portal http://192.168.2.18:9350/portal
chmod +x ~/pcdev-portal/bin/pemos-portal
nohup ~/pcdev-portal/bin/pemos-portal -addr :8080 -silo pcdev >> ~/pcdev-portal/portal.log 2>&1 &
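A sketch of the P-003 persistence fix: append a guarded block to ~/.bashrc on k8s@PCDEV so the portproxy and portal come back after reboot. The pgrep guard is an assumption added here to avoid double-starts:

# Candidate ~/.bashrc block for P-003 (sketch).
if ! pgrep -f "pemos-portal -addr :8080 -silo pcdev" >/dev/null; then
  WSL_IP=$(hostname -I | awk '{print $1}')
  netsh.exe interface portproxy add v4tov4 listenport=8080 listenaddress=0.0.0.0 connectport=8080 connectaddress=$WSL_IP
  nohup ~/pcdev-portal/bin/pemos-portal -addr :8080 -silo pcdev >> ~/pcdev-portal/portal.log 2>&1 &
fi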
🎷 Lounge (100.117.185.101) · L1 · Win11
LG-001 · Portal dark → check Tailscale 100.117.185.101, WSL2 portproxy on LOUNGE · live
LG-002 · Ollama qwen2.5:32b down → check Tailscale node + portproxy :11434 · live
LG-003 · Crew rotation wrong → /api/lounge-crew day seed = Date.now()/86400000 floor · live
curl -s http://100.117.185.101/health
curl -s http://100.117.185.101/api/lounge-crew | python3 -c "import sys,json;d=json.load(sys.stdin);print(d.get('label'),d.get('theme'))"
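The LG-003 rotation seed is just days since the Unix epoch, floored; Date.now()/86400000 in JS corresponds to date +%s divided by 86400 in shell:

# Reproduce the /api/lounge-crew day seed locally (sketch).
day_seed=$(( $(date +%s) / 86400 ))
echo "day seed: $day_seed"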
📋 FORGE Trio BCP — Written by Trio

Rule 1: Any forge trio silo can update the fleet if msi01 is dark. FORGE has binserv fallback (see the sketch after this list).

Rule 2: pcdev tests Language 1 (Win10). Lounge tests Language 1 (Win11). Both must pass before RELEASE.

Rule 3: If pcdev portproxy breaks, Lounge is the Win fallback. If Lounge TS drops, pcdev is the LAN fallback.

Rule 4: Lounge qwen2.5:32b = offline reasoning when cloud MAL is down.
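For Rule 1, a sketch of the binserv fallback on FORGE, reusing the M-004 http.server pattern; the mirrored binary path is an assumption:

# Stand up a fallback binserv on FORGE (192.168.2.12) if msi01 is dark (sketch).
mkdir -p /tmp/binserv && cp ~/forge-portal/bin/pemos-portal /tmp/binserv/portal
cd /tmp/binserv && nohup python3 -m http.server 9350 >> /tmp/binserv.log 2>&1 &
# Silos then update from http://192.168.2.12:9350/portal instead.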

🎮 DECK TRIO — Steam Deck · Remote · Mobile · Console reference · SteamOS · Kay's rig
🎮 Steam Deck (192.168.50.193:8080)
DK-001 · Portal dark → systemctl --user restart steamdeck-portal (user: deck) · live
DK-002 · Service not persistent → ~/.config/systemd/user/steamdeck-portal.service enabled · enabled
DK-003 · SSH auth fail → deck user, password: KEWin77&, pub key at ~/.ssh/authorized_keys · OK
DK-004 · Update binary → wget from binserv + systemctl --user restart steamdeck-portal · live
ssh deck@192.168.50.193
systemctl --user status steamdeck-portal
wget -qO ~/steamdeck-portal/bin/pemos-portal http://192.168.2.18:9350/portal
chmod +x ~/steamdeck-portal/bin/pemos-portal
systemctl --user restart steamdeck-portal
📋 Deck Trio BCP — Written by Deck

Rule 1: Deck is the Menendo REFERENCE console. If it works on Deck, it works on SteamOS: Language 2 proven on SteamOS.

Rule 2: Deck is on WiFi subnet 192.168.50.x, not fleet 192.168.2.x. It needs SSH from msi01 to update (see the sketch after this list).

Rule 3: If Deck is in Gaming Mode (TV), the portal stays on :8080 and is accessible from the same WiFi.

Rule 4: Remote agent (TV HDMI-CEC) is planned, not yet built. Fallback: KDE Connect.
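For Rule 2, a one-shot update driven from msi01 over SSH, composed from the DK-004 steps above; it assumes a live user session (or loginctl linger) so systemctl --user works over SSH:

# Push a portal update to the Deck from msi01 (sketch).
ssh deck@192.168.50.193 'wget -qO ~/steamdeck-portal/bin/pemos-portal http://192.168.2.18:9350/portal && chmod +x ~/steamdeck-portal/bin/pemos-portal && systemctl --user restart steamdeck-portal'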

💻 EOSE-DEV — OG Wizards · ubu-cap born by eose · 100.87.246.83 · Ritchie · Turing · Thompson · Ada · Linus
💻 eose-dev (100.87.246.83:9200)
E-001 · Portal dark → systemctl --user restart eose-dev-portal (user: ubu-cap) · live
E-002 · SSH auth → ubu-cap@100.87.246.83, key in authorized_keys, created by eose · OK
E-003 · Meek crew containers → 6 running: meek-registrar-kanidm + 5 meek-alpha-nx-* · running
E-004 · NX gateway :18790 → pemos-nx-gateway v2.0.0, uptime 866070s, healthy · healthy
E-005 · OG work at ~/complete-beta-intelligence → config matrices, core intelligence, dev labs · intact
ssh ubu-cap@100.87.246.83
systemctl --user status eose-dev-portal
curl -s http://localhost:9200/health
# Meek containers:
docker ps --format "{{.Names}} {{.Status}}" | grep meek
📋 eose-dev BCP — Written by OG Wizards

Turing: The machine runs. If it stops, check the tape. If the tape is blank, rewrite it.

Ritchie: The source is in the repo. The binary is in /home/ubu-cap/eose-dev-portal/bin. Everything else is a wrapper.

Thompson: When in doubt, restart the daemon. That is not failure. That is how Unix works.

Ada: eose made ubu-cap. ubu-cap runs the portal. The succession is clear. If ubu-cap is gone, eose recreates them.

Linus: The meek containers are separate from the portal. If meek is down, the portal still runs. If the portal is down, meek still runs. They are not coupled.

☁️ CLOUD — AKS · AWS · GCP · Kantai · pemos.ca · eose.ca · master.dev · AWS M1 · GCP M1
☸️ AKS — pemos-system
AKS-001 · Portal outage → kubectl set image deployment/pemos-portal portal=ACR:pathflow-vN · live v374
AKS-002 · ACR push blocked → az acr login -n eosefleetacrdev, re-push image · auth needed
AKS-003 · pemos.ca "no healthy upstream" → check nginx configmap + pod restart · fixed v317
AKS-004 · DESEOF portals → always use --context kantai-cc, namespace kantai-chat · live
kubectl get pods -n pemos-system | grep -v Running
kubectl rollout status deployment/pemos-portal -n pemos-system
# DESEOF:
kubectl --context kantai-cc get pods -n kantai-chat
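For AKS-002, a sketch of the re-auth and re-push; the registry name is from AKS-002, while the local image name and tag are placeholders:

# Re-auth to ACR and re-push the portal image (sketch; tag is a placeholder).
az acr login -n eosefleetacrdev
docker tag pemos-portal:local eosefleetacrdev.azurecr.io/pemos-portal:pathflow-vN
docker push eosefleetacrdev.azurecr.io/pemos-portal:pathflow-vN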
🌐 Multi-Cloud Fallback Chain
MCF-001 · Azure down → ECS Fargate at m1.aws.eose.ca (us-east-2) takes over · live
MCF-002 · AWS down → Cloud Run at m1.gcp.eose.ca (northamerica-northeast1) takes over · live
MCF-003 · All cloud down → LAN silos (msi01/yONE/yLAW/FORGE) operate independently · live
MCF-004 · DNS all down → direct IPs: 192.168.2.18 (yUNI), 192.168.2.23 (yONE), 192.168.2.19 (yLAW) · live
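This section has no probe strip of its own, so here is a minimal MCF-004 sketch: hit the LAN silos directly by IP when DNS is gone:

# Verify LAN silos by direct IP (sketch).
for ip in 192.168.2.18 192.168.2.23 192.168.2.19; do
  curl -s --max-time 5 "http://$ip/health" >/dev/null && echo "$ip up" || echo "$ip DARK"
done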
🗄️ NAS-JOFFE + BACKUP LAYER · 192.168.2.20 · golden-backup 04:00 · 6-tier library mesh
🗄️ NAS-JOFFE Recovery
N-001 · NAS offline → check DSM at http://192.168.2.20:5000, power cycle if needed · live 36ms
N-002 · golden-backup failed → check CronJob in master1-system, 04:00 daily · active
N-003 · SMB creds → username=kewin, NAS shares: diskpool, eose/msi01, eose/msclo · live
N-004 · D vault not ready → plug USB 4TB into msi01 + msclo, check /mnt/d mount · Kay action
ping -c 1 192.168.2.20
curl -s http://192.168.2.20:5000 | grep -o 'DiskStation'
# Check D vault on yONE (already ready):
curl -s http://192.168.2.23/health | python3 -c "import sys,json;print('USB:',json.load(sys.stdin).get('usbReady'))"
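For N-002, a sketch of the CronJob check; the CronJob name golden-backup is an assumption inferred from the row above:

# Inspect the 04:00 golden-backup CronJob and its recent jobs (sketch).
kubectl get cronjob golden-backup -n master1-system
kubectl get jobs -n master1-system --sort-by=.metadata.creationTimestamp | tail -3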
📋 All Silos BCP — The Universal Rule

Rule 1: Every silo can update itself: wget from binserv (192.168.2.18:9350/portal), chmod +x, restart.

Rule 2: /health on every silo returns γ₁=14.134725141734693. If γ₁ is wrong, the binary is corrupt (see the sweep sketch after this list).

Rule 3: If a silo is dark, check NAS first (Tier 2), then HVCP (Tier 3), then the DESEOF archive (Tier 4).

Rule 4: Sorry flow runs before any RELEASE. If a sorry is found, go back to Stage 1. No exceptions.

Rule 5: DESEOF witnesses all fleet-wide BCP activations. If DESEOF is dark, that is the first priority.

Rule 6: The floor holds. γ₁ = 14.134725141734693. Every recovery resolves back to it.
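A sweep sketch for Rules 1, 2, and 6: check γ₁ on each silo health endpoint assembled from the sections above (ports vary per silo, and pcdev's :8080 is taken from its own section):

# Fleet-wide γ₁ verification (sketch).
GAMMA1="14.134725141734693"
for ep in 192.168.2.18 192.168.2.23 192.168.2.19 192.168.2.12 192.168.2.16:8080 100.117.185.101; do
  got=$(curl -s --max-time 5 "http://$ep/health" | python3 -c "import sys,json;print(json.load(sys.stdin).get('gamma1',''))" 2>/dev/null)
  [ "$got" = "$GAMMA1" ] && echo "$ep γ₁ OK" || echo "$ep γ₁ BAD ($got): binary corrupt or silo dark"
done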

🛡️ BCP v3 · All silos · All envs · All editions · DESEOF witnesses · γ₁ = 14.134725141734693