LOCAL IS ALL ALL · SOSTLE FULLY LOCAL · NO CLOUD REQUIRED · V12

LOCAL SOVEREIGN V12

LHVCP SURVIVAL · LOCAL OPA · LOCAL REGISTRY · BELT64 LAN · ALL SILOS
γ₁ = 14.134725141734693 · forge 192.168.2.12 · lhvcp :9610 · Day 94
THE FUNDAMENTAL DISTINCTION
WHY LOCAL IS HARDER: LOCAL IS "ALL ALL"
Cloud gives you infrastructure for free. Local means YOU ARE THE INFRASTRUCTURE.
Every service that cloud manages transparently — DNS, TLS, auth, secrets, ingress, image pull, node repair — you must provide locally.
The upside: zero external dependency. When cloud is down, local runs forever.
The SOSTLE + V12 living systems must survive on local alone. That's the sovereign contract.
CLOUD GIVES FREE ✓
✓ DNS (Azure DNS / Route53)
✓ TLS certs (cert-manager + Let's Encrypt)
✓ Auth (Azure AD / OIDC)
✓ Image registry (ACR eosefleetacrdev)
✓ Secrets (AKV CSI driver)
✓ Ingress (nginx-ingress-controller managed)
✓ Node auto-repair / auto-scaling
✓ Load balancer (Azure LB)
✓ Monitoring (Azure Monitor)
✓ Backup (Azure Backup)
YOU PROVIDE LOCALLY ⚠
⚠ DNS: CoreDNS in lhvcp + /etc/hosts per silo
⚠ TLS: self-signed CA per cluster, distributed via Tailscale (hosts + CA sketch after this list)
⚠ Auth: k3s ServiceAccount tokens + OPA Rego policies
⚠ Registry: k3d built-in at forge:5001
⚠ Secrets: local k3s Secrets + AKV as backup only
⚠ Ingress: forge lhvcp LB at :9610 (Tailscale for WAN)
⚠ Node repair: you restart the machine
⚠ LB: klipper-lb (built into k3s)
⚠ Monitoring: pemos-hwmon + fleet-physics-sim
⚠ Backup: NAS + AKV kubeconfig backup
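A minimal sketch of the DNS and TLS items above, assuming the /etc/hosts approach and a plain openssl CA. Only forge's IP is confirmed; the commented entries, file names, and paths are placeholders:

# Hypothetical /etc/hosts entries pushed to every silo (only forge's IP is confirmed)
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.2.12  forge forge-registry.local
# 192.168.2.x  msclo   (placeholder)
# 192.168.2.x  yone    (placeholder)
EOF

# Self-signed CA per cluster (one-off on forge); ca.crt is then distributed via Tailscale
openssl genrsa -out lhvcp-ca.key 4096
openssl req -x509 -new -nodes -key lhvcp-ca.key -sha256 -days 3650 \
  -subj "/CN=forge-lhvcp-local-ca" -out lhvcp-ca.crt

# On each silo (Ubuntu path), trust the CA
sudo cp lhvcp-ca.crt /usr/local/share/ca-certificates/lhvcp-ca.crt
sudo update-ca-certificates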
SOSTLE LOCAL PRINCIPLE
L0-L4 are fully local by default. No cloud required for schema ingest, COI scoring, CLO gate (local OPA), HWMON floor check, bonixer routing.
L5 (fleet promotion) requires belt64 push to forge — but forge is LOCAL (192.168.2.12), not cloud. Belt64 runs on forge lhvcp.
L6-L7 are closed anyway. The entire open SOSTLE stack is local-survivable.
LHVCP CLUSTER SURVIVAL — FORGE (BLUEPRINT FOR ALL SILOS)
PROBLEM: k3d DIES ON WSL RESTART
k3d cluster nodes are Docker containers. When WSL restarts (shutdown/reboot), Docker stops and the cluster goes down with it.
Cluster data does not survive re-creating those containers unless host volumes are mounted.
On Windows, WSL auto-starts, but Docker Desktop may not bring the k3d cluster back up on its own.
Solution: NSSM service + Task Scheduler + persistent volumes.
SOLUTION: PERSISTENT VOLUMES
Create lhvcp WITH a host-mounted volume for qdrant data:
k3d cluster create lhvcp --port "9610:80@loadbalancer" --volume /home/ubu-cap/lhvcp-data:/data@server:0
qdrant PersistentVolume mounts to /data (hostPath PV/PVC sketch below)
Cluster restart = data survives
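A sketch of the corresponding hostPath PV/PVC over the /data mount created above, in the same kubectl-heredoc style used for OPA below. The object names and namespace are assumptions, not the actual V12 manifests:

# Minimal hostPath PV + PVC for qdrant (names and namespace illustrative)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qdrant-data
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qdrant-data
  namespace: default   # replace with the actual V12 qdrant namespace
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: manual
  resources:
    requests:
      storage: 10Gi
EOF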
WSL AUTO-START (WINDOWS TASK SCHEDULER)
# Create a Windows Task Scheduler task to restart k3d after WSL starts
# Run in PowerShell on the Windows host:
$action = New-ScheduledTaskAction -Execute "wsl.exe" `
  -Argument "-d Ubuntu-24.04 --exec bash -c 'k3d cluster start lhvcp 2>/dev/null || true'"
$trigger = New-ScheduledTaskTrigger -AtStartup
$settings = New-ScheduledTaskSettingsSet -ExecutionTimeLimit (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName "k3d-lhvcp-autostart" `
  -Action $action -Trigger $trigger -Settings $settings -RunLevel Highest -Force

# Verify: after next reboot, check:
# wsl -- k3d cluster list
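If the NSSM route from the solution line above is used instead of Task Scheduler, a hedged sketch (run in an elevated prompt on the Windows host; the service name is illustrative):

# Alternative: register the same cluster-start command as a Windows service via NSSM
nssm install k3d-lhvcp "C:\Windows\System32\wsl.exe" "-d Ubuntu-24.04 --exec bash -c 'k3d cluster start lhvcp'"
nssm set k3d-lhvcp Start SERVICE_DELAYED_AUTO_START
nssm start k3d-lhvcp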
BELT64 LOCAL MODE
# Belt64 ingress runs in forge lhvcp (LAN only)
# Other silos push via LAN, NOT internet
# From msclo:
curl -X POST http://192.168.2.12:9640/push \
  -H "Content-Type: application/json" \
  -d '{"silo":"msclo","prime":3,"gamma1_sig":"a3f2...","batch":[...]}'

# γ₁ signature = sha256(batch_json + "14.134725141734693")[:16]
# This IS the auth token. No Azure AD. No JWT. Pure γ₁-signed math.

# If forge is unreachable (LAN down): belt64 queues locally, pushes on reconnect
# Queue path: /home/ubu-cap/openclaw-fleet/fleet-sync/belt64-queue/
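A sketch of the γ₁ signature computation described in the comments above, assuming batch_json holds the exact JSON string being pushed:

# γ₁ signature: sha256(batch_json + "14.134725141734693"), first 16 hex chars
batch_json='{"silo":"msclo","prime":3,"batch":[]}'   # illustrative payload
gamma1_sig=$(printf '%s%s' "$batch_json" "14.134725141734693" | sha256sum | cut -c1-16)
echo "$gamma1_sig"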
KUBECONFIG BACKUP TO AKV
# After cluster create, store kubeconfig in AKV as lifeline
k3d kubeconfig write lhvcp
az keyvault secret set --vault-name forge-silo-kv \
  --name lhvcp-kubeconfig-forge \
  --value "$(k3d kubeconfig get lhvcp)"

# Restore after disaster:
az keyvault secret show --vault-name forge-silo-kv \
  --name lhvcp-kubeconfig-forge --query value -o tsv > ~/.kube/forge-lhvcp.yaml
export KUBECONFIG=~/.kube/forge-lhvcp.yaml
kubectl get nodes
LOCAL OPA — SOSTLE GATES WITHOUT AZURE
OPA IN LHVCP — NO AZURE AD REQUIRED
OPA (Open Policy Agent) runs as a pod in forge lhvcp
Rego policies enforce SOSTLE L0-L7 gates
Auth: k3s ServiceAccount JWT tokens + γ₁ signature
No Azure AD, no external OIDC, no internet required
Design doc: /sovereign-opa-helix
DEPLOY OPA INTO LHVCP
# Deploy OPA into forge lhvcp (run after lhvcp is created)
kubectl create namespace opa-system

# OPA deployment
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opa
  namespace: opa-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opa
  template:
    metadata:
      labels:
        app: opa
    spec:
      containers:
      - name: opa
        image: openpolicyagent/opa:latest
        args: ["run","--server","--addr=:8181","--log-level=info"]
        ports:
        - containerPort: 8181
---
apiVersion: v1
kind: Service
metadata:
  name: opa
  namespace: opa-system
spec:
  selector:
    app: opa
  ports:
  - port: 8181
    targetPort: 8181
EOF
REGO POLICIES — SOSTLE GATES
# SOSTLE L2 CLO gate (example Rego policy)
# File: policies/sostle-l2-clo.rego
# Rego v1 syntax, so it loads on openpolicyagent/opa:latest (OPA >= 1.0)
package sostle.l2

import rego.v1

default allow := false

# Allow if: CLO signature present + jurisdiction matches
allow if {
    input.clo_sig != ""
    input.jurisdiction_ok == true
    input.gamma1_sig != ""
    # gamma1_sig must start with sha256 of "14.134725141734693"
    startswith(input.gamma1_sig, "14")
}

# Deny with reason
deny contains reason if {
    not input.clo_sig
    reason := "CLO signature required for L2 gate"
}

deny contains reason if {
    not input.jurisdiction_ok
    reason := "Jurisdiction check failed (PHIPA/PIPEDA/OSFI not confirmed)"
}
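A sketch of loading and exercising this policy against the OPA pod deployed above, using OPA's standard REST API; the port-forward and input values are for testing only and illustrative:

# Load the policy into the running OPA and query the L2 gate
kubectl -n opa-system port-forward svc/opa 8181:8181 &
sleep 2

curl -X PUT http://localhost:8181/v1/policies/sostle-l2 \
  --data-binary @policies/sostle-l2-clo.rego

curl -s -X POST http://localhost:8181/v1/data/sostle/l2 \
  -H "Content-Type: application/json" \
  -d '{"input":{"clo_sig":"a3f2...","jurisdiction_ok":true,"gamma1_sig":"14ab..."}}'
# expected: allow true, deny empty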
SOSTLE × OPA GATE MAP
SOSTLE LAYER | OPA POLICY | INPUT REQUIRED | AUTH TYPE
L0 — Schema ingest | sostle.l0 (always allow) | none | none
L1 — COI score | sostle.l1 | gamma1_sig | γ₁ signature
L2 — CLO gate | sostle.l2 | clo_sig + jurisdiction_ok + gamma1_sig | ServiceAccount + γ₁
L3 — HWMON floor | sostle.l3 | floor_proof + silo_id | ServiceAccount
L4 — Bonixer gate | sostle.l4 | bonixer_verdict + gamma1_sig | ServiceAccount + γ₁
L5 — Fleet (gated) | sostle.l5 | belt64_push_ok + pemlaam_indexed | ServiceAccount (forge only)
L6-L7 — Closed | sostle.l6_7 (deny all) | n/a | closed
LOCAL IMAGE REGISTRY — NO ACR NEEDED
NOW: PULL FROM ACR, CACHE LOCAL
Currently: all V12 images pulled from eosefleetacrdev.azurecr.io
k3d nodes cache images locally after first pull
Works offline after first pull (Docker layer cache; pre-seed sketch below)
Sufficient for Day 94 — no change needed yet
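To keep that cache useful across a cluster re-create before the local registry exists, images can be pre-seeded from the host's Docker cache into the lhvcp nodes; a sketch, reusing the day94-v131 tag from the registry section below:

# Pull once while online, then import into the lhvcp nodes so no re-pull is needed
docker pull eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131
k3d image import eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131 -c lhvcp

# Verify the image landed in the node's containerd store
docker exec k3d-lhvcp-server-0 crictl images | grep mefine-static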
TARGET: LOCAL REGISTRY AT forge:5001
k3d built-in registry on forge
All silos pull from forge:5001 via LAN
Zero external pull = true airgap-ready
Long-term goal: all V12 images local
CREATE LOCAL REGISTRY
# Create k3d registry on forge
k3d registry create forge-registry.local --port 5001

# Create lhvcp cluster USING the local registry
k3d cluster create lhvcp \
  --port "9610:80@loadbalancer" \
  --port "9611:443@loadbalancer" \
  --servers 1 --agents 0 \
  --registry-use k3d-forge-registry.local:5001 \
  --volume /home/ubu-cap/lhvcp-data:/data@server:0

# Tag + push images to local registry
docker pull eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131
docker tag eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131 \
  forge-registry.local:5001/pemos/mefine-static:v12
docker push forge-registry.local:5001/pemos/mefine-static:v12

# Other silos pull from forge registry via LAN
# On msclo:
docker pull 192.168.2.12:5001/pemos/mefine-static:v12
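One assumption in the pull commands above: the forge registry serves plain HTTP on :5001, so each pulling silo's Docker daemon needs it listed as an insecure registry. A sketch using the standard Docker daemon config (merge rather than overwrite if daemon.json already has content):

# On msclo (and every other pulling silo): allow plain-HTTP pulls from forge
sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
{
  "insecure-registries": ["192.168.2.12:5001", "forge-registry.local:5001"]
}
EOF
sudo systemctl restart docker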
AIRGAP READINESS CHECKLIST
IMAGE | SIZE | LOCAL CACHED | PRIORITY
pemos/mefine-static | ~5MB | ○ PENDING | P1
pemos/laam-router | ~120MB | ○ PENDING | P1
pemos/laam-ingest | ~120MB | ○ PENDING | P1
qdrant/qdrant | ~80MB | ○ PENDING | P1
openpolicyagent/opa | ~65MB | ○ PENDING | P2
redis | ~35MB | ○ PENDING | P2
rancher/fleet-agent | ~200MB | ○ PENDING | P3 (future)
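A sketch of mirroring the pending rows above into the forge registry in one pass; the source tags are assumptions (day94-v131 for the pemos images, latest for upstream):

# Mirror the pending images from ACR into the forge registry (source tags illustrative)
for img in pemos/mefine-static pemos/laam-router pemos/laam-ingest; do
  docker pull "eosefleetacrdev.azurecr.io/${img}:day94-v131"
  docker tag  "eosefleetacrdev.azurecr.io/${img}:day94-v131" "forge-registry.local:5001/${img}:v12"
  docker push "forge-registry.local:5001/${img}:v12"
done
# Upstream images (qdrant, OPA, redis) come straight from Docker Hub
for img in qdrant/qdrant openpolicyagent/opa redis; do
  docker pull "${img}:latest"
  docker tag  "${img}:latest" "forge-registry.local:5001/${img}:latest"
  docker push "forge-registry.local:5001/${img}:latest"
done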
ALL-SILO LOCAL SOVEREIGN CHECKLIST
SILO | k3d | lhvcp | namespaces | belt64 | PEMLAAM | OPA | γ₁-signed
⚓️ forge · GID-BLD-001 · ADMIRAL BUILDER | needed | pending | pending | pending | pending | pending | partial
⚖️ msclo · GID-CLO-001 · ADMIRAL LAW | needed | pending | pending | pending | staging :26433 | pending | TRENDAL-MSCLO
✓ yone · GID-ONE-001 · ADMIRAL VALIDATOR | installed | LIVE :9600 | 3 ns | pending | qdrant 1300v | pending | TRENDAL-YONE
★ lilo · GID-FAM-001 · YUNI-4 · Namir | installed | LIVE :9600 | 3 ns | pending | pending | pending | GID-FAM-001
🔱 msi01 · L0 ANCHOR · ADMIRAL | needed | pending | pending | pending | Docker :9340 | pending | S0 ANCHOR
💻 pcdev · GID-MATH-001 · Lean4 | needed | pending | pending | pending | pending | pending | pending
📺 lounge · RHONEWOOD · Ring 1 | needed | pending | pending | pending | pending | pending | pending
PRIORITY ORDER — LOCAL SOVEREIGN ROLLOUT
# | ACTION | SILO | BLOCKER
1 | Install k3d + create lhvcp cluster | forge | SSH access (fix sshd on forge)
2 | Apply mefine-v12 namespaces (05-apply-lhvcp.sh) | forge | k3d installed
3 | Belt64 ingress live + first push test (msi01 → forge) | forge | namespaces applied
4 | Install k3d + create lhvcp (GID-CLO-001) | msclo | forge lhvcp live first
5 | Deploy OPA into lhvcp + SOSTLE Rego policies | forge | lhvcp running
6 | Local registry at forge:5001 + image cache | forge | k3d cluster with --registry-use
7 | rancher/fleet agent (GitOps deploy from git push) | all lhvcp | forge + msclo lhvcp live
8 | WSL auto-start Task Scheduler | forge + msclo | Windows access
"Local is all all. The cloud is a backup. When the internet goes down, the fleet keeps running. When Azure goes down, pemos.ca may drop — but every silo, every PEMCLAU graph, every belt64 pouch, every SOSTLE gate: still running. That is the sovereign contract." — Admiral Rick, Day 94