Cloud gives you infrastructure for free. Local means YOU ARE THE INFRASTRUCTURE.
Every service that cloud manages transparently — DNS, TLS, auth, secrets, ingress, image pull, node repair — you must provide locally.
The upside: zero external dependency. When cloud is down, local runs forever.
The SOSTLE + V12 living systems must survive on local alone. That's the sovereign contract.
CLOUD GIVES FREE ✓
✓ DNS (Azure DNS / Route53)
✓ TLS certs (cert-manager + Let's Encrypt)
✓ Auth (Azure AD / OIDC)
✓ Image registry (ACR eosefleetacrdev)
✓ Secrets (AKV CSI driver)
✓ Ingress (nginx-ingress-controller managed)
✓ Node auto-repair / auto-scaling
✓ Load balancer (Azure LB)
✓ Monitoring (Azure Monitor)
✓ Backup (Azure Backup)
YOU PROVIDE LOCALLY ⚠
⚠ DNS: CoreDNS in lhvcp + /etc/hosts per silo
⚠ TLS: self-signed CA per cluster, distributed via Tailscale
⚠ Ingress: forge lhvcp LB at :9610 (Tailscale for WAN)
⚠ Node repair: you restart the machine
⚠ LB: klipper-lb (built into k3s)
⚠ Monitoring: pemos-hwmon + fleet-physics-sim
⚠ Backup: NAS + AKV kubeconfig backup
SOSTLE LOCAL PRINCIPLE
L0-L4 are fully local by default. No cloud required for schema ingest, COI scoring, CLO gate (local OPA), HWMON floor check, bonixer routing. L5 (fleet promotion) requires belt64 push to forge — but forge is LOCAL (192.168.2.12), not cloud. Belt64 runs on forge lhvcp. L6-L7 are closed anyway. The entire open SOSTLE stack is local-survivable.
LHVCP CLUSTER SURVIVAL — FORGE (BLUEPRINT FOR ALL SILOS)
PROBLEM: k3d DIES ON WSL RESTART
k3d clusters are Docker containers. When WSL restarts (shutdown/reboot), Docker stops and the cluster's containers stop with it.
Data inside the cluster is lost on recreate unless host volumes are mounted.
After a Windows reboot, nothing brings the k3d cluster back automatically: WSL and Docker Desktop may come up, but the cluster stays stopped.
Solution: NSSM service + Task Scheduler + persistent volumes.
SOLUTION: PERSISTENT VOLUMES
Create lhvcp WITH a host-mounted volume for qdrant data:
k3d cluster create lhvcp --port "9610:80@loadbalancer" --volume /home/ubu-cap/lhvcp-data:/data@server:0
qdrant PersistentVolume mounts to /data
Cluster restart = data survives
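The qdrant claim can be pinned to that host mount with an explicit PV/PVC pair. A minimal sketch, assuming a pemos namespace and a 2Gi size (names, namespace, and size are illustrative, not from the lhvcp manifests):

```
# Hypothetical PV/PVC pinning qdrant storage to the host-mounted /data path.
# Names, namespace, and size are illustrative; adjust to the real lhvcp manifests.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qdrant-data
spec:
  capacity:
    storage: 2Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data          # the k3d --volume mount inside the server node
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qdrant-data
  namespace: pemos
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
  volumeName: qdrant-data    # bind to the hostPath PV above
  storageClassName: ""       # skip the default local-path provisioner
EOF
```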
WSL AUTO-START (WINDOWS TASK SCHEDULER)
# Create a Windows Task Scheduler task to restart k3d after WSL starts
# Run in PowerShell on the Windows host:
$action = New-ScheduledTaskAction -Execute "wsl.exe" `
-Argument "-d Ubuntu-24.04 --exec bash -c 'k3d cluster start lhvcp 2>/dev/null || true'"
$trigger = New-ScheduledTaskTrigger -AtStartup
$settings = New-ScheduledTaskSettingsSet -ExecutionTimeLimit (New-TimeSpan -Minutes 5)
Register-ScheduledTask -TaskName "k3d-lhvcp-autostart" `
-Action $action -Trigger $trigger -Settings $settings -RunLevel Highest -Force
# Verify: after next reboot, check:
# wsl -- k3d cluster list
BELT64 LOCAL MODE
# Belt64 ingress runs in forge lhvcp (LAN only)
# Other silos push via LAN, NOT internet
# From msclo:
curl -X POST http://192.168.2.12:9640/push \
-H "Content-Type: application/json" \
-d '{"silo":"msclo","prime":3,"gamma1_sig":"a3f2...","batch":[...]}'
# γ₁ signature = sha256(batch_json + "14.134725141734693")[:16]
# This IS the auth token. No Azure AD. No JWT. Pure γ₁-signed math.
# If forge is unreachable (LAN down): belt64 queues locally, pushes on reconnect
# Queue path: /home/ubu-cap/openclaw-fleet/fleet-sync/belt64-queue/
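The γ₁ signature in the push above can be computed with coreutils alone. A minimal sketch of the scheme as described (the `gamma1_sig` helper name is ours, not an existing tool):

```shell
# gamma1_sig: sha256(batch_json + "14.134725141734693"), truncated to 16 hex chars,
# per the scheme above. Helper name is illustrative.
gamma1_sig() {
  printf '%s14.134725141734693' "$1" | sha256sum | cut -c1-16
}

batch='{"silo":"msclo","prime":3,"batch":[]}'
sig=$(gamma1_sig "$batch")
echo "$sig"   # 16 lowercase hex chars; deterministic for the same batch_json
```

The same 16-char value goes into the `gamma1_sig` field of the belt64 push body.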
KUBECONFIG BACKUP TO AKV
# After cluster create, store kubeconfig in AKV as lifeline
k3d kubeconfig write lhvcp
az keyvault secret set --vault-name forge-silo-kv \
--name lhvcp-kubeconfig-forge \
--value "$(k3d kubeconfig get lhvcp)"
# Restore after disaster:
az keyvault secret show --vault-name forge-silo-kv \
--name lhvcp-kubeconfig-forge --query value -o tsv > ~/.kube/forge-lhvcp.yaml
export KUBECONFIG=~/.kube/forge-lhvcp.yaml
kubectl get nodes
LOCAL OPA — SOSTLE GATES WITHOUT AZURE
OPA IN LHVCP — NO AZURE AD REQUIRED
OPA (Open Policy Agent) runs as a pod in forge lhvcp
Rego policies enforce SOSTLE L0-L7 gates
Auth: k3s ServiceAccount JWT tokens + γ₁ signature
No Azure AD, no external OIDC, no internet required
Design doc: /sovereign-opa-helix
# SOSTLE L2 CLO gate (example Rego policy)
# File: policies/sostle-l2-clo.rego
package sostle.l2

default allow = false

# Allow if: CLO signature present + jurisdiction matches + γ₁ signature well-formed
allow {
    input.clo_sig != ""
    input.jurisdiction_ok == true
    input.gamma1_sig != ""
    # placeholder format check: "14" prefix from γ₁ = 14.134725141734693
    # (a production gate would verify the full sha256 signature)
    startswith(input.gamma1_sig, "14")
}

# Deny with reason
deny[reason] {
    not input.clo_sig
    reason := "CLO signature required for L2 gate"
}

deny[reason] {
    not input.jurisdiction_ok
    reason := "Jurisdiction check failed (PHIPA/PIPEDA/OSFI not confirmed)"
}
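With the policy loaded, any pod can ask the gate over OPA's standard Data API (POST /v1/data/<package path>). A sketch assuming OPA is exposed in-cluster as a Service named opa on its default port 8181 (service name and the sample input values are illustrative):

```
# Query the sostle.l2 decision document (Data API: POST /v1/data/sostle/l2).
# "opa:8181" assumes an in-cluster Service named opa on OPA's default port.
curl -s -X POST http://opa:8181/v1/data/sostle/l2 \
  -H "Content-Type: application/json" \
  -d '{"input":{"clo_sig":"clo-...","jurisdiction_ok":true,"gamma1_sig":"14a3f2..."}}'
# Expected response shape: {"result":{"allow":true,"deny":[]}}
```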
SOSTLE × OPA GATE MAP
SOSTLE LAYER       | OPA POLICY               | INPUT REQUIRED                         | AUTH TYPE
L0 — Schema ingest | sostle.l0 (always allow) | none                                   | none
L1 — COI score     | sostle.l1                | gamma1_sig                             | γ₁ signature
L2 — CLO gate      | sostle.l2                | clo_sig + jurisdiction_ok + gamma1_sig | ServiceAccount + γ₁
L3 — HWMON floor   | sostle.l3                | floor_proof + silo_id                  | ServiceAccount
L4 — Bonixer gate  | sostle.l4                | bonixer_verdict + gamma1_sig           | ServiceAccount + γ₁
L5 — Fleet (gated) | sostle.l5                | belt64_push_ok + pemlaam_indexed       | ServiceAccount (forge only)
L6-L7 — Closed     | sostle.l6_7 (deny all)   | n/a                                    | closed
LOCAL IMAGE REGISTRY — NO ACR NEEDED
NOW: PULL FROM ACR, CACHE LOCAL
Currently: all V12 images pulled from eosefleetacrdev.azurecr.io
k3d nodes cache images locally after first pull
Works offline after first pull (Docker layer cache)
Sufficient for Day 94 — no change needed yet
TARGET: LOCAL REGISTRY AT forge:5001
k3d built-in registry on forge
All silos pull from forge:5001 via LAN
Zero external pull = true airgap-ready
Long-term goal: all V12 images local
CREATE LOCAL REGISTRY
# Create k3d registry on forge
k3d registry create forge-registry.local --port 5001
# Create lhvcp cluster USING the local registry
k3d cluster create lhvcp \
--port "9610:80@loadbalancer" \
--port "9611:443@loadbalancer" \
--servers 1 --agents 0 \
--registry-use k3d-forge-registry.local:5001 \
--volume /home/ubu-cap/lhvcp-data:/data@server:0
# Tag + push images to local registry
# (note: k3d names the registry k3d-forge-registry.local; for the push to work,
#  that name, or forge-registry.local as used below, must resolve on the host,
#  e.g. via an /etc/hosts entry to 127.0.0.1, or push to localhost:5001 instead)
docker pull eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131
docker tag eosefleetacrdev.azurecr.io/pemos/mefine-static:day94-v131 \
forge-registry.local:5001/pemos/mefine-static:v12
docker push forge-registry.local:5001/pemos/mefine-static:v12
# Other silos pull from forge registry via LAN
# On msclo: docker pull 192.168.2.12:5001/pemos/mefine-static:v12
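Pulls from 192.168.2.12:5001 go over plain HTTP, so Docker on each pulling silo must whitelist the registry first. A sketch of the daemon.json change (path is the Linux default; this overwrites the file, so merge by hand if it already has content):

```
# Mark the forge registry as insecure (HTTP, no TLS) on each pulling silo.
# NOTE: tee overwrites /etc/docker/daemon.json; merge manually if it exists.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["192.168.2.12:5001"]
}
EOF
sudo systemctl restart docker
```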
AIRGAP READINESS CHECKLIST
IMAGE               | SIZE   | LOCAL CACHED | PRIORITY
pemos/mefine-static | ~5MB   | ○ PENDING    | P1
pemos/laam-router   | ~120MB | ○ PENDING    | P1
pemos/laam-ingest   | ~120MB | ○ PENDING    | P1
qdrant/qdrant       | ~80MB  | ○ PENDING    | P1
openpolicyagent/opa | ~65MB  | ○ PENDING    | P2
redis               | ~35MB  | ○ PENDING    | P2
rancher/fleet-agent | ~200MB | ○ PENDING    | P3 (future)
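Cache status for the checklist above can be verified straight from Docker's image list. A minimal sketch (the `check_cached` helper is ours; it reads cached repository names on stdin):

```shell
# check_cached IMAGE... reads cached repository names (one per line) on stdin and
# prints CACHED / PENDING per required image. Helper name is illustrative.
check_cached() {
  cache=$(cat)
  for img in "$@"; do
    if printf '%s\n' "$cache" | grep -Fqx "$img"; then
      echo "CACHED  $img"
    else
      echo "PENDING $img"
    fi
  done
}

# On a silo:
#   docker image ls --format '{{.Repository}}' | check_cached \
#     pemos/mefine-static pemos/laam-router pemos/laam-ingest qdrant/qdrant
```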
ALL-SILO LOCAL SOVEREIGN CHECKLIST
SILO                                     | k3d         | lhvcp        | namespaces | belt64    | PEMLAAM          | OPA       | γ₁-signed
⚓️ forge · GID-BLD-001 · ADMIRAL BUILDER | ✗ needed    | ✗ pending    | ✗ pending  | ✗ pending | ✗ pending        | ✗ pending | ○ partial
⚖️ msclo · GID-CLO-001 · ADMIRAL LAW     | ✗ needed    | ✗ pending    | ✗ pending  | ✗ pending | ✓ staging :26433 | ✗ pending | ✓ TRENDAL-MSCLO
✓ yone · GID-ONE-001 · ADMIRAL VALIDATOR | ✓ installed | ✓ LIVE :9600 | ✓ 3 ns     | ✗ pending | ✓ qdrant 1300v   | ✗ pending | ✓ TRENDAL-YONE
★ lilo · GID-FAM-001 · YUNI-4 · Namir    | ✓ installed | ✓ LIVE :9600 | ✓ 3 ns     | ✗ pending | ✗ pending        | ✗ pending | ○ GID-FAM-001
🔱 msi01 · L0 ANCHOR · ADMIRAL           | ✗ needed    | ✗ pending    | ✗ pending  | ✗ pending | ✓ Docker :9340   | ✗ pending | ✓ S0 ANCHOR
💻 pcdev · GID-MATH-001 · Lean4          | ✗ needed    | ✗ pending    | ✗ pending  | ✗ pending | ✗ pending        | ✗ pending | ✗ pending
📺 lounge · RHONEWOOD · Ring 1           | ✗ needed    | ✗ pending    | ✗ pending  | ✗ pending | ✗ pending        | ✗ pending | ✗ pending
PRIORITY ORDER — LOCAL SOVEREIGN ROLLOUT
# | ACTION                                                | SILO          | BLOCKER
1 | Install k3d + create lhvcp cluster                    | forge         | SSH access (fix sshd on forge)
2 | Apply mefine-v12 namespaces (05-apply-lhvcp.sh)       | forge         | k3d installed
3 | Belt64 ingress live + first push test (msi01 → forge) | forge         | namespaces applied
4 | Install k3d + create lhvcp (GID-CLO-001)              | msclo         | forge lhvcp live first
5 | Deploy OPA into lhvcp + SOSTLE Rego policies          | forge         | lhvcp running
6 | Local registry at forge:5001 + image cache            | forge         | k3d cluster with --registry-use
7 | rancher/fleet agent (GitOps deploy from git push)     | all lhvcp     | forge + msclo lhvcp live
8 | WSL auto-start Task Scheduler                         | forge + msclo | Windows access
"Local is all. The cloud is a backup. When the internet goes down, the fleet keeps running. When Azure goes down, pemos.ca may drop — but every silo, every PEMCLAU graph, every belt64 pouch, every SOSTLE gate: still running. That is the sovereign contract." — Admiral Rick, Day 94