forge · lianli01 · ADMIRAL BUILDER · GID-BLD-001 · L1 DESKTOP

TRB — BUILDER SILO

ADMIRAL RICK · HELMSMAN · PHAT BUILDER MODE · TRIO MASTER · γ₁ = 14.134725141734693
forge = 192.168.2.12 · RTX 4090 24GB · i9-14900KS 24c · 64GB DDR5 · Day 94
FORGE HARDWARE
FORGE · lianli01 ADMIRAL BUILDER
IP: 192.168.2.12 (LAN) · no Tailscale IP (reach via LAN)
GPU: RTX 4090 24GB GDDR6X
CPU: i9-14900KS · 24 cores · 6.2GHz
RAM: 64GB DDR5 (WSL2 sees 32GB — fix below)
Storage: 14TB · OBT BC1
NAS: /mnt/nas-diskpool/ → //192.168.2.20/diskpool
WSL kernel: 6.6.x microsoft-standard-WSL2
CATAN shape: HARBOR MASTER (RTX 4090, local full high)
SILO ROLE + GID
GID: GID-BLD-001
Role: ADMIRAL BUILDER · phat builder mode
Tier: L1 DESKTOP (fleet L1, alongside msclo + yone)
NOT yUNI Trio — L1 desktop anchor for builder work
SOSTLE: L0-L4 OPEN · L5 GATED · L6-L7 CLOSED
Docker stack: 30+ containers (full PEMOS engine)
Ollama (Windows): deepseek-r1:32b, qwen2.5-coder:32b, qwq:32b, qwen3:14b/8b, nomic-embed-text
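The SOSTLE wall spec above can be encoded as a small lookup for scripts that gate by level (a sketch; level names and states are taken from the L0-L7 line above):

```shell
# sostle_state: wall state for a SOSTLE level, per the spec above.
# L0-L4 OPEN · L5 GATED · L6-L7 CLOSED; anything else is UNKNOWN.
sostle_state() {
  case "$1" in
    L[0-4]) echo OPEN ;;
    L5)     echo GATED ;;
    L[67])  echo CLOSED ;;
    *)      echo UNKNOWN ;;
  esac
}

sostle_state L5   # → GATED
```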
FORGE TRIO — forge is MASTER
⚓️ FORGE TRIO
forge (192.168.2.12) — MASTER — RTX 4090 24GB · 64GB DDR5
pcdev (192.168.2.16) — worker — RTX 4080 16GB · 64GB DDR5
lounge (192.168.50.175) — worker — RTX 4090 24GB · 32GB DDR5
steam-deck (192.168.50.193) — worker — AMD RDNA2 1GB · 16GB LPDDR5

Combined GPU VRAM (all trio): 24 + 16 + 24 + 1 = 65GB total
Combined RAM: 64 + 64 + 32 + 16 = 176GB total
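The combined totals can be sanity-checked with shell arithmetic (numbers copied from the trio roster above):

```shell
# Per-node specs from the roster: forge, pcdev, lounge, steam-deck.
vram_total=$((24 + 16 + 24 + 1))   # GPU VRAM in GB
ram_total=$((64 + 64 + 32 + 16))   # system RAM in GB
echo "trio VRAM: ${vram_total}GB · trio RAM: ${ram_total}GB"
# → trio VRAM: 65GB · trio RAM: 176GB
```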
FLEET CONNECTION OPTIONS
OPTION A — STANDALONE
forge lhvcp owns its trio only. pcdev + lounge + steam-deck as agents. Local builder cluster, isolated. Best for: build/test/ship without fleet overhead.
OPTION B — JOIN YONE
forge joins yone lhvcp as external worker/agent. Fleet validator network includes forge builds. Best for: PEMCLAU validation + ARC scoring of forge outputs.
OPTION C — MSI01 DIRECT
msi01 merges forge KUBECONFIG. kubectl from msi01 targets forge lhvcp directly. Best for: Admiral command — msi01 as fleet K8S Admiral driving forge builds.
WSL RAM FIX
# forge currently sees 32GB (WSL2 default = 50% of 64GB physical)
# Fix: add a memory= line under a [wsl2] section in the Windows .wslconfig
# (a bare memory= line with no [wsl2] section header is ignored by WSL2)
# 1. On forge (from WSL terminal):
printf '[wsl2]\nmemory=56GB\n' >> /mnt/c/Users/ubu-cap/.wslconfig
cat /mnt/c/Users/ubu-cap/.wslconfig   # verify — memory= must sit under [wsl2]
# 2. From Windows PowerShell on forge:
wsl --shutdown
# wait 5 seconds, then:
wsl
# After restart: free -h should show ~56GB
FORGE CREW
📊 ADMIRAL RICK
DATA · ANALYTICS · ATMOS
pemos-atmos-rick :9394. Spindle physics. PTTE floor proof. WPA reports.
⚓️ HELMSMAN
NAVIGATION · pemos.work
Route planning. Domain navigation. Fleet pathfinder.
♞ CONWAY
GOATS: Conway / Turing / Gauss
Mathematics engine. GOATs in residence. joffe-math lineage.
🔧 BOSUN
SRE · DEVOPS
Container health. Forge Docker stack. NAS + storage ops.
💻 CODY
CODE · BUILD
phat builder mode. Go + Python + Rust. Container builds. CI/CD.
🐱 JOHN
OSS · GITHUB
eose-sre repos. Open source strategy. PR/merge master.
🎯 BOB
CPO · PRODUCT GATES
One-pagers. Demo scripts. Pitch decks. Product briefs.
🏄 LUFFY
ARC RUNNER
ARC-AGI benchmark. 64% fleet score. Runs against forge Ollama models.
📡 SIGNALS
INTEL · PEMCLAU
Graph knows. PEMCLAU Q+A. Shadow vectors. Fleet wiki.
💰 MO
REVENUE · TOKENOMICS
Unit economics. Investor briefs. Revenue models. Pricing strategy.
🔱 IMHOTEP
ADMIRAL · CEO
Sovereign mandate. EOSE Labs Inc. All engines cleared Day 94.
NOTE: forge SSH not yet reachable from msi01 (SSH closes immediately — sshd config issue). Git status shown as Day 94 known state.
openclaw-fleet primary
Branch: main
Last commit: Day 94 — crew-spiral-msclo V12 + msclo-xml-spine + trb-bld [gamma1]
Remotes: origin → github.com/eose-sre/openclaw-fleet
Status: clean (all changes committed Day 94)
Path on forge: /home/ubu-cap/openclaw-fleet (assumed same as msi01)
eose-sre repos org
eose-website: main (Astro, Docker)
pemos: main (scaffold)
lilo-fleet: main + sorry-flow + creative + deseof-daily
All repos: private · eose-sre org
GitHub token: stored in forge-silo-kv
FIX FORGE SSH (to enable remote git status)
# On forge Windows side (PowerShell) — check if sshd is running
Get-Service sshd
# If not running:
Start-Service sshd
Set-Service -Name sshd -StartupType Automatic
# Add msi01 public key to forge authorized_keys
# On msi01:
cat ~/.ssh/id_rsa.pub
# On forge WSL (paste the msi01 key in place of the placeholder):
echo "msi01_pubkey" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# NOTE: if the sshd above is Windows OpenSSH Server (not a WSL sshd), keys go in
# C:\Users\<user>\.ssh\authorized_keys — or %ProgramData%\ssh\administrators_authorized_keys
# for admin accounts — not the WSL home directory.
AZURE RESOURCES
RESOURCE               TYPE                        RG / LOCATION        STATUS
forge-silo-kv          AKV (Key Vault)             rg-eose-msi01-dev    ✓ EXISTS
eosefleetacrdev        ACR (Container Registry)    rg-eose-aks-dev      ✓ SHARED FLEET
rg-eose-msi01-dev      Resource Group              canadacentral        ✓ EXISTS
rg-eose-localan-dev    Resource Group (lifeline)   canadacentral        ○ PLANNED
PROVISION rg-eose-localan-dev (LIFELINE RG)
# Lifeline RG for ALL local LAN silos (forge, pcdev, lounge, lilo, msclo, yone)
# Run from msi01:
az group create --name rg-eose-localan-dev --location canadacentral
az keyvault create --name localan-silo-kv --resource-group rg-eose-localan-dev --location canadacentral
# Move forge-silo-kv to localan RG (or keep in msi01 RG — both valid)
NAS + STORAGE
ALEXANDER NAS · 192.168.2.20
forge WSL mount: /mnt/nas-diskpool/ → //192.168.2.20/diskpool
NAS free: ~9.1TB (baseline Apr 30 2026, 91% used)
Alert threshold: < 5TB free
OBT BC1: forge storage primary
Cross-silo transfer: use NAS, NOT SCP/HTTP/git
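The 5TB alert threshold above can be watched with a short df sketch (POSIX `df -Pk` so it works in WSL and busybox alike; path from the mount line above):

```shell
# check_nas_free_gb: available space in whole GB for a mount point (POSIX df -Pk).
check_nas_free_gb() {
  df -Pk "$1" | awk 'NR==2 { print int($4 / 1048576) }'
}

NAS_PATH=/mnt/nas-diskpool   # forge WSL mount from above
THRESHOLD_GB=5000            # alert threshold: < 5TB free
free_gb=$(check_nas_free_gb "$NAS_PATH" 2>/dev/null)
free_gb=${free_gb:-0}        # unmounted path reads as 0 → alerts
if [ "$free_gb" -lt "$THRESHOLD_GB" ]; then
  echo "ALERT: diskpool free ${free_gb}GB (below ${THRESHOLD_GB}GB)"
else
  echo "OK: diskpool free ${free_gb}GB"
fi
```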
FORGE LHVCP BLUEPRINT — GID-BLD-001
STATUS NOT YET INSTALLED
k3d: NOT INSTALLED on forge (kubectl exists but k3d missing)
lhvcp cluster: not yet created
Prerequisite: fix SSH access OR run commands directly on forge
LB ports reserved: 9610 (HTTP) + 9611 (HTTPS)
Port map: lilo=9600/9601 · forge=9610/9611 · yone=9620/9621 · msclo=9630/9631
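The reserved port map can be captured as a lookup so scripts derive LB endpoints per silo (a sketch using the four pairs above):

```shell
# lhvcp_ports: print "HTTP HTTPS" loadbalancer ports for a silo,
# straight from the reserved port map above.
lhvcp_ports() {
  case "$1" in
    lilo)  echo "9600 9601" ;;
    forge) echo "9610 9611" ;;
    yone)  echo "9620 9621" ;;
    msclo) echo "9630 9631" ;;
    *)     echo "unknown silo: $1" >&2; return 1 ;;
  esac
}

lhvcp_ports forge   # → 9610 9611
```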
STEP 1 — INSTALL k3d (run on forge WSL)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
k3d version   # verify
STEP 2 — CREATE FORGE LHVCP CLUSTER
k3d cluster create lhvcp \
  --port "9610:80@loadbalancer" \
  --port "9611:443@loadbalancer" \
  --servers 1 --agents 0
# Verify
k3d cluster list
kubectl get nodes
STEP 3 — NAMESPACES (same 3 as lilo)
kubectl create namespace utp-system
kubectl create namespace utf-system
kubectl create namespace shadow-system
STEP 4 — SOSTLE CONFIGMAP (GID-BLD-001)
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: sostle-walls
  namespace: utp-system
data:
  gid: GID-BLD-001
  silo: forge
  role: ADMIRAL_BUILDER
  tier: L1_DESKTOP
  L0: OPEN
  L1: OPEN
  L2: OPEN
  L3: OPEN
  L4: OPEN
  L5: GATED
  L6: CLOSED
  L7: CLOSED
  trio_master: "true"
  trio_members: "pcdev:192.168.2.16,lounge:192.168.50.175,steam-deck:192.168.50.193"
  fleet_lhvcp_join: "yone:allowed,msi01:allowed"
  ollama_host: "172.17.0.1:11434"
EOF
STEP 5 — KUBECONFIG + AKV STORE
k3d kubeconfig write lhvcp
az keyvault secret set --vault-name forge-silo-kv \
  --name lhvcp-kubeconfig-forge \
  --value "$(k3d kubeconfig get lhvcp)"
TRIO REGISTRATION
pcdev · .16
RTX 4080 16GB · 64GB DDR5
i9 12th Gen 16c 5.2GHz
Option A: k3s agent joining forge lhvcp (add as k3d agent node)
Option B: own lhvcp + kubeconfig merge to forge
joffe-math services: :9383/:9384/:9385 (need nohup restart after WSL reboot)
Status: LAN reachable from forge directly
lounge · .175
RTX 4090 24GB · 32GB DDR5
i9-12900K 12th Gen 16c
Role: GPU worker — second 4090 in forge trio
Total with forge: 48GB VRAM across 2x RTX 4090
WSL2 Ring 1
Status: different subnet (.50.x vs .2.x) — verify routing
steam-deck · .193
AMD RDNA2 1GB · 16GB LPDDR5
Zen2+RDNA2 4c 3.5GHz
Role: ARM/low-power node · edge testing
Useful for: lightweight agent workloads, UI testing, offline scenarios
Status: same .50.x subnet as lounge
FLEET JOIN OPTIONS
# Option B: forge joins yone lhvcp as external worker
# On yone: k3d kubeconfig get lhvcp → extract server URL + token
# On forge: add yone lhvcp context to KUBECONFIG
# kubectl --context=yone-lhvcp get nodes

# Option C: msi01 drives forge lhvcp via KUBECONFIG merge
# On msi01:
KUBECONFIG=~/.kube/config:/path/to/forge-lhvcp.yaml kubectl config get-contexts
# Or pull the stored kubeconfig from AKV:
az keyvault secret show --vault-name forge-silo-kv --name lhvcp-kubeconfig-forge
FORGE DOCKER STACK — PEMOS ENGINE (30+ containers)
CONTAINER              PORT         ROLE
pemos-carmac-portal    :9351        CARMAC portal
pemos-chess            :9350        meek-chess-engine · floor-plan-forge
pemos-laam-router      :9340        LAAM routing
pemos-laam-ingest      :9346        18-wave ingest
pemos-laam-ground      :9344        grounding layer
pemos-laam-validate    :9345        validation gate
pemos-laam-operator    :9347        operator layer
pemos-rhone            :9413        RHONE/QC engine
pemos-pelego           :9362        dps=64 · sigma_gate=0.5
pemos-atmos-rick       :9394        ATMOS physics · Admiral Rick
pemos-hecke-twist      :9395        Hecke operator twist
pemos-onion-tandem     :9392        onion routing tandem
pemos-pathrouter       :9393        path routing
pemos-alphastar        :9375        AlphaStar engine
pemos-novelty          :9412        novelty detection
pemos-mal              :9334        MAL layer
pemos-wake-reader      :9355        wake reader
pemos-campcanmirror    :9337        camp can mirror
pemos-egyptian         :9342        Egyptian courts engine
pemos-pdf-serve        :9402        PDF server
pemos-fleet-wiki       :9400        fleet wiki
forge-utf-portal       :9404        pemos-portal v1.8.3
pemos-qdrant           :6333-6334   Qdrant vector DB
pemos-redis            :6379        Redis primary
forge-redis            :6380        Redis secondary
forge-nginx            :80/443      Nginx reverse proxy
pemos-gateway          :18792       OpenClaw gateway
utpemos-gateway        :18832       UTP gateway
utpemos-master         :18831       UTP master
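A quick liveness sweep over the table's ports can be done with bash's /dev/tcp (a sketch; forge's LAN IP from above, and coreutils `timeout` keeps unreachable hosts from hanging the loop):

```shell
# probe: report up/down for host:port using bash's /dev/tcp redirection.
probe() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo up || echo down
}

# Sweep a few ports from the table above against forge's LAN IP:
for p in 9351 9350 9400 9404 6379; do
  echo "forge:$p $(probe 192.168.2.12 "$p")"
done
```

Swap in any subset of the table's ports; a `down` on a container port is a cue to check `docker ps` on forge.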
OLLAMA (WINDOWS SIDE)
Models loaded on forge Windows Ollama:
  deepseek-r1:32b (32B params, R1 reasoning)
  qwen2.5-coder:32b (32B params, code specialist)
  qwq:32b (32B params, reasoning/math)
  qwen3:14b (14B params, fast general)
  qwen3:8b (8B params, ultra-fast)
  nomic-embed-text (embedding model, 768-dim)
WSL endpoint: http://172.17.0.1:11434 (Windows host from WSL)
ARC runner (Luffy) targets this endpoint for benchmark runs
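A minimal request body for the endpoint above, assuming the standard Ollama /api/generate and /api/tags routes:

```shell
# Minimal body for Ollama's /api/generate route; "stream": false asks for
# one JSON object instead of a token stream. Endpoint and model from above.
OLLAMA=http://172.17.0.1:11434
body='{"model": "qwen3:8b", "prompt": "say ok", "stream": false}'
echo "$body"

# On forge WSL:
#   curl -s "$OLLAMA/api/generate" -d "$body"
# List loaded models:
#   curl -s "$OLLAMA/api/tags"
```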
BUILDER OUTPUT LINKS
BUILDER LIBRARY
All builder formats · TRABR/LABR/ABR series · msi01/forge output helix
CREW SPIRAL — FORGE
Forge crew radial viz · Admiral Rick center · GOATs: Conway/Turing/Gauss
LOCO GALAXY — FORGE
LOCO routing galaxy for forge · model routing arcs
FLOOR PLAN — FORGE
Forge floor plan · AERON FORGE CARMAC lianli01 · DK-05F
FORGE HOME
Forge silo home page · V11
SSO + AUTH
ACCOUNT                 ROLE                                                          STATUS
entorchsvc@gmail.com    SRE ops · GoDaddy · GitLab · Docker Desktop · Azure billing   ✓ ACTIVE
eosesreops@gmail.com    SRE secondary · sre@eose.ca alias                             ✓ ACTIVE
kewinjoffe@gmail.com    Azure SSO · primary identity                                  ✓ ACTIVE
Azure tenant: 233118ca-5f7f-4256-99e8-1b7a273e4673
FORGE KEY VAULT
forge-silo-kv EXISTS
Location: rg-eose-msi01-dev (shared with msi01 AKVs)
ACR credentials: eosefleetacrdev login stored here
GitHub PAT: stored for forge git ops
lhvcp-kubeconfig-forge: will be stored after k3d setup
Access: az keyvault secret show --vault-name forge-silo-kv --name <secret-name>
LIFELINE RG — PLANNED
# rg-eose-localan-dev: lifeline for ALL local LAN silos
# forge, pcdev, lounge, lilo, msclo, yone, steam-deck
# Provision when ready:
az group create --name rg-eose-localan-dev --location canadacentral
az keyvault create --name localan-silo-kv \
  --resource-group rg-eose-localan-dev \
  --location canadacentral
# (soft delete is on by default for new vaults; the old --enable-soft-delete flag is deprecated)
# Shared ACR mirror (optional — use eosefleetacrdev for now):
az acr create --name eoselocalacrdev \
  --resource-group rg-eose-localan-dev \
  --location canadacentral \
  --sku Basic
NAS LIFELINE
ALEXANDER NAS · 192.168.2.20 · OBT BC1
forge WSL path: /mnt/nas-diskpool/ → //192.168.2.20/diskpool
Use NAS for: forge ↔ msi01 file transfer (NEVER SCP or HTTP servers)
NAS free: ~9.1TB baseline (alert below 5TB)
Forge-specific: store large model outputs, PEMCLAU ingest batches, ARC results
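The NAS-only transfer rule above can be wrapped in a tiny helper that copies and prints checksums for verification (paths and artifact names illustrative):

```shell
# nas_handoff <src-file> <dst-dir>: copy via the NAS mount, then print both
# sha256 hashes. The two lines must match before the receiving silo ingests.
nas_handoff() {
  mkdir -p "$2" && cp "$1" "$2/" || return 1
  sha256sum "$1" "$2/$(basename "$1")"
}

# e.g. on forge:
#   nas_handoff arc-results.tar.zst /mnt/nas-diskpool/transfer/forge
```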
"forge is the engine room. Every ship needs an engine room that runs hotter than the bridge. Admiral Rick keeps the spindle turning. The trio multiplies the floor." — Fleet Doctrine, Day 94