01 - AKS Cluster Status - aks-eose-aaas-dev
CLUSTER SPEC
Nodes 4 total
Node types 3x Standard_D2s_v3 + 1 system
k8s version v1.34.6
OS Ubuntu 22.04.5 LTS
Namespaces 100+ active
CoreDNS import custom/*.server hook
RUNNING SERVICES
Istio ASM RUNNING
External DNS RUNNING
External Secrets RUNNING
Flux RUNNING
cert-manager RUNNING
oauth2-proxy RUNNING
TLS CERTIFICATES
eose.ca True
pemos.ca True
feedles.ca True
serlf.com True
eose-ca-tls True
id certs True
02 - master-dev-system - 7/7 Running - Istio Sidecar Mesh
| Service | Port | Type | Status | Notes |
|---|---|---|---|---|
| pemos-hvcp | :9380 | ClusterIP | RUNNING | HVCP orchestration layer |
| pemos-orbit | :9372 | ClusterIP | RUNNING | Orbit scheduling engine |
| pemos-selberg | :9374 | ClusterIP | RUNNING | Selberg zeta computation |
| pemos-wonderland | :9341 | ClusterIP | RUNNING | Wonderland API gateway |
| pemos-everytime | :9376 | ClusterIP | RUNNING | Everytime sync layer |
| pemos-publication | :9377 | ClusterIP | RUNNING | Publication / export layer |
| oauth2-proxy | :4180 | ClusterIP | RUNNING | Auth proxy, Istio integrated |
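A quick way to confirm the 7/7 Running count and that the namespace is enrolled in the sidecar mesh; this assumes the standard `istio-injection` namespace label rather than a revision label, which is an assumption about how the mesh is configured here.

```shell
# List the seven services' pods and their status in master-dev-system
kubectl -n master-dev-system get pods

# Check the namespace is labeled for automatic Istio sidecar injection
# (assumes label-based injection, not a revision tag)
kubectl get namespace master-dev-system \
  -o jsonpath='{.metadata.labels.istio-injection}'
```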
03 - yone-system - FIXED - Now Running
YONE-SYSTEM FIX APPLIED - Day 97
Both yone-agent and yone-portal-v2 were stuck in Pending state due to CPU exhaustion on the yone node.
Fix: kubectl patch (JSON-patch) on both deployments to set resource requests (cpu: 50m, memory: 256Mi) so the pods fit on the node.
Both pods are now Running. yone-system is operational.
yone-agent
Was Pending (CPU exhausted on yone node)
Fix kubectl patch json-patch - cpu:50m mem:256Mi
Now Running
Namespace yone-system
yone-portal-v2
Was Pending (same node pressure)
Fix kubectl patch json-patch - cpu:50m mem:256Mi
Now Running
Namespace yone-system
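The fix above can be sketched as a JSON-patch against each deployment. This is a reconstruction, not the exact command run on Day 97: the container index and the assumption that no `resources` stanza existed before are both guesses.

```shell
# Hypothetical reconstruction of the Day 97 fix for yone-agent
# (repeat for yone-portal-v2). Adds resource requests so the
# scheduler can place the pod on the CPU-pressured yone node.
kubectl -n yone-system patch deployment yone-agent --type='json' -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/resources",
   "value": {"requests": {"cpu": "50m", "memory": "256Mi"}}}
]'
```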
04 - Fleet CoreDNS - NEW - LIVE - Day 97
FLEET COREDNS DEPLOYED - msi01 port 5353
Container fleet-coredns running on msi01 (192.168.2.18) port 5353.
Serves two authoritative zones: fleet.local and eose.lan.
AKS CoreDNS configured to forward fleet.local to 192.168.2.18:5353.
NAS backup: /mnt/nas-diskpool/eose/fleet-coredns-backup/
A RECORDS
msi01.fleet.local 192.168.2.18
forge.fleet.local 192.168.2.12
msclo.fleet.local 192.168.2.19
yone.fleet.local 192.168.2.23
pcdev.fleet.local 192.168.2.16
nas.fleet.local 192.168.2.20
lounge.fleet.local 192.168.50.175
steamdeck.fleet.local 192.168.50.193
SERVICE ALIASES
pemclau.fleet.local -> yone
lhvcp.fleet.local -> yone
joffe-math.fleet.local -> pcdev
campfire.fleet.local -> msi01
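The records and aliases above imply a Corefile server block plus a zone file on msi01. A minimal sketch, assuming the CoreDNS `file` plugin and made-up file paths, SOA values, and serial; only the A/CNAME data comes from the tables above:

```
# Corefile sketch for fleet-coredns on msi01 (port 5353)
fleet.local:5353 {
    file /etc/coredns/db.fleet.local
    log
    errors
}

# db.fleet.local sketch (SOA/NS values are assumptions)
$ORIGIN fleet.local.
$TTL 300
@        IN SOA ns.fleet.local. admin.fleet.local. (97 7200 3600 86400 300)
@        IN NS  ns.fleet.local.
ns       IN A     192.168.2.18
msi01    IN A     192.168.2.18
yone     IN A     192.168.2.23
pemclau  IN CNAME yone.fleet.local.
lhvcp    IN CNAME yone.fleet.local.
```

An equivalent block would serve eose.lan. The dig test command below verifies the zone answers authoritatively.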
Test cmd dig @192.168.2.18 -p 5353 yone.fleet.local A
AKS forward fleet.local -> 192.168.2.18:5353
Backup /mnt/nas-diskpool/eose/fleet-coredns-backup/
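The AKS-side forward uses the stock AKS `coredns-custom` ConfigMap hook (the `custom/*.server` import noted in the cluster spec): any data key ending in `.server` is imported as an extra server block. A sketch, with the key name as an assumption:

```yaml
# coredns-custom ConfigMap in kube-system; AKS CoreDNS imports any
# "*.server" key as an additional server block. Key name is assumed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  fleet.server: |
    fleet.local:53 {
        forward . 192.168.2.18:5353
    }
```

After applying, a rollout restart of the coredns deployment in kube-system picks up the new block.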
05 - LHVCP on yone - k3d cluster lhvcp - v1.31.5+k3s1
k3d CLUSTER SPEC
Cluster lhvcp
k3s version v1.31.5+k3s1
SSH port 2222 (non-standard)
PEMCLAU 55,787+ vectors (pemclau-v12)
PEMCLAU host yone:6333
Ollama qwen3:8b + qwen3:14b pulling
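Recreating a cluster matching this spec is a one-liner; k3d tags the k3s image with `-k3s1` in place of `+k3s1`. Any flags beyond the name and image are omitted here because they are not recorded above.

```shell
# Create a local k3d cluster pinned to the k3s version in the spec
# (k3d image tags use "-k3s1" where the version string has "+k3s1")
k3d cluster create lhvcp --image rancher/k3s:v1.31.5-k3s1
```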
LHVCP ENGINES + CRDs
LOCOJob:9520
DRGGate:9522
SOSTLELane:9524
PyramidTier:9525 (implicit)
BlackholeSink:9528
MECRDS gate:9506 (CRQ-YONE-* IDs)
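A hypothetical sanity check that the five LHVCP CRDs are registered in the lhvcp cluster; the CRD kind-to-name mapping is an assumption based on the engine names above.

```shell
# Grep the registered CRDs for the five LHVCP kinds (names assumed)
kubectl get crds -o name | \
  grep -Ei 'locojob|drggate|sostlelane|pyramidtier|blackholesink'
```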
06 - Root CA on AKS - cert-manager via Let's Encrypt
ACTIVE ISSUERS
Provider Let's Encrypt (ACME)
cert-manager RUNNING
ClusterIssuer letsencrypt-prod
Challenge HTTP-01 / DNS-01 (Azure DNS)
ACTIVE CERTS
eose-ca-tls True
pemos.ca wildcard True
feedles.ca wildcard True
serlf.com True
id certs True
FUTURE: FLEET-LOCAL CA
Plan Self-signed ClusterIssuer for fleet.local
Scope Internal fleet TLS without Let's Encrypt
Status PLANNED
Prereq fleet-coredns stable (now live)
Prereq 2 CoreDNS fleet.local CNAME round-trip
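The planned fleet-local CA maps to the standard cert-manager bootstrap: a self-signed ClusterIssuer signs a root CA Certificate, and a CA ClusterIssuer then issues fleet.local certs from that root. A sketch under that assumption; all resource names are made up:

```yaml
# Self-signed bootstrap issuer (names are assumptions, not deployed)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: fleet-local-selfsigned
spec:
  selfSigned: {}
---
# Root CA certificate for fleet.local, signed by the issuer above
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fleet-local-root-ca
  namespace: cert-manager
spec:
  isCA: true
  commonName: fleet.local
  secretName: fleet-local-root-ca
  issuerRef:
    name: fleet-local-selfsigned
    kind: ClusterIssuer
---
# CA issuer that signs internal fleet TLS certs without Let's Encrypt
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: fleet-local-ca
spec:
  ca:
    secretName: fleet-local-root-ca
```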
07 - ubu-cap Crew - PAYG + OpenAI Codex - SOSTLE L0-L3
ubu-cap IS THE V13 TEST PILOT
ubu-cap (192.168.2.x / AKS openclaw-agents namespace) tests ALL V13 k3s patterns before msi01 prod promotion.
Full belt64 access. PAYG Azure. OpenAI Codex configured. SOSTLE L0-L3 clearance.
Every pattern validated here first - then promoted. msi01 is production. Never test on msi01.
CREW SPEC
Identity ubu-cap
Runtime WSL2 Ubuntu + AKS
k8s namespace openclaw-agents
SOSTLE L0-L3
Belt64 Full access
TOOLING
Cloud Azure PAYG
AI OpenAI Codex configured
kubectl Full dev cluster access
k3d Local lhvcp cluster
OpenClaw Fleet agent runtime
VALIDATION GATE
Step 1 k3d local test
Step 2 k3s yone validation
Step 3 AKS dev (here)
Step 4 AKS prod (msi01)
Rule NEVER test on msi01 directly
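The four gate steps correspond to switching kubeconfig contexts in order; `k3d-lhvcp` is the default context name k3d generates, while the yone and AKS context names are assumptions.

```shell
# Walk the validation gate by context (names beyond k3d-lhvcp assumed)
kubectl config use-context k3d-lhvcp            # Step 1: k3d local
kubectl config use-context yone-k3s             # Step 2: k3s on yone
kubectl config use-context aks-eose-aaas-dev    # Step 3: AKS dev
# Step 4 (AKS prod / msi01) only via promotion, never direct testing
```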
08 - V13 k3s Pattern - Promotion Pipeline
k3d local → k3s yone → AKS dev → AKS prod
k3d LOCAL (DEV) → Local development on ubu-cap / yone. Fastest iteration. No external DNS needed.
k3s YONE (VALIDATE) → yone (192.168.2.23, SSH :2222). LHVCP CRDs. PEMCLAU. Validates fleet DNS, campfire, MECRDS.
AKS DEV (STAGING) → AKS aks-eose-aaas-dev. 4 nodes. Full Istio. cert-manager. External DNS. This cluster.
AKS PROD (PROD) → Production AKS (msi01 managed). Only promoted patterns from AKS dev reach here.
V13 PATTERN COMPONENTS
CoreDNS custom zones fleet.local + eose.lan (import hook)
cert-manager fleet-local ClusterIssuer (planned) + LE prod
external-dns Azure DNS integration (live)
LHVCP CRDs LOCOJob / DRGGate / SOSTLELane / PyramidTier / BlackholeSink
campfire agent per namespace (planned: GID-registered)
MECRDS gate :9506 CRQ-YONE-* IDs, gamma1 stamped
belt64-v13 ConfigMap per namespace (seg assignment)
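The per-namespace belt64-v13 ConfigMap might take the following shape; this is entirely hypothetical, since the key name, value format, and seg assignment scheme are not recorded here:

```yaml
# Hypothetical belt64-v13 ConfigMap; key "seg" and its value format
# are assumptions, not the real schema
apiVersion: v1
kind: ConfigMap
metadata:
  name: belt64-v13
  namespace: openclaw-agents
data:
  seg: "seg-00"  # hypothetical seg assignment value
```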
DAY 97 MILESTONES
yone-system fix DONE - both pods Running
fleet-coredns NEW - Live on msi01:5353
campfire-v13-bonixer DEPLOYED
campfire-v13-boabixer DEPLOYED
eose-dev-v13-galaxy THIS PAGE
v130 tag DEPLOYED