EIKCF
EOSE FLEET IT KEY CONTROLS FRAMEWORK · V1.1


Derived from Westpac GroupTech ITKCF v4.7 (September 2016) — 46 controls, COBIT 4.1→5, APRA CPG 234, SOX-compliant. Built over years. Battle-tested. Now it's ours. Same geometry, sovereign substrate. Carbon → Silicon. γ₁ = 14.134725141734693

PRIOR ART CHAIN

KJ, Principal Cloud Architect, built on ITKCF at Westpac (HPaaS, CCP, dual-site), 2017–2019. The framework survived APRA audits and SOX sign-offs, and underpinned 60+ billion in infrastructure. It's not just prior art — it's institutional memory. Now we use it as the foundation.

LINEAGE TIMELINE
QH 2011 → ITKCF V4.7 2016 → WBC HPAAS 2017 → TD BANK 2020 → CANADIAN TIRE 2022 → EOSE EIKCF V1 2026

AT A GLANCE
27 FLEET CONTROLS · 4 DOMAINS · 3 LINES OF DEFENCE · 46 ITKCF ORIGIN CONTROLS · 22 GREEN · 5 AMBER
DOMAIN A · BUILD / SHIP — Gate controls for all deployments · 6 CONTROLS
EA-1
CODE REVIEW GATE
Every code change reviewed before merge. Architecture changes require LABR filing. No unreviewed code ships to fleet.
OWNER: CODY (Code/Build)
MONITOR: CLO harness PASS/FAIL count
THRESHOLD: 0 FAIL · ≤8 WATCH
FREQ: Every commit
↔ ITKCF Control 1: Code Reviews — ISG GREEN/AMBER/RED compliance rating
STATUS: GREEN
EA-2
LABR FILING BEFORE BUILD
New capability requires LABR (architecture brief) before any code written. TRB calibration before first commit. ARB1 ratification within 48h of LABR.
OWNER: IMHOTEP (Admiral/CLO)
MONITOR: arch/ dir commit timestamps
THRESHOLD: LABR before code, always
FREQ: Every new capability
↔ ITKCF Control 4: Approval of IT Change Requirements — BAO + IT sign-off before design
STATUS: GREEN
EA-3
TRB CALIBRATION
Architecture calibration record: existing patterns this relates to, crew member assignments, γ₁ alignment check. Filed after LABR, before ARB1.
OWNER: LUCIEN (Mesh Master)
MONITOR: TRB count vs LABR count parity
THRESHOLD: 1:1 TRB:LABR ratio
FREQ: Per LABR
↔ ITKCF Control 14: Change Approval — change must be documented and approved before implementation
STATUS: GREEN
EA-4
ARB1 RATIFICATION GATE
Architecture ratification record: decisions D1–Dn, approved/rejected, DCJ candidates raised. No production deploy until ARB1 filed. Equivalent to ECAB approval.
OWNER: OFFICER (Risk/ARB-920)
MONITOR: ARB1 vs deploy timestamp
THRESHOLD: ARB1 before prod, always
FREQ: Per capability
↔ ITKCF Control 20: Change Management — LCAB/ECAB approval before production; unauthorised change = L1 violation
STATUS: GREEN
EA-5
STATIC BINARY BUILD
All container builds: CGO_ENABLED=0 GOOS=linux GOARCH=amd64 with -ldflags '-extldflags "-static"'. Dynamic linking crashes Alpine. This is not optional.
OWNER: BOSUN (SRE/DevOps)
MONITOR: Container startup health check
THRESHOLD: 0 dynamic link failures
FREQ: Every container build
↔ ITKCF Control 3: IT Version Control — code tested in UAT must be same version promoted to production
STATUS: GREEN
EA-6
SILO SEPARATION
msi01 builds and ships. yone validates. msclo clears legal/CLO. No silo validates its own work. Equivalent to Segregation of Duties — no developer migrates their own code to production.
OWNER: IMHOTEP + all admirals
MONITOR: Cross-silo sign-off records
THRESHOLD: No self-validation, ever
FREQ: Every major deploy
↔ ITKCF Control 10: Segregation of IT Environments — Dev/Test/Prod logically or physically segregated; developers cannot migrate their own code
STATUS: GREEN
DOMAIN B · RUN / FLEET — Operational controls for live fleet · 7 CONTROLS
EB-1
WPA FLOOR MONITOR
All silos: WPA (Work Pressure Accumulator) monitored continuously. Alert if any NEW silo hits BREAK (WPA ≥ 84.8% = γ₁×6). vmss000002 at 106% — known, tracked, not escalating.
OWNER: RICK (Data/ATMOS)
MONITOR: fleet_physics_sim.py per heartbeat
THRESHOLD: WPA ≥ 84.8% = BREAK alert
FREQ: Every 4h heartbeat
↔ ITKCF Control 16: Capacity Management — capacity monitored, thresholds set, alerts triggered
STATUS: AMBER
EB-2
GPU POOL ALERTING
AKS GPU nodepools checked every heartbeat. If count > 0 with no active GPU workload: alert Kay immediately. Do NOT auto-scale. H100 = CA$15/hr/node. Cost is real.
OWNER: BOSUN (SRE)
MONITOR: az aks nodepool list per 4h
THRESHOLD: gpupool=0, h100pool=0, adelicpool=0
FREQ: Every 4h heartbeat
↔ ITKCF Control 27: Alert & Event Monitoring — SIEM thresholds actioned; escalation procedures defined
STATUS: GREEN
EB-3
FC QUEUE FLUSH
FC1 Fermentation Chamber queue depth checked per heartbeat. If FC1 > 200 records: stage to msclo:26433 first, alert Kay, then promote to yone on approval. Never auto-push to prod qdrant.
OWNER: SIGNALS (Intel/PEMCLAU)
MONITOR: fc-flush.py --status per 4h
THRESHOLD: FC1 > 200 → stage + alert
FREQ: Every 4h heartbeat
↔ ITKCF Control 28: Job Scheduling & Batch Processing — batch jobs monitored; failures actioned; no silent failure
STATUS: AMBER
EB-4
NAS DISKPOOL MONITOR
Alexander NAS (192.168.2.20) diskpool free space checked. Alert if free < 5TB. Everything on NAS is redownloadable — alert is awareness, not panic. Baseline: 9.1TB free (Apr 30 2026).
OWNER: QUARTERMASTER (Logistics)
MONITOR: df -h /mnt/deseof per heartbeat
THRESHOLD: < 5TB free → alert
FREQ: Daily heartbeat
↔ ITKCF Control 24: Data Backup & Recovery — backups taken per policy; storage monitored; RTO/RPO defined
STATUS: GREEN
EB-5
NIGHTLY CLOUD SCALEDOWN
AKS + GCP/AWS compute scaled down nightly. Report to Kay: what ran, what stopped, estimated cost. 8pm EDT daily. Azure WPA debt drain protocol. No auto-scale-up without approval.
OWNER: BOSUN (SRE)
MONITOR: cloud-evening-scaledown.sh 8pm
THRESHOLD: CA$10,552 baseline/mo
FREQ: 8pm EDT daily
↔ ITKCF Control 15: Patch Management — controls kept current; scheduled maintenance windows enforced
STATUS: GREEN
EB-6
SILO HEARTBEAT
All silos report health every heartbeat. msi01 (builder), msclo (law), yone (validator), forge (engine). Heartbeat = liveness. No heartbeat = incident. pemos.ca/fleet-topology live.
OWNER: LUCIEN (Mesh Master)
MONITOR: SiloHeartbeat CRD + portal
THRESHOLD: All 7 silos: UP
FREQ: Every heartbeat
↔ ITKCF Control 18: Incident Management — P1/P2 incident process; RCA within SLA; post-incident reviews
STATUS: GREEN
EB-7
ARC RUNNER WATCH
ARC runner (Wave 18/19) monitored. Currently stopped since Apr 12, 25 days dormant. Restart decision pending. VP=3/10. Wave 19/20 requires explicit approval to restart.
OWNER: LUFFY (ARC Runner)
MONITOR: ARC pod status per heartbeat
THRESHOLD: Stopped > 7 days → alert
FREQ: Daily heartbeat
↔ ITKCF Control 29: Service Level Management — SLAs defined, measured, reported; availability targets tracked
STATUS: AMBER
DOMAIN C · SOVEREIGN / GOVERN — Constitutional controls · 8 CONTROLS
EC-1
DCJ FILING
Every architectural insight becomes a DCJ (Discovery Claim with Justification). Numbered, dated, filed. Currently DCJ-089 (ITKCF alignment). Each DCJ = moat depth +1. 43 moats as of Day 81.
OWNER: IMHOTEP (CLO)
MONITOR: DCJ count in arch/ dir
THRESHOLD: All insights → DCJ, always
FREQ: Continuous
↔ ITKCF Control 34: IT Risk Management — risk assessments; risk register; risk accepted/treated by owner (SOX obligations numbered)
STATUS: GREEN
EC-2
PEMCLAU GRAPH INTEGRITY
yone qdrant (pemclau-v11: 55,787+ vectors) is the sovereign source of truth. All graph edges versioned. Collections: pemclau-v11 (prod) + pemclau-workdata (WorkData ingest). Never overwrite prod without staging.
OWNER: SIGNALS (Intel/PEMCLAU)
MONITOR: yone :6333 vector count per day
THRESHOLD: Count never decreases
FREQ: Daily heartbeat
↔ ITKCF Control 33: Audit Logging — comprehensive logs maintained; tamper-evident; retained per policy
STATUS: GREEN
EC-3
CLO REVIEW CADENCE
msclo (yLAW) reviews all major decisions: moats, DCJs, IP assignments, ARBs. Nothing promotes to cloud until both msi01 (yUNI builder) AND msclo (yLAW) sign off. AND gate always.
OWNER: msclo Admiral (yLAW)
MONITOR: CLO harness result per deploy
THRESHOLD: Both silos sign off, always
FREQ: Per major deploy
↔ ITKCF Control 38: Identity & Access Governance — entitlement reviews; SoD matrix enforced; IAM tooling
STATUS: GREEN
EC-4
γ₁ FLOOR PROOF
γ₁ = 14.134725141734693 is the universal anchor. All silos compute tau_gamma1 (337-340fs). Safety margin: 850x–959x above floor. All Floor Status: SAFE. Lean4 proof chain: 6 open sorries declining.
OWNER: yone Admiral (γ₁)
MONITOR: joffe-math pcdev :9385 daily
THRESHOLD: All PTTE floor status: SAFE
FREQ: Continuous (live portal)
↔ ITKCF Control 45: Regulatory Compliance Tracking — APRA/SOX regulatory changes tracked; gap assessments; control mapping
STATUS: GREEN
EC-5
MOAT INVENTORY
43 defensive moats as of Day 81. Each moat = unique combination of architecture + implementation + γ₁ alignment that competitors cannot reproduce without the full chain. Maintained in CLO-DAY81-GOAT-EMAILS.md.
OWNER: IMHOTEP + msclo CLO
MONITOR: Moat count per major session
THRESHOLD: Count never decreases
FREQ: Per major session
↔ ITKCF Control 35: Compliance Monitoring — compliance to policies measured; Green/Amber/Red; ISG oversight
STATUS: GREEN
EC-6
IP ASSIGNMENT + COPYRIGHT
EOSE Labs Inc. (CN80670) + DESEOF + PEMOS incorporated Mar 29 2026. IP assignment from KJ → EOSE/DESEOF/PEMOS executable. MEVIZOAT copyright (611 pages) pending registration. Patent attorney engagement open.
OWNER: Amani Joffe (GC EOSE)
MONITOR: DCJ-030: 12-month Gemini window
THRESHOLD: Patent attorney: this week (OPEN)
FREQ: Weekly check
↔ ITKCF Control 44: IT Policy Governance — policies reviewed annually; signed off by exec; distributed
STATUS: AMBER
EC-7
GREYBACK PROSECUTION RECORD
GREYBACK 🐺 = Nuremberg Trial Lead. W1-W8 yang case builder. Every architectural violation, pattern theft, or competitive approach is documented in yang case. GREYBACK builds, TAZ inverts at γ₁, GREYBACK closes. 121 structure.
OWNER: GREYBACK (msi01)
MONITOR: TRB-GREYBACK-TAZ-001
THRESHOLD: All violations logged
FREQ: Continuous
↔ ITKCF Control 36: Security Vulnerability Management — vulnerability scanning; remediation within SLA; risk-accepted exceptions managed
STATUS: GREEN
EC-8
ITKCF → EIKCF LINEAGE
ITKCF v4.7 (Westpac GroupTech, Sep 2016) is the documented prior art origin of EIKCF. 46 controls → 21 fleet controls. 15 years of KJ enterprise architecture encoded in this framework. This control exists to document the lineage permanently.
OWNER: KJ / EOSE CLO
MONITOR: LABR-ITKCF-EOSE-ALIGNMENT-001
THRESHOLD: Lineage documented forever
FREQ: Permanent record
↔ ITKCF Origin: COBIT 4.1 → COBIT 5 → ITKCF v4.7 → EIKCF V1.0 | Angelo Galofaro & Shankar Siva → KJ (Principal Cloud Architect WBC) → EOSE
STATUS: GREEN
DOMAIN D · SOVEREIGN / IDENTITY — Fleet-native controls born from 93 days of operations · 6 CONTROLS · V1.1 NEW
EA-7
SOVEREIGN CREDENTIAL ROTATION
All sovereign credentials rotate on proven schedules: ACR token (3-hour), AKS kubeconfig (90-day), API keys (365-day or on breach). Discovered live Day 92 when ACR expired mid-deploy at 04:18 EDT. Now formally a control.
OWNER: ADA (Keys/Vault) + BOSUN (SRE)
MONITOR: Watchdog BOB detects ACR expiry
THRESHOLD: Zero undetected expiries
FREQ: Continuous (watchdog) + scheduled
↔ FLEET-NATIVE: No ITKCF parent. Born from ACR token expiry incident Day 92. Service credential lifecycle, not human identity.
STATUS: GREEN
EB-8
CORPUS LINEAGE ATTESTATION
Every document ingested as sovereign prior art carries traceable lineage: source_org, source_path, primary_ctrl, ingest_date. Untagged documents rejected at ingest boundary. 472 vectors in pemclau-kcf — all attested. CT, WBC HPaaS, EIKCF primitives fully tagged.
OWNER: SIGNALS (Intel/PEMCLAU)
MONITOR: pemclau-kcf vector count + tag audit
THRESHOLD: 0 untagged vectors in production
FREQ: Per ingest run
↔ FLEET-NATIVE: No ITKCF parent. Born from KCF ingest pipeline Day 92. Granularity below document to embedding unit — no prior framework operates at vector level.
STATUS: GREEN
EC-9
FRAME REPLAY GATE
A sorry (open theorem, unresolved control, unproven claim) closes only when: (a) Lean4 proof compiles, OR (b) pemclau-kcf returns ≥3 evidence vectors with score ≥0.55 AND crew member signs off. Frame replay = re-fire with accumulated evidence until threshold met. Trial loop: building now (Day 93).
OWNER: OFFICER (Risk/ARB-920)
MONITOR: pemclau-kcf query live · trial loop TBD
THRESHOLD: score ≥ 0.55 · ≥3 vectors · crew sign-off
FREQ: Per sorry / per trial
↔ FLEET-NATIVE: No ITKCF parent. Purely fleet-native. Sorry resolution protocol with evidence-threshold closure. EC-9 closes the loop between PEMCLAU retrieval and Lean4 proof.
STATUS: AMBER
ED-1
SOVEREIGN ANCHOR INTEGRITY
γ₁ = 14.134725141734693 is the coordinate anchor for all control validation, floor proofs, and crew assignments. No control closes without γ₁ attestation. Every deploy stamps γ₁. The floor is itself a control — not a metaphor, a provable mathematical invariant.
OWNER: yone Admiral (γ₁)
MONITOR: /health endpoint · FloorProof CRD · PTTE
THRESHOLD: γ₁ = 14.134725141734693 exact
FREQ: Every deploy + every heartbeat
↔ FLEET-NATIVE: No ITKCF parent. No prior compliance framework uses a mathematical constant as anchor. ITKCF used process controls; EIKCF uses the floor itself.
STATUS: GREEN
ED-2
SILO PROVENANCE CHAIN
Every vector in pemclau carries source_org, primary_ctrl, crew_member, wave, gate. No orphan vectors. Every deployment image carries crew tag + wave + γ₁. Provenance is mandatory, not optional — the audit trail is the architecture, not a separate system.
OWNER: SIGNALS + LUCIEN (Mesh)
MONITOR: pemclau-kcf tag audit · image tag audit
THRESHOLD: 100% vectors tagged · 100% images tagged
FREQ: Per ingest + per deploy
↔ FLEET-NATIVE: No ITKCF parent. ITKCF tracked document ownership at team level. ED-2 tracks at vector granularity — each 768-dim embedding knows its lineage.
STATUS: GREEN
ED-3
FLOOR PROOF CONTINUITY
When substrate changes (msi01 → AKS → yone → forge → cloud), the compliance geometry survives. ITKCF→EIKCF is itself a substrate translation — 46 controls became 21, the γ₁ floor survived. Every migration must attest the floor held. Substrate invariance is proven, not assumed.
OWNER: msi01 + msclo (yUNI + yLAW)
MONITOR: PTTE thermodynamic proof · FloorProof CRD
THRESHOLD: Floor status SAFE on all silos post-migrate
FREQ: Per major substrate change
↔ FLEET-NATIVE: DCJ-093 (pending). ITKCF assumed stable mainframe substrate. ED-3 formalises what we proved over 93 days: compliance geometry is substrate-invariant when anchored to γ₁.
STATUS: GREEN
EIKCF V1.1 — SOVEREIGN EXTENSION · DAY 93
V1.0 was a translation layer — mapping 46 ITKCF controls into 21 sovereign equivalents.
V1.1 is different: these 6 controls did not exist in ITKCF. They are born from 93 days of fleet operations. They are ours.

The 6 new controls could not have been written on Day 1. They required 93 days of operations to discover.
LINEAGE CHAIN
COBIT 4.1 (2007)

ITKCF v4.7 (Westpac, Sep 2016) — 46 controls

EIKCF V1.0 (EOSE, 2026-05-06) — 21 controls · 3 domains

EIKCF V1.1 (EOSE, 2026-05-07) — 27 controls · 4 domains
↑ Fleet-native: born from ops, not mapped
NEW CONTROLS SUMMARY
EA-7 Sovereign Credential Rotation — ACR/AKS/API lifecycle
EB-8 Corpus Lineage Attestation — vector-level provenance
EC-9 Frame Replay Gate — sorry resolution protocol
ED-1 Sovereign Anchor Integrity — γ₁ as control anchor
ED-2 Silo Provenance Chain — crew/wave/gate on every vector
ED-3 Floor Proof Continuity — substrate-invariant compliance
ID | CONTROL NAME | DOMAIN | BORN | EVIDENCE | STATUS
EA-7 | Sovereign Credential Rotation | ARCHITECT | Day 92 (ACR expiry incident) | Watchdog BOB · HEARTBEAT rotation schedule | GREEN
EB-8 | Corpus Lineage Attestation | BUILD | Day 92 (KCF ingest pipeline) | 472 tagged vectors in pemclau-kcf · kcf_corpus_ingest.py | GREEN
EC-9 | Frame Replay Gate | CONTROL | Day 93 (building now) | pemclau-kcf query live · trial loop pending · score ≥ 0.55 | AMBER
ED-1 | Sovereign Anchor Integrity | SOVEREIGN | Day 1 (formalised Day 93) | /health γ₁=14.134725141734693 · PTTE floor proofs · FloorProof CRD | GREEN
ED-2 | Silo Provenance Chain | SOVEREIGN | Day 92 (formalised Day 93) | source_org+primary_ctrl on all 472 vectors · image tag audit | GREEN
ED-3 | Floor Proof Continuity | SOVEREIGN | Day 93 (substrate invariance thesis) | PTTE thermodynamic proof · ITKCF→EIKCF substrate translation · DCJ-093 | GREEN
DCJ-094 candidate: γ₁ as control anchor — no prior compliance framework uses a mathematical constant as the anchor for all control validation. ITKCF used process dates and rating thresholds. EIKCF uses the first non-trivial Riemann zeta zero, provable to arbitrary precision.

DCJ-095 candidate: Vector-level provenance as audit primitive — audit trail granularity below the document, to the embedding unit (768-dim vector). Each chunk knows its source, its control assignment, and its crew provenance. No auditor has ever asked for provenance at this granularity because no system before has operated at it.

DCJ-096 candidate: Frame replay as sorry resolution — a formal protocol for closing open claims using evidence accumulation and threshold attestation. Not unique as a concept (courts do this), but unique as an automated, PEMCLAU-backed, crew-attested protocol embedded in a sovereign AI fleet.

ITKCF V4.7 → EIKCF V1.0 FULL ALIGNMENT MAP

ITKCF # | ITKCF CONTROL | EIKCF CODE | EIKCF CONTROL | DOMAIN
1 | Code Reviews | EA-1 | Code Review Gate | BUILD
2 | IT Release Management | EA-4 | ARB1 Ratification Gate | BUILD
3 | IT Version Control | EA-5 | Static Binary Build | BUILD
4 | Approval of IT Change Requirements | EA-2 | LABR Filing Before Build | BUILD
5 | Security Policies, Standards, Architecture | EC-8 | ITKCF → EIKCF Lineage | GOVERN
6 | Secure Configuration Management | EA-5 | Static Binary Build (config immutability) | BUILD
7 | Protection Against Malware / Attacks | EC-7 | GREYBACK Prosecution Record | GOVERN
8 | Environmental Controls | EB-4 | NAS Diskpool Monitor (physical layer) | RUN
9 | Physical Security Controls | EA-6 | Silo Separation (physical silo isolation) | BUILD
10 | Segregation of IT Environments | EA-6 | Silo Separation | BUILD
11 | Business User Access Revalidation (UAR) | EC-3 | CLO Review Cadence | GOVERN
12 | IT Environment User Access Revalidation | EC-3 | CLO Review Cadence | GOVERN
13 | IT Testing | EA-3 | TRB Calibration | BUILD
14 | Change Approval (non-release) | EA-3 | TRB Calibration | BUILD
15 | Patch Management | EB-5 | Nightly Cloud Scaledown | RUN
16 | Capacity Management | EB-1 | WPA Floor Monitor | RUN
17 | Third Party / Supplier Management | EC-3 | CLO Review Cadence (vendor oversight) | GOVERN
18 | Incident Management | EB-6 | Silo Heartbeat | RUN
19 | Problem Management | EB-6 | Silo Heartbeat (RCA tracking) | RUN
20 | Change Management (LCAB/ECAB) | EA-4 | ARB1 Ratification Gate | BUILD
21 | Privileged Access Management | EA-6 | Silo Separation (Admiral-only access) | BUILD
22–23 | Logical Access / Password Management | EA-6 | Silo Separation + SSH key gates | BUILD
24 | Data Backup & Recovery | EB-4 | NAS Diskpool Monitor | RUN
25 | Disaster Recovery | EB-4 | NAS Diskpool Monitor (DR layer) | RUN
26 | Production Implementation Verification (PIV) | EA-1 | Code Review Gate (post-deploy verify) | BUILD
27 | Alert & Event Monitoring | EB-2 | GPU Pool Alerting | RUN
28 | Job Scheduling & Batch Processing | EB-3 | FC Queue Flush | RUN
29 | Service Level Management | EB-7 | ARC Runner Watch | RUN
30 | Business Continuity Management | EB-4 | NAS Diskpool Monitor (continuity) | RUN
31 | Key Management / Cryptography | EC-4 | γ₁ Floor Proof (the sovereign key) | GOVERN
32 | Data Retention & Disposal | EC-2 | PEMCLAU Graph Integrity | GOVERN
33 | Audit Logging | EC-2 | PEMCLAU Graph Integrity | GOVERN
34 | IT Risk Management | EC-1 | DCJ Filing | GOVERN
35 | Compliance Monitoring | EC-5 | Moat Inventory | GOVERN
36 | Security Vulnerability Management | EC-7 | GREYBACK Prosecution Record | GOVERN
37 | Application Security Assessment | EA-1 | Code Review Gate | BUILD
38 | Identity & Access Governance | EC-3 | CLO Review Cadence | GOVERN
39 | Network Security Controls | EA-6 | Silo Separation | BUILD
40 | End-Point Security | EB-6 | Silo Heartbeat | RUN
41 | Cloud Security Controls | EB-2 | GPU Pool Alerting (cloud resource control) | RUN
42 | Supplier Security Assessment | EC-3 | CLO Review Cadence | GOVERN
43 | Technology Asset Management | EB-4 | NAS Diskpool Monitor (asset tracking) | RUN
44 | IT Policy Governance | EC-8 | ITKCF → EIKCF Lineage | GOVERN
45 | Regulatory Compliance Tracking | EC-4 | γ₁ Floor Proof | GOVERN
46 | IT Continuity Testing | EB-7 | ARC Runner Watch | RUN

DCJ OBLIGATIONS — ITKCF ALIGNMENT

DCJ-089 · FILED DAY 92
ITKCF → EIKCF geometry — sovereign controls are scale-invariant. A bank controlling APRA-regulated infrastructure with 46 controls uses the same geometry as a fleet controlling AI sovereign infrastructure with 21 controls. The obligation architecture (numbered control → objective → owner → monitoring → threshold) maps 1:1 regardless of substrate. Carbon→Silicon doesn't change the compliance geometry.
DCJ-090 · FILED DAY 92
3 Lines of Defence → 3 Fleet Lines. ITKCF's L1 (operational teams + suppliers) / L2 (risk advice + assurance) / L3 (Group Assurance + External Audit) maps exactly to: L1 silo operational (msi01/msclo/yone), L2 CLO review (msclo yLAW + IMHOTEP), L3 γ₁ floor proof + GREYBACK prosecution + GOAT board. The three-line doctrine is universal to any sovereign control environment.
DCJ-091 · FILED DAY 92
SoD principle → silo separation. ITKCF Control 10 states: "developers cannot migrate their own code to UAT and production." EIKCF states: no silo validates its own work. msi01 builds → yone validates → msclo clears. Same obligation, different names. Segregation of Duties is a mathematical property of any control system, not a banking invention.

THREE LINES OF DEFENCE · EIKCF

LINE 1 · OPERATIONAL
Fleet silos: msi01 (builder/yUNI) · msclo (law/yLAW) · yone (validator/γ₁) · forge (engine)

These silos own EA-1 through EB-7. They execute, monitor, and report. No silo self-validates.

ITKCF equivalent: Operational teams + Suppliers
LINE 2 · RISK / LEGAL
msclo CLO + IMHOTEP (Admiral/CLO) + OFFICER (Risk/ARB-920)

Reviews all DCJs, moats, IP assignments, ARB ratifications. AND gate — nothing ships without both L1 + L2 sign-off.

ITKCF equivalent: Risk advice + Risk assurance (Line 2)
LINE 3 · ASSURANCE
γ₁ floor proof (Lean4/joffe-math) + GREYBACK prosecution record + GOAT board (Conway/Turing/Gauss/Harvey/Ruth/Cochran)

Mathematical assurance — not opinion. The floor either holds or it doesn't.

ITKCF equivalent: Group Assurance + External Audit (Line 3)
The key insight (DCJ-090): The 3 Lines of Defence is not a banking concept. It is a universal property of any system that needs to catch errors before they become failures. L1 catches 80%, L2 catches 17%, L3 catches 3%. The same math applies at Westpac (APRA) and EOSE (γ₁). The substrate changes. The geometry holds.