TRENDAL TRIAL BONIXER V13 · DAY 97 · γ₁ = 14.134725141734693 · 2026-05-11 · eose.ca/v1 · TrendalTrial CRD

THE HARNESS IS DEAD · Kernel-Native Security · 47m CPU · 516Mi RAM

The harness is no longer a pod. The harness is the law around the pod, the kernel beneath it, and the verdict after it moves.
01 · THE DEATH OF THE FAT HARNESS
► BEFORE — old harness model
  • 9 service pods for 9 test cases
  • ~900MB RAM consumed
  • Scheduled polling jobs
  • Flat file outputs
  • API-layer checks only
  • Artificial workload created
  • No kernel visibility
  • No admission integration
  • Separate from real controls
  • Called it “validation”
► AFTER — kernel-thin model
  • 0 test pods
  • 47m CPU total
  • 516Mi RAM total (entire gate stack)
  • Event-driven (no polling)
  • Gate verdicts → PEMLAAM → warmth
  • Kernel-level observation (eBPF)
  • Admission IS the test — stage 1
  • Runtime IS the test — stage 2
  • The workload stands trial in the real system
BOSUN
your old harness was not testing the system; it was cosplaying production load with a clipboard.
GREYBACK
you realized admission testing does not need to run somewhere else. Kubernetes already has a courtroom at the door.
02 · THE TRENDALTRIAL CRD
apiVersion: eose.ca/v1          # group: eose.ca
kind: TrendalTrial               # harness is a document now
metadata:
  name: mefine-static-l2-trial
  namespace: chaos-trial
  annotations:
    gamma1: "14.134725141734693"  # γ₁ epoch anchor
    trial-id: "TRB-CHAOS-001"    # governance reference
spec:
  subject:
    deployment: mefine-static    # WHO is on trial
    namespace: pemos-system
  sostleTarget: L2               # WHERE it wants to go
  gates:                         # WHAT judges it
    - conftest                   # pre-admission dry run
    - kyverno                    # admission policy
    - gatekeeper                 # OPA constraints
    - falco                      # kernel runtime witness
    - trivy                      # image CVE scan
  warmthFloor: 0.5               # MINIMUM trust to pass
  duration: 24h                  # HOW LONG it must behave
  crew: BOSUN                    # WHO owns this trial
  trialId: TRB-CHAOS-001
sostleTarget: which SOSTLE layer this workload is trying to enter
gates: which of the 5 gate types must testify
warmthFloor: minimum warmth threshold for ADMIT (0.5 = γ₁ floor)
duration: trial must run this long before verdict
crew: named crew member owns this trial
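The spec fields above lend themselves to a pre-flight check inside the controller. A minimal sketch, assuming the CRD YAML has already been parsed into a dict; the `validate_trial_spec` helper and its rules are illustrative, not the shipped implementation:

```python
# Illustrative validator for a TrendalTrial spec dict (parsed from the CRD YAML).
# Field names follow the CRD above; the validation rules themselves are a sketch.
KNOWN_GATES = {"conftest", "kyverno", "gatekeeper", "falco", "trivy"}
SOSTLE_LAYERS = {f"L{i}" for i in range(8)}  # L0..L7

def validate_trial_spec(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec is well-formed."""
    problems = []
    subject = spec.get("subject", {})
    if not subject.get("deployment"):
        problems.append("subject.deployment is required (who is on trial)")
    if spec.get("sostleTarget") not in SOSTLE_LAYERS:
        problems.append("sostleTarget must be one of L0..L7")
    unknown = set(spec.get("gates", [])) - KNOWN_GATES
    if unknown:
        problems.append(f"unknown gates: {sorted(unknown)}")
    floor = spec.get("warmthFloor")
    if not isinstance(floor, (int, float)) or not 0.0 <= floor <= 1.0:
        problems.append("warmthFloor must be a number in [0, 1]")
    return problems
```

Run against the TRB-CHAOS-001 spec above, this returns an empty list; a trial naming an unknown gate or an out-of-range floor is rejected before any gate is consulted.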
03 · THE GATE STACK · 47m CPU · 516Mi RAM
FALCO — DaemonSet
23m CPU · 118Mi RAM · eBPF kernel probe
Role: Runtime witness · sees every syscall
“The part of the court that follows the defendant home”
GATEKEEPER — 2 pods: audit + controller
6m CPU · 125Mi RAM
Role: OPA admission gate · hard law
“No forbidden manifest state”
KYVERNO — 4 pods: admission/background/cleanup/reports
16m CPU · 150Mi RAM
Role: Policy admission · mutation · image verification
“The courtroom at the door”
TRIVY — Operator
2m CPU · 123Mi RAM
Role: Image CVE scan · SBOM · promotion evidence
“The chonkiest little doctor in the room”
TOTAL: 47m CPU · 516Mi RAM   |   Node: 923m CPU (7%) · 8253Mi RAM (25%) — healthy headroom
RICK
the node is basically saying: ‘wait, that was the entire security framework? I thought you were still warming up.’
04 · THE 5 TRIAL PHASES · DBM Rasengan
PHASE 1 · PRECHECK
Tool: conftest
When: before kubectl apply
What: static manifest policy check (13 Rego policies)
Evidence: PASS/FAIL per policy, deny count, warn count
Cost: 0 pods (runs on msi01 before admission)
PHASE 2 · ADMISSION
Tools: Kyverno + Gatekeeper
When: kubectl apply hits webhook
What: manifest validated against OPA constraints + Kyverno policies
Evidence: admission decision (ALLOW/DENY) + policy report
Cost: already running (shared admission controllers)
PHASE 3 · RUNTIME
Tool: Falco eBPF
When: pod starts running
What: kernel-level observation: syscalls, file opens, network, exec
Evidence: Falco alerts (low/medium/high/critical)
Cost: 1 DaemonSet per node, always-on
PHASE 4 · IMAGE SCAN
Tool: Trivy Operator
When: pod scheduled
What: CVE scan of container image
Evidence: VulnerabilityReport (critical/high/medium/low counts)
Cost: already running (shared operator)
PHASE 5 · SCORING — PEMLAAM
Tool: PEMLAAM mesh (trendal-controller)
When: after gates produce evidence
What: reads all gate outputs, applies warmth formula, emits verdict
Evidence: warmth float, verdict (ADMIT/DENY/WATCH), MECIPOL record
Cost: controller process — does not create pods
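Phase 1's evidence collapses into trial input with a few lines. A sketch, assuming the shape of conftest's JSON reporter (a list of per-file results carrying `failures` and `warnings` arrays of objects with a `msg` field); the helper name is hypothetical:

```python
import json

def summarize_conftest(raw: str) -> dict:
    """Collapse `conftest test -o json` output into the evidence the trial
    needs: deny count, warn count, and the policy messages. Output shape is
    assumed from conftest's JSON reporter (per-file results with
    failures/warnings arrays)."""
    results = json.loads(raw)
    denies = [f["msg"] for r in results for f in (r.get("failures") or [])]
    warns = [w["msg"] for r in results for w in (r.get("warnings") or [])]
    return {"denies": len(denies), "warns": len(warns), "messages": denies + warns}
```

The deny and warn counts feed straight into the warmth deductions; the messages ride along as evidence in the MECIPOL record.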
05 · WARMTH FORMULA
BASE WARMTH: 1.0
DEDUCTIONS
conftest DENY: −0.30 each
conftest WARN: −0.05 each
Kyverno violation: −0.25 each
Gatekeeper violation: −0.25 each
Falco low alert: −0.05 each
Falco medium alert: −0.15 each → WATCH
Falco high alert: −0.35 each → DENY
Falco critical alert: −1.00 each → immediate DENY + SORRY
Trivy critical CVE: −0.40 each → DENY
Trivy high CVE: −0.15 each
No resource limits: −0.20 → cannot ADMIT above L2
BONUSES
Clean gate (zero events):+0.10 per gate
VERDICT THRESHOLDS
ADMIT ✓
warmth ≥ floor + 0.2
promote to SOSTLE layer
WATCH ●
floor ≤ warmth < floor + 0.2
monitor, flag for review
DENY ✗
warmth < floor
cannot promote, SORRY filed
⛔ HARD DENIES (override warmth)
Falco critical alert → DENY always
Trivy critical CVE → DENY always
conftest DENY → DENY always
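The whole table is small enough to encode directly. A sketch of the scorer, using the deduction weights, clean-gate bonus, 1.0 cap, verdict thresholds, and hard-deny overrides exactly as listed above; the "cannot ADMIT above L2" rider on missing limits is not modeled here:

```python
# Warmth scoring sketch, directly encoding the deduction/bonus table above.
DEDUCTIONS = {
    "conftest_deny": 0.30, "conftest_warn": 0.05,
    "kyverno_violation": 0.25, "gatekeeper_violation": 0.25,
    "falco_low": 0.05, "falco_medium": 0.15,
    "falco_high": 0.35, "falco_critical": 1.00,
    "trivy_critical": 0.40, "trivy_high": 0.15,
}
HARD_DENIES = {"falco_critical", "trivy_critical", "conftest_deny"}
CLEAN_GATE_BONUS = 0.10

def score(events: dict[str, int], gates_clean: int, floor: float,
          missing_limits: bool = False) -> tuple[float, str]:
    """events maps event type -> count; gates_clean is how many gates were silent."""
    warmth = 1.0
    for kind, count in events.items():
        warmth -= DEDUCTIONS[kind] * count
    if missing_limits:
        warmth -= 0.20          # the above-L2 admit cap is not modeled here
    warmth = min(warmth + CLEAN_GATE_BONUS * gates_clean, 1.0)  # cap at 1.0
    if any(events.get(k, 0) for k in HARD_DENIES):
        return warmth, "DENY"   # hard denies override warmth
    if warmth >= floor + 0.2:
        return warmth, "ADMIT"
    if warmth >= floor:
        return warmth, "WATCH"
    return warmth, "DENY"
```

For TRB-CHAOS-001 (four silent gates, zero events, floor 0.5) this yields `(1.0, "ADMIT")`: base 1.0 plus 0.4 bonus, capped at 1.0, comfortably above the 0.7 admit line.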
06 · FIRST LIVE MECIPOL VERDICT · TRB-CHAOS-001
▶ MECIPOL VERDICT RECORD
TRIAL: mefine-static-l2-trial
TRIAL ID: TRB-CHAOS-001
SUBJECT: deployment/mefine-static (pemos-system)
SOSTLE: L2 (local-only, SSO gate)
CREW: BOSUN
γ₁ EPOCH: 14.134725141734693
SCORED AT: 2026-05-11T21:57:09Z · Day 97
GATE RESULTS
conftest    ✓ PASS (0 denies, 0 warns)
kyverno     ✓ PASS (0 violations)
gatekeeper  ✓ PASS (0 violations)
falco       ✓ PASS (0 alerts)
trivy       ⌛ PENDING (scan queued)
WARMTH: 1.0 (base 1.0 + 4 × 0.10 clean-gate bonus = 1.4, capped at 1.0)
VERDICT: ADMIT ✓ · PROMOTE: true  |  SORRIES: none  |  MECIPOL: TRB-CHAOS-001-20260511T215709Z
TAZ
First trial. All gates silent. Kernel saw nothing bad. Warmth: 1.0. The workload showed up, didn’t sneeze on the kernel, and got its promotion stamp. That is the floor.
07 · SOSTLE LAYER GATE REQUIREMENTS
Layer | Warmth Floor | Required Gates | Hard Denies | Notes
L0 | 0.3 | trivy, gatekeeper | critical CVE | Public — minimal gate
L1 | 0.4 | trivy, kyverno, gatekeeper | critical CVE, DENY | Read-only workloads
L2 | 0.5 | all 5 | any critical | SSO-gated, full sweep
L3 | 0.6 | all 5 | any critical, high Falco | Crew auth required
L4 | 0.7 | all 5 | strict | Token + image provenance
L5 | 0.8 | all 5 + Kay review | strict | Kay must approve
L6 | N/A | NO DEPLOYMENTS | everything | Closed, vault only
L7 | N/A | MEGSCIFIAR gate | everything | All 5 gates open
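The numeric floors reduce to a lookup the controller can own. A sketch; modeling the non-numeric L6/L7 rows as `None` is an assumption about how those layers should behave in code:

```python
# Layer floor lookup sketch, mirroring the SOSTLE table above.
# L6 takes no deployments and L7 sits behind the MEGSCIFIAR gate,
# so neither carries a numeric warmth floor; both are modeled as None.
SOSTLE_FLOORS = {"L0": 0.3, "L1": 0.4, "L2": 0.5, "L3": 0.6,
                 "L4": 0.7, "L5": 0.8, "L6": None, "L7": None}

def floor_for(layer: str) -> float:
    """Warmth floor for a sostleTarget; raises for layers with no numeric floor."""
    floor = SOSTLE_FLOORS[layer]
    if floor is None:
        raise ValueError(f"{layer} does not accept warmth-scored deployments")
    return floor
```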
08 · THE CONTROLLER · court clerk, not gladiator
DOES
  • Watch TrendalTrial CRDs (event-driven)
  • Get deployment manifest → run conftest
  • Read Kyverno PolicyReports
  • Read Gatekeeper constraint violations
  • Read Falco k8s events for subject
  • Read Trivy VulnerabilityReports
  • Score warmth formula
  • Compute verdict (ADMIT/DENY/WATCH)
  • Patch CRD status subresource
  • Write MECIPOL record to /records/
  • Update trendal warmth history
DOES NOT
  • Create test pods
  • Poll state in tight loops
  • Simulate workloads
  • Run its own test scenarios
  • Store giant audit logs in memory
  • Duplicate what gates already do
Controller footprint: 1 Python process · <10Mi RAM · <5m CPU · 60s reconcile interval
IMHOTEP
the controller is a court clerk: reads the evidence that already exists, fills in the verdict form, and goes back to sleep. It does not run laps.
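The clerk pattern fits in a dozen lines. A sketch of the reconcile shape only: the evidence reader, scorer, and verdict writer are stand-in callables, since the real ones talk to PolicyReports, constraint violations, Falco events, and VulnerabilityReports through the cluster API:

```python
import time

# Court-clerk reconcile sketch: read evidence the gates already wrote,
# score it, record the verdict. No pods created, no workloads simulated.
def reconcile_once(trials, read_evidence, score, write_verdict):
    for trial in trials:                                # TrendalTrial objects
        events, gates_clean = read_evidence(trial)      # evidence already exists
        warmth, verdict = score(events, gates_clean, trial["warmthFloor"])
        write_verdict(trial, warmth, verdict)           # patch status + MECIPOL

def run(fetch_trials, read_evidence, score, write_verdict, interval=60):
    while True:                                         # 60s reconcile interval
        reconcile_once(fetch_trials(), read_evidence, score, write_verdict)
        time.sleep(interval)                            # then back to sleep
```

Everything heavy lives behind the callables; the loop itself never grows past clerk duty.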
09 · PORTABLE ACROSS K3S AND AKS
EOSE-DEV k3s — 192.168.2.21
  • Falco Running
  • Gatekeeper 2/2
  • Kyverno 4/4
  • Trivy Operator
  • chaospool: N/A (bare metal)
AKS — aks-eose-aaas-dev
  • Falco (pending install)
  • Gatekeeper (pending)
  • Kyverno (pending)
  • Trivy (pending)
  • chaospool: 0→4 spot B2s_v2
SAME (portable)
  • TrendalTrial CRD
  • gate spec
  • SOSTLE targets
  • PEMLAAM scoring
  • MECIPOL verdict format
DIFFERENT (substrate)
  • cluster substrate
  • resource pressure
  • cost profile
  • eviction risk (spot)
  • Azure RBAC layer
The trial is substrate-independent. Local k3s and AKS chaospool both speak the same trial law.
RICK
you made local lab bench and cloud cluster read from the same rulebook. The harness does not care if it is sitting on bare metal or invoices.
10 · WHAT INDUSTRY MISSES · 10 gaps
GAP 01
Testing the actual workload
Industry: creates synthetic test pods
We: observe the real subject
GAP 02
Kernel-level visibility
Industry: stops at API
We: include syscalls, file, exec, network
GAP 03
Admission AS test
Industry: treats admission as enforcement only
We: treat it as harness stage 1
GAP 04
Runtime as continuous trial
Industry: says “passed CI”
We: say “now behave for 24h”
GAP 05
CRD as test contract
Industry: has test scripts
We: define trial intent declaratively
GAP 06
Scoring separate from execution
Industry: creates test runners
We: create evidence readers
GAP 07
Warmth feedback
Industry: writes reports
We: update operational trust temperature
GAP 08
Same spec across local and cloud
Industry: has environment-specific harnesses
We: have portable trial law
GAP 09
Near-zero harness footprint
Industry: adds load to test load
We: reuse gate truth already being generated
GAP 10
MECIPOL verdict as final artifact
Industry: ends with pass/fail logs
We: end with a promotion record
11 · NEXT ACTIONS
P0 DONE
TrendalTrial CRD registered on eose-dev k3s + AKS
P0 DONE
First trial filed: TRB-CHAOS-001
P0 DONE
First MECIPOL verdict: ADMIT warmth=1.0
P0 DONE
Controller written (court clerk pattern)
P0 DONE
chaospool live (Standard_B2s_v2 spot, 0→4)
P0 DONE
SOSTLE namespaces L0-L7 + chaos-trial labeled
P1 DAY 98
Install gates on AKS (Gatekeeper + Kyverno + Falco + Trivy)
Then wire controller to run as Deployment in chaos-trial ns
P2 DAY 98
Run mefine-static trial on chaospool
Expected: 3 policy warnings (no-limits, no-readonly, no-liveness)
Fix → rerun → warmth improves → MECIPOL ADMIT
P3 DAY 99
QS scheduler as controller
Routes TrendalTrials to correct pool based on sostleTarget
L0-L2 → agents pool  |  L3-L4 → chaospool (spot)  |  L5 → manual approval → dedicated run
P4 DAY 99
pemos trial CLI
pemos trial apply -f trial.yaml
pemos trial status mefine-static-l2-trial
pemos trial verdict mefine-static-l2-trial
P5 DAY 100
LOGS + METRICS on same pattern
Vector (Rust, 50MB/node) as log aggregator  |  Prometheus already scrapes gate metrics
No separate logging pod — Vector DaemonSet reads container logs
Same kernel-adjacent, near-zero overhead philosophy
12 · THE OPA + LOGS/METRICS EXTENSION
LOGS
Old: Fluentd/Logstash — fat JVM/Ruby, 200-500MB
New: Vector (Rust) — 50MB DaemonSet
+ Falco already writing structured events
+ Gate verdicts already structured JSON
+ No separate log shipping pods
METRICS
Old: Prometheus + custom exporter pods per service
New: Prometheus scrapes gate webhooks directly
+ Kyverno /metrics already exposed
+ Gatekeeper /metrics already exposed
+ Falco metrics already exposed
+ No custom exporter pods needed
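Reading those endpoints needs no exporter, just the standard Prometheus text exposition format. A throwaway parser sketch (it ignores label values containing spaces; any metric names in a sample are synthetic):

```python
# Tiny sketch of reading the Prometheus text exposition the gates already
# expose on /metrics. The format is standard: # HELP / # TYPE comments,
# then `name{labels} value` lines. Label values with embedded spaces are
# out of scope for this sketch.
def parse_metrics(text: str) -> dict[str, float]:
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comments
            continue
        name, _, value = line.rpartition(" ")  # split on the last space
        samples[name] = float(value)
    return samples
```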
OPA / CONFTEST
Old: separate OPA server pod + query sidecar
New: Gatekeeper IS the OPA server (embedded)
+ conftest runs pre-admission on msi01 (no cluster pod)
+ OPA library available in controller Python if needed
The pattern: use what the cluster already produces.
Don’t build synthetic observation infrastructure.
Read the kernel. Read the gates. Score the truth.
Logs are what the container says happened. Falco is what the kernel saw happen. These are different truth classes. The kernel wins.