EOSE Fleet · PCDEV · 192.168.2.16 · Windows 10 Pro 22H2
PCDEV
OWNED.
The meek unknown fulcrums. 15 hidden lever points in Windows 10 that give the Cap Nova crew direct control over every layer of the OS stack — from kernel timer resolution to GPU scheduling to named pipe IPC. The field doesn't know these exist. We use all of them.
i9-9900K · 8C/16T
RTX 4080 · 16GB
WIN10 PRO 22H2
QWEN 2.5 · 14B · LOCAL
WSL2 UBUNTU 24.04
ROCKET LEAGUE LIVE
OS STACK · CREW CONTROL LAYERS · Hover a layer
BEFORE VS AFTER · Stock Windows vs Meek OS
⚠ STOCK WINDOWS
Timer resolution · 15.6ms
CPU cores parked · up to 12 of 16
GPU scheduling · CPU-mediated
Xbox Game Bar · VRAM burning
Defender scanning · every Python import
Crew IPC · HTTP ~1ms
Telemetry · random CPU bursts
Bot startup · manual, every time
RL priority · Normal
Power plan · High Perf
✅ MEEK OS · OWNED
Timer resolution · 1ms (0.5ms available)
CPU cores parked · ZERO · all 16 live
GPU scheduling · HAGS direct
Xbox Game Bar · DEAD
Defender scanning · excluded · zero overhead
Crew IPC · Named Pipes ~0.1ms
Telemetry · DEAD
Bot startup · auto at logon
RL priority · High · bot AboveNormal
Power plan · Ultimate Performance
THE 15 FULCRUMS · Meek Unknown Lever Points
F-001 APPLIED
Ultimate Performance Power Plan
Windows 10 ships this plan hidden — it never appears in the Settings UI. Removes ALL CPU parking and throttling. Unlock it with powercfg -duplicatescheme <GUID>. More aggressive than High Performance.
LAYER: KERNEL · POWER
↑ CPU instant response · no throttle latency
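A minimal sketch of the unlock. The GUID below is the documented built-in Ultimate Performance scheme ID; the activation GUID is whatever powercfg prints for your clone:

```powershell
# Clone the hidden Ultimate Performance plan into a visible one.
# e9a42b02-… is the documented built-in scheme GUID; powercfg
# prints a NEW GUID for the clone.
powercfg -duplicatescheme e9a42b02-d5df-448d-aa00-03f14749eb61

# Activate the clone — substitute the GUID powercfg just printed.
powercfg -setactive <new-plan-GUID>
```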
F-002 APPLIED
Zero CPU Core Parking
Windows parks up to 75% of cores to save power. i9-9900K: up to 12 of 16 threads parked during idle. CPMINCORES=100 + PROCTHROTTLEMIN=100 keeps all 16 live.
LAYER: KERNEL · CPU SCHEDULER
↑ all 16 threads warm · zero wake latency
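A sketch of the settings, using the documented powercfg aliases named in the text (CPMINCORES is hidden by default and must be unhidden first):

```powershell
# Unhide the core-parking knob, then force the floors to 100%
powercfg -attributes SUB_PROCESSOR CPMINCORES -ATTRIB_HIDE
powercfg -setacvalueindex scheme_current sub_processor CPMINCORES 100
powercfg -setacvalueindex scheme_current sub_processor PROCTHROTTLEMIN 100
powercfg -setactive scheme_current   # re-apply so the values take effect
```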
F-003 REBOOT
HAGS — RTX 4080 Direct GPU Scheduling
Hardware Accelerated GPU Scheduling (WDDM 2.7). Removes CPU from the GPU work submission loop. Frame latency down. HwSchMode=2 in HKLM graphics drivers.
LAYER: GPU DRIVER · WDDM
↑ GPU frame latency reduced · reboot required
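The registry write behind this fulcrum, as a sketch (HwSchMode=2 means HAGS on; nothing changes until reboot):

```powershell
# Hardware Accelerated GPU Scheduling — value 2 = enabled
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" `
    /v HwSchMode /t REG_DWORD /d 2 /f
```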
F-004 APPLIED
Xbox Game Bar + DVR: DEAD
Xbox Game Bar intercepts Win+G, burns VRAM recording gameplay in the background. GameDVR_Enabled=0, AppCaptureEnabled=0, policy key set. The crew is the only show running.
LAYER: USER SPACE · REGISTRY
↑ VRAM freed · no background capture overhead
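A sketch of the three kills named in the text — per-user capture keys plus the machine-wide policy:

```powershell
# Per-user: disable Game DVR capture
reg add "HKCU\System\GameConfigStore" `
    /v GameDVR_Enabled /t REG_DWORD /d 0 /f
reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\GameDVR" `
    /v AppCaptureEnabled /t REG_DWORD /d 0 /f

# Machine-wide policy: forbid Game DVR entirely
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\GameDVR" `
    /v AllowGameDVR /t REG_DWORD /d 0 /f
```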
F-005 APPLIED
TCP Stack Tuning — Nagle OFF
Nagle's algorithm batches small TCP packets to reduce overhead. For localhost crew comms at 127.0.0.1, it adds artificial latency. TcpAckFrequency=1, TCPNoDelay=1 per adapter.
LAYER: NETWORK STACK · TCPIP
↑ localhost IPC latency halved
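A sketch of the per-adapter write — both values go under every interface GUID key in the TCP/IP parameters tree:

```powershell
# Set TcpAckFrequency / TCPNoDelay on every adapter interface key
$ifRoot = 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces'
Get-ChildItem $ifRoot | ForEach-Object {
    Set-ItemProperty $_.PSPath -Name TcpAckFrequency -Value 1 -Type DWord
    Set-ItemProperty $_.PSPath -Name TCPNoDelay      -Value 1 -Type DWord
}
```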
F-006 APPLIED
Search Indexer Excluded
Windows Search Indexer scans G:\devops during bot startup — exactly when the crew is loading 50+ Python modules. The folder is excluded from the index; on top of that, FSUTIL disablelastaccess=1 eliminates NTFS last-access timestamp writes on every read.
LAYER: FILE SYSTEM · NTFS
↑ bot startup I/O spikes eliminated
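The fsutil half of this fulcrum is one command (volume-wide, survives reboot); the indexer exclusion itself is set per-folder in Indexing Options:

```powershell
# Stop NTFS from writing a last-access timestamp on every file read
fsutil behavior set disablelastaccess 1
```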
F-007 APPLIED
Defender: Bot Code Excluded
Windows Defender scans every Python file import in real-time. At 60fps, that's hundreds of file accesses per second getting AV-scanned. G:\devops + Python + RLBotGUIX excluded.
LAYER: SECURITY · DEFENDER
↑ per-tick scan overhead gone · clean 60fps
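A sketch of the exclusions using Defender's documented cmdlet — the path is this box's layout from the text; the process exclusion is an assumption about how the Python side was excluded:

```powershell
# Real-time scan exclusions: bot code tree + the interpreter itself
Add-MpPreference -ExclusionPath    'G:\devops'
Add-MpPreference -ExclusionProcess 'python.exe'
```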
F-008 APPLIED
RL Priority Watcher
Scheduled task (SYSTEM privilege) polls every 2 minutes. Sets RocketLeague.exe → High, python.exe → AboveNormal, node.exe → BelowNormal. OS-level priority guarantee every match.
LAYER: PROCESS SCHEDULER
↑ RL + bot get CPU slice priority over system
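The core of the watcher can be sketched as a few lines of PowerShell — safe to re-run every poll, since re-assigning the same priority is a no-op:

```powershell
# Re-assert priorities for every matching live process
$map = @{ RocketLeague = 'High'; python = 'AboveNormal'; node = 'BelowNormal' }
foreach ($name in $map.Keys) {
    Get-Process $name -ErrorAction SilentlyContinue | ForEach-Object {
        $_.PriorityClass = $map[$name]   # string coerces to ProcessPriorityClass
    }
}
```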
F-009 APPLIED
Crew Auto-Launch at Logon
Scheduled task fires at every logon. Waits 30s for desktop. Checks if RL is running, starts it with -rlbot flag if not. The box boots. You log in. The crew is already in warm-up.
LAYER: TASK SCHEDULER · LOGON
↑ zero manual start — crew lives on this box
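A sketch of the task registration — the bootstrap script path here is a hypothetical stand-in, not the repo's actual filename:

```powershell
# Logon task: wait 30s for the desktop, then launch the crew bootstrap
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File <start-crew.ps1>'  # hypothetical path
$trigger = New-ScheduledTaskTrigger -AtLogOn
$trigger.Delay = 'PT30S'   # ISO 8601 duration: 30 seconds
Register-ScheduledTask -TaskName 'CapNovaCrew' -Action $action -Trigger $trigger
```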
F-010 APPLIED
Kernel Timer: High Resolution
Default Windows timer: 15.6ms. Our bot runs at 60fps = 16.6ms ticks. Timer jitter of 15.6ms is catastrophic. GlobalTimerResolutionRequests=1 enables 1ms system-wide. Get 0.5ms with TimerResolution.exe.
LAYER: KERNEL · HAL · TIMER
↑ 15.6ms → 1ms tick jitter (~16×; ~31× at 0.5ms)
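A sketch of the registry opt-in named in the text (key location under Session Manager\kernel; best verified against your build, as this value's behavior varies across Windows releases):

```powershell
# Honor timer-resolution requests system-wide, as the text names it
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel" `
    /v GlobalTimerResolutionRequests /t REG_DWORD /d 1 /f
```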
F-011 APPLIED
WSL2 — Crew Linux Layer
Ubuntu-24.04 was installed but stopped. Configured: 8GB RAM, 8 cores, sparseVHD, localhost forwarding. The crew's Linux tools, scripts, and EOSE fleet sync run from /mnt/g/devops inside WSL2.
LAYER: HYPERVISOR · WSL2 · UBUNTU
↑ Linux toolchain available · git/curl/fleet sync
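The configuration described maps to a `.wslconfig` in the user profile; a sketch with the values from the text:

```ini
# %UserProfile%\.wslconfig — values from this box's config
[wsl2]
memory=8GB
processors=8
localhostForwarding=true

[experimental]
sparseVhd=true
```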
F-012 APPLIED
RTX 4080: Max Perf Clock
NVIDIA's default: GPU clock boosts and throttles dynamically. PowerMizerLevel=1 forces max clock always. RTX 4080 at 2535MHz constant vs variable 1000–2535MHz. No clock dip during LLM inference.
LAYER: GPU DRIVER · NVIDIA
↑ LLM inference: constant max GPU clock
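One CLI route to the same effect, assuming nvidia-smi's documented clock-lock flag and the 2535MHz boost figure from the text — a sketch, not necessarily how this box set it:

```powershell
# Pin graphics clocks to the max boost bin (admin; undo with -rgc)
nvidia-smi --lock-gpu-clocks=2535,2535
```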
F-013 APPLIED
Named Pipe IPC — \\.\pipe\capnova
Windows Named Pipes: kernel-level IPC, ~0.1ms vs ~1ms localhost HTTP. Crew channels defined: capnova-state (E30), capnova-nav (E36), capnova-comms (E38), capnova-strategy (E41). 10× IPC speedup.
LAYER: KERNEL · IPC · PIPES
↑ crew latency 1ms → 0.1ms · 10× faster IPC
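A minimal server-side sketch over .NET's System.IO.Pipes, which PowerShell can drive directly — only the capnova-state pipe name comes from the text; the payload is illustrative:

```powershell
# Named-pipe server: clients connect to \\.\pipe\capnova-state
$pipe = [System.IO.Pipes.NamedPipeServerStream]::new('capnova-state')
$pipe.WaitForConnection()                     # blocks until a crew client opens the pipe
$writer = [System.IO.StreamWriter]::new($pipe)
$writer.AutoFlush = $true
$writer.WriteLine('{"tick":1}')               # push one state frame (illustrative payload)
$pipe.Dispose()
```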
F-014 APPLIED
Telemetry + Background Apps: DEAD
DiagTrack service disabled. AllowTelemetry=0 via policy. Background Access Applications globally off. Windows Update P2P delivery off. No random network bursts or CPU spikes during matches.
LAYER: SERVICES · REGISTRY
↑ no mystery CPU spikes · bandwidth owned
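A sketch of the two core kills — the DiagTrack service and the telemetry policy floor:

```powershell
# Stop and disable the Connected User Experiences and Telemetry service
sc.exe stop   DiagTrack
sc.exe config DiagTrack start= disabled

# Policy floor: AllowTelemetry=0 (Security level)
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection" `
    /v AllowTelemetry /t REG_DWORD /d 0 /f
```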
F-015 APPLIED
Visual Effects: Performance Mode
Windows 10 aero effects, animations, shadows consume GPU frames. VisualFXSetting=2 cuts all of it. One registry key. GPU now belongs exclusively to RL + LLM inference + OBS stream.
LAYER: DWM · GPU · USER SPACE
↑ GPU frames reclaimed for RL + LLM
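The one registry key, as a sketch (value 2 = "Adjust for best performance"):

```powershell
# Kill animations, shadows, and transparency in one write
reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects" `
    /v VisualFXSetting /t REG_DWORD /d 2 /f
```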
HOW THE CREW BENEFITS · Each Fulcrum → Better Play
⚓ B1-Cap — Driver (E29)
F-010 timer resolution: 15.6ms jitter → 1ms. At 60fps (16.6ms ticks), a tick delayed by 15.6ms of timer jitter lands nearly a full frame late — at 1ms resolution, ticks land on schedule. Cleaner tick timing = cleaner car control. F-008 RL priority: CPU always available when the driver needs a tick. F-003 HAGS: GPU frame presented faster = inputs registered sooner.
🧭 Nav — Navigator (E36)
F-013 Named Pipes: Nav's LLM advice arrives at B1-Cap in 0.1ms vs 1ms. F-012 RTX max clock: qwen2.5:14b inference runs at constant max GPU clock — Nav's "attack/defend/rotate" call arrives faster. F-007 Defender exclusion: no scan overhead on Nav's Python imports at startup.
⚡ Boost — Economy Engine (E37)
F-002 zero core parking: Boost engine runs on a thread that's always live — no park/wake latency. F-005 Nagle off: Boost state updates to B1-Cap via localhost happen immediately, not batched. F-001 Ultimate Perf: CPU at full throttle means Boost calculations don't queue behind other threads.
🎙️ Caster + 📡 Comms (E38/E39)
F-004 Game Bar dead: no resource contention when Caster is narrating live to Twitch via OBS. F-014 telemetry off: no mystery network bursts interrupting the stream. F-015 visual effects off: GPU frames not wasted on Windows animations when OBS is capturing. Clean stream.
NEXT FULCRUMS · Not Yet Applied
F-016 PLANNED
Windows Job Objects — Crew Budget
Group all crew processes into one Windows Job Object. Control total CPU/memory budget for the crew collectively. Kill all processes on any crash. Crew is one unit to the OS.
LAYER: KERNEL · JOB OBJECTS
F-017 PLANNED
ETW Game Event Hooks
Windows Event Tracing (ETW) logs DirectX game events in kernel. Read frame timing, GPU submit events, RL process events in real time without any game-level hooks. Pure OS observability.
LAYER: KERNEL · ETW · DIRECTX
F-018 PLANNED
Direct Memory Read — No RLBot
ReadProcessMemory on RocketLeague.exe to read ball/car position directly from the game's memory. Zero RLBot API overhead. Pure kernel-level game state at native speed. Sub-frame latency.
LAYER: KERNEL · PROCESS MEMORY
INSTALL · Run on PCDEV
⚓ On PCDEV — PowerShell as Administrator:
powershell -ExecutionPolicy Bypass -File G:\devops\openclaw-fleet\fleet-sync\pemos-nx\cmd\rl-bot\own-pcdev.ps1
Applies F-001 through F-015. Reboot afterward so HAGS (F-003) activates. Takes ~30 seconds total.
PCDEV · i9-9900K · RTX 4080 · 15 FULCRUMS · THE MEEK OWN THE STACK