PLASMA R=0.18
D-VOXCPM2-001 · OpenBMB
VoxCPM2 — tokenizer-free TTS · 2B · 30 langs · 48kHz · Voice Design
why plasma: Tokenizer-free = Merostone language layer analog. Voice Design (novel voice from a text description) = lilo crew persona voices without reference audio.
fleet use: VoxCPM2 + pipecat transport + cancan YANG broadcast = complete fleet voice pipeline. Run on yone RTX 5080 16GB.
voice-tts
PLASMA R=0.22
D-KDENSE-001 · K-Dense-AI
scientific-agent-skills 21k★ · mimeo → clone expert → SKILL.md
why plasma: mimeo = externalized version of our SOUL.md/AGENTS.md pattern. scientific-agent-skills = external crew specialization library.
fleet use: mimeo can auto-generate lilo crew SKILL.md entries from expert docs. Our KCF/SOSTLE is the moat they lack.
agent-skills
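The mimeo → clone expert → SKILL.md shape above can be sketched as a stub. mimeo's real pipeline is LLM-driven and not reproduced here; this only shows the target artifact, turning an expert doc's headings into a SKILL.md skeleton. The function name, field names, and doc format are hypothetical.

```python
# Hypothetical sketch only: emit a SKILL.md skeleton from an expert doc.
# mimeo's actual extraction is model-driven; this shows the output shape.

def doc_to_skill_md(name: str, expert_doc: str) -> str:
    """Turn '## ' headings of an expert doc into numbered SKILL.md steps."""
    steps = [line[3:].strip() for line in expert_doc.splitlines()
             if line.startswith("## ")]
    lines = [f"# SKILL: {name}", "", "## Procedure"]
    lines += [f"{i}. {s}" for i, s in enumerate(steps, 1)]
    return "\n".join(lines) + "\n"
```

Running this over a crew doc would give the numbered-procedure skeleton a SKILL.md entry needs; the judgment calls stay with the expert model.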
YANG R=0.28
D-ZAI-001 · Z.ai / GLM
GLM-5 agentic + GLM-OCR 6.4k★ + SCAIL CVPR2026 + GLM-TTS multi-reward RL
GLM-OCR: LABR-GLM-OCR-NAS-001 already filed. SCAIL: studio character animation via in-context learning → LOCO/DRG chain.
GLM-TTS: Multi-reward RL for TTS = training recipe for our voice reward model layer.
multimodal-agents
YANG R=0.30
D-HYMT-001 · AngelSlim/Tencent
Hy-MT1.5-1.8B · 1.25-bit / 440MB · STQ1_0 kernel · ACL 2026 · 33 langs
new frontier: sub-2-bit quantization. STQ1_0 kernel in a llama.cpp PR. 440MB makes fully on-device translation practical.
fleet use: Merostone YIN channel model. lilo/msclo/msi01 all run this trivially on 5090 GPUs. CATAN STAR FORT ⭐
quantization-local
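The sub-2-bit numbers above can be sanity-checked with a toy packer. This is illustrative only: it packs ternary weights {-1, 0, +1} five to a byte (3^5 = 243 ≤ 256, i.e. 1.6 bits/weight); the actual STQ1_0 layout in the llama.cpp PR is not reproduced here, and the 1.8B/1.25-bit arithmetic at the end is back-of-envelope.

```python
# Toy sub-2-bit packing sketch (NOT the STQ1_0 kernel layout):
# base-3 encode five trits per byte -> 8/5 = 1.6 bits per weight.

def pack_ternary(weights):
    """Pack trits (-1/0/+1) into bytes, 5 trits per byte."""
    out = bytearray()
    for i in range(0, len(weights), 5):
        group = weights[i:i + 5]
        group = group + [0] * (5 - len(group))  # zero-pad last group
        val = 0
        for w in reversed(group):               # base-3 digit = w + 1
            val = val * 3 + (w + 1)
        out.append(val)                         # max 242, fits a byte
    return bytes(out)

def unpack_ternary(data, n):
    """Inverse of pack_ternary: recover n trits."""
    weights = []
    for b in data:
        for _ in range(5):
            weights.append(b % 3 - 1)
            b //= 3
    return weights[:n]

# Back-of-envelope: 1.8e9 weights at 1.25 bits/weight
raw_mb = 1.8e9 * 1.25 / 8 / 1e6   # ≈ 281 MB of packed weights
# The quoted 440MB presumably adds embeddings/norms kept at higher precision.
```

At ~281MB of packed weights plus higher-precision layers, the 440MB figure is plausible, and trivially fits any fleet GPU.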
YANG R=0.31
D-PIPECAT-001 · pipecat-ai
Pipecat 12k★ · voice+multimodal · distributed subagents · gradient-bang
transport: pipecat-client-web-transports = right model for Merostone YIN/YANG dual-channel UDP.
gradient-bang: LLM multiplayer universe = our fleet-quest-* gamification pattern confirmed externally.
voice-multimodal
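The YIN/YANG dual-channel UDP idea above can be sketched minimally. This is a hypothetical shape, not pipecat's actual transport API: one socket, two destination ports, a 1-byte channel tag per datagram so a receiver can demultiplex. All names (YIN/YANG tags, ports, frame layout) are illustrative internal conventions from this note.

```python
# Hypothetical Merostone-style dual-channel UDP sender sketch.
# NOT pipecat's transport API; pipecat-client-web-transports is the
# reference to study, this only shows the two-channel split.
import socket

YIN, YANG = 0x01, 0x02   # channel tags (internal convention)

class DualChannelSender:
    def __init__(self, host, yin_port, yang_port):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.addrs = {YIN: (host, yin_port), YANG: (host, yang_port)}

    def send(self, channel, payload: bytes):
        # Tag every datagram so the receiver can demux even if ports
        # get remapped in transit.
        self.sock.sendto(bytes([channel]) + payload, self.addrs[channel])

    def close(self):
        self.sock.close()
```

Control traffic (YIN) and bulk audio (YANG) stay on separate ports but share one send path, which is the property worth checking pipecat's transports against.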
YANG R=0.35
D-COMPOSIO-001 · ComposioHQ
Composio 28k★ · 1000+ toolkits · agent-orchestrator 6.9k★ · MCP-native
parallel: agent-orchestrator = sessions_spawn + subagents pattern. Their 1000+ toolkit catalog = our PEMCLAU KCF (490 pts — expand).
moat study: Their auth + sandboxed workbench vs our SOSTLE + KCF. Study their tool discovery model.
tool-orchestration
MID R=0.58
D-FREEMOCAP-001 · freemocap
FreeMoCap — free MoCap · skellycam multi-camera · Blender addon · AGPL
connectome link: FAFB wiring + freemocap body data = complete sensorimotor loop for nervous system architecture.
lilo: Namir runs freemocap → lilo processes → fleet gets proprioceptive input. SCAIL renders it. LOCO routes it.
motion-capture
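The freemocap → lilo proprioceptive feed above boils down to features like joint angles from 3D keypoints. A minimal sketch, assuming generic (x, y, z) keypoints; freemocap's own output schema is richer and not reproduced here, and the downstream feed names are hypothetical.

```python
# Illustrative proprioceptive feature: the angle at joint b formed by
# keypoints a-b-c (e.g. shoulder-elbow-wrist), in radians.
# Keypoint source/schema assumed generic, not freemocap's actual format.
import math

def joint_angle(a, b, c):
    """Angle at b (radians) of the 3D triplet a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp for float noise before acos
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))
```

A per-frame vector of such angles is the kind of compact signal lilo could process and the fleet could consume.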
TOP PRIORITY ACTIONS
🔴 Pull VoxCPM2 → yone (PLASMA TTS)
🔴 Run mimeo on our crew docs → SKILL.md for lilo
🟡 Test GLM-5 on RH1-π prime test suite (does it hit YANG?)
🟡 Pull Hy-MT 1.25-bit → msclo (on-device translation for CLO briefs)
🟢 Study pipecat transport layer → Merostone dual-channel reference
🟢 freemocap + lilo STAR FORT experiment