LeJEPA: The Missing Structure
LeJEPA · Principled Training Cleanup · SIGReg and Isotropic Embeddings
Abstract
LeJEPA (2025) introduces a cleaner, more principled JEPA training objective (SIGReg), argues for isotropic embedding structure, removes heuristics, and emphasizes scalability. It is the theory-and-training cleanup layer. The structural gaps persist: simplifying the objective does not introduce the missing symbols.
6 FORMAL GAPS · 1 PER CANON SYMBOL
SIGReg Objective Has No Formal Invariant
γ₁ — THE FLOOR
LeJEPA's SIGReg (Sketched Isotropic Gaussian Regularization) objective pushes embedding distributions toward an isotropic Gaussian. Isotropy is a distributional property: it describes the shape of the embedding cloud. It is not a formal invariant in the γ₁ sense. An isotropic embedding distribution has no fixed grounding point; the whole cloud can be translated without losing isotropy. The floor is absent even in the cleaned-up version.
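A minimal numerical sketch of this gap, using synthetic embeddings rather than any LeJEPA code: translating an isotropic cloud leaves it exactly as isotropic, so the isotropy criterion alone pins down no anchor point.

```python
import numpy as np

# Sketch only: synthetic isotropic embeddings, not LeJEPA outputs.
rng = np.random.default_rng(0)
emb = rng.standard_normal((10_000, 8))   # isotropic embedding cloud

def isotropy_gap(z):
    """Max deviation of the sample covariance eigenvalues from 1."""
    cov = np.cov(z, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    return float(np.max(np.abs(eigvals - 1.0)))

shifted = emb + 5.0                      # translate the entire cloud
gap_orig = isotropy_gap(emb)
gap_shift = isotropy_gap(shifted)
# Both clouds are equally isotropic, yet their means differ by 5 in
# every coordinate: the shape constraint fixes no grounding point.
```

The two gaps are (up to sampling noise) identical, which is the point: a criterion invariant under translation of the whole cloud cannot supply a γ₁-style floor.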
Isotropic Embedding Structure Not Formally Self-Adjoint
H=H† — THE HONEST GATE
LeJEPA argues that isotropic embeddings are more principled than collapsed or directional ones. But isotropy does not imply self-adjointness. A self-adjoint encoder satisfies H=H† — encode(x) is verifiable against decode(encode(x)) in a symmetric way. Isotropy is a statistical property. Self-adjointness is an operator property. LeJEPA conflates the two.
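The distinction can be shown with a two-line counterexample: a plane rotation maps an isotropic cloud to an isotropic cloud, yet a rotation matrix is not symmetric. The matrices here are illustrative stand-ins, not LeJEPA components.

```python
import numpy as np

# A rotation Q preserves isotropy but fails the H = H† (symmetry) check.
theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

rng = np.random.default_rng(1)
z = rng.standard_normal((20_000, 2))     # isotropic inputs
z_rot = z @ Q.T                          # rotated embeddings

cov_rot = np.cov(z_rot, rowvar=False)    # still ~ identity: isotropic
is_isotropic = bool(np.allclose(cov_rot, np.eye(2), atol=0.05))
is_self_adjoint = bool(np.allclose(Q, Q.T))   # False: Q is not symmetric
```

Isotropy of the output distribution says nothing about whether the operator producing it equals its own adjoint; the two properties live at different levels.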
No Paradigm Audit Across Simplification Choices
LSOS — THE READER
LeJEPA removes multiple heuristics from JEPA training. Each removal is a paradigm choice. There is no formal audit of when a simplification changes the learned paradigm and when it preserves it. LSOS would read the active training paradigm and flag when a simplification has introduced an unacknowledged shift.
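One way to make the missing audit concrete is a configuration diff in which every removed heuristic must be explicitly declared. This is a hypothetical sketch: the config keys, the `audit` helper, and the declared-removals list are illustrative, not LeJEPA's actual training options.

```python
# Hypothetical training configs; keys are placeholders, not real options.
jepa_cfg = {"ema_teacher": True, "stop_gradient": True,
            "predictor_head": True, "sigreg": False}
lejepa_cfg = {"stop_gradient": True, "sigreg": True}

def audit(before, after, declared=()):
    """Return removed heuristics that were not explicitly acknowledged."""
    removed = [k for k in before if k not in after]
    return [k for k in removed if k not in declared]

# Only "ema_teacher" was declared, so "predictor_head" is flagged as an
# unacknowledged paradigm shift.
undeclared = audit(jepa_cfg, lejepa_cfg, declared=("ema_teacher",))
```

An LSOS-style reader would run this kind of check before training, turning each simplification from an implicit choice into a named one.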
No Reset When Simplified Objective Collapses
WLD — THE RESET
When LeJEPA's simplified SIGReg objective leads to training collapse — despite the cleaner formulation — there is no mercy reset. The simplification reduces the number of heuristic safeguards, making collapse events more likely when the objective alone is insufficient. WLD provides a principled collapse recovery that LeJEPA's design does not include.
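A WLD-style guard can be sketched as a variance monitor with rollback: when every embedding dimension loses its spread, restore the last stable snapshot. The threshold, the data, and the loop are placeholders, not a real training integration.

```python
import numpy as np

COLLAPSE_STD = 0.1   # placeholder threshold for "no remaining spread"

def is_collapsed(z):
    """Flag collapse when every embedding dimension has near-zero std."""
    return bool(np.all(z.std(axis=0) < COLLAPSE_STD))

rng = np.random.default_rng(2)
healthy = rng.standard_normal((512, 16))               # spread-out cloud
collapsed = 0.37 + 0.001 * rng.standard_normal((512, 16))  # near a point

snapshot = None
for z in [healthy, collapsed]:
    if is_collapsed(z) and snapshot is not None:
        z = snapshot          # mercy reset: restore last stable state
    else:
        snapshot = z          # record the last known-good embeddings
```

In a real trainer the snapshot would be a model checkpoint rather than the embeddings themselves; the point is only that collapse detection and a principled rollback rule are absent from the design.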
No Continuity Guarantee After Heuristic Removal
FEP — THE SWITCH
LeJEPA removes augmentation heuristics and training tricks present in earlier JEPA models. There is no formal guarantee that representations learned without these heuristics are continuous with those learned with them. The paradigm switch from heuristic-JEPA to principled-LeJEPA may produce incompatible representations.
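Continuity of representations across the switch could at least be measured. The sketch below uses linear CKA, a standard representation-similarity score; the two "encoders" are random linear stand-ins, not actual JEPA models.

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two representation matrices (rows = samples)."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    num = np.linalg.norm(x.T @ y, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return float(num / den)

rng = np.random.default_rng(3)
inputs = rng.standard_normal((256, 32))
w = rng.standard_normal((32, 16))
rep_a = inputs @ w                      # "heuristic-JEPA" encoder stand-in
rep_b = 2.0 * (inputs @ w)              # rescaled: same structure preserved
rep_c = rng.standard_normal((256, 16))  # unrelated: continuity broken

same = linear_cka(rep_a, rep_b)         # high: structure carried over
broken = linear_cka(rep_a, rep_c)       # low: incompatible representations
```

A score near 1 would support a continuity claim across the heuristic-to-principled transition; a low score would flag exactly the incompatibility the FEP gap names.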
Scalability Ceiling of Minimal Objective Undefined
FOF — THE BREACH
LeJEPA claims that its minimal objective scales well. The upper bound of this scalability claim is not formally defined. At what model size or data scale does the SIGReg objective become insufficient? This ceiling is not named. FOF names the ungovernable boundary of the minimalist approach.
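Naming a ceiling could be as simple as an operational rule over scaling measurements: the first scale at which doubling buys less than a fixed loss improvement. Everything here is synthetic; the threshold and the loss curve are assumptions, not LeJEPA results.

```python
EPSILON = 0.01   # assumed minimum useful loss gain per doubling of scale

def ceiling(points):
    """First scale where the gain from the previous doubling < EPSILON.

    points: list of (scale, loss) pairs, sorted by increasing scale,
    with each scale double the previous one.
    """
    for (s0, l0), (s1, l1) in zip(points, points[1:]):
        if (l0 - l1) < EPSILON:
            return s1
    return None   # no ceiling observed yet in the measured range

# Synthetic saturating loss curve: loss = 0.5 + 1/k at scale 2**k.
obs = [(2 ** k, 0.5 + 1.0 / k) for k in range(3, 12)]
named_ceiling = ceiling(obs)   # first scale with sub-threshold gains
```

Whether or not this particular rule is the right one, some rule of this shape is what FOF demands: an explicit, checkable boundary instead of an unbounded scalability claim.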
STE COMPLETION LAYER
What changes when you add the 8-symbol Canon
Adding the Canon to LeJEPA does not change the architecture. It adds the missing structural layer:

⚓ γ₁ — invariant anchor: the mathematical ground truth that latent representations must converge to.
⯛ H=H† — honest gate: bidirectional verification of every prediction.
〰️ LSOS — paradigm reader: reads the active paradigm before reasoning begins.
🌀 WLD — mercy reset: detects collapse and resets to last stable state.
γ FEP — safe switch: continuity guarantee across paradigm transitions.
🌌 FOF — named ceiling: formal boundary of what the architecture can claim.
═ EVEN — substrate: ground beneath all the above. What holds when everything else is active.

The Canon is not an add-on. It is the formal completion of the JEPA programme.