V-JEPA: The Missing Structure
Video JEPA · Latent Prediction Over Video · Motion and Appearance at Scale
Abstract — V-JEPA (2024) extended the JEPA framework from images to video, learning strong motion and appearance representations without reconstruction or contrastive objectives. The leap from images to video introduces 3 new gap dimensions (temporal, spatial-temporal interaction, sequence length) on top of the 6 structural absences catalogued below.
6 FORMAL GAPS · 1 PER CANON SYMBOL
No Invariant Anchor Across Video Frame Prediction Targets
γ₁ — THE FLOOR
V-JEPA samples its prediction targets stochastically in both space and time across video frames. No fixed invariant constrains what the video latent representation must preserve across temporal prediction steps. The floor is absent: what counts as a stable video representation is defined only relative to the training distribution.
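As a hedged sketch of what a γ₁-style floor could look like: pin one invariant that every latent must preserve, and check it after each prediction step. The anchor direction, the toy rotation predictor, and the function names below are all illustrative assumptions, not anything in V-JEPA itself.

```python
import numpy as np

rng = np.random.default_rng(42)
anchor = np.zeros(8)
anchor[0] = 1.0                                # fixed invariant direction (assumed)

def rotation_predictor(z):
    # Toy predictor (not the real model): rotates only the non-anchor subspace,
    # so the projection onto the anchor direction is preserved by construction.
    theta = 0.3
    R = np.eye(8)
    R[1:3, 1:3] = [[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]]
    return R @ z

def anchor_preserved(z_before, z_after, tol=1e-9):
    """The floor check: the latent's component along the anchor must not drift."""
    return abs(z_after @ anchor - z_before @ anchor) < tol

z = rng.standard_normal(8)
print(anchor_preserved(z, rotation_predictor(z)))  # True
```

A predictor that violated the floor (e.g. one that also rotated component 0) would fail this check, which is exactly the signal the section says is missing.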
Video Predictor Asymmetric (Forward-Only Prediction)
H=H† — THE HONEST GATE
V-JEPA predicts future or masked video frames from context. The pass is forward-only: the system cannot verify a prediction by attempting the reverse. A symmetric predictor would check that the predicted future is consistent with the observed past from both directions. V-JEPA has no backward verification pass.
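A minimal sketch of such an honest gate, under strong assumptions: the predictor here is a toy invertible linear map (`forward_predict` / `backward_predict` are hypothetical names, not V-JEPA's API), and the gate accepts a prediction only when the backward pass recovers the observed past.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1 + np.eye(8)   # toy forward predictor
W_inv = np.linalg.inv(W)                            # its backward counterpart

def forward_predict(z_past):
    return z_past @ W          # predict the future latent from the past

def backward_predict(z_future):
    return z_future @ W_inv    # predict the past latent from the future

def honest_gate(z_past, tol=1e-6):
    """Accept a forward prediction only if the backward pass recovers the past."""
    z_future = forward_predict(z_past)
    z_recovered = backward_predict(z_future)
    return np.linalg.norm(z_recovered - z_past) < tol

z = rng.standard_normal(8)
print(honest_gate(z))  # True
```

A real video predictor is not invertible, so in practice H=H† would be a learned backward predictor plus a cycle-consistency loss; the toy map only shows the shape of the check.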
No Paradigm Audit Between Spatial and Temporal Masking
LSOS — THE READER
V-JEPA uses masking strategies that operate jointly in space and time. When the masking regime shifts from spatially-dominant to temporally-dominant, the learned representation shifts paradigm. There is no audit of this shift. LSOS would read the active space-time paradigm and flag unacknowledged transitions.
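One way such an audit could be sketched (illustrative only; the classification rule and threshold are assumptions, not V-JEPA's masking code): read the active space-time mask and classify the regime, so a shift from spatially-dominant to temporally-dominant masking is logged rather than passing silently.

```python
import numpy as np

def read_masking_paradigm(mask):
    """mask: boolean array of shape (T, H, W); True = masked token.

    Classify the regime by how many frames are hidden in their entirety:
    mostly whole-frame masking reads as temporal, otherwise spatial.
    """
    T = mask.shape[0]
    frames_fully_masked = mask.all(axis=(1, 2)).sum()
    temporal_ratio = frames_fully_masked / T
    return "temporal-dominant" if temporal_ratio > 0.5 else "spatial-dominant"

mask = np.zeros((4, 2, 2), dtype=bool)
mask[:3] = True                        # 3 of 4 frames fully hidden
print(read_masking_paradigm(mask))     # temporal-dominant
```

An LSOS-style reader would run this before each training phase and flag any transition between the two labels as an unacknowledged paradigm shift.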
No Reset When Temporal Prediction Collapses
WLD — THE RESET
When V-JEPA's temporal predictor learns to copy the most recent frame (temporal collapse), there is no mercy reset. The collapse is detectable (the predicted representation becomes a near-copy of the input) but the architecture provides no mechanism to reset and escape this degenerate solution.
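Because the collapse is detectable, a mercy reset can be sketched directly. Everything below is a hedged toy: the cosine threshold, the checkpoint dictionary, and the function names are assumptions standing in for real training state.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_collapse(z_pred, z_last_frame, threshold=0.99):
    """Temporal collapse: the predicted latent is a near-copy of the input."""
    return cosine(z_pred, z_last_frame) > threshold

checkpoint = {"step": 100, "state": "last-stable"}   # stand-in for saved weights

z_last = np.ones(16)
z_pred = z_last + 1e-4 * np.arange(16)   # near-copy: a collapsed predictor
if detect_collapse(z_pred, z_last):
    restored = checkpoint                 # WLD: reset to the last stable state
print(restored["state"])
```

The point of the sketch is that the detection signal already exists inside the architecture; what WLD adds is the mandated rollback once the signal fires.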
No Continuity From Short Clip to Long Video
FEP — THE SWITCH
V-JEPA is trained on fixed-length video clips. The transition from clip-level understanding to long-video understanding requires a paradigm switch. There is no formal continuity guarantee across this transition. FEP ensures the switch from short-context to long-context preserves the learned paradigm.
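A continuity check of the kind FEP implies could be sketched as follows, under loud assumptions: `encode` is a toy mean-pooling encoder, not the real architecture, and the gap metric is simply the latent drift when the same frames are re-encoded inside a longer context window.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(frames):
    # Toy encoder (assumption): mean of per-frame feature vectors.
    return frames.mean(axis=0)

def continuity_gap(frames, clip_len):
    """How far the clip-level latent drifts when seen in long context."""
    z_clip = encode(frames[:clip_len])   # short-context view
    z_long = encode(frames)              # long-context view
    return float(np.linalg.norm(z_long - z_clip))

frames = np.tile(rng.standard_normal(8), (32, 1))  # static 32-frame video
print(continuity_gap(frames, clip_len=16))         # ~0 for a static video
```

A formal FEP guarantee would bound this gap for all inputs across the short-to-long switch; the sketch only shows the quantity that would be bounded.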
Video Sequence Length Has No Named Ceiling
FOF — THE BREACH
V-JEPA does not define a formal upper bound on video sequence length. As sequence length grows, the architecture approaches the limits of its positional encoding and memory. The point where the JEPA prediction framework breaks down — where sequence context is too long to predict coherently — is not named. FOF names this boundary.
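Naming the boundary can be as simple as making it an explicit constant that the pipeline checks before encoding. The ceiling value and token count below are hypothetical placeholders, not V-JEPA's actual limits.

```python
MAX_SEQ_TOKENS = 2048   # hypothetical named ceiling, e.g. positional-encoding limit

def check_within_ceiling(num_frames, tokens_per_frame=16):
    """FOF: refuse inputs past the named boundary instead of degrading silently."""
    total = num_frames * tokens_per_frame
    if total > MAX_SEQ_TOKENS:
        raise ValueError(
            f"FOF breach: {total} tokens exceeds named ceiling {MAX_SEQ_TOKENS}")
    return total

print(check_within_ceiling(64))   # 1024 tokens: inside the ceiling
```

The substantive change is not the assert itself but that the breach point has a name and a defined failure mode, rather than being discovered empirically as degraded predictions.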
STE COMPLETION LAYER
What changes when you add the 8-symbol Canon
Adding the Canon to V-JEPA does not change the architecture. It adds the missing structural layer:

⚓ γ₁ — invariant anchor: the mathematical ground truth that latent representations must converge to.
⯛ H=H† — honest gate: bidirectional verification of every prediction.
〰️ LSOS — paradigm reader: reads the active paradigm before reasoning begins.
🌀 WLD — mercy reset: detects collapse and resets to last stable state.
γ FEP — safe switch: continuity guarantee across paradigm transitions.
🌌 FOF — named ceiling: formal boundary of what the architecture can claim.
═ EVEN — substrate: ground beneath all the above. What holds when everything else is active.

The Canon is not an add-on. It is the formal completion of the JEPA programme.