The verification layer for visual intelligence.
We build the alignment infrastructure for video AI, so frontier models learn to see the world the way it really is, from alpenglow to shadow.
Why verification is the hardest problem in AI
The prevailing narrative frames AGI as a language problem. But language is only a fraction of human intelligence. The vast majority of economically valuable tasks are embodied: physical, spatial, visual. Our founders believe AGI will be multimodal visual intelligence - systems that see, reason, and act in the physical world.
Video understanding and video generation are converging into unified omni-models. This convergence is already underway at every frontier lab. But alignment for visual systems is fundamentally harder than alignment for language. The signal space is high-dimensional. Subtle pixel-level variations carry enormous semantic weight. Existing alignment pipelines, designed for text, break down when applied to video.
We are building the verification layer that enables frontier labs to train video AI systems that are not just capable, but safe, controllable, and aligned with human intent.
As video understanding and generation unify into omni-models, those models will need a unified verification layer. We are building it.
Our thinking
We write to clarify our ideas and share our perspective on where video AI is heading. These essays explain the intellectual foundations of our work.
Researchers, physicists, artists, and philosophers
We are a small team at the intersection of machine learning, visual perception, and philosophy of mind. We come from frontier AI labs where we built multimodal systems, video world models, and large-scale evaluation infrastructure. We left because we believe the verification problem for visual AI deserves dedicated, first-principles attention.
We are based in San Francisco.
Our partners and investors
We are backed by top-tier venture capital firms - among the first VCs to back DoorDash, Higgsfield, Runware, and many other video AI companies. Our angels include executives from Stability AI, Together AI, and Dyna Robotics, as well as research leaders at frontier labs building omni-models.
Build the verification layer with us
We are hiring people who want to work on the hardest unsolved problem in AI alignment. If you believe the alignment problem extends far beyond language, we would like to talk.