The Future of AI is in Lower Dimensions

In a recent interview with Lex Fridman entitled “Dark Matter of Intelligence and Self-Supervised Learning,” outspoken AI pioneer Yann LeCun suggested the next leap in Artificial Intelligence (AI) will come from learning in lower-dimensional latent spaces. “You don’t predict pixels, you predict an abstract representation of pixels.” - Yann LeCun. What does he mean, and how is it relevant to the future of AI? Let’s back up and consider the context in which this statement was made. LeCun was discussing the limitations of current AI systems, particularly those based on deep neural networks. In a previous article, we touched on one such example: Large Language Models (LLMs). LLMs have demonstrated impressive performance across an array of language-related tasks. They are so popular that a recent AWS study found a “shocking amount of the web” is already LLM-generated. This is problematic, because LLMs trained on this kind of synthetic content break down and lose their ability to generalize. A recent Nature article described this “model collapse” phenomenon in detail. ...

29 Sep 2025 · 5 min · tjards

The Machines Built The Matrix to Avoid Model Collapse

A new theory for why the Machines kept humans alive in The Matrix, inspired by recent discoveries in scaling large language models. One measure of a film’s quality is the diversity of fan theories it inspires. When a story has the right blend of depth, ambiguity, and cultural timing, the entertainment value extends past the credits: it compels audiences to dissect and reinterpret it decades later. The Matrix is a great example of this: 25 years on, people are still following the white rabbit down Reddit threads. A quick internet search reveals myriad fan theories about the true nature of its characters and storylines. One even claims John Wick is actually a sequel to The Matrix. ...

8 Jun 2025 · 7 min · tjards