The Future of AI is in Lower Dimensions

In a recent interview with Lex Fridman entitled “Dark Matter of Intelligence and Self-Supervised Learning,” outspoken AI pioneer Yann LeCun suggested the next leap in Artificial Intelligence (AI) will come from learning in lower-dimensional latent spaces: “You don’t predict pixels, you predict an abstract representation of pixels.” What does he mean, and how is it relevant to the future of AI? Let’s back up and consider the context in which this statement was made. LeCun was discussing the limitations of current AI systems, particularly those based on deep neural networks. In a previous article, we touched on one such example: Large Language Models (LLMs). LLMs have demonstrated impressive performance across an array of language-related tasks, and they have become so popular that a recent AWS study found a “shocking amount of the web” is already LLM-generated. This is problematic, as LLMs trained on this kind of synthetic content break down and lose their ability to generalize. A recent Nature article described this “model collapse” phenomenon in detail. ...
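To make the quote concrete, here is a minimal sketch of predicting in latent space rather than pixel space. Everything in it is an illustrative assumption (the 32-dimensional latent, the linear encoder and predictor, the stop-gradient target in the spirit of JEPA-style setups); it is not the architecture LeCun describes, just the shape of the idea: the loss lives on abstract representations, not pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative toy: instead of regressing future pixels directly,
# encode both frames and predict the *latent* of the next frame.
# All module choices and sizes here are assumptions, not from the article.

enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 32))  # pixels -> 32-d latent
pred = nn.Linear(32, 32)                                   # latent -> predicted next latent

x_t = torch.rand(8, 1, 64, 64)     # current frames (batch of 8)
x_next = torch.rand(8, 1, 64, 64)  # next frames

z_t = enc(x_t)                     # abstract representation of x_t
with torch.no_grad():
    z_next = enc(x_next)           # target latent (stop-gradient, a common choice)

loss = F.mse_loss(pred(z_t), z_next)  # error measured in the 32-d latent space, not pixel space
loss.backward()
```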

29 Sep 2025 · 5 min · tjards

Multi-agent Coordination Simulator

A fully open-architecture implementation of modern multi-agent coordination techniques. The project description and code are available on GitHub.

4 May 2025 · 1 min · tjards

New publication in Automatica!

Emergent homeomorphic curves in swarms. This work introduces the concept of geometric embeddings, which permit the application of linear control policies to produce globally stable emergent curves in swarms of unmanned aerial vehicles. The vehicles make decisions based only on local observations, without knowing their role in the larger group. Below is an animation of the technique being used to produce a lemniscatic arc; a toy sketch of the general idea follows. Article available here. Code available here. ...
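The sketch below is not the paper's algorithm, only a toy illustration of the idea under stated assumptions: a linear, local rule (classic cyclic pursuit, with the offset angle that makes the circular mode marginally stable) is run in embedding coordinates, where it converges to an evenly spaced rotating circle; a fixed nonlinear map, standing in for the geometric embedding, then sends that circle to a lemniscatic arc in physical coordinates. The gains, the pursuit rule, and the Gerono map are all illustrative choices.

```python
import numpy as np

# Toy sketch (assumptions throughout; not the Automatica paper's method):
# linear cyclic pursuit in embedding coordinates converges to a rotating
# circle; a fixed map then carries that circle to a lemniscatic arc.

N = 12                              # number of vehicles
theta = -np.pi / N                  # offset making the circular mode marginally stable
gain = np.exp(1j * theta)

rng = np.random.default_rng(0)
z = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # embedding states (complex plane)

dt = 0.01
for _ in range(20000):
    # each agent observes only its ring neighbor: dz_i = e^{i*theta} (z_{i+1} - z_i)
    z = z + dt * gain * (np.roll(z, -1) - z)

zc = z - z.mean()                   # recentre on the (invariant) centroid
zc /= np.abs(zc).max()              # normalize radius for the illustrative map below
p = np.stack([zc.real, zc.real * zc.imag], axis=1)  # circle -> Gerono lemniscate (x, x*y)
print(p)                            # physical positions along a lemniscatic arc
```

The point of the toy is the division of labor: the control law each agent runs is linear and purely local, while the curve's shape comes entirely from the embedding map applied afterward.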

28 Feb 2025 · 1 min · tjards