There's something haunting about Ilya Sutskever's recent commencement speech at the University of Toronto. Not what he said, but what he couldn't say. And when you connect it to what we're seeing in the ARC AGI benchmarks, a disturbing picture emerges.

Everyone has been asking the same question since Ilya left OpenAI: what did he see? His departure, his cryptic safety warnings, and now this speech are all pieces of the same puzzle. And that puzzle suggests we're much closer to AGI than anyone is publicly admitting.

Listen to how Ilya talks about AI. He doesn't speak like someone weighing future possibilities. He speaks like someone who has already seen the future: "The day will come when AI will do all of our things, all the things that we can do, not just some of them, but all of them. Anything which I can learn, anything which any one of you can learn, the AI could do as well." That's not speculation. That's a warning from someone who knows something we don't.

Now look at the ARC AGI benchmark numbers. In December 2024, OpenAI demonstrated o3-preview scoring above 90% on ARC AGI in its most expensive, high-compute configuration. The benchmark, created by François Chollet, was specifically designed to test genuine reasoning on novel puzzles, not pattern matching against memorized data. A score above 90% represents a massive leap toward general intelligence.
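To see why the benchmark is hard to game, it helps to look at what a task actually is: a handful of input/output grid pairs from which a hidden transformation rule must be inferred, distributed as JSON in the public ARC repository. Below is a minimal sketch in Python. The task itself is a made-up toy (real ARC tasks are far subtler), and `my_solver` is a hypothetical stand-in for whatever model is being evaluated.

```python
import json

# A toy ARC-style task (hypothetical example, in the same JSON shape as
# the public ARC tasks): a few demonstration pairs, then a held-out test
# input that the solver must transform correctly.
task = json.loads("""
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}
""")

def my_solver(train_pairs, test_input):
    # Hypothetical stand-in for a model. For this toy task the hidden
    # rule is "swap the two columns", which the demonstrations imply.
    return [list(reversed(row)) for row in test_input]

# Scoring is exact-match on the whole output grid: there is no partial
# credit, which is part of why ARC resists pattern-matching shortcuts.
correct = sum(
    my_solver(task["train"], t["input"]) == t["output"]
    for t in task["test"]
)
print(f"{correct}/{len(task['test'])} test grids solved")
```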

But here's the kicker. When OpenAI released o3 and o3-pro to the public, performance dropped dramatically. o3-pro scores around 60%, o3 even lower. Why would you release a weaker version of a model you've already built?

The official explanation is cost. But I think there's another explanation. What if the labs already have AGI-level models? What if o3-preview represents capabilities too dangerous to release publicly? What if we're getting deliberately constrained versions while the real breakthroughs remain locked away?

This explains Ilya's urgency. He talks about "the greatest challenge of humanity ever" and emphasizes the need to "pay attention" and "generate the energy required to overcome the huge challenge that AI will pose." He's not talking about gradual improvement. He's talking about an imminent transformation.

Put yourself in his position. You've seen capabilities that would fundamentally change everything, but you're bound by NDAs and safety concerns. You can't tell people directly what's coming, but you can try to prepare them.

The ARC AGI scores support this theory. The gap between o3-preview and released models isn't just about cost. It's about the difference between what's possible and what's safe to release. The labs are sitting on models that approach AGI, but they're not ready for the societal implications.

Ilya's speech becomes chilling when you realize he's probably talking about models that already exist. When he says "slowly but surely, or maybe not so slowly, AI will keep getting better," he knows exactly how not slowly we're talking about. When he warns about "very profound issues about making sure that they say what they say, and not pretend to be something else," he's describing current internal challenges, not future concerns.

The timeline makes sense. Ilya mentions "three years, five years, ten years" for when AI will do everything humans can do. But what if that's not a prediction? What if that's how long it takes to make already-existing capabilities safe and efficient enough for public deployment?

The race isn't about building AGI anymore. It's about building AGI that can be safely deployed at scale. The technical breakthrough has happened. We're waiting for the engineering breakthrough that makes it accessible.

Look at the cost trajectory. o3-preview is extraordinarily expensive to run, reportedly on the order of thousands of dollars per task in its high-compute configuration. But AI inference costs drop exponentially over time. What we might be seeing is deliberate throttling: releasing weaker models while the stronger ones are made economically viable.
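A back-of-the-envelope projection makes the timing argument concrete. Every number below is an assumption for illustration, not a reported figure: suppose the high-compute preview costs $3,000 per task and inference gets roughly 10x cheaper per year.

```python
import math

# Toy cost projection: every figure here is an assumption for
# illustration, not a number reported by OpenAI or ARC Prize.
cost_per_task = 3_000.0   # assumed $/task for the high-compute preview
target_cost = 1.0         # assumed $/task for mass-market viability
annual_decline = 10.0     # assumed cost reduction factor per year

# Solve cost_per_task / annual_decline**t = target_cost for t.
years = math.log(cost_per_task / target_cost, annual_decline)
print(f"~{years:.1f} years to reach ${target_cost:.2f}/task")  # ~3.5 years
```

Under those toy assumptions, the answer lands around three and a half years, squarely inside the "three years, five years" window Ilya cites. If the capability already exists, the economics alone could explain the timeline.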

Ilya's warning takes on new urgency. He's not telling graduates to prepare for gradual AI evolution. He's telling them to prepare for a discontinuous leap that's already happened internally but hasn't been released externally.

The question isn't whether we'll reach AGI. The question is how much time we have before the models currently locked in the labs become cheap enough and safe enough to release. Based on Ilya's tone, that window is much shorter than people realize.

This is why he emphasizes that "whether you like it or not, your life's going to be affected by AI to a great extent." This isn't about future possibility. This is about present reality that hasn't been fully disclosed.

The ARC AGI benchmark was supposed to be a canary in the coal mine for AGI. The canary stopped singing in December. What we're seeing in public releases isn't the cutting edge of AI capability. It's a carefully managed rollout of capabilities the labs have already surpassed internally.

Ilya's speech isn't a prediction. It's a warning from someone who's seen behind the curtain and is trying to prepare us for what's coming. The question is: are we listening?