Eric Schmidt: Can any of you define text to action? (coughing) (papers rustling) Taking text and turning it into an action. Right, right here. Go ahead.

Right. So another definition would be language to Python, a programming language I never wanted to see survive.

And everything in AI is being done in Python. There's a new language called Mojo that has just come out, which looks like it finally addresses AI programming, but we'll see if it actually survives the dominance of Python. One more technical question: why is Nvidia worth $2 trillion while the other companies are struggling? Technical answer.

I mean, I think it just boils down to the fact that most code needs to run with CUDA optimizations, which currently only Nvidia GPUs support. So other companies can make whatever they want, but unless they have those ten years of software behind them, they don't have the machine learning optimizations -

I like to think of CUDA as the C programming language for GPUs. All right? That's the way I like to think of it. It was founded in 2008. I always thought it was a terrible language, and yet it's become dominant. There's another insight. There's a set of open source libraries which are highly optimized to CUDA and not anything else, and everybody who builds all these stacks relies on them. This is completely missed in any of the discussions. It's technically called vLLM, along with a whole bunch of libraries like that, highly optimized to CUDA and very hard to replicate if you're a competitor.

So what does all this mean? In the next year, you're gonna see very large context windows, agents, and text to action. When they are delivered at scale, it's gonna have an impact on the world at a scale that no one understands yet. Much bigger than the horrific impact we've had by social media, right, in my view. So here's why. A context window can basically be used as short-term memory, and I was shocked that context windows could get this long. The technical reasons have to do with the fact that it's hard to serve, hard to calculate and so forth. The interesting thing about short-term memory is what happens when you ask it a question like, "Read 20 books." You give it the text of the books as the query, and you say, "Tell me what they say," and it forgets the middle, which is exactly how human brains work too.
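The "read 20 books" pattern above can be sketched as assembling many documents plus a question into one long prompt. This is a minimal illustration, not any particular API: `build_long_context_prompt` is a hypothetical helper, and the whitespace split is only a crude stand-in for a real tokenizer.

```python
# Hypothetical sketch: stuffing many documents plus a question into a
# single long-context prompt, the "read 20 books" pattern described above.

def build_long_context_prompt(documents, question, max_tokens=1_000_000):
    """Concatenate documents into one prompt, trimming if over budget."""
    parts = []
    used = 0
    for i, doc in enumerate(documents):
        tokens = len(doc.split())  # crude proxy for real tokenizer output
        if used + tokens > max_tokens:
            break  # out of context budget; remaining documents are dropped
        parts.append(f"[Document {i + 1}]\n{doc}")
        used += tokens
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

books = [f"Text of book {n + 1}." for n in range(20)]
prompt = build_long_context_prompt(books, "Tell me what they say.")
```

The "lost in the middle" effect Schmidt mentions is exactly about prompts built this way: models tend to recall material near the start and end of such a prompt better than material buried in the middle documents.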

Right? That's where we are. With respect to agents, there are people who are now building essentially LLM agents, and the way they do it is they read something, like chemistry, they discover the principles of chemistry, and then they test it, and then they add that back into their understanding, right? That's extremely powerful. And then the third thing, as I mentioned, is text to action. So I'll give you an example. The government is in the process of trying to ban TikTok. We'll see if that actually happens. If TikTok is banned, here's what I propose each and every one of you do. Say to your LLM the following, "Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it, and in one hour, if it's not viral, do something different along the same lines."
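The agent loop described above (read, form a principle, test it, fold the result back into understanding) can be sketched as a toy cycle. Both `propose_principle` and `run_experiment` are hypothetical stand-ins for an LLM call and a real experiment; the "principle" here is deliberately trivial.

```python
# Toy sketch of the agent loop: observe, hypothesize, test, and add
# verified knowledge back into the agent's understanding.

def propose_principle(observations):
    # Stand-in for an LLM: hypothesize that outputs are double the inputs.
    return lambda x: 2 * x

def run_experiment(principle, trials):
    # Test the hypothesis against ground truth (here, doubling).
    return all(principle(x) == 2 * x for x in trials)

def agent_loop(observations, knowledge):
    principle = propose_principle(observations)
    if run_experiment(principle, trials=[1, 2, 3]):
        knowledge.append(principle)  # verified: fold back into understanding
    return knowledge

knowledge = agent_loop(observations=[(1, 2), (2, 4)], knowledge=[])
```

The point of the structure, not the toy arithmetic, is that each pass through the loop grows the agent's store of tested principles rather than just its raw text.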

That's the command. Boom, boom, boom, boom. Right? You understand how powerful that is. If you can go from arbitrary language to arbitrary digital command, which is essentially what Python represents in this scenario, imagine that each and every human on the planet has their own programmer that actually does what they want, as opposed to the programmers that work for me, who don't do what I ask them.

Right? The programmers here know what I'm talking about. So imagine a non-arrogant programmer that actually does what you want, and you don't have to pay all that money either, and there's an infinite supply of these programmers.

Text-to-action in the context of Large Language Models refers to the capability of these AI systems to interpret natural language instructions and generate actionable outputs or perform specific tasks based on those instructions.

In more simple English, it means that you can TELL THE THING TO DO THE THING.
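Stripped to its skeleton, text-to-action means an instruction in natural language becomes generated code, which is then executed. The sketch below assumes a hypothetical `call_llm` function standing in for a real model; any production system would sandbox the generated code rather than run it directly.

```python
# Minimal text-to-action sketch: natural language in, generated Python
# out, result executed. `call_llm` is a hypothetical stand-in for a model.

def call_llm(instruction):
    # Pretend the model translated the instruction into Python source.
    return "result = sum(range(1, 11))"

def text_to_action(instruction):
    code = call_llm(instruction)
    namespace = {}
    # Never exec untrusted model output outside a sandbox in real systems.
    exec(code, namespace)
    return namespace["result"]

answer = text_to_action("Add the numbers from 1 to 10.")  # answer == 55
```

The interesting part is the middle step: the model's output is not an answer but a program, which is what makes the "programmer in your pocket" framing apt.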

Let's explore what this really means for the future of technology and business.

The real alpha is in the feedback loop between user and model: you ask the LLM to do something, it tries, you provide feedback, and it improves. This continuous cycle transforms these systems from mere "coding assistants" into full-fledged programmers in your pocket.
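That ask-try-review-improve cycle can be sketched as a simple loop. Both `draft` and `review` are hypothetical stand-ins, for a model call and for human feedback respectively, and the "improvement" is a deliberately crude shortening rule.

```python
# Sketch of the feedback loop: a stand-in "LLM" drafts, a stand-in
# reviewer critiques, and feedback shapes the next attempt.

def draft(attempt, feedback):
    # Stand-in LLM: start verbose, shorten when told to.
    text = "a very long and rambling function body"
    return text if feedback is None else text[: len(text) // (attempt + 1)]

def review(text):
    # Stand-in human feedback: accept anything under 20 characters.
    return None if len(text) < 20 else "too long, shorten it"

def feedback_loop(max_rounds=5):
    text = draft(0, None)
    for attempt in range(1, max_rounds + 1):
        feedback = review(text)
        if feedback is None:
            return text, attempt  # accepted on this round
        text = draft(attempt, feedback)
    return text, max_rounds

improved, rounds = feedback_loop()
```

Swap the stand-ins for a real model call and a real reviewer and the shape is the same: each round's output is conditioned on the previous round's critique, which is what separates an iterating programmer from a one-shot autocomplete.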

We're currently in a period of maximum growth and maximum gain. Why? Because computing power is a limited resource. The true market leaders will acquire a significant share of that computing power, leaving competitors not just behind, but effectively starved of the resources they need to compete.

Think about the implications. If LLMs can copy and generate code, then they can create entire applications just because we asked them to. So how many software businesses are truly "safe" in this new landscape?

For me, the answer comes back to cloud infrastructure software. Just having a great idea and the code to implement it won't be enough to survive. The winners of tomorrow will also control the hardware to keep these complex systems running.

We're witnessing the beginning of a major shift. The low end of complexity in software will vanish, replaced by sophisticated applications that can only operate on reserved, specialized hardware. The democratization of software creation through natural language will eliminate entire categories of traditional development work while creating entirely new opportunities for those who control the infrastructure.

This isn't just a technological evolution; it's a fundamental restructuring of the digital economy. The question isn't whether your business can build software; it's whether you have access to the computational resources needed to run it at scale and compete in a marketplace where ideas can be turned into functioning applications through conversation.

The future belongs to those who control not just the code, but the silicon that runs it. And in this new world, the gap between leaders and laggards will grow wider than ever before.