LLMs may have kicked off this AI boom, but the ceiling is closer than the hype suggests. As models run out of text data to train on, the companies and investors paying attention are already moving on. The next wave isn't better chatbots; it's machines that can understand the physical world. Luma AI, the Bay Area lab that raised over $1.4 billion from a16z, Nvidia, and Amazon, is betting on exactly that.
On this episode of TechCrunch's Equity podcast, we're bringing you a conversation in which Rebecca Bellan sat down with Amit Jain, co-founder and CEO of Luma AI, at Web Summit Qatar. Together, the pair dug into where the next trillion-dollar AI opportunity actually gets built, and whether the companies chasing it even know what they're building yet.
Listen to the full episode to hear about:
Why video, audio, and images are the real frontier for AI training data, not text
What an "intelligent world model" actually is, and why Jain thinks most companies building them are getting it completely wrong
The case for why AI won't kill creative jobs, and why Jain thinks studio heads are the real problem
How the path from video generation to robotics to AGI is simpler than anyone's making it sound
Subscribe to Equity on YouTube, Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.
Chapters:
00:00 Intro
01:13 Why LLMs are hitting a ceiling
02:43 The data problem & what comes after LLMs
04:30 What actually makes a world model a world model
06:05 Why 3D data is a dead end
07:39 What Luma is building next
09:08 How much humans stay in the loop
10:00 Near-term use cases for agentic video
11:22 Will AI kill jobs in film & production?
13:30 Why the entertainment industry is already dying
15:27 Why we actually need more content, not less
17:46 Luma's roadmap: generation, understanding, and robotics
19:54 Outro
Learn more about your ad choices. Visit megaphone.fm/adchoices