State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490
If you’ve been trying to keep up with AI lately and feeling that familiar mix of excitement and quiet confusion, you’re not alone. That’s exactly the space Lex Fridman’s latest conversation steps into. In State of AI in 2026, Lex sits down with machine learning researchers Nathan Lambert and Sebastian Raschka to unpack where things really stand right now, and where they might be heading next.
This is very much their analysis, not speculation for clicks. They walk through the big questions many of us keep circling back to. Which models are actually pulling ahead: ChatGPT, Claude, Gemini, or Grok? Is open source still competitive? And maybe the one you’ve thought about at least once, usually late at night: are scaling laws finally slowing down?
What makes the discussion land is the balance. They zoom out to geopolitics, like the ongoing US versus China AI race, then zoom way in on the nuts and bolts of training models. Pre-training, mid-training, post-training. The less glamorous stuff that quietly decides whether a model feels helpful or hollow. There’s also a candid look at AI work culture: long hours, intense pressure, and the strange Silicon Valley bubble that shapes so much of this progress (you can almost feel the burnout between the lines).
They also explore where the boundaries are stretching next. Agents that can actually use tools. Longer context windows that don’t forget what you said five minutes ago. Robotics slowly moving from demos to real-world use. And of course, the ongoing question of AGI. Not as a hype slogan, but as an unresolved, messy ambition.
If you’re a developer, a researcher, or just someone trying to understand how AI might reshape your work and daily life, this conversation offers grounding. It doesn’t promise certainty. It offers perspective.
You can watch the full discussion here:
https://youtu.be/EV7WhVT270Q?si=LS55MsfIZWosUffm
The future of AI isn’t arriving all at once. It’s unfolding in layers. Conversations like this help us notice which ones actually matter.