Our Head of Europe and Asia Technology Research Shawn Kim discusses AI’s move from passive chatbots to active agents—and how this influences tech supply chains.
Read more insights from Morgan Stanley.
----- Transcript -----
Welcome to Thoughts on the Market. I’m Shawn Kim, Head of Morgan Stanley’s Europe and Asia Technology Team.
Today: A foundational shift in the development of AI and its broad market implications.
It’s Tuesday, May 5th, at 3pm in London.
Think about the last time you asked a chatbot to write a summary or a draft. Or maybe answer a query. It was probably useful. But you were also still driving the interaction: asking, refining, copying, checking, and moving the work forward.
Now imagine a system that does not just respond, but acts. It remembers what you asked last week, understands your preferences, works across digital tools, plans a workflow, and adapts as circumstances change.
That is the shift from GenAI to agentic AI: from AI that helps with thinking to AI that helps with doing. GenAI is mostly passive. It takes a prompt and produces an answer. Agentic AI is active – less a copilot for one task and more an autopilot for multi-step workflows.
The distinction is key because computing requirements are changing. In GenAI, large language models and GPUs handle much of the thinking. GPUs, or graphics processing units, process many calculations in parallel, making them central to modern AI models. In agentic AI, the CPU becomes more important. CPUs, or central processing units, coordinate tasks and connect systems to the broader digital infrastructure.
Agentic AI also depends on three stacks: the brain, or the large language model; orchestration, where the CPU manages the doing; and knowledge, which is memory.
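Those three layers can be sketched in a few lines of code. This is a toy illustration only; every name here is hypothetical, and no real vendor API is assumed.

```python
# Toy sketch of the three agentic layers: brain (LLM), orchestration, knowledge (memory).
# All names are illustrative stand-ins, not a real system.

def llm_brain(prompt: str) -> str:
    """Stand-in for a large language model call: the 'brain' layer."""
    return f"plan for: {prompt}"

class Agent:
    def __init__(self):
        self.memory = []  # knowledge layer: persists across tasks

    def run(self, task: str) -> str:
        # Orchestration layer: recall context, think, act, remember.
        context = " | ".join(self.memory[-3:])    # recall recent task history
        plan = llm_brain(f"{context} :: {task}")  # brain produces a plan
        result = plan.upper()                     # stand-in for executing tools
        self.memory.append(task)                  # knowledge layer updated
        return result

agent = Agent()
print(agent.run("summarize the report"))
```

In this sketch the orchestration logic is ordinary CPU-bound control flow, which is the point of the passage above: only the `llm_brain` call would run on GPUs.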
Memory may be the most important layer. An agent that knows your preferences, documents, tone, and task history becomes more useful over time. That creates a context flywheel. The more context it collects, the more personalized it becomes, and the harder it is to leave.
In computing, we typically think of memory mainly as storage. We need to rethink this. Memory is also continuity. When an AI system can use past experiences, memory becomes long-term state, shared knowledge, and behavioral grounding.
And that matters because LLMs have fixed context windows. Once a conversation exceeds that window, older content falls off. For simple questions, that may be fine. But for a coding agent working across a large codebase over days or weeks, it is a major limitation. Serious work requires persistent memory, short-term orientation, and active retrieval – remembering prior decisions, understanding changed files, and finding relevant code without the user pointing to every dependency.
For investors, the implication is clear – agentic AI changes the bottlenecks. We see CPUs as the new bottleneck, with memory seeing the highest content increase. We estimate as much as 60 percent, or $60 billion, of incremental CPU total addressable market by 2030, within a total CPU market of more than $100 billion. We also estimate up to 70 percent of incremental DRAM bit shipments tied to this theme.
That makes us more positive on supply chains including memory, foundry, substrates, CPU and memory interface, and capacitors and CPU sockets. These areas benefit from content growth, pricing power, and capacity constraints into 2027.
As AI moves from answering questions to taking actions, investors should watch the infrastructure behind the shift. Because in the agentic era, the next big AI leap may be less about the prompt and more about the processor.
Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.