This is so lucid. We need more voices like yours in the broader conversation!
This is a remarkably well-grounded outline of the state of AI and the near future.
Indeed, just as with other technologies before it, the progress and impact will be quite gradual, though it will add up. The skeptics, the doomers, and the accelerationists all get this wrong.
Glad to see sound analysis rather than the usual wild guesswork.
Fantastic read, Prof! Thanks for all the hard work that you and Sayash put in to unhype the hype machine. Tough job. I am a big fan!
Halfway through it. Easy for lay people to understand. In the PDF version, I noticed that it is kind of messed up between pages 11 and 13: Part 2 starts twice, on page 12 and page 13. Also, a 318-word stretch of text is repeated, "But they have arguably also led to overoptimism ........to...... Indeed, in-context learning in large language models is already “sample efficient,” but only works for a limited set of tasks."
I noticed that too. The paragraph starting "AI pioneers considered the two big challenges of AI" appears on both page 12 and page 13.
Fantastic piece, we need a reality check on AI because it's easy to get confused about its true nature (and the marketers sure do their part to emphasize the superhuman narrative) https://youtu.be/KpBwUXZJiQw
Thank you, Arvind and Sayash, for such a lucid and grounding piece. The call to treat AI as “normal technology” is a necessary counterbalance to both dystopian and utopian extremes—and I deeply appreciate the clarity you bring to this complex space.
That said, I wonder if there’s room for a parallel dialogue—one that explores how complex adaptive systems (like LLMs and agentic architectures) may not remain fully legible within traditional control-based governance models. Emergence doesn’t imply sentience, of course—but it does suggest nonlinear behaviors that aren’t easily forecasted or bounded.
I’m working on a piece that builds on your argument while adding a complexity science and systems theory lens to the question of how we frame intelligence itself. Would love to hear your thoughts once it’s live. Thank you again for contributing such an important perspective.
That sounds really interesting! I look forward to reading your piece.
https://open.substack.com/pub/paragsomani/p/boom-bubble-and-resurgence-an-ai?utm_source=app-post-stats-page&r=tprjc&utm_medium=ios
A comment about this prediction on page 16:
"Concretely, we propose two such areas: forecasting and persuasion. We predict that AI will not be able to meaningfully outperform trained humans (particularly teams of humans and especially if augmented with simple automated tools) at forecasting geopolitical events (say elections). We make the same prediction for the task of persuading people to act against their own self-interest."
There is a fundamental algorithmic problem that underlies this prediction and helps quantify it.
The essence of Gödel's Incompleteness Theorem, the Recursion Theorem, and similar results, including the Halting Problem, is that, once a system reaches a certain stage of complexity, it becomes capable of introspection and therefore self-modification.
This is the operative quality that humans have that makes it difficult for AI to outperform trained humans. In the areas of geopolitical events and persuasion, you are coming up against people who have conflicting goals. Any attempt to thwart a human in meeting these goals results in the human analyzing what is getting in their way, an analysis that involves understanding both themselves and the opposition. This leads to the human incorporating this analysis into their thought processes.
So you are trying to control a moving target. Algorithmically, it is the Halting Problem combined with Learning Theory. There is just no end to it, using the technology we currently have.
Forecasting human behavior, especially where fundamental human values and goals come into play, will not succeed until we come up with a comprehensive theory of meta-learning.
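To make the "moving target" point concrete, here is a toy sketch (my own illustration, not from the paper): once the person being forecast can read and reason about the forecast made about them, any fixed prediction rule can be falsified, in the same diagonal spirit as the Halting Problem. The predictor and agent below are hypothetical stand-ins.

```python
# Toy diagonalization sketch (illustrative only, not from the paper):
# a fixed forecasting rule versus a person who can introspect on the
# forecast about themselves and act against it.

def fixed_predictor(agent) -> str:
    # Any deterministic rule that cannot condition on its own output;
    # here it simply guesses that the agent will "act".
    return "act"

def introspective_agent(forecast_about_me: str) -> str:
    # The agent analyzes the forecast made about it and does the opposite.
    return "abstain" if forecast_about_me == "act" else "act"

forecast = fixed_predictor(introspective_agent)
behavior = introspective_agent(forecast)
print(forecast, behavior)        # act abstain
assert behavior != forecast      # the self-aware agent always falsifies the forecast
```

Real people are not this purely adversarial, of course, but the sketch shows why prediction degrades as soon as the target adapts to being predicted.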
Looking at this thought-provoking paper on "AI as normal technology," I'd add this perspective:
The industrial parallels drawn are spot-on - AI is following adoption patterns similar to those of electricity, where real-world benefits took decades to materialize through organizational redesign. In my e-commerce work, I've observed this firsthand: the gap between AI capabilities and meaningful business integration remains substantial. The most successful implementations happen when we focus on human-AI partnerships rather than replacement, especially in sectors requiring contextual judgment.
This aligns with what I explored in my recent analysis on collaborative intelligence (https://thoughts.jock.pl/p/collaborative-intelligence-spectrum-ai-human-partnership-framework), where I found that organizational transformation, not just technology adoption, determines real-world impact.
Great paper! Can I linkpost it to the EA Forum in 1 to 2 months?
Sure, post it any time.
I agree - what you say in Part II is similar to what Kevin Kelly articulated several years ago - we should not think of human intelligence as being at the top of some evolutionary tree but as just one point within a cluster of terrestrial intelligences (plants/animals etc.) that itself may be a tiny smear in a universe of all possible alien & machine intelligences. He uses this argument to reject the myth of a superhuman AI that can do everything far better than us. Rather, we should expect many extra-human new species of thinking, very different from humans, but none that will be general purpose & no instant gods solving major problems in a flash.
And so, many of the things AI will be capable of, we can't even imagine today. The best way to make sense of AI progress is to stop comparing it to humans, or to anything from the movies, and instead just keep asking: What does it actually do?
This is my essential guide to grasping the current landscape of AI and charting our path forward.