45 Comments
Jun 27 · Liked by Sayash Kapoor, Arvind Narayanan

"Historically, standing on each step of the ladder, the AI research community has been terrible at predicting how much farther you can go with the current paradigm, what the next step will be, when it will arrive, what new applications it will enable, and what the implications for safety are. That is a trend we think will continue."

Your best zinger this year :)


Except it's gone both ways. Experts underpredicted the jump in LLM capabilities. Just saying, two-way error bars.


In fact for the past 10 years, ever since the deep learning revolution, it's mostly gone in the 'underestimate progress' direction.

Jun 27 · Liked by Sayash Kapoor, Arvind Narayanan

One thing that bothers me is that many in the LLM promotion/biz are now also conflating the scaling of LLMs with the *way that humans learn*, e.g. this from Dario Amodei recently: https://youtu.be/xm6jNMSFT7g?t=750

It's as if not only do they believe scaling leads to AGI (not true, as per your article), they are now *selling* it to the public as the way that people learn, only better/faster/more expensive.


LLMs learn in a way analogous to humans.

Biological neurons essentially perform summation and thresholding operations on a very large scale: they aggregate incoming signals, activate if the combined signal exceeds a certain threshold, and propagate an action potential down the axon to the axon terminal. Artificial neurons are modelled after this process: they aggregate inputs, apply weights, and pass the result through an activation function, mimicking the summation and thresholding performed by biological neurons. Biological neurons have a complexity that has not been completely replicated, but the essential processes are there in some form.
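
To make that concrete, here is a minimal sketch of the artificial version just described (the numbers are made up, and a sigmoid stands in for the threshold):

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    # Aggregate incoming signals (weighted sum), then "fire" via a sigmoid,
    # a smooth stand-in for the threshold a biological neuron applies.
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))

# Made-up values, just to show the mechanics
x = np.array([0.5, -1.2, 0.8])   # incoming signals
w = np.array([0.9, 0.3, -0.5])   # connection strengths ("synaptic weights")
print(artificial_neuron(x, w, bias=0.1))
```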

The output is sent to the next layer of neurons in the network, which is similar to how an action potential propagates to other neurons. Then there is the actual learning. Biological learning more or less involves synaptic plasticity, where the strength of connections between neurons changes based on experience and activity. With ANNs we have backpropagation, which is analogous in some ways and not in others. In BNNs synaptic plasticity works at the local level, while backpropagation usually works globally, which is probably inefficient; but both essentially involve adjusting neural connections based on experience, even if through distinct mechanisms.
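
And a toy version of the "adjusting connections based on experience" part: a single neuron nudging its weights by gradient descent (a crude stand-in for full backpropagation through many layers; all values are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 0.8])   # made-up input
w = np.array([0.1, 0.1, 0.1])    # initial connection strengths
target, lr = 0.9, 0.5            # desired output, learning rate

for _ in range(200):
    y = sigmoid(np.dot(w, x))              # forward pass
    grad = (y - target) * y * (1 - y) * x  # gradient of squared error
    w -= lr * grad                         # strengthen/weaken connections

print(y)  # ends up close to the 0.9 target
```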


Setting aside that I think your neurology-as-neural-net model is flawed, it misses my point: there is little evidence that NN learning or other higher functions like memory, reasoning, generalization, categorization, and judgment are in any way human-like. There can be many different paths to 'intelligent behavior'. Just because NNs were inspired by neurology doesn't mean that's how the brain actually works.


Sam Altman is the very definition of a snake oil salesman. The term AGI is going to sound pretty laughable in a few years.

Venture capital is betting the farm on generative AI, and the losses are going to be staggering.


AGI will sound perfectly sensible. The problem is that most people are identifying AGI with advanced LLMs. I am one of the few (apparently) who strongly doubt that LLMs will ever achieve AGI (on a reasonable definition) but believe that other approaches will. When? No idea. I don't think there is any reliable way of predicting that breakthrough.


Very good point. I have watched many bubble cycles come and go in my career, across all areas, and there is a pattern to them. The way Sam Altman talks is typical. In this case, the unintended consequences are much higher. The overspending is deliberate: it pushes everyone else out, with these scaling arguments as the excuse, and regulations will probably come to reflect it, since these players will guide them in that direction.

The more people that leave this space, the less creativity there is. We see this in many segments of the market today. Bigger companies in the medical technology space, for example, always knew they needed the creativity of the smaller players to keep innovation going. They still know this, but the gap between big and small is now so large that this dynamic is starting to break down. The "market" is up so much, yet the set of contributors gets smaller and smaller. It is very hard to get these smaller companies funded today, but they are critical to the health of the whole system. What will we say a few years from now? No more recessions, but we only have 20 companies in the index and everyone else is on a government subsidy. That is an exaggeration, of course.

The point is that Sam Altman's way of looking at the world invites the opposite perspective: how do we keep the creative flow going? It is not big vs. small, not centralized vs. decentralized, not open vs. closed. It is about "and": replace the "or" with "and" and you move from linear to nonlinear just by changing one word. That word is not in the vocabulary of the Sam Altmans of the world. But is that his fault, or the fault of the system we have come to rely on? We have a global "or" economic system; he is just playing that game well. AGI capabilities require "and" thinking and innovation, and that way of defining AI would be much healthier for the world by definition.


Sam Altman invested $375m in nuclear fusion: he's another Elon Musk (Musk promised a city on Mars)


Thank you; I am fully aware of what Sam Altman is doing and trying to do.


But why? Why does everyone care so much about carnival barkers? Is there a reason I should care about what Sam Altman is doing? I'm not interested in riding bubbles.

Jun 28 · Liked by Sayash Kapoor, Arvind Narayanan

This is your best article yet, and that’s saying something. I believe in a few years we’ll look back at the inability of LLMs to handle novelty as cringeworthy considering the current level of hype. LLMs are echo machines, repeating back to us remixes of data we fed into them.

I do believe AI systems will emerge that can effectively handle novelty, but they will be different from LLMs (though maybe LLMs could be a component of these systems). It will be interesting to see how this develops … any bets on what approach will make this breakthrough?


What do you mean, "inability to handle novelty"? I have not observed that at all. I can give it completely new information, such as an article too new to be in any training data, and it can extract what I want and explain what's happening and what the authors have done, with pretty good accuracy at times.


First of all, I'm not an expert, so take my reply with a grain of salt. For sure, current LLMs can extract summaries of new documents that follow existing language patterns; they're quite good and useful at that. What they aren't capable of doing in their current form is dealing with new information that challenges existing patterns. They can't make leaps of logic connecting seemingly disparate topics or disciplines. In short, they can't invent.

I’m not saying AI is incapable of novelty, I’m just observing that as they exist today, LLMs are extremely sophisticated automatons. They’re very useful, we just need to be cognizant of their limitations.


What a breath of fresh air! Stellar piece from beginning to end.


There are several things I appreciated in this issue, including the way it takes aspects of computer science and explains them clearly and crisply while still going in depth. But the insight I take away from a careful reading is the way you question laws and concepts that are usually taken for granted, for example when you ask, "what do you mean by better?" Not stopping at the data, and questioning paradigms in light of new data and possible advances, is absolutely worth taking home, and a very important principle in writing as well. Thanks for sharing.


CPU manufacturers did not "simply decide to stop competing" on clock speed. They ran into the limits of Dennard scaling. As transistors get smaller, they get leakier: they don't turn off as completely. Power consumption thus rises. Power also scales linearly with clock speed. In the early 2000s, it started to become difficult to increase clock speeds without making chips run unacceptably hot. Advances in transistor design have mitigated this problem sufficiently to allow transistors to continue to get smaller, but not enough to give us both smaller transistors and higher clock speeds.
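
For anyone who wants the arithmetic behind "power also scales linearly with clock speed," here is a rough sketch of the standard dynamic switching-power approximation, P ~ a * C * V^2 * f (the component values below are placeholders, not real chip numbers):

```python
def dynamic_power(activity, cap_farads, voltage, freq_hz):
    # Classic switching-power approximation: P ~ a * C * V^2 * f
    return activity * cap_farads * voltage**2 * freq_hz

base = dynamic_power(0.1, 1e-9, 1.2, 3e9)
print(dynamic_power(0.1, 1e-9, 1.2, 6e9) / base)  # double the clock -> 2x power
print(dynamic_power(0.1, 1e-9, 0.6, 3e9) / base)  # halve the voltage -> 0.25x power
```

The Dennard-era trick was lowering V alongside feature size; once leakage made further voltage reductions impractical, pushing f was the only lever left, and power went up with it.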


On emergence, I agree with your reasons for skepticism. I'd like to add two serious problems:

1. Independent researchers cannot query the training sets used for the latest LLMs. This renders all claims of emergence suspect, as we cannot assess "leakage". I would not be at all surprised if the "improved abilities" of newer models come from feeding them all that had been written about the limitations of their predecessors, either intentionally or unintentionally. I certainly wouldn't put it past any of the major players here to have done some targeted curating based on the kinds of metrics they knew would be used.

2. Tech companies and AGI "believers" (for lack of a better term) have a strong incentive to play up emergence as a mysterious phenomenon that no one can explain. This lets them credit the alleged mysteries to an ever-advancing form of intelligence. It's like how creationists play up alleged gaps in the evolutionary record and fill them in with divine intervention. Lacking any real scientific theory for how AGI is supposed to come about, particularly one for how machine learning could deliver it, they sell us on unexplained phenomena and let our imaginations do the rest. I don't mean to accuse them of bad faith (well, maybe OpenAI...) - this is part of a belief system, and I suppose they could turn this around and call skepticism of AGI a belief system that makes people like me unwilling to accept evidence for advancing intelligence. But knowing about the belief system does make it hard to take some kinds of claims at face value. People who see Jesus in their toast were looking for Him ahead of time, and people who see emergent phenomena arise mysteriously are the same ones anticipating the arrival of AGI.


I'm a fan, but I feel like this fell into a few of the same traps as the AGI Faithful, but from the other side:

1. It's easy to do this without definitions. You did not define AGI or comment on how much of the forecasting rests on a specific, vague definition. I think you all are the perfect people to debunk the "high schooler intelligence" claims.

2. I think you discounted synthetic data; there is work on rewriting pretraining data (https://arxiv.org/abs/2401.16380v1) and more work underway (hopefully I can share more soon).

3. I would like to see a closer explanation of what the scaling plots mean technically; I still hear them described incorrectly. The clock speed examples and the like are interesting, at least.

4. The industry ceasing to make models bigger is a short-term snapshot; long term it's not true if you look at capital expenditures.

May just be me narrowing my focus on what I should write about.

P.S. How do I preorder a signed copy of the book? Or I may just preorder and track you two down for a signature in person.


Self-play works for games because there's a black-and-white outcome metric by which the model can precisely measure which approach is better. With LLMs there's no such clear metric, and therefore synthetic data will be of limited use.
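
To illustrate with a toy example (everything here is made up): a trivial "game" that returns an unambiguous +1/-1 lets even crude hill-climbing improve a policy through self-play, which is exactly the kind of signal LLM-generated text lacks:

```python
import random

def play_game(policy_a, policy_b):
    # Made-up "game": each side samples a number, the higher number wins.
    # The point is only that the environment returns an unambiguous +1 / -1 / 0.
    a, b = policy_a(), policy_b()
    return 0 if a == b else (1 if a > b else -1)

def make_policy(mean):
    return lambda: random.gauss(mean, 1.0)

# Crude self-play hill-climbing: keep whichever variant wins the match-up.
champion = 0.0
for _ in range(50):
    challenger = champion + random.gauss(0, 0.5)
    score = sum(play_game(make_policy(challenger), make_policy(champion))
                for _ in range(200))
    if score > 0:
        champion = challenger

print(champion)  # drifts upward, because "better" is unambiguous here
```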


People trust AI a lot less since ChatGPT was announced 19 months ago; that alone tells you all you need to know. When Big Tech owns most of the revenue being generated by generative AI today, we have a societal and American problem. Generative AI is becoming an element of American protectionism and exceptionalism that is frankly dangerous to national security.


I appreciate the post, thanks to both. Could you provide a link to where a CEO has claimed that AGI would come in three years? I'm not aware of any such claim.

author

Altman, Amodei, and Musk have all made such claims; you can easily find them :) Another commenter posted a link to a recent Amodei interview for example. (Altman tends to be slightly more vague about his timeline but roughly in that ballpark.)

Jun 27 · edited Jun 27

In the interview you reference (https://youtu.be/xm6jNMSFT7g?t=750, for anyone else reading this), Amodei says he no longer thinks of AGI as a singular event in time.

That certainly supports your claim that CEOs are making more measured remarks about AGI than they were before, but that's not what I'm asking about.

They might be hyping the technology as if they thought AGI was coming very soon, but that's a different thing from a falsifiable prediction, which, as far as I can tell, none of the three has made.


Musk: AGI in 3 years: https://www.youtube.com/watch?v=pIJRFr26vXQ

But given Musk's track record (e.g. autonomous driving "in a few years" for 10 years now)... I think all of the AGI true believers would be both stupid and disingenuous if they were to predict *anything* about AI within 3 years (positive or negative), because it would be obvious that they were just stringing everyone along (i.e. hype).


Thanks Scott!


On 26th June 2024, Dario Amodei gave an interview to Norges Bank Investment Management in which he said that 2026 "will be crazy"; the video is on YouTube.


Here's an interview with Dario Amodei from last August (so, almost a year ago): https://www.dwarkeshpatel.com/p/dario-amodei. In it, he speculates that human-level AI “could happen in two or three years”.


I'm always skeptical of the phrase "..if trends continue."


I'm curious to get your thoughts...

If the challenge is AI-generated or AI-augmented output, whether a human intervened or contributed, and who that person is, that begs the most important question and problem...

Human agency. What is the strategy for validating it across the 'digital world' (all actors: Google, Amazon, OpenAI, Anthropic, et al.), and whose strategy is it?

Who has solved the validation of human agency?

What do you know...

Two-factor auth... not it.

I'm curious.


Interesting that you endorse Dwarkesh's post despite having seemingly pretty different personal takes (he being relatively bullish on scaling at the end of his post, you being more skeptical). Specifically, he reasons that more open-ended systems might be capable of self play so long as they are "better at evaluating reasoning than at doing it de novo." Can you explain if/why you disagree?


Near the beginning of your article you imply acceptance of the idea that emergent abilities are a (poorly understood) property of LLMs. I'm curious to get your take on this paper which proposes that most (if not all) of the so-called emergent abilities are a mirage created by poor choice of metrics: https://arxiv.org/abs/2304.15004
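
For what it's worth, here's a toy illustration (with made-up numbers) of that paper's core argument: if per-token accuracy improves smoothly with scale, an all-or-nothing metric like exact match over a long answer can still look like a sudden jump:

```python
import numpy as np

scale = np.linspace(0.0, 1.0, 11)        # stand-in for (log) model scale
per_token_acc = 0.5 + 0.45 * scale       # smooth improvement: 0.50 -> 0.95
exact_match = per_token_acc ** 32        # all 32 answer tokens must be right

for s, p, em in zip(scale, per_token_acc, exact_match):
    print(f"scale={s:.1f}  per-token acc={p:.2f}  exact match={em:.4f}")
# exact match sits near zero for most of the range, then "emerges" at the top,
# even though nothing discontinuous happened at the per-token level
```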
