23 Comments
Aug 19 · Liked by Sayash Kapoor, Arvind Narayanan

The title is splendid, and the article delivers, well done!

I've been saying more or less the same (not so well written) for a long time in my own "The Skeptic AI Enthusiast"; for instance, in the last issue I include the quote "The antidote to hype is to focus on concrete value." While many get excited about AGI, I prefer useful AI products that deliver value to the user, like Grammarly, Udio, Midjourney, and many others.

Aug 20 · Liked by Arvind Narayanan

Great article with many profound points, especially the question of whether deterministic systems are feasible given stochastic components like LLMs. At Microsoft, we've made encouraging progress with constrained decoding via our Guidance OSS library. Guidance masks out tokens that would not match a user's expectations, thereby leading to much better-aligned and more predictable generations. https://github.com/guidance-ai/guidance
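
For readers curious what that looks like in practice, here is a minimal sketch of the constrained-decoding pattern, roughly following the composition style shown in the Guidance README; the model name, prompt, options, and regex are illustrative placeholders rather than anything prescribed by the library or the article:

from guidance import models, select, gen

# Load a local Hugging Face model. At each decoding step, Guidance masks the
# token logits so the generation can only follow the constraints given below.
lm = models.Transformers("gpt2")

lm += "Customer request: cancel the order.\nCan the bot handle this? Answer: "
# The next tokens are restricted to exactly one of these options.
lm += select(["yes", "no", "escalate to a human"], name="decision")

lm += "\nConfidence (0-100): "
# Regex-constrained generation: only one to three digits are allowed here.
lm += gen(name="confidence", regex=r"\d{1,3}")

print(lm["decision"], lm["confidence"])

The point is that the constraint is enforced token by token at decode time rather than checked after the fact, which is what makes the outputs more predictable.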


“Better” is anything from a fraction of an inch to light years, and this word slop is really the heart of the article. Think of it as “happier”: being happier doesn’t mean you are happy, only that you are happier than you were.

Something is always hidden when something is revealed. Casting light on something directs attention away from something else. What’s missing, the negative space in art, is required for the image to exist. The unsaid is always greater than what is said.

“In a conversation with a thinker we must attend to four points, and more attentively to each point in the series, for the rigor of thinking lies in this attentiveness of listening, not in the effort (the forcing) of representational, conceptual grasping that wills to know. We must attend:

1) to what is said; for those today, this is difficult enough;

2) to what is not said;

3) to what is unthought, but should be thought;

4) to what cannot be said, because it should be kept silent.”

--Heidegger

Aug 26 · Liked by Arvind Narayanan

It’s good to see AI companies shifting focus to building real, practical products. The challenges outlined—like cost, reliability, and privacy—are spot on and need serious attention. This more grounded approach could be what’s needed to turn all that AI potential into something genuinely useful and sustainable. Great article! 👏

Aug 23 · Liked by Arvind Narayanan

"... when it comes to LLMs, misuse is easier than legitimate uses (which require thought), so it isn't a surprise that misuses have been widespread."

It made immediate sense as soon as I read it here, but it hadn't struck me earlier. Excellent point!


Hogging the comments . . . thinking about the trillions, or even billions, of dollars going into LLMs has often brought to mind that the gold standard that the LLMs are trying to match . . .

. . . is a few ounces of flesh, powered by muffins.

The sprawling zillion-dollar factories using gonzowatts to create and train the LLMs are faster at returning uncanny, extensive documents and images, no doubt, but the contrast stands, and will continue to stand through several orders of magnitude of improvement, seems to me . . . b.rad


Thanks . . . maybe you meant to write ‘ . . . the company's shift from creating gods to building products . . . ‘ but Freud might have slipped in there . . . and many others would dispute said shift . . . thanks, b.rad


PS: I see you meant it; it’s in the title . . . ok . . . the antecedents to said gods did not register with me . . . the principals of the startups and their backers, or the LLMs and the massive systems behind them?


AI pundits are pivoting from thinking about the future to not thinking about the future.


Having recently interacted with a customer service chatbot, I think the reliability problem is going to be even harder in this domain than it seems, because the reliability rate is naturally lower for the kinds of problems that require customer service.

You give the example of a chatbot that gets something right 90% of the time. The 90% of cases will look different from the 10% in that the 10% will be associated with tasks not well represented in the training data. The 90% will be tasks that are more common. Now, why would a customer choose to go to customer service? I won't speak for anyone else, but I don't like dealing with customer service and I only do it when there's a problem that I'm unable to solve on my own. In other words, a less common problem. The exact kind of problem that chatbots aren't good at solving.

In short, if the problem is easy, then 1) chatbots will be better at solving it, and 2) so will humans, so they don't need the chatbot. If the human can't solve it, it's probably a harder problem, and the chatbot will be less reliable.


This is where we are on the hype graph. Expected.


Great stuff; we are addressing all of that. Check us out.


An interesting article, thank you.


It appears you mean for the two words "similar points" to link to two different articles, but they link to a single article. Is there another similar article that you mean to share there?

Author

Fixed, thank you!


“Move fast & break things” is a stupendously stupid motto for an industry commanding trillions in capital.

This is a bubble waiting to burst, along with resumes topped by AI positions.

Even in its current form AI is debasing human life.

90% accuracy for booking a hotel room?


Really enjoyed this. Rather than all the hype and tech-bro boosterism that's out there, I appreciate your fresh, analytical take that's informative, educational, and thoughtful.


Great summary. Excellent work, all!


Thanks to the open-standard concepts of the big players, smaller players can now engage in the industry and use the technology to deliver higher-value products. If it were not for these major players, rapid development would not have been possible. On the flip side, the advancement of AI toward self-sustaining or autonomous models that can auto-manage, auto-correct, and auto-deliver solutions will create another layer of unease among the workforce.
