Discussion about this post

Emily M. Bender

Thank you for this! A lot of what you say really resonates for me and I especially appreciate the point about the "wishful mnemonic" (<= Drew McDermott's wonderful term) "predict".

Three quick bits of feedback:

For our paper "AI and the Everything in the Whole Wide World Benchmark" (Raji et al. 2021, NeurIPS Datasets and Benchmarks track), it would be better to point to the published version rather than the arXiv preprint you currently link. You can find it here: https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/084b6fbb10729ed4da8c3d3f5a3ae7c9-Abstract-round2.html

I think the way arXiv is used is also a factor in the culture of hype and supposedly "fast progress" in deep learning/AI, and it is always valuable to point to peer-reviewed venues for papers that have actually been peer reviewed.

Second, I object to the assertion that NLP is a "highly circumscribed domain" like chess or Go. There are tasks within NLP that are highly circumscribed, but that doesn't hold for the domain as a whole. I have no particular expertise in computer vision, but at the very least it also seems extremely ill-defined compared to chess and Go. If it's "highly circumscribed," it isn't in the same way the games are. You kind of get to this in the next paragraph (for both NLP and CV), but I think it would be better to avoid the assertion. These domains only look "highly circumscribed" if you view them without any domain expertise. (Though again, for CV, it's a little unclear what the domain of expertise even is...)

Finally, I'd like to respond to this: "Noted AI researcher Rich Sutton wrote an essay in which he forcefully argued that attempts to add domain knowledge to AI systems actually hold back progress."

That framing suggests that the progress we should care about is progress in AI. But that is entirely at odds with what you say above regarding domain experts:

"They are not seen as partners in designing the system. They are not seen as clients whom the AI system is meant to help by augmenting their abilities. They are seen as inefficiencies to be automated away, and standing in the way of progress."

I think it is a mistake to cede the framing to those who think the point of this all is to build AI, rather than to build tools that support human flourishing. (<= That lovely turn of phrase comes from Batya Friedman's keynote at NAACL 2022.)

Jimbo

Thanks for this. The hype and hubris coming from some corners of the community are harmful in a lot of ways. Why would a student work hard on their education if they've been convinced human labour is about to become obsolete? Look at the steady flow of questions on Quora from people worried that all sorts of careers are about to disappear. The popular narrative on AI has to change.
