Discussion about this post

Jan Matusiewicz:

I remember the moral panic about the dire consequences of misinformation produced using generative AI. Some used such predictions to justify calls for a pause in AI development, despite also claiming that generative AI was "hitting a wall". Back in March 2023 I argued against the validity of these concerns in https://medium.com/@jan.matusiewicz/misplaced-concerns-real-risks-of-chatbots-75dcc730057. Not all mainstream media joined the moral panic, though; The Economist was a notable exception (https://www.economist.com/leaders/2023/08/31/how-artificial-intelligence-will-affect-the-elections-of-2024, https://www.economist.com/united-states/2023/08/31/ai-will-change-american-elections-but-not-in-the-obvious-way).

Matt Laslo:

This is misleadingly simplistic (much like the amazingly fun, if academically useless, dataset provided by WIRED, where I'm a contributor). You built a 'definitive' 'conclusion' off a ridiculously small sub-sample of KNOWN generative AI-generated content.

Don't people make generative AI content that nobody knows is AI-generated?*

*pssst... Answer: Daily.

