I remember the moral panic about the dire consequences of misinformation produced with generative AI. Some used such predictions to justify calls for a pause in AI development, even while claiming that generative AI was "hitting the wall". Back in March 2023 I argued against the validity of these concerns in https://medium.com/@jan.matusiewicz/misplaced-concerns-real-risks-of-chatbots-75dcc730057 Not all mainstream media followed the moral panic, though; The Economist was a notable exception (https://www.economist.com/leaders/2023/08/31/how-artificial-intelligence-will-affect-the-elections-of-2024 , https://www.economist.com/united-states/2023/08/31/ai-will-change-american-elections-but-not-in-the-obvious-way)
This is misleadingly simplistic (just like the amazingly fun, if limited and academically useless, dataset provided by WIRED, where I'm a contributor...). You all built a 'definitive' 'conclusion' off a ridiculously small sub-sample of KNOWN generative-AI-generated content.
Don't people make generative AI content that nobody knows is AI-generated?*
*pssst... Answer: Daily.
Your post and the article made me realize just how much of this and the AI disinformation panic are really just parts of AI hype: attempts to make AI look far more powerful than it is. It's DEFINITELY letting us do something we weren't able to do before, so please ignore all evidence to the contrary and just assume there's a lot of AI-generated stuff out there so good that you aren't even noticing it's AI.
I mean, there were Republican politicians in red states begging their constituents to stop sharing AI-created images purporting to show the collapse of disaster relief efforts. Not even mentioning that issue in this essay seems like a huge miss.
**i may get canned for this post haha
make it worth it!
The takeaway from Kapoor and Narayanan’s analysis of 78 political deepfakes is clear: the threat of AI-generated election misinformation is overhyped. Their data show that cheap fakes (manipulated media that require no AI at all) are far more prevalent and equally effective. This is not a technology crisis. It is a trust crisis.
But here’s the deeper issue they only hint at: when institutions lose legitimacy and media ecosystems fracture, truth doesn’t scale, no matter how accurate or human-sounding your chatbot is. AI’s real risk isn’t that it creates falsehoods more efficiently. It is that it performs emotional fluency in ways that simulate trust without earning it.
The danger isn't fake Joe Biden telling people not to vote. It is the erosion of shared epistemology that makes anyone believe that voice was plausible.
The focus on AI as the problem lets platforms, bad actors, and collapsing institutions off the hook. Misinformation thrives where trust fails. Fixing that does not mean regulating chatbots. It means rebuilding systems people want to believe in.
The question isn't whether AI can lie. It is whether we’ve created a world where lying is expected and indistinguishable from leadership.