6 Comments

Putting "ad personalization" in the positives section certainly is a choice.


It is not in the positives section. We emphasize repeatedly that non-malicious doesn't mean harmless. We call out ad personalization as particularly risky and worrisome.


We aren’t portraying ad personalization (or any other example of non-malicious use) as “positive”. Part of the reason we wrote this essay was to have more nuanced conversations about the benefits and harms, including moving beyond the positive-negative binary, particularly because non-malicious applications (including advertising) can and do cause harm.


"In fact, the bottleneck has always been distributing disinformation, not generating it, and AI hasn’t changed that." Nonsensical statement. Cambridge Analytica being able to even very crudely personalize disinfo at scale just at the cohort level was a big part of how Trump was able to win in 2016. Generative AI exponentializes that capability closer to one-to-one personalization that can also interact with each recipient ongoing.

"It’s true that generative AI reduces the cost of malicious uses. But in many of the most prominently discussed domains, it hasn't led to a novel set of malicious capabilities." No, it's not at all just cost reduction—it's a step change, even just in the one example area I described above.

Why are you guys working so hard to sweep serious, legitimate concerns under the rug so quickly and with such faulty logic?

"Finally, note that media accounts and popular perceptions of the effectiveness of social media disinformation at persuading voters tend to be exaggerated. For example, Russian influence operations during the 2016 U.S. elections are often cited as an example of election interference. But studies have not detected a meaningful effect of Russian social media disinformation accounts on attitudes, polarization, or voting behavior." Misdirective argument. Did you entirely forget about Cambridge Analytica—or are you just hoping your readers have? Or do you just think the billions of dollars spent on political propaganda—both within the law and outside of it—is done just for the hell of it?

AI Snake Oil, indeed.

Comment deleted (Jun 20, 2023)

It's both.


I agree with some of the points in the main essay, like distribution being one of the major bottlenecks, but I think the essay doesn't cover some ways a combination of these technologies could be used to spread misinformation. Here are some examples:

1. Consider a real-time conversational agent like the one in the game, but used to scam people into giving out their sensitive information. A system like *GPT could then be used to parse the information out of the conversation. Now imagine this at scale: the implications are much more serious, especially for the more tech-illiterate crowd. (A rough sketch of that parsing step appears after this list.)

2. A Twitter bot that automatically responds to people with realistic-sounding counterpoints and fake citations regarding conspiracy theories, etc. This can give dangerous misinformation campaigns a lot of steam.

These are just some basic examples of the ways these systems can be used for dis- and misinformation online, and a large portion of the world is not ready to understand it, IMO.
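To make the parsing step in the first example concrete, here's a rough sketch, assuming the official OpenAI Python SDK (v1+) with an API key in the environment; the transcript, model name, and prompt are purely illustrative:

    # Sketch of example 1's parsing step: turning a free-form scam
    # conversation into structured data. Assumes the OpenAI Python SDK
    # (>= 1.0) and an OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Illustrative transcript; in the scenario above this would come
    # from the real-time conversational agent's logs.
    transcript = """
    Agent: Thanks for calling support! Can you confirm your account email?
    Caller: Sure, it's jane.doe@example.com.
    Agent: And the last four digits of the card on file?
    Caller: 4242.
    """

    # A single short prompt is enough to extract machine-readable data.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model would do
        messages=[
            {
                "role": "system",
                "content": "Extract any personal or financial details "
                           "mentioned in the conversation and return them "
                           "as a JSON object.",
            },
            {"role": "user", "content": transcript},
        ],
    )

    print(response.choices[0].message.content)
    # e.g. {"email": "jane.doe@example.com", "card_last4": "4242"}

The point is how little glue code this takes: run it over thousands of logged conversations and you have a structured database of stolen details. Example 2 is just as short; swap the prompt for one that generates counterpoints with citations and add a call to a platform's posting API.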
