AI safety
Does the UK’s liver transplant matching algorithm systematically exclude younger patients?
Seemingly minor technical decisions can have life-or-death effects
Nov 11, 2024 · Arvind Narayanan and Sayash Kapoor

AI existential risk probabilities are too unreliable to inform policy
How speculation gets laundered through pseudo-quantification
Jul 26, 2024 · Arvind Narayanan and Sayash Kapoor

AI safety is not a model property
Trying to make an AI model that can’t be misused is like trying to make a computer that can’t be used for bad things
Mar 12, 2024 · Arvind Narayanan and Sayash Kapoor

A safe harbor for AI evaluation and red teaming
An argument for legal and technical safe harbors for AI safety and trustworthiness research
Mar 5, 2024 · Sayash Kapoor and Arvind Narayanan

On the Societal Impact of Open Foundation Models
Adding precision to the debate on openness in AI
Feb 27, 2024 · Sayash Kapoor and Arvind Narayanan

Are open foundation models actually more risky than closed ones?
A policy brief on open foundation models
Dec 15, 2023 · Sayash Kapoor and Arvind Narayanan

Model alignment protects against accidental harms, not intentional ones
The hand-wringing about failures of model alignment is misguided
Dec 1, 2023 · Arvind Narayanan and Sayash Kapoor

Is AI-generated disinformation a threat to democracy?
An essay on the future of generative AI on social media
Jun 19, 2023 · Sayash Kapoor and Arvind Narayanan

Is Avoiding Extinction from AI Really an Urgent Priority?
The history of technology suggests that the greatest risks come not from the tech, but from the people who control it
May 31, 2023 · Arvind Narayanan

A misleading open letter about sci-fi AI dangers ignores the real risks
Misinformation, labor impact, and safety are all risks. But not in the way the letter implies.
Mar 29, 2023 · Sayash Kapoor and Arvind Narayanan

The LLaMA is out of the bag. Should we expect a tidal wave of disinformation?
The bottleneck isn't the cost of producing disinfo, which is already very low.
Mar 6, 2023 · Arvind Narayanan and Sayash Kapoor

Students are acing their homework by turning in machine-generated essays. Good.
Teachers adapted to the calculator. They can certainly adapt to language models.
Oct 21, 2022 · Arvind Narayanan