18 Comments

The hour is very late. LLMs are already being embedded into agent architectures, becoming controllers rather than just chatbots. The time when we need to know how to make the AI itself an ethical being is already here.
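
For anyone who hasn't seen this pattern, here is a minimal sketch of what "LLM as controller" means in practice: the model chooses actions in a loop and reads back observations, rather than just answering a single prompt. The `call_llm` function and the tool names below are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of an LLM-as-controller agent loop (hypothetical API).
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; not a real library function."""
    raise NotImplementedError

# Toy tool set: the model picks one by name at each step.
TOOLS = {
    "search": lambda query: f"results for {query!r}",  # stub tool
    "done": lambda answer: answer,                     # ends the loop
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The model, not a human, decides the next action: "tool: argument".
        decision = call_llm(history + "Next action?")
        tool_name, _, argument = decision.partition(":")
        tool_name = tool_name.strip()
        result = TOOLS.get(tool_name, TOOLS["done"])(argument.strip())
        if tool_name == "done":
            return result
        history += f"Action: {decision}\nObservation: {result}\n"
    return "step limit reached"
```

The ethics question is precisely about what constraints belong inside that loop once the model, rather than a person, is choosing the actions.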

The biggest threat to humanity is not AI; it's humans. From the start of recorded history, we have been the biggest danger to ourselves. Admittedly, AI is a very powerful tool, and in the hands of the wrong person or group it can wreak widespread havoc with minimal effort (propaganda, misinformation, hacking, etc.). However, the danger is the people programming the AI, not the AI itself.

The current AI doomsday narrative is rooted deeply in Hollywood films and popular science fiction. It makes for good watching and reading, but not for good public policy. I completely agree with your analysis and conclusion that we need to work on building institutions that reduce the risks and harms of AI. We also need to find mechanisms to include the underrepresented in the datasets, to combat the skewed weights and biases in these systems. However, that is boring. It's far more interesting to fight Skynet!

While I think this letter is thoughtful and honest, I think there are quite clear gaps in its arguments, which take away from the seriousness of AI existential risk. I've written my detailed thoughts here:

https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a-response-to-seth-lazar-jeremy-howard-and-arvind-narayanan/

Open to feedback and further discussion -- I definitely think this is a complex topic that's worth a deep societal discussion, and this is just my attempt to do exactly that.

Excellent rebuttal

Concentrated power is dangerous. Dystopian science fiction may be leading our imaginations because its stories are compelling--I know I am compelled by them. However, I look at them as cautionary tales and as a space to think problems through. How can we motivate and inspire those who do not see money signs and automated workflows? Having something to fight and fear makes it easier to keep people engaged, for better or worse. I agree with Sunil Daluvoy that we need the underrepresented both in the datasets and engaging with those datasets.

I suspect that this -- "What about risks posed by people who negligently, recklessly, or maliciously use AI systems" -- is intended to be part of the concern, based on what I've heard at least some of the signers say.

"But it also means empowering voices and groups underrepresented on this AI power list—many of whom have long been drawing attention to societal-scale risks of AI without receiving so much attention." Should scholars urge news outlets to implement a moratorium on the word godfather in all AI news stories going forward? Many AI experts are women; that's erased every time "Godfather" is used in a news title. It encourages this lofty outdated "great man" theory of humanity.

I'm a bit confused by your statement that "in calling for regulations to address the risks of future rogue AI systems, [AI industry leaders] have proposed interventions that would further cement their power." It links to an article titled "Sam Altman says a government agency should license AI companies — and punish them if they do wrong". Wouldn't regulation help to ensure that AI companies are subject to government oversight, and thus limit the power of AI companies?

Comment deleted

Who's to say regulation has to be a difficult-to-surmount burden for competition? Altman is clever in calling for regulation - it turns his opposition against regulation entirely, rather than galvanizing them for the kind of regulation that has kept us safe while allowing all sorts of small businesses (restaurants, plumbers...) to continue to thrive. Well, restaurants have it rough, but that's due to other economic mischief, not to food safety regulations.

Well said. The rush to put restrictions on these systems seems very premature. A lot of research still needs to be done before any system could actually become uncontrollable by humans. Restrictions would stifle the progress we are making.

Is the ability to imitate a voice perfectly with only 3 seconds of audio progress? Is being able to generate concept art and photographs from text progress? Meanwhile, GPT-4 is actually very bad at medical advice and at engineering green energy. These are mere toys for those with good intentions, and bountiful tools for those with bad intentions (https://gizmodo.com/youtube-biden-ai-political-ad-gop-stable-diffusion-1850373150).

Maybe the rush to put in restrictions is informed by the trauma of our failure to do the same in the face of past advancements: pollution, global warming, eradication of biodiversity, proliferation of obesity, social-media-induced mental health crises...

If including more marginalized voices will reduce the risk of a rogue superintelligence relative to trying to move ahead on regulation without them, then that sounds good. If not though, we must consider the paramount importance of preserving our species as a whole, and ensuring this world remains ours to shape and live on.

I’m more or less nodding along with this whole post. Proviso: realpolitik is going to apply its grubby hand to events. Those busy developing the near future of AI are the ones funding and coercing government and working the other levers of power. “It’s our ball and we’ll say how we want the playing area to look.” Governments too have the motive and opportunity to call down major crimes on the peoples of the world.

It’s true that regulation stifles innovation, but what we do with AI is a genuine battle between good and evil. The world’s voiceless (i.e., those whose opinions are suppressed) have no power in the arena, by definition. All we can do is make democratic overtures and call out those who want to abuse this technology.

We suspect enough that ‘the singularity’ is coming, but good luck avoiding it. I agree 100% that we need concrete steps for these very present abuses. These people have previous form.

Warning: the titans of AI will seriously get to play God here - they will create AI in their own image. They will play dirty games to ensure the playing arena works to enable their godlike status.

> powerful group

The AIS field has orders of magnitude less money going into it than AGI companies do.

> What about risks posed by people who negligently, recklessly, or maliciously use AI systems?

Yup, that's also fair.

I think most people concerned about extinction are also concerned about this, but merely consider it an unlikely event. That's not to say they wouldn't also want to prevent this event from occurring.

> the industry leaders signing this statement should immediately shut down their data centres and hand everything over to national governments.

Yes, yes, this should happen, and many of the famous signatories are on record calling for it.

> We can mitigate these risks now—we don’t have to wait for some unpredictable scientific advance to make progress.

I agree. Also, for the current-day problems of AI, it would benefit humanity a lot if we shut it all down and instead focused on interpretability, testing, and such. These causes are highly mutually compatible.

> what we should do

Stop AI progress until we figure out how to keep it from discriminating against, killing, unemploying, alienating, or dividing us.

> This doesn’t just mean soliciting input on what rules god-like AI should be governed by. It means asking whether there is, anywhere, a democratic majority for creating such systems at all.

Fully agree.

So, you're clearly not someone with a high P(doom). But I think you'll find the P(doom) faction to be highly aligned with what you want to achieve. Instead of having two small factions, there could be a single large one, united under the purpose of "making AI go well".

I'm all for a Butlerian Jihad.

As with any technology, the promise of good use needs to be weighed against the certainty of misuse. To your point, we [currently] have far more to fear from rogue people using AI than from rogue AI itself.

Great analysis; I agree with every word of it. Even if the fear of some of the signatories is genuine, the document still has completely the wrong focus and is devoid of any solution-oriented language. Failing to mention climate change as one of the major risks facing humanity (when daring to talk about extinction) is baffling.

This line from your piece sums it up for me: "After all, why would we have any confidence in our ability to address risks from future AI, if we won’t do the hard work of addressing those that are already with us?"

An excellent perspective 😀
