31 Comments

Today we had the FLI Open letter, AI snakeoil post, and a Yudkowsky article, all offering quite different views. What. A. Ride...

Expand full comment

I'm one of those millions of mechanical turks getting pennies for hours of work in ''correcting the ai'', and after spending the last two weeks on those jobs I am now constantly rolling my eyes at the lies and gaslighting that the companies are spreading to sell their product. Its not AI, its barely an autocorrect.

Expand full comment

Great framing.

Re: Speculative/real long-term risk.

AI algorithms have already significantly altered our civilization. Recommendation engines (though considered old AI) are designed to show you the content that is very similar to what you previously consumed. Some analyses show that such algorithms lead to extremist viewpoints (https://policyreview.info/articles/analysis/recommender-systems-and-amplification-extremist-content, https://www.extremetech.com/internet/327855-whistleblower-facebook-is-designed-to-make-you-angry). These algorithms have incentivized creators to generate highly divisive, incendiary content by rewarding them with revenue.

Expand full comment

I agree. AI is likely to take all the bad aspects of social media and simply amplify them. There is a lot of concern among AI alignment researches for "power seeking" behavior of the AI system.

However, there is little awareness that AI systems, which represent orders of magnitude more influential power than social media, will inevitably attract "power seeking" humans to be its custodian.

Expand full comment

See my comment below. This is all about an imbalance of risk from a settlement free Web/Internet 1.0. Risk that is 100% on the receiver brought us to this point.

Expand full comment

I disagree with your characterization that AI taking people's jobs is a speculative risk, and its formed in its most extreme version further distorting the argument.

You don't need AI to take everyone's job for the worst case scenario. All you need is for them to replace and be human competitive in the low to mid salary range jobs which from a distribution perspective amounts to most jobs.

The ideal corporation is 1 leader at the top, its shareholders, and no workers just robot slaves producing. The cost of labor is the majority of the cost for anything. Our society requires a fine balance between factor markets and product markets. Money made in factor markets is used to buy goods in product markets.

If the balance is upset, wealth concentrates, business sectors concentrate and consolidate, this eventually leads to macro-economic issues, and the division of labor eventually breaks down when government prints money.

Unrest historically is what happens when you have breakdowns in food security, environmental security, and/or are deprived of the means to correct the situation. Current versions of AI chatbots can eliminate most low to mid-paying jobs with a little prompt engineering. Ther's a lot of money getting behind replacing human labor and production with robots.

What then happens to the people who can't meet basic necessities. By the time this type of problem can be characterized, societal mechanics would make the outcome inevitable because all of the important indicators lag, and you run into forms of the economic calculation problem.

Expand full comment

If you're going to make this argument, you need to actually argue it. It might be true that giving time to concerns about AGI is bad because AGI risk is not something to worry about. But to argue this you need to actually show that AGI risk is not something to worry about rather than handwaving it away as sci-fi: plainly, if catastrophic or existential risk from AI is real, then it is worth considering, to do otherwise would be stupid. To not offer any explanation for why you dismiss it is poor argumentation and is not worthy of the authors. I also note that none of the linked further reading argues this point either...

Additionally, you do not offer any arguments as to why GPT-4, which we can all interact with and see its capabilities to easily complete many of the menial tasks we do every day, is not in danger of automating away a large % of jobs. The linked further reading also does not convincingly argue against this. You simply point out the (true) mismatch between benchmarks and real world performance, but this is not a true counterargument. The letter also targets GPT-5 on this point in any case, and you do not offer any argument as to why it may not be in danger of automating away more jobs either. This undermines your entire point.

Expand full comment

"Additionally, you do not offer any arguments as to why GPT-4, which we can all interact with and see its capabilities to easily complete many of the menial tasks we do every day, is not in danger of automating away a large % of jobs."

ChatGPT has a well-documented tendency to "hallucinate" (fabricate facts) and there's no guaranteed solution to the problem. If I were a business owner, I would hesitate to outsource work to a hallucinating chatbot. Sure, a chatbot can write code or articles, but an actual human being has to proofread the results before releasing them to the public. If someone has to meticulously check everything the chatbot does, how much time are you really saving by using it?

Expand full comment

Certainly this is an argument that has some merit (and I wish the authors had made), but in general I think verification is easier than generation for most domains where we use chatbots, and so it is easier to generate a blog post/piece of marketing copy/research proposal/etc and proofread it than it is to write it yourself: I am currently doing this practically every day in my work and I'm convinced it's much faster.

Also, as an LLM researcher I think the hallucination problem is severely overrated: things like the ChatGPT plugins and Bing search where models are integrated with knowledge bases and APIs to verifiable external resources greatly reduce this issue to the point that I think only a minority of businesses will practically need to be concerned.

Expand full comment
Comment deleted
Mar 30, 2023
Comment deleted
Expand full comment

I think this is mostly wrong: GPT-4 can very obviously solve non-memorized problems, which you can see by simply testing it on problems from after the data cutoff in 2021. The Codeforces benchmark is one datapoint suggesting some contamination, but Microsoft's recent evaluations paper tested on Leetcode problems and found above human level performance on purely recent questions across all difficulty levels, so I find the Codeforces datapoint inconclusive. In any case, if you spend any time at all prompting the model with code problems and questions then you can see qualitatively that's it's very capable, and personally has saved me hours of coding time already. As to GPT-5, you can say that GPT-4 "simply" has a larger number of problems memorized than GPT-3 if you like, but that doesn't stop it being orders of magnitude more useful for doing practical tasks in the real world.

Also, like all AI replacing jobs debates, it's important to point out that simply automating tasks, rather than full jobs, is itself sufficient to cause unemployment increases: if copywriters can work 20% faster then I do not expect the demand for copywriters to increase proportionally to ensure they all stay employed. I think you are simply overfitting to a preconceived bias that the next token objective cannot induce generalization and reasoning skills, which seems objectively incorrect based on the available evidence.

Expand full comment

I strongly agree that there is a disproportionate amount of concern for long term risk we can not yet evaluate, while lacking concern for how AI is currently being used.

I think the security issues will be a nightmare. Consider the situation is essentially a system that exhibits unsuspected emergent behaviors, the internal are black box and not understood, and the input is everything that can be described by human language. That is a huge potential attack surface to try to contain with a lot of unknowns.

Finally, I would add there are also substantial societal issues that will begin to emerge that we don't know how they will yet play out. This probably follows immediately after the security issues, but before long term risk scenarios.

I've also put together a rather extensive set of thought explorations that goes deeper societal aspects of both short term and longer term. Would be interested in your thoughts as well.

https://dakara.substack.com/p/ai-and-the-end-to-all-things

Expand full comment

I agree and disagree at the same time. I think Jeffrey Lee Funk and Gary N. Smith nailed the situation well:

https://www.salon.com/2023/04/22/our-misplaced-faith-in-ai-is-turning-the-internet-into-a-cesspool-of-misinformation-and-spam/

According to them, the real issue is AI generated garbage. It may or may not be a risk as such, but it will pollute the whole open Internet. There are already estimates that as much as 90-95% of all content in the open Internet could be generated by AI in the next few years. There are already hundreds of tools popping out here and there that allow creating a website with few clicks, polluting it with AI garbage, and then feeding the garbage to social media, trying to fool SEO guards, or whatever. In other words, much of the garbage will be the usual stuff; commercial promotion material and advertising generated by machines used by advertisers, companies, advertising agencies, and related parties. It does not stop here: all product descriptions will be filled with AI garbage, their reviews will be about AI garbage generated by the click farms and other manipulators throughout the world, and so forth.

As the companies openly admit that their tools output garbage, also the inaccuracies and misinformation are very much real even when no nefarious purposes are involved. These will be delivered both to users of the chatbots directly and spammed to the Internet.

Expect also a lot of AI garbage in science: predatory publishers and paper mills will most certainly utilize LLMs. My advice for any serious academic would be to steer away from any AI tools as these mostly output garbage. Academia does not need any more "productivity" as there are already plenty of nonsense. If you need help at "writing", "rewording", or "restructing" by machine-generation tools you are already working in a wrong place.

Furthermore, any safeguards put place by big companies are useless in practice because custom LLMs in particular (unlikely image generators perhaps) are fairly easy to build. Thus, I wouldn't count out the issues with explicit disinformation, phishing, social engineering, spam, malware, etc. because these things are being developed everywhere in the world, including by nation states. EUROPOL's recent report hinted that also crooks in the dark web are building their own LLMs. If you look at what is happening at, say, Telegram, LLMs are presumably already used to push heavy-handed disinformation by bots.

Few scenarios:

(a) Once enough AI garbage is out there, it will likely become impossible to train better LLMs in the future because learning from one's own garbage is hardly a good idea (some say that synthetic data may help, but I doubt it).

(b) It will become hard for any volunteer human-based knowledge-creation endeavor to compete with AI garbage. Even things like Wikipedia may eventually be a victim. Places like StackOverflow will die. Volunteer communities have little reason to contribute serious content because it is impossible and pointless to compete with AI garbage.

(c) On the positive side, it may be that science, traditional media, and reputable publishers will gain in this garbage scenario because they will likely be able to maintain some quality controls that will be absent in the AI cesspit that the Internet will become.

(d) Again on the positive side, people may learn to enjoy AI garbage. Although I despise LLMs, I have to admit that the AI art is amazing. I can't wait to see what the video game industry will be able to do with this stuff. Therein this AI stuff is a perfect fit.

(e) The jury is still out on what will happen to social media, but I expect also large transformations there due to the AI garbage and the rise of AI bots. Closed walled-gardens may emerge. Even real-identity verification may be a thing.

(f) Crawling the web may become more difficult as some companies and communities may put blockers to spiders because copyrights are not honored by AI garbage harvesters.

(g) The open source community may start to migrate away from GitHub due to license violations by CoPilot and others.

So, all in all, it will be the era of garbage.

Expand full comment

Are you aware 'I read a scifi book once' is a lazy straw-man argument?

Expand full comment

Signed by mRNA vessel, AI driving cars’ salesman, brain-linking, Elon Musk 😄

“governments should step in and institute a moratorium”

https://futureoflife.org/open-letter/pause-giant-ai-experiments/?fbclid=IwAR1lB7vyQxxMYEewzAx3JkkfW-tuRPgO2GYDUq3RSaJKe7Mbg1PUU1howR4&mibextid=Zxz2cZ

Chatty will always be biased for a reason! The big guys will never ever allow everyone to utilise AI’s full potential. Maybe you have already noticed that our monetary system, which runs the world, is tweaked in a way that ensures the power is kept in the same hands.

https://open.substack.com/pub/manuherold/p/proof-microsofts-chatty-chatgpt-has?r=eymvs&utm_medium=ios&utm_campaign=post

Expand full comment

> Consider the viral Twitter thread about the dog who was saved because ChatGPT gave the correct medical diagnosis. In this case, ChatGPT was helpful. But we won't hear of the myriad of other examples where ChatGPT hurt someone due to an incorrect diagnosis.

Ofcourse its the other way around. The dog story was a nice fluff thingy that happened, and *maybe* we'll see a few others reported, but we'll surely hear about every instance a chatbot even remotly contributes to any more extreme harm done to humans. --> https://www.dailymail.co.uk/news/article-11920801/Married-father-kills-talking-AI-chatbot-six-weeks-climate-change-fears.html

Expand full comment

?"One way to do right by artists would be to tax AI companies and use it to increase funding for the arts."

Kind of like taxing the auto industry last century to fund horse saddle making.

Expand full comment

A questionable analogy. The auto industry didn't appropriate the work of saddle makers to build their products. Without the work of real artists, apps like Stable Diffusion would be useless.

Expand full comment

Barn door, horse, bolted! Enough said!

Expand full comment

When I hear comments like that, to halt the development of AI, I always think about people that are deadly ill, dying of cancer for example. I can imagine AI will do wonders in drug design & personal drug doze optimization. And what - you will ask these people to wait because you're scared for the future (or maybe want some additional time to get in the game) No way. Genie is out of the bottle already. There's only path forward.

Expand full comment

What I really can't stand in these discussions, is any claim like "AI will kill us", which is just embarrassing, because AI **itself** is the most FRAGILE thing on Earth (if anybody cares, I've argued about this in my substack)

Expand full comment

I absolutely don't understand why the (for lack of a better phrase) AI safety establishment has decided that civilizational-scale tail risks are inimical somehow to consideration of other risks and also so unworthy of consideration that they can be dismissed like this, by handwaving and name-calling alone.

Like imagine a world where the people who work on safeguarding nuclear material from thieves or terrorists in practice seem to spend half their time writing screeds about how the World War Three Bros are distracting us from the real issues with their speculative, futuristic apocalypse visions. And their arguments for why not to worry about World War Three are all statements like "In reality, no-one has ever detonated a hydrogen bomb over a city." Very weird stuff.

Expand full comment

Most AI-doom focused orgs seem to be dismissive of near-term risks from AI. Paraphrasing some words I've seen, "Algorithmic bias? In the face of human extinction, what does that matter?"

If I agreed with all of Eliezer's underlying models, then my answer would be 'very little'. So I get where he's coming from. But let's not pretend the cause of the animosity is one-sided.

Anthropic's attempt at bridging the gap seems good so far. I don't think near-term and future-focused AI safety have to be at odds with one another.

Expand full comment

Very interesting article, thanks. I tackle the same topic coming from a different angle, that of a humanitarian aid worker who specialises in protection of civilians. You can read it on The Machine Race, my blog series looking at AI, human rights and society. The new article on this is called, 'Fighting over AI: Lessons from Ukraine'. Would love to hear your thoughts, Sayash and Arvind, and those of others.

https://medium.com/@themachinerace/fighting-over-ai-lessons-from-ukraine-191b59e86f6b

Expand full comment