Excellent and useful piece. The very last link for the “agenda setting quality” of media is broken. Were there other citations to replace it?
I can see the “black box” nature of AI being a cop-out for applications like those in EdTech, but would a “black box” discussion be useful in some limited circumstances, such as general-purpose LLMs?
Flawed human-AI comparisons always get me. These are modes of intelligence so fundamentally different that making comparisons is fishy from the get-go. How many matches does it take MuZero to reach the level of basic competency a human can get in just a couple of games? It's orders of magnitude different. What's the difference in rate of energy consumption per "learning unit"? Again, radically different. These differences are understandable once you consider how disparate their methods and material substrates are, but this always gets lost in the hype. I understand non-technical people making such oversights, but you often see even technical people making comparisons willy-nilly.
Excellent, informative, serious -- and a (temporary?) relief from the onslaught of information on the dangers of runaway AI.
Two core questions:
1. Is AI truly a core species-level anthropogenic existential risk?
2. How much of what you have observed is merely the result of bad journalism and poorly managed public relations -- versus a partial or deliberate attempt to hype up AI Risk as a form of fake news, or even intentional PsyOps focused on destabilizing certain audiences and societies?
Would also be interesting to better understand how your insights relate to the alarm concerning AI as an existential risk sounded by the likes of Stephen Hawking, Elon Musk, Eliezer Yudkowsky, Bill Gates, and Stuart Russell.
Also, to understand how your research and vision of the near- to mid-term impact of AI align with, or differ from, the work of Nick Bostrom, Jaan Tallinn, Max Tegmark, Yann LeCun, Brian Christian, or Melanie Mitchell, among others.
Thanks, appreciate your article deeply.