venturebeat.com/ai/ai-doom-ai-boom-and-the-possible-destruction-of-humanity/
Top Highlights
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
This statement, released this week by the Center for AI Safety (CAIS), reflects an overarching — and some might say overreaching — worry about doomsday scenarios due to a runaway superintelligence.
Existential threats may manifest over the next decade or two unless AI technology is strictly regulated on a global scale.
The statement has been signed by a who’s who of academic experts and technology luminaries, ranging from Geoffrey Hinton (formerly at Google and a longtime proponent of deep learning) to Stuart Russell (a professor of computer science at Berkeley) and Lex Fridman (a research scientist and podcast host from MIT).
In addition to extinction, the Center for AI Safety warns of other significant concerns ranging from enfeeblement of human thinking to threats from AI-generated misinformation undermining societal decision-making.
“There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things.”
Clearly, there is a lot of doom talk going on now. For example, Hinton recently departed from Google so that he could embark on an AI-threatens-us-all doom tour.
Throughout the AI community, the term “P(doom)” has become fashionable to describe the probability of such doom.
P(doom) is an attempt to quantify the risk of a doomsday scenario in which AI, especially superintelligent AI, causes severe harm to humanity or even leads to human extinction.
Kevin Roose of The New York Times set his P(doom) at 5%. Ajeya Cotra, an AI safety expert with Open Philanthropy and a guest on the show, set her P(doom) at 20 to 30%.
It needs to be said that P(doom) is purely speculative and subjective, a reflection of individual beliefs and attitudes toward AI risk — rather than a definitive measure of that risk.
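As a rough illustration of just how subjective these figures are, the sketch below simply compares the P(doom) values cited above (Roose's 5% and Cotra's 20 to 30%). The names and numbers come from the excerpt itself; the comparison is only an illustrative calculation, not anything proposed in the article.

```python
# Illustrative only: the P(doom) figures cited above, treated as subjective
# point estimates rather than measured probabilities.
p_doom_estimates = {
    "Kevin Roose": 0.05,
    "Ajeya Cotra (low end)": 0.20,
    "Ajeya Cotra (high end)": 0.30,
}

low = min(p_doom_estimates.values())
high = max(p_doom_estimates.values())

# The spread between thoughtful observers is several times larger than the
# lowest estimate, which underlines how little these numbers pin down.
print(f"Cited P(doom) values range from {low:.0%} to {high:.0%}")
```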
Not everyone buys into the AI doom narrative. In fact, some AI experts argue the opposite. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of The Master Algorithm).
As Ng puts it, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and, hopefully, mitigated.
Melanie Mitchell, a prominent AI researcher, is also skeptical of doomsday thinking.
Among her arguments is that intelligence cannot be separated from socialization.
While the concept of P(doom) serves to highlight the potential risks of AI, it can inadvertently overshadow a crucial aspect of the debate: The positive impact AI could have on mitigating existential threats.
We should also consider another possibility that I call “P(solution)” or “P(sol),” the probability that AI can play a role in addressing these threats.
To give you a sense of my perspective, I estimate my P(doom) to be around 5%, but my P(sol) stands closer to 80%. This reflects my belief that, while we shouldn’t discount the risks, the potential benefits of AI could be substantial enough to outweigh them.
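To make that comparison concrete, here is a minimal, hypothetical sketch of how those two subjective estimates might be set side by side. The 5% and 80% figures are the ones stated above, but the simple subtraction is an assumed illustration, not a formula the article proposes.

```python
# Hypothetical sketch: the author's stated estimates, weighed in the crudest
# possible way. The article gives the numbers but not this calculation.
p_doom = 0.05  # estimated probability that AI causes catastrophic harm
p_sol = 0.80   # estimated probability that AI helps address other existential threats

# A toy "net outlook": positive when the estimated chance of AI helping
# exceeds the estimated chance of AI-caused catastrophe.
net_outlook = p_sol - p_doom
print(f"P(doom) = {p_doom:.0%}, P(sol) = {p_sol:.0%}, net outlook = {net_outlook:+.0%}")
```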
We should not focus solely on potential bad outcomes, or on claims, such as a post in the Effective Altruism Forum, that doom is the default probability.
The primary worry, according to many doomers, is the problem of alignment, where the objectives of a superintelligent AI are not aligned with human values or societal objectives.