70% of researchers think minimizing AI risks should be prioritized.
Researchers believe AI’s unchecked evolution poses significant risks.
WHY IT MATTERS – The ethical risks associated with artificial intelligence (AI) are already materializing or will soon.
THE NUMBERS – An AI Impacts poll from October 2023 asked AI researchers numerous questions about the technology's rapid development, high hopes, and dire concerns. It revealed the following:
- Researchers put a 50% chance on unaided machines outperforming humans at every task by 2047, 13 years earlier than they estimated in the 2022 survey.
Amid this rapid growth, there is a growing consensus on the need to prioritize research aimed at minimizing AI's risks:
- 22% say the amount of safety research should stay the same, down 16 points from 2016.
- 70% say there should be more or much more safety research, up 31 points from 2016.
Researchers' calls for more safety research align with their broader societal concerns:
- 73% say they are substantially or extremely concerned about authoritarian rulers using AI to control their populations.
- 79% say they are substantially or extremely concerned about AI manipulating large-scale public opinion.
- 86% say they are substantially or extremely concerned about AI spreading false information.
Each of the other nine scenarios listed was judged deserving of substantial or extreme concern by at least a third of participants, underscoring the need for thorough research and careful ethical governance of AI.
THE BOTTOM LINE – AI's rapid development must be matched by thorough research to ensure the technology is safe, beneficial, and serves humanity responsibly.
GO DEEPER – "Survey of 2,778 AI authors: six parts in pictures" (AI Impacts)