The public supports regulating AI for safety


Zach Stein-Perlman, 16 February 2023

A high-quality American public survey on AI, Artificial Intelligence Use Prompts Concerns, was released yesterday by Monmouth. Some notable results:

  • 9% say AI would do more good than harm vs 41% more harm than good (similar to responses to a similar survey in 2015)
  • 55% say AI could eventually pose an existential threat (up from 44% in 2015)
  • 55% favor “having a federal agency regulate the use of artificial intelligence similar to how the FDA regulates the approval of drugs and medical devices”
  • 60% say they’ve “heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans”

Worry about safety and support for regulation echo other surveys:

  • 71% of Americans agree that there should be national regulations on AI (Morning Consult 2017)
  • The public is concerned about some AI policy issues, especially privacy, surveillance, and cyberattacks (GovAI 2019)
  • The public is concerned about various negative consequences of AI, including loss of privacy, misuse, and loss of jobs (Stevens / Morning Consult 2021)

Surveys match the anecdotal evidence from talking to Uber drivers: Americans are worried about AI safety and would support regulation of AI. Perhaps there is an opportunity to improve the public’s beliefs, attitudes, and memes and frames for making sense of AI; perhaps better public opinion would enable better policy responses to AI or actions from AI labs or researchers.


Public desire for safety and regulation is far from sufficient for a good government response to AI. But it does mean that the main challenge for improving the government response is helping relevant actors believe what’s true, developing good affordances for them, and helping them take good actions, not making people care enough about AI to act at all.



