Experts alone can't handle AI – social scientists explain why the public needs a seat at the table
Retrieved on: Tuesday, September 5, 2023
Key Points:
- Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics?
- Or one in which AI fuels an arms race between disinformation creation and detection?
- Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?
Ready or not, unintended consequences
- Striking a balance between the awe-inspiring possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge.
- Societies are severely limited in their ability to anticipate and mitigate unintended consequences of rapidly emerging technologies like AI without good-faith engagement from broad cross-sections of public and expert stakeholders.
- AI runs a very real risk of creating similar blind spots when it comes to intended and unintended consequences that will often not be obvious to elites like tech leaders and policymakers.
- Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.
Who gets a say on AI?
- Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies.
- In 2017, AI researchers and scholars met at Asilomar in Pacific Grove, California, for another small expert-only meeting, this time to outline principles for future AI research.
- Meanwhile, there is a hunger among the public to help shape our collective future.
- Only about a quarter of U.S. adults in our 2020 AI survey agreed that scientists should be able “to conduct their research without consulting the public” (27.8%).
A healthy dose of skepticism?
- Industry leaders have had a hard time disentangling their commercial interests from efforts to develop an effective regulatory system for AI.
- Much more urgently, societies need to figure out what types of applications AI should be used for, and how.
- AI might not wipe out humanity anytime soon, but it is likely to increasingly disrupt life as we currently know it.