In 2015, Bill Gates, Elon Musk, Stephen Hawking, and a host of other prominent figures signed an open letter warning of the potential dangers of artificial intelligence (A.I.), boldly claiming that A.I. could prove more dangerous than nuclear weapons.
HOW DO WE KEEP ARTIFICIAL INTELLIGENCE SAFE? – A DISCUSSION
The eventual 23 principles were developed at the Beneficial A.I. 2017 conference, held at Asilomar in California. The principles were not chosen arbitrarily: at least 90% of the participants had to agree on a principle for it to make the list.
Before the meeting took place, participants were surveyed extensively to identify candidate principles. The panel of experts included academics, economists, entrepreneurs, philosophers, government representatives, and many others.
Topics of debate included A.I. safety, the socio-economic impact of A.I. on human workers, the ethics of programming, and universal basic income (UBI), to name a few.
“What remained was a list of 23 A.I. principles ranging from research strategies to data rights to future issues including potential super-intelligence, which was signed by those wishing to associate their name with the list,” the Future of Life Institute’s website explains. “This collection of A.I. principles is by no means comprehensive and it’s certainly open to differing interpretations, but it also highlights how the current ‘default’ behavior around many relevant issues could violate principles that most participants agreed are important to uphold.”
THE FULL LIST OF ASILOMAR A.I. PRINCIPLES:
1. Research Goal: The goal of A.I. research should be to create not undirected intelligence, but beneficial intelligence.
2. Research Funding: Investments in A.I. should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as:
- How can we make future A.I. systems highly robust, so that they do what we want without malfunctioning or getting hacked?
- How can we grow our prosperity through automation while maintaining people’s resources and purpose?
- How can we update our legal systems to be more fair and efficient, to keep pace with A.I., and to manage the risks associated with A.I.?
- What set of values should A.I. be aligned with, and what legal and ethical status should it have?
3. Science-Policy Link: There should be constructive and healthy exchange between A.I. researchers and policy-makers.
4. Research Culture: A culture of cooperation, trust, and transparency should be fostered among researchers and developers of A.I.
5. Race Avoidance: Teams developing A.I. systems should actively cooperate to avoid corner-cutting on safety standards.
6. Safety: A.I. systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.
7. Failure Transparency: If an A.I. system causes harm, it should be possible to ascertain why.
8. Judicial Transparency: Any involvement by an autonomous system in judicial decision-making should provide a satisfactory explanation auditable by a competent human authority.
9. Responsibility: Designers and builders of advanced A.I. systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.
10. Value Alignment: Highly autonomous A.I. systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.
11. Human Values: A.I. systems should be designed and operated so as to be compatible with ideals of human dignity, rights, freedoms, and cultural diversity.
12. Personal Privacy: People should have the right to access, manage, and control the data they generate, given A.I. systems’ power to analyze and utilize that data.
13. Liberty and Privacy: The application of A.I. to personal data must not unreasonably curtail people’s real or perceived liberty.
14. Shared Benefit: A.I. technologies should benefit and empower as many people as possible.
15. Shared Prosperity: The economic prosperity created by A.I. should be shared broadly, to benefit all of humanity.
16. Human Control: Humans should choose how and whether to delegate decisions to A.I. systems, to accomplish human-chosen objectives.
17. Non-subversion: The power conferred by control of highly advanced A.I. systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends.
18. A.I. Arms Race: An arms race in lethal autonomous weapons should be avoided.
19. Capability Caution: There being no consensus, we should avoid strong assumptions regarding upper limits on future A.I. capabilities.
20. Importance: Advanced A.I. could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.
21. Risks: Risks posed by A.I. systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.
22. Recursive Self-Improvement: A.I. systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.
23. Common Good: Super-intelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.
“We hope that these principles will provide material for vigorous discussion and also aspirational goals for how the power of AI can be used to improve everyone’s lives in coming years,” the Future of Life Institute concludes.