Briefing
- Asilomar AI Principles – A code of AI principles established to ensure that the development of artificially intelligent machines benefits humanity and to prevent catastrophic consequences
- Three Pillars – Created by the Future of Life Institute, a non-profit dedicated to mitigating existential risks facing humanity; the principles are organized into three categories: research issues, ethics and values, and longer-term issues
- Significant Backing – Endorsed by cosmologist Stephen Hawking and Tesla CEO Elon Musk
- Beneficial AI Conference – Based on discussions at the Beneficial AI conference held in January 2017 and attended by DeepMind CEO Demis Hassabis and Facebook Director of AI Research Yann LeCun
- Superintelligence Risk – Some researchers believe machines could reach human-level intelligence, at which point they could learn to design still more powerful AIs, a so-called superintelligence exceeding human intellect and capabilities