FLI is particularly focused on the potential risks to humanity from the development of human-level or superintelligent artificial general intelligence (AGI).[1][2] The Institute circulated an open letter on AI safety at the conference, which was subsequently signed by Stephen Hawking, Elon Musk, and many artificial intelligence experts.[12] The Institute's 14-person Scientific Advisory Board comprises 12 men and 2 women, and includes computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, astrophysicist Sandra Faber, theoretical physicist Frank Wilczek, entrepreneur Elon Musk, and actors and science communicators Alan Alda and Morgan Freeman (as well as cosmologist Stephen Hawking prior to his death in 2018).
References: "Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world"; "The Future of Technology: Benefits and Risks"; "Machine Intelligence Research Institute – June 2014 Newsletter"; "FHI News: Future of Life Institute hosts opening event at MIT"; "Top 23 One-liners From a Panel Discussion That Gave Me a Crazy Idea"; "Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter"; "Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots"; "CSER at the Beneficial AGI 2019 Conference"; "Elon Musk donates $10M to keep AI beneficial"; "Elon Musk donates $10M to Artificial Intelligence research"; "Elon Musk is Donating $10M of his own Money to Artificial Intelligence Research"; "An International Request for Proposals – Timeline"; "New International Grants Program Jump-Starts Research to Ensure AI Remains Beneficial"; "United States and Allies Protest U.N. Talks to Ban Nuclear Weapons"; "An Open Letter to Everyone Tricked into Fearing Artificial Intelligence". On January 4-7, 2019, FLI organized the Beneficial AGI conference in Puerto Rico.
Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, and its board of advisors includes entrepreneur Elon Musk. The Institute was founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard graduate student and International Mathematical Olympiad (IMO) medalist Viktoriya Krakovna, Boston University graduate student Meia Chita-Tegmark (Tegmark's wife), and UCSC physicist Anthony Aguirre.[3] The discussion covered a broad range of topics, from the future of bioengineering and personal genetics to autonomous weapons, AI ethics, and the Singularity.[9][10]
The Future of Life Institute (FLI) is a non-profit research institute and outreach organization in the Boston area that works to mitigate existential risks facing humanity, particularly existential risk from advanced artificial intelligence (AI). FLI's mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its course in response to new technologies and challenges.[1][2] On January 2-5, 2015, FLI organized "The Future of AI: Opportunities and Challenges" conference in Puerto Rico, which brought together the world's leading AI builders from academia and industry to engage with each other and experts in economics, law, and ethics.[11] The Institute released a set of principles for responsible AI development that came out of the discussion at the conference, signed by Yoshua Bengio, Yann LeCun, and many other AI researchers.[15][16] On July 1, 2015, a total of $7 million was awarded to 37 research projects.[23]
The goal of the "Future of AI: Opportunities and Challenges" conference was to identify the most promising research directions for realizing the benefits of AI.[13] FLI also recruits volunteers and young academics locally through grassroots organizing.[3] Polls have shown that most AI researchers expect artificial general intelligence (AGI) within decades, able to …