
Wealth of Nations: Reducing Long-Term Catastrophic Risks from Artificial Intelligence

Eliezer Yudkowsky, Anna Salamon, Machine Intelligence Research Institute, published 2010

Details
Category        Keyword                  Time Horizon  Quality    Territorial Scope
Technological   Artificial Intelligence  None          Recommend  Global

Issue Report

Summary

In 1965, I. J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind (Good 1965). He called this the "intelligence explosion." Later authors have called it the "technological singularity" or simply "the Singularity" (Kurzweil 2005; Vinge 1993). The Singularity Institute aims to reduce the risk of a catastrophe resulting from an intelligence explosion. We do research, education, and conferences. In this paper, we make the case for taking artificial intelligence (AI) risks seriously, and suggest some strategies to reduce those risks.


Contents

1. What We're (Not) About

2. Indifference, Not Malice

3. An Intelligence Explosion May Be Sudden

4. Is Concern Premature?

5. Friendly AI

6. Seeding Research Programs

7. Our Aims

8. The Upside and Downside of Artificial Intelligence

9. Recommended Reading

Related Materials

Related materials list
ID   Title   Category   Territorial Scope
No related materials available.