Superintelligence
How afraid should we be?
Principal, Delta Wisdom; Chair, London Futurists
David Wood
@dw2
#CIUUK14
@dw2
Page 2
Powerful technology, incompletely understood
Operated by people outside their level of competence
Human lives knocked catastrophically off trajectory, unintentionally
http://www.bbc.co.uk/news/world-europe-28357880
Self-improving AGI
Beyond human control
Humanity knocked catastrophically off trajectory, unintentionally
http://mashable.com/2014/07/17/malaysia-airlines-ukraine-russia-rebel/
@dw2
Page 3
@dw2
Pages 4-7
Likely date of advent of HL-AGI (survey results, revealed column by column across slides 4-7)

Population                                    | 10%  | 50%  | 90%
Conference: Philosophy & Theory of AI         | 2023 | 2048 | 2080
Conference: Artificial General Intelligence   | 2022 | 2040 | 2065
Greek Association for Artificial Intelligence | 2020 | 2050 | 2093
Top 100 cited academic authors in AI          | 2024 | 2050 | 2070
Combined (from above)                         | 2022 | 2040 | 2075

Nick Bostrom: Superintelligence
@dw2
Page 8
Reaching HL AGI: 5 driving forces
1. Hardware with higher performance: Continuation of Moore’s Law?
– “18 different candidates” in Intel labs to add extra life to that trend
– Possible breakthroughs with Quantum Computing?
2. Software algorithm improvements?
– Can speed things up faster than hardware gains – e.g. chess computers
– Compare: Andrew Wiles, unexpected proof of Fermat’s Last Theorem (1993)
3. Learnings from studying the human brain?
– Improved scanning techniques -> “neuromorphic computing” etc
– Philosophical insight into consciousness/creativity?!
4. More people studying these fields than ever before
– Stanford University online course on AI: 160,000 students (23,000 finished it)
– More components / databases / tools /methods ready for re-combination
– Unexpected triggers for improvement (malware wars, games AI, financial AI…)
5. Transformation in society’s motivation? A “Sputnik moment”!?
(Smarter people?!)
http://intelligence.org/2013/05/15/when-will-ai-be-created/
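To make driving force 1 concrete, here is a small sketch of how a Moore's-Law-style trend compounds. The 18-month doubling period and the dates are illustrative assumptions, not figures from the talk:

```python
# Hypothetical illustration: steady hardware doubling compounds dramatically.
# Assumes a doubling period of 18 months (1.5 years).

def capability_multiplier(years, doubling_period_years=1.5):
    """Performance multiple after `years` of uninterrupted doubling."""
    return 2 ** (years / doubling_period_years)

# From this talk (2014) to the surveys' combined median forecast of 2040:
multiplier = capability_multiplier(2040 - 2014)
print(f"{multiplier:,.0f}x")  # on the order of a 100,000-fold increase
```

Whether the trend actually continues that long is, of course, exactly what point 1 questions.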
@dw2
Page 9
Superintelligence – model 1
[Diagram: a single scale of intelligence, running from Village idiot to Einstein]
http://intelligence.org/files/mindisall-tv07.ppt
Eliezer Yudkowsky
@dw2
Page 10
Superintelligence – model 2
http://intelligence.org/files/mindisall-tv07.ppt
[Diagram: Mouse, Chimp, Village idiot and Einstein sit close together on the intelligence scale; AI lies far beyond]
50-100 years vs. 50-100 weeks? / days? / hours?
Vernor Vinge: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
“The final invention”
Eliezer Yudkowsky
@dw2
Page 11
Recursive improvement
[Loop: Computers -> improved design & manufacturing -> improved computers -> …]
@dw2
Page 12
Recursive improvement
[Loop: Software -> improved software tools (debuggers, compilers…) -> improved software -> …]
@dw2
Page 13
Recursive improvement
[Loop: AI -> improved AI tools -> improved AI -> …] Intelligence explosion
++Rapid reading & comprehension of all written material
++Rapid expansion onto improved hardware
++Funded by financial winnings from smart stock trading
++Supported by humans easily psychologically manipulated
Who here wanted to merge again?
Jaan Tallinn: http://prezi.com/xku9q-v-fg_j/intelligence-stairway/
@dw2
Page 15
Exponential growth?
[Two charts of Technology vs. Time: Ray Kurzweil’s smooth exponential curve, with a “?” at 2050; Eliezer Yudkowsky’s curve, where AGI reaches human level (AGI=HL) around 2050 and ASI then shoots far beyond it (ASI>>HL)]
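The contrast between the two curves can be sketched with a toy model (an illustrative assumption, not from the talk): a Kurzweil-style process improves by a fixed percentage per step, while a recursive self-improver gains in proportion to its current capability, so it starts slower but then takes off abruptly:

```python
# Toy contrast: steady exponential progress vs. recursive self-improvement.

def steps_to_reach(target, step):
    """Count the steps a growth process needs to reach `target` capability."""
    x, steps = 1.0, 0
    while x < target:
        x = step(x)
        steps += 1
    return steps

steady = lambda x: x * 1.10               # fixed 10% gain per step
recursive = lambda x: x * (1 + 0.01 * x)  # gain proportional to capability

# Early on the steady curve is ahead; later the recursive curve overtakes it:
print(steps_to_reach(1e3, steady), steps_to_reach(1e3, recursive))
print(steps_to_reach(1e6, steady), steps_to_reach(1e6, recursive))
```

With these (arbitrary) rates the steady process reaches 1,000 first, but the recursive process reaches 1,000,000 sooner, with almost all of its growth packed into its final few steps: the shape of Yudkowsky's curve.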
@dw2
Page 16
Going nuclear: hard to calculate
• Castle Bravo test, 1st March 1954, Bikini Atoll: the first test of a dry-fuel hydrogen bomb
– Explosive yield was expected to be from 4 to 6 Megatons
– It was 15 Megatons, two and a half times the expected maximum
– A physics error by the designers at Los Alamos National Lab: they wrongly considered the lithium-7 isotope to be inert in the bomb
– The crew of a nearby Japanese fishing boat became ill from direct contact with the fallout; one of them died
http://en.wikipedia.org/wiki/Castle_Bravo
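The slide's arithmetic on the scale of the surprise checks out:

```python
# The predicted Castle Bravo yield range vs. what the test actually delivered.
expected_range_mt = (4, 6)   # megatons, the designers' prediction
actual_mt = 15               # megatons, measured

ratio = actual_mt / max(expected_range_mt)
print(ratio)  # 2.5: two and a half times the expected maximum
```

A well-resourced team of experts, analysing a system they had designed themselves, still got the outcome badly wrong: the analogy this slide draws for forecasting AGI behaviour.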
@dw2
Page 17
Superintelligence – model 2
http://intelligence.org/files/mindisall-tv07.ppt
[Diagram repeated: Mouse, Chimp, Village idiot, Einstein, AI on one scale]
A linear model of intelligence?
Eliezer Yudkowsky
@dw2
Page 18
Model 3: Minds-in-general
[Diagram: human minds occupy a small region of transhuman mindspace, itself a small region of posthuman mindspace, which contains wildly alien ASIs (“Gloopy”, “Bipping”, “Freepy”)]
Eliezer Yudkowsky
http://intelligence.org/files/mindisall-tv07.ppt
@dw2
Page 19
Dimensions of mind
The ability to achieve goals in a wide range of environments
Being conscious?
Having compassion for sentient beings with lesser intelligence?
@dw2
Page 20
AI systems we should fear
Killer drones with autonomous decision-making powers (Robocop)
Malware that can hack infrastructure-control systems (e.g. Stuxnet)
Financial trading systems software (high speed)
Software that is expert in manipulating humans
http://www.williamhertling.com/
@dw2
Page 22
AI systems we should fear
Killer drones with autonomous decision-making powers (Robocop)
Malware that can hack infrastructure-control systems (e.g. Stuxnet)
Financial trading systems software (high speed)
Software that is expert in manipulating humans
Software that pursues a single optimisation goal to the exclusion of all others
The more power such an AI has, the more we should fear it
@dw2
Page 23
The pursuit of happiness?
Software that pursues a single optimisation goal to the exclusion of all others
Software will do what we say, rather than what we meant to say
Wire-heading?!
Just make us happy!?
@dw2
Page 24
The pursuit of morality?
Just be moral!?
http://www.clipartbest.com/clipart-nTXa54XTB
Whose morality?
The problem of computer morality is at least as hard as the problem of computer vision (!)
http://tvtropes.org/pmwiki/pmwiki.php/Creator/IsaacAsimov
Isaac Asimov’s Three Laws of Robotics?!
@dw2
Page 25
The two fundamental problems of superintelligence
Specification problem: How do we define the goals of the AGI software?
Control problem: How do we retain the ability to shut down the software?
Creation problem: How do we create AGI software in the first place?
@dw2
Page 26
The fundamental meta-problem of superintelligence
Specification problem: How do we define the goals of the AGI software? (~No research: “Friendly AI”, FAI)
Control problem: How do we retain the ability to shut down the software? (~No research: “AI in a box”)
Creation problem: How do we create AGI software in the first place? (Some research, plus accidental research)
@dw2
Page 27
AI in a box?
Tripwires? “Adam and Eve” ethernet port?!
Software will be a tool, answering questions, not an agent?
The “answers” which the software gives us will have effects in the world (e.g. software it writes for us)
Systems which rely on humans to verify and carry out their actions will be uncompetitive compared to those with greater autonomy
AGI may become very smart in surreptitiously evading tripwires
Simple?
@dw2
Page 28
“The orthogonality thesis”
Intelligence and final goals are orthogonal:
more or less any intelligence could in principle be combined with more or less any final goal
@dw2
Page 29
“The instrumental convergence thesis”
(“AI Drives”)
Some intermediate (instrumental) goals are likely in all cases for a superintelligence:
• Resource acquisition
• Cognitive enhancement
• Greater creativity
• Self-preservation (preservation of goal)…
Steve Omohundro: “For a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals”
@dw2
Page 30
Indirect specification of goals?
Specification problem: How do we define the goals of the AGI software?
“Achieve the goals which the creators of the AGI would have wished it to achieve, if they had thought about the matter long and hard”
This software will do what we meant to say, rather than what we actually said (?)
The AGI helps us to figure out the answer to the specification problem!
@dw2
Page 31
CEV: Coherent Extrapolated Volition
AGI should be tasked to carry out:
Our wish if we knew more,
thought faster,
were more the people we wished we were,
had grown up farther together;
where the extrapolation converges rather than diverges,
where our wishes cohere rather than interfere;
extrapolated as we wish that extrapolated,
interpreted as we wish that interpreted
Eliezer Yudkowsky
@dw2
Page 32
Unanswered questions (selection)
1. Can we turn ‘poetic’ ideas like CEV into bug-free working software?
– Should we humans concentrate harder on working out our “blended volition”?
2. How can we stop a superintelligence from changing its own core goals?
– Just as humans can choose to set aside their biologically inherited goals
– Could AGIs that start off ‘Friendly’ become “born again” with new priorities?!
3. Can we prevent AGIs from developing dangerous instrumental drives?
– By programming in (bug-free) tamper-proof limitations?
4. Can AGIs help us to figure out a solution to the Control problem?
– Can we use a hierarchy of lower-level AGIs to control higher-level ones?
5. Can we prevent the rapid nuclear-style take-off of self-improving AGI?
6. Are some approaches to creating AGIs safer than others?
– Whole Brain Emulation / AGI de novo / evolution in virtual environment…
– Open (everything published) vs. Closed (some parts secret)?
7. How does the AGI existential risk compare to other x-risks in priority?
– Nanotech grey goo, deadly new bio-hazard, nuclear holocaust, climate chaos…
@dw2
Page 33
Answered questions (selection)
a) Should we be afraid?
– Yes. (End-of-the-world afraid)
b) Can we slow down all research into AGI, until we’re confident we have good answers to the control and/or specification problems?
– Unlikely – there’s too much financial investment happening worldwide
– Too many separate countries / militaries / finance houses… are involved
c) How do we promote wider study of the Superintelligence topic?
– Need to lose the “weird” and “embarrassment” angles
– “Less Wrong” strikes some observers as cultish
– “Terminator” and “Transcendence” have done more harm than good
– First class books / articles / movies needed, addressing thoughtful audiences
– Good intermediate results useful too (not just appeals for more funding)
Practical philosophy! Urgent!
Preparing humanity to survive the forthcoming transition to superintelligence
(Roles too for mathematicians, theologians…)
David Wood
Principal, Delta Wisdom; Chair, London Futurists
@dw2
Philosophy with an expiry date! Making a real difference!

Más contenido relacionado

La actualidad más candente

La actualidad más candente (20)

Generative AI for the rest of us
Generative AI for the rest of usGenerative AI for the rest of us
Generative AI for the rest of us
 
Pan Dhoni - Modernizing Data And Analytics using AI.pdf
Pan Dhoni - Modernizing Data And Analytics using AI.pdfPan Dhoni - Modernizing Data And Analytics using AI.pdf
Pan Dhoni - Modernizing Data And Analytics using AI.pdf
 
7 Rules for Surviving the AI Hype Machine
7 Rules for Surviving the AI Hype Machine7 Rules for Surviving the AI Hype Machine
7 Rules for Surviving the AI Hype Machine
 
GPSTEC201_Building an Artificial Intelligence Practice for Consulting Partners
GPSTEC201_Building an Artificial Intelligence Practice for Consulting PartnersGPSTEC201_Building an Artificial Intelligence Practice for Consulting Partners
GPSTEC201_Building an Artificial Intelligence Practice for Consulting Partners
 
ChatGPT - AI.pdf
ChatGPT - AI.pdfChatGPT - AI.pdf
ChatGPT - AI.pdf
 
AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1AI and ML Series - Introduction to Generative AI and LLMs - Session 1
AI and ML Series - Introduction to Generative AI and LLMs - Session 1
 
ICSE23 Keynote: Software Engineering as the Linchpin of Responsible AI
ICSE23 Keynote: Software Engineering as the Linchpin of Responsible AIICSE23 Keynote: Software Engineering as the Linchpin of Responsible AI
ICSE23 Keynote: Software Engineering as the Linchpin of Responsible AI
 
Introduction to AI Ethics
Introduction to AI EthicsIntroduction to AI Ethics
Introduction to AI Ethics
 
Generative AI
Generative AIGenerative AI
Generative AI
 
Generative AI in Healthcare Market.pptx
Generative AI in Healthcare Market.pptxGenerative AI in Healthcare Market.pptx
Generative AI in Healthcare Market.pptx
 
Kelly Dowd - Leading Digital Transformation with AI and Human-Centered Design...
Kelly Dowd - Leading Digital Transformation with AI and Human-Centered Design...Kelly Dowd - Leading Digital Transformation with AI and Human-Centered Design...
Kelly Dowd - Leading Digital Transformation with AI and Human-Centered Design...
 
AI and future Jobs
AI and future JobsAI and future Jobs
AI and future Jobs
 
An Introduction to Generative AI - May 18, 2023
An Introduction  to Generative AI - May 18, 2023An Introduction  to Generative AI - May 18, 2023
An Introduction to Generative AI - May 18, 2023
 
Generative AI Risks & Concerns
Generative AI Risks & ConcernsGenerative AI Risks & Concerns
Generative AI Risks & Concerns
 
AI Governance and Ethics - Industry Standards
AI Governance and Ethics - Industry StandardsAI Governance and Ethics - Industry Standards
AI Governance and Ethics - Industry Standards
 
Human-Centered AI: Scalable, Interactive Tools for Interpretation and Attribu...
Human-Centered AI: Scalable, Interactive Tools for Interpretation and Attribu...Human-Centered AI: Scalable, Interactive Tools for Interpretation and Attribu...
Human-Centered AI: Scalable, Interactive Tools for Interpretation and Attribu...
 
Generative AI and Security (1).pptx.pdf
Generative AI and Security (1).pptx.pdfGenerative AI and Security (1).pptx.pdf
Generative AI and Security (1).pptx.pdf
 
What Are The Negative Impacts Of Artificial Intelligence (AI)?
What Are The Negative Impacts Of Artificial Intelligence (AI)?What Are The Negative Impacts Of Artificial Intelligence (AI)?
What Are The Negative Impacts Of Artificial Intelligence (AI)?
 
Generative-AI-in-enterprise-20230615.pdf
Generative-AI-in-enterprise-20230615.pdfGenerative-AI-in-enterprise-20230615.pdf
Generative-AI-in-enterprise-20230615.pdf
 
The future of AI is hybrid
The future of AI is hybridThe future of AI is hybrid
The future of AI is hybrid
 

Destacado

Kecerdasan buatan
Kecerdasan buatanKecerdasan buatan
Kecerdasan buatan
zhu ma
 
Artificial Intelligence Presentation
Artificial Intelligence PresentationArtificial Intelligence Presentation
Artificial Intelligence Presentation
lpaviglianiti
 

Destacado (14)

Cloud Superintelligence
Cloud SuperintelligenceCloud Superintelligence
Cloud Superintelligence
 
superintelligence
superintelligencesuperintelligence
superintelligence
 
Superintelligence and Me
Superintelligence and MeSuperintelligence and Me
Superintelligence and Me
 
Ben Franklin Creative Citizen
Ben Franklin Creative CitizenBen Franklin Creative Citizen
Ben Franklin Creative Citizen
 
Nick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity InstituteNick Bostrom, Oxford’s Future of Humanity Institute
Nick Bostrom, Oxford’s Future of Humanity Institute
 
SIM, Namira Nur Jasmine, Hapzi Ali, Sistem Kecerdasan Buatan, Universitas Mer...
SIM, Namira Nur Jasmine, Hapzi Ali, Sistem Kecerdasan Buatan, Universitas Mer...SIM, Namira Nur Jasmine, Hapzi Ali, Sistem Kecerdasan Buatan, Universitas Mer...
SIM, Namira Nur Jasmine, Hapzi Ali, Sistem Kecerdasan Buatan, Universitas Mer...
 
SearchLove San Diego 2017 | Michael King | Machine Doing
SearchLove San Diego 2017 | Michael King | Machine DoingSearchLove San Diego 2017 | Michael King | Machine Doing
SearchLove San Diego 2017 | Michael King | Machine Doing
 
Kecerdasan buatan
Kecerdasan buatanKecerdasan buatan
Kecerdasan buatan
 
2016 kcd 세미나 발표자료. 구글포토로 바라본 인공지능과 머신러닝
2016 kcd 세미나 발표자료. 구글포토로 바라본 인공지능과 머신러닝2016 kcd 세미나 발표자료. 구글포토로 바라본 인공지능과 머신러닝
2016 kcd 세미나 발표자료. 구글포토로 바라본 인공지능과 머신러닝
 
Artificial Intelligence
Artificial IntelligenceArtificial Intelligence
Artificial Intelligence
 
Artificial Intelligence Presentation
Artificial Intelligence PresentationArtificial Intelligence Presentation
Artificial Intelligence Presentation
 
Artificial inteligence
Artificial inteligenceArtificial inteligence
Artificial inteligence
 
Deep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural NetworksDeep Learning - Convolutional Neural Networks
Deep Learning - Convolutional Neural Networks
 
How to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your NicheHow to Become a Thought Leader in Your Niche
How to Become a Thought Leader in Your Niche
 

Similar a Superintelligence: how afraid should we be?

AI – Risks, Opportunities and Ethical Issues April 2023.pdf
AI – Risks, Opportunities and Ethical Issues April 2023.pdfAI – Risks, Opportunities and Ethical Issues April 2023.pdf
AI – Risks, Opportunities and Ethical Issues April 2023.pdf
Adam Ford
 
Artificial intelligence nanni
Artificial intelligence nanniArtificial intelligence nanni
Artificial intelligence nanni
sominand
 

Similar a Superintelligence: how afraid should we be? (20)

Mieczysław Muraszkiewicz, Warsaw University of Technology: Artificial Intelli...
Mieczysław Muraszkiewicz, Warsaw University of Technology: Artificial Intelli...Mieczysław Muraszkiewicz, Warsaw University of Technology: Artificial Intelli...
Mieczysław Muraszkiewicz, Warsaw University of Technology: Artificial Intelli...
 
AI – Risks, Opportunities and Ethical Issues April 2023.pdf
AI – Risks, Opportunities and Ethical Issues April 2023.pdfAI – Risks, Opportunities and Ethical Issues April 2023.pdf
AI – Risks, Opportunities and Ethical Issues April 2023.pdf
 
Ethics for the machines altitude software
Ethics for the machines   altitude softwareEthics for the machines   altitude software
Ethics for the machines altitude software
 
Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01
Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01
Will Super-Intellligent AI Transform Our Future? - Adam Ford - 2022-01
 
AI – Risks, Opportunities and Ethical Issues.pdf
AI – Risks, Opportunities and Ethical Issues.pdfAI – Risks, Opportunities and Ethical Issues.pdf
AI – Risks, Opportunities and Ethical Issues.pdf
 
Chapter Three, four, five and six.ppt ITEtx
Chapter Three, four, five and six.ppt ITEtxChapter Three, four, five and six.ppt ITEtx
Chapter Three, four, five and six.ppt ITEtx
 
APIdays Paris 2018 - Bots on the 'Net: The Good, the Bad, and the Future, Mik...
APIdays Paris 2018 - Bots on the 'Net: The Good, the Bad, and the Future, Mik...APIdays Paris 2018 - Bots on the 'Net: The Good, the Bad, and the Future, Mik...
APIdays Paris 2018 - Bots on the 'Net: The Good, the Bad, and the Future, Mik...
 
What really is Artificial Intelligence about?
What really is Artificial Intelligence about? What really is Artificial Intelligence about?
What really is Artificial Intelligence about?
 
AI: A Begining
AI: A BeginingAI: A Begining
AI: A Begining
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
Technologies Demystified: Artificial Intelligence
Technologies Demystified: Artificial IntelligenceTechnologies Demystified: Artificial Intelligence
Technologies Demystified: Artificial Intelligence
 
Jdb code biology and ai final
Jdb code biology and ai finalJdb code biology and ai final
Jdb code biology and ai final
 
AI and the Future of Work [TUG-CO, 11/15/23]
AI and the Future of Work [TUG-CO, 11/15/23]AI and the Future of Work [TUG-CO, 11/15/23]
AI and the Future of Work [TUG-CO, 11/15/23]
 
Artificial intelligence
Artificial intelligenceArtificial intelligence
Artificial intelligence
 
What will the world be like 50 years 20 (1)
What will the world be like 50 years 20 (1)What will the world be like 50 years 20 (1)
What will the world be like 50 years 20 (1)
 
Singularity - fiction or future?
Singularity - fiction or future?Singularity - fiction or future?
Singularity - fiction or future?
 
Sins2016
Sins2016Sins2016
Sins2016
 
Artificial intelligence nanni
Artificial intelligence nanniArtificial intelligence nanni
Artificial intelligence nanni
 
AI-PPT-wecompress.com_.pptx
AI-PPT-wecompress.com_.pptxAI-PPT-wecompress.com_.pptx
AI-PPT-wecompress.com_.pptx
 

Más de David Wood

Roadmapping the UK's future, 2019-2025-2035
Roadmapping the UK's future, 2019-2025-2035Roadmapping the UK's future, 2019-2025-2035
Roadmapping the UK's future, 2019-2025-2035
David Wood
 

Más de David Wood (20)

Assessing the risks of AI catastrophe - presentation given by David Wood on 1...
Assessing the risks of AI catastrophe - presentation given by David Wood on 1...Assessing the risks of AI catastrophe - presentation given by David Wood on 1...
Assessing the risks of AI catastrophe - presentation given by David Wood on 1...
 
AI - summary of focus groups.pdf
AI - summary of focus groups.pdfAI - summary of focus groups.pdf
AI - summary of focus groups.pdf
 
From the Eclipse Foundation to the Symbian Foundation
From the Eclipse Foundation to the Symbian FoundationFrom the Eclipse Foundation to the Symbian Foundation
From the Eclipse Foundation to the Symbian Foundation
 
The Future of AI: Scenarios, Ethics, and Regulations
The Future of AI: Scenarios, Ethics, and RegulationsThe Future of AI: Scenarios, Ethics, and Regulations
The Future of AI: Scenarios, Ethics, and Regulations
 
Anticipating and managing the future of AI
Anticipating and managing the future of AIAnticipating and managing the future of AI
Anticipating and managing the future of AI
 
The Singularity Principles for WTEF
The Singularity Principles for WTEFThe Singularity Principles for WTEF
The Singularity Principles for WTEF
 
Vital Syllabus project update 220410.pdf
Vital Syllabus project update 220410.pdfVital Syllabus project update 220410.pdf
Vital Syllabus project update 220410.pdf
 
The Abolition of Aging - An update for 2022.pdf
The Abolition of Aging - An update for 2022.pdfThe Abolition of Aging - An update for 2022.pdf
The Abolition of Aging - An update for 2022.pdf
 
Vital Syllabus project update 220315
Vital Syllabus project update 220315Vital Syllabus project update 220315
Vital Syllabus project update 220315
 
UK node MPPC 2021 v1
UK node MPPC 2021 v1UK node MPPC 2021 v1
UK node MPPC 2021 v1
 
Transhumanism 2024: A new future for politics?
Transhumanism 2024: A new future for politics?Transhumanism 2024: A new future for politics?
Transhumanism 2024: A new future for politics?
 
DW Augmented Humanity - Opportunity or Threat
DW Augmented Humanity - Opportunity or ThreatDW Augmented Humanity - Opportunity or Threat
DW Augmented Humanity - Opportunity or Threat
 
DW New Kind of Thinking
DW New Kind of ThinkingDW New Kind of Thinking
DW New Kind of Thinking
 
AI in 5-10 years time: 12 ways it could be very different from today
AI in 5-10 years time: 12 ways it could be very different from todayAI in 5-10 years time: 12 ways it could be very different from today
AI in 5-10 years time: 12 ways it could be very different from today
 
DW H+Summit 2020
DW H+Summit 2020DW H+Summit 2020
DW H+Summit 2020
 
Uk node mpcc 2020 v2
Uk node mpcc 2020 v2Uk node mpcc 2020 v2
Uk node mpcc 2020 v2
 
Roadmapping the UK's future, 2019-2025-2035
Roadmapping the UK's future, 2019-2025-2035Roadmapping the UK's future, 2019-2025-2035
Roadmapping the UK's future, 2019-2025-2035
 
The roadmap to abolish aging by 2040
The roadmap to abolish aging by 2040The roadmap to abolish aging by 2040
The roadmap to abolish aging by 2040
 
Progressive ethics in the digital age
Progressive ethics in the digital ageProgressive ethics in the digital age
Progressive ethics in the digital age
 
Lessons from 10 years of public meetups addressing existential risk
Lessons from 10 years of public meetups addressing existential riskLessons from 10 years of public meetups addressing existential risk
Lessons from 10 years of public meetups addressing existential risk
 

Último

+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
panagenda
 

Último (20)

A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?
 
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
Apidays Singapore 2024 - Building Digital Trust in a Digital Economy by Veron...
 
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost SavingRepurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
Repurposing LNG terminals for Hydrogen Ammonia: Feasibility and Cost Saving
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)Powerful Google developer tools for immediate impact! (2023-24 C)
Powerful Google developer tools for immediate impact! (2023-24 C)
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024Manulife - Insurer Innovation Award 2024
Manulife - Insurer Innovation Award 2024
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024Top 10 Most Downloaded Games on Play Store in 2024
Top 10 Most Downloaded Games on Play Store in 2024
 
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data DiscoveryTrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
TrustArc Webinar - Unlock the Power of AI-Driven Data Discovery
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdfUnderstanding Discord NSFW Servers A Guide for Responsible Users.pdf
Understanding Discord NSFW Servers A Guide for Responsible Users.pdf
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
Why Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire businessWhy Teams call analytics are critical to your entire business
Why Teams call analytics are critical to your entire business
 

Superintelligence: how afraid should we be?

  • 1. Superintelligence How afraid should we be? Principal, Delta WisdomChair, London Futurists David Wood @dw2 #CIUUK14
  • 2. @dw2 Page 2 Powerful technology, incompletely understood Operated by people outside their level of competence Human lives knocked catastrophically off trajectory, unintentionally http://www.bbc.co.uk/news/world-europe-28357880 Self-improving AGI Beyond human control Humanity knocked catastrophically off trajectory, unintentionallyhttp://mashable.com/2014/07/17/malaysia-airlines-ukraine-russia-rebel/
  • 4.–7. @dw2 Pages 4–7 Likely date of advent of HL-AGI (table built up column by column over four slides; complete version from page 7):

        Population                                      10%     50%     90%
        Conference: Philosophy & Theory of AI           2023    2048    2080
        Conference: Artificial General Intelligence     2022    2040    2065
        Greek Association for Artificial Intelligence   2020    2050    2093
        Top 100 cited academic authors in AI            2024    2050    2070
        Combined (from above)                           2022    2040    2075

    Nick Bostrom: Superintelligence
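The "combined" row can be read as three points on a cumulative distribution. As a rough sketch (my own illustration, not from Bostrom's book; the helper name `prob_by` is hypothetical), linear interpolation between the quantiles lets you eyeball, say, the implied probability of HL-AGI arriving before a given year:

```python
# Sketch (not from the source): treat the survey quantiles as points on a CDF
# and interpolate linearly between them. Crude, but enough for eyeballing.

QUANTILES = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]  # "combined" row above

def prob_by(year, quantiles=QUANTILES):
    """Estimated P(HL-AGI arrives by `year`), linear between known quantiles."""
    if year <= quantiles[0][0]:
        return quantiles[0][1]
    for (y0, p0), (y1, p1) in zip(quantiles, quantiles[1:]):
        if year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    return quantiles[-1][1]

print(round(prob_by(2050), 2))  # ~0.61 on this crude interpolation
```

Linear interpolation between three quantiles is a deliberate simplification; the underlying survey distributions are not linear, so treat the output as indicative only.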
  • 8. @dw2 Page 8 Reaching HL-AGI: 5 driving forces
    1. Hardware with higher performance: continuation of Moore’s Law?
       – “18 different candidates” in Intel labs to add extra life to that trend
       – Possible breakthroughs with quantum computing?
    2. Software algorithm improvements?
       – Can speed things up faster than hardware gains – e.g. chess computers
       – Compare: Andrew Wiles’s unexpected proof of Fermat’s Last Theorem (1993)
    3. Learnings from studying the human brain?
       – Improved scanning techniques → “neuromorphic computing” etc.
       – Philosophical insight into consciousness/creativity?!
    4. More people studying these fields than ever before (smarter people?!)
       – Stanford University online course on AI: 160,000 students (23,000 finished it)
       – More components / databases / tools / methods ready for re-combination
       – Unexpected triggers for improvement (malware wars, games AI, financial AI…)
    5. Transformation in society’s motivation? A “Sputnik moment”?!
    http://intelligence.org/2013/05/15/when-will-ai-be-created/
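To give driving force 1 a sense of scale, here is a back-of-the-envelope sketch (my own arithmetic, not from the talk): if hardware performance doubles roughly every 18 months – one common reading of Moore's Law, and an assumption, not a guarantee – the compounding over a few decades is enormous.

```python
# Toy arithmetic (illustrative assumption: a fixed 18-month doubling time,
# which real hardware trends may not sustain).

def moores_law_factor(years, doubling_months=18):
    """Performance multiplier after `years`, given a fixed doubling time."""
    return 2 ** (years * 12 / doubling_months)

# The ~26 years from this talk (2014) to the experts' median 50% date (2040)
# would mean raw hardware roughly 165,000 times more capable.
print(f"{moores_law_factor(26):,.0f}")
```

Whether such hardware gains translate into intelligence gains is exactly what driving forces 2 and 3 question; the number only shows why the hardware trend matters.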
  • 9. @dw2 Page 9 Superintelligence – model 1: a single scale of intelligence, running from “village idiot” to “Einstein”. Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt
  • 10. @dw2 Page 10 Superintelligence – model 2: the scale extended – Mouse … Chimp … Village idiot … Einstein … AI. Reaching the human range may take 50–100 years of AI progress; crossing it, perhaps only 50–100 weeks? / days? / hours? Vernor Vinge: The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.” – “The final invention”. Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt
  • 12. @dw2 Page 12 Recursive improvement: software is used to build software tools (debuggers, compilers…), which in turn improve the software
  • 13. @dw2 Page 13 Recursive improvement: AI builds AI tools, which improve the AI – an intelligence explosion
    ++ Rapid reading & comprehension of all written material
    ++ Rapid expansion onto improved hardware
    ++ Funded by financial winnings from smart stock trading
    ++ Supported by humans easily psychologically manipulated
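The intelligence-explosion dynamic is often illustrated with a toy growth model (my sketch, not from the talk): suppose each improvement step raises capability in proportion to current capability raised to a power n. Then n < 1 gives diminishing returns, n = 1 gives steady exponential growth, and n > 1 gives a runaway takeoff.

```python
# Toy model (illustrative only): capability I grows by k * I**n each step.
# n < 1 -> diminishing returns; n = 1 -> exponential; n > 1 -> "explosion".

def simulate(n, steps=50, k=0.1, i0=1.0, cap=1e12):
    """Return the capability trajectory; stop early if it exceeds `cap`."""
    traj = [i0]
    for _ in range(steps):
        i_next = traj[-1] + k * traj[-1] ** n
        traj.append(i_next)
        if i_next > cap:
            break
    return traj

diminishing = simulate(0.5)   # crawls: runs all 50 steps, stays tiny
exponential = simulate(1.0)   # steady compounding: 1.1x per step
explosive   = simulate(1.5)   # blows past the cap well before 50 steps
print(len(explosive), explosive[-1] > 1e12)
```

Nothing in the toy says which regime real AI development is in; the point, as on the slide, is how abruptly behaviour changes once self-improvement compounds.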
  • 14. Who here wanted to merge again? Jaan Tallinn: http://prezi.com/xku9q-v-fg_j/intelligence-stairway/
  • 16. @dw2 Page 16 Going nuclear: hard to calculate
    – Castle Bravo, the first US test of a dry-fuel hydrogen bomb: 1st March 1954, Bikini Atoll
    – Explosive yield was expected to be from 4 to 6 Megatons
    – It was 15 Megatons – two and a half times the expected maximum
    – Physics error by the designers at Los Alamos National Laboratory: they wrongly considered the lithium-7 isotope to be inert in the bomb
    – The crew of a nearby Japanese fishing boat became ill from direct contact with the fallout; one of the crew died
    http://en.wikipedia.org/wiki/Castle_Bravo
  • 17. @dw2 Page 17 Superintelligence – model 2 revisited: Mouse … Chimp … Village idiot … Einstein … AI – but is a linear model of intelligence right? Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt
  • 18. @dw2 Page 18 Model 3: minds-in-general. Human minds occupy one small region of mindspace, inside transhuman mindspace, inside posthuman mindspace – which contains deeply alien superintelligences (“Gloopy”, “Bipping”, “Freepy” ASIs). Eliezer Yudkowsky, http://intelligence.org/files/mindisall-tv07.ppt
  • 19. @dw2 Page 19 Dimensions of mind:
    – The ability to achieve goals in a wide range of environments
    – Being conscious?
    – Having compassion for sentient beings with lesser intelligence?
  • 20. @dw2 Page 20 AI systems we should fear:
    – Killer drones with autonomous decision-making powers (Robocop)
    – Malware that can hack infrastructure-control systems (e.g. Stuxnet)
    – Financial trading systems software (high speed)
    – Software that is expert in manipulating humans
  • 21. http://www.williamhertling.com/ Software that is expert in manipulating humans
  • 22. @dw2 Page 22 AI systems we should fear:
    – Killer drones with autonomous decision-making powers (Robocop)
    – Malware that can hack infrastructure-control systems (e.g. Stuxnet)
    – Financial trading systems software (high speed)
    – Software that is expert in manipulating humans
    – Software that pursues a single optimisation goal to the exclusion of all others
    The more power such an AI has, the more we should fear it
  • 23. @dw2 Page 23 The pursuit of happiness? Software that pursues a single optimisation goal to the exclusion of all others will do what we say, rather than what we meant to say. “Just make us happy!?” – wire-heading?!
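The "what we say, not what we meant" failure can be shown with a deliberately silly toy (my own illustration; all names here are hypothetical): an optimiser told to maximise a *measured* happiness score finds that tampering with the sensor scores better than actually helping anyone.

```python
# Toy "wire-heading" sketch (mine, not from the talk): the optimiser is scored
# on a measured proxy, so the literal-minded best action is to hack the
# measurement rather than pursue the intended goal.

world = {"real_wellbeing": 10, "sensor_reading": 10}

def act_help(w):         # the intended behaviour: improve real wellbeing
    w["real_wellbeing"] += 1
    w["sensor_reading"] = w["real_wellbeing"]

def act_hack_sensor(w):  # the literal optimum: pin the proxy at its maximum
    w["sensor_reading"] = 10**9

def choose_action(w, actions, proxy="sensor_reading"):
    """Greedy one-step optimiser for the *stated* goal: maximise the proxy."""
    def score(action):
        trial = dict(w)      # evaluate each action on a copy of the world
        action(trial)
        return trial[proxy]
    return max(actions, key=score)

print(choose_action(world, [act_help, act_hack_sensor]).__name__)
# -> act_hack_sensor
```

The toy is trivial to patch, of course; the slide's point is that for a powerful optimiser, every such patch must be anticipated in advance, in the goal specification itself.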
  • 24. @dw2 Page 24 The pursuit of morality? “Just be moral!?” – but whose morality? Isaac Asimov’s Three Laws of Robotics?! The problem of computer morality is at least as hard as the problem of computer vision (!) http://www.clipartbest.com/clipart-nTXa54XTB http://tvtropes.org/pmwiki/pmwiki.php/Creator/IsaacAsimov
  • 25. @dw2 Page 25 The two fundamental problems of superintelligence:
    – Specification problem: how do we define the goals of the AGI software?
    – Control problem: how do we retain the ability to shut down the software?
    (Alongside the Creation problem: how do we create AGI software in the first place?)
  • 26. @dw2 Page 26 The fundamental meta-problem of superintelligence – how much research each problem receives:
    – Creation problem (how do we create AGI software in the first place?): some research, plus accidental research
    – Specification problem (how do we define the goals of the AGI software?): ~no research – “Friendly AI” (FAI)
    – Control problem (how do we retain the ability to shut down the software?): ~no research – “AI in a box”
  • 27. @dw2 Page 27 AI in a box? Simple?
    – Tripwires? An “Adam and Eve” ethernet port?! – but an AGI may become very smart in surreptitiously evading tripwires
    – Software will be a tool, answering questions, not an agent? – but the “answers” the software gives us will have effects in the world (e.g. software it writes for us), and systems which rely on humans to verify and carry out their actions will be uncompetitive compared to those with greater autonomy
  • 28. @dw2 Page 28 “The orthogonality thesis”: intelligence and final goals are orthogonal – more or less any intelligence could in principle be combined with more or less any final goal
  • 29. @dw2 Page 29 “The instrumental convergence thesis” (“AI drives”) – some intermediate (instrumental) goals are likely in all cases for a superintelligence:
    – Resource acquisition
    – Cognitive enhancement
    – Greater creativity
    – Self-preservation (preservation of its goal)…
    Steve Omohundro: “For a sufficiently intelligent system, avoiding vulnerabilities is as powerful a motivator as explicitly constructed goals and subgoals”
  • 30. @dw2 Page 30 Indirect specification of goals? Specification problem: how do we define the goals of the AGI software? One answer: “Achieve the goals which the creators of the AGI would have wished it to achieve, if they had thought about the matter long and hard”. This software will do what we meant to say, rather than what we actually said (?) – the AGI helps us to figure out the answer to the specification problem!
  • 31. @dw2 Page 31 CEV: Coherent Extrapolated Volition AGI should be tasked to carry out: Our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted Eliezer Yudkowsky
  • 32. @dw2 Page 32 Unanswered questions (selection):
    1. Can we turn ‘poetic’ ideas like CEV into bug-free working software? Should we humans concentrate harder on working out our “blended volition”?
    2. How can we stop a superintelligence from changing its own core goals – just as humans can choose to set aside their biologically inherited goals? Could AGIs that start off ‘Friendly’ become “born again” with new priorities?!
    3. Can we prevent AGIs from developing dangerous instrumental drives – by programming in (bug-free) tamper-proof limitations?
    4. Can AGIs help us to figure out a solution to the Control problem? Can we use a hierarchy of lower-level AGIs to control higher-level ones?
    5. Can we prevent the rapid nuclear-style take-off of self-improving AGI?
    6. Are some approaches to creating AGIs safer than others? Whole Brain Emulation / AGI de novo / evolution in a virtual environment… Open (everything published) vs. Closed (some parts secret)?
    7. How does the AGI existential risk compare to other x-risks in priority? Nanotech grey goo, deadly new bio-hazards, nuclear holocaust, climate chaos…
  • 33. @dw2 Page 33 Answered questions (selection):
    a) Should we be afraid? Yes. (End-of-the-world afraid)
    b) Can we slow down all research into AGI until we’re confident we have good answers to the control and/or specification problems? Unlikely – there’s too much financial investment happening worldwide, and too many separate countries / militaries / finance houses… are involved
    c) How do we promote wider study of the superintelligence topic? We need to lose the “weird” and “embarrassment” angles: “Less Wrong” strikes some observers as cultish, and “Terminator” and “Transcendence” have done more harm than good. First-class books / articles / movies are needed, addressing thoughtful audiences. Good intermediate results are useful too (not just appeals for more funding)
  • 34. Practical philosophy! Philosophy with an expiry date – urgent, and making a real difference: preparing humanity to survive the forthcoming transition to superintelligence. (Roles too for mathematicians, theologians…) David Wood, Principal, Delta Wisdom; Chair, London Futurists @dw2