by Samantha Adams, Met Office.
Originally purely academic research fields, Machine Learning and AI are now firmly mainstream and frequently mentioned in the tech media (and the regular media too).
We’ve also seen the explosion of Data Science, which encompasses these fields and more. There is a lot of interesting work going on, and a lot of positive as well as negative hype. The terms ML and AI are often used interchangeably, and techniques are often described as being inspired by the brain.
In this talk I will explore the history and evolution of these fields, current progress, and the challenges in making artificial brains.
From the FreshTech 2017 conference by TechExeter
www.techexeter.uk
1. Machine Learning, AI and the
Brain
Samantha Adams
Met Office, University of Plymouth
2. Talk Overview
• Why this talk?
• Some definitions and history
• Why is it difficult to build a brain?
• Should we be worried?
• Hot topics
• Resources
3. Why this talk?
• Machine Learning and AI are now mainstream
• Big data and Data Science in the mix
• Positive and negative hype
• ML == AI ?
• Human-like AI still a challenge
• My background
• Disclaimer – my opinions are my own!
4. “A field of study that gives computers the ability to
learn without being explicitly programmed“
(Arthur Samuel, 1959)
“Machine Learning is the study of techniques and
algorithms that allow machines to autonomously
extract meaningful information from data”
(Me, 2017)
“Machine Learning is a paradigm that enables
systems to automatically improve their
performance at a task by observing relevant data”
(Stanford One Hundred Year Study on Artificial Intelligence, 2016)
Some definitions - Machine learning
5. “Artificial Intelligence (AI) is a science and a set of computational
technologies that are inspired by—but typically operate quite
differently from—the ways people use their nervous systems and
bodies to sense, learn, reason, and take action.”
(Stanford One Hundred Year Study on Artificial Intelligence, 2016)
ML is about techniques for enabling machines to assist humans;
the goal of AI is the same, but by generating human-like behaviour.
ML is part of the AI researcher’s toolbox.
“…is intelligence exhibited by machines. Colloquially, the term
‘artificial intelligence’ is applied when a machine mimics ‘cognitive’
functions that humans associate with other human minds, such as
‘learning’ and ‘problem solving’“
(Wikipedia)
Some definitions – Artificial Intelligence
6. ML tasks (I think) are pretty easy to define:
• Predict stock market prices based on historical data
• Decide if a tumour is malignant or benign from a brain scan
• Classify an image as containing a cat or a dog
• Determine most efficient route given a set of constraints
AI is harder (what constitutes ‘behaviour’?)
• Self-driving car
• Alexa, Cortana, Siri assistants
• Machine Translation (tricky one)
• Describing an image (very tricky one)
Some examples
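The cat/dog example above boils down to “learn from labelled examples, then predict for new inputs”. A minimal sketch of that idea is a from-scratch 1-nearest-neighbour classifier; the features (ear pointiness, snout length) and all the values are invented purely for illustration:

```python
# 1-nearest-neighbour: classify a new input by the label of the closest
# training example. Features and labels here are made up for illustration.
import math

def nearest_neighbour(train, query):
    """Return the label of the training example closest to `query`."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy training set: (ear_pointiness, snout_length) -> species
train = [((0.9, 0.2), "cat"), ((0.8, 0.3), "cat"),
         ((0.3, 0.9), "dog"), ((0.2, 0.8), "dog")]

print(nearest_neighbour(train, (0.85, 0.25)))  # closest examples are cats
```

No rules about cats or dogs are ever written down; the behaviour comes entirely from the data, which is the essence of the Samuel definition above.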
7. ‘Machine Learning’ has been going for a very long time (if one
accepts the previous definitions)
In the eighteenth century, Thomas Bayes - reasoning about
the probability of events
In the nineteenth century, George Boole showed that logical
reasoning could be performed systematically
In the twentieth century, the field of statistics emerged from
the experimental sciences, enabling the modelling of data and
the drawing of inferences from it
When Computer Science matured, the idea of creating a
machine to execute such operations soon followed.
History
8. Alan Turing 1912-1954
• “Computing Machinery and Intelligence”
(1950)
• Probably the first discussion of Artificial
Intelligence, though he didn’t use the phrase
John McCarthy 1927-2011
• Invented the term ‘Artificial Intelligence’
• In 1956 organised the first major AI
conference (Dartmouth)
Images courtesy of Wikipedia
“ Intelligence as Computation “
9. • Expectations had run high for what was going to be
possible with AI
• In 1981, the Japanese Ministry of International Trade
and Industry set aside $850 million for the ’Fifth
generation computer project’. Their objectives were
to write programs and build machines that could:
• carry on conversations
• translate languages
• interpret pictures
• reason like human beings
“ Intelligence as Computation “
10. • An ‘AI Winter’ followed (actually the second such): AI suffered a
series of financial setbacks as industry and business lost faith and
stopped investing
• The assumption had been that human intelligence, behaviour etc.
could be programmed into a machine (computer)
• For some domain-specific problems this worked – e.g. games
where the rules can be explicitly set down (chess)
• Ironically, it was the things that humans find easy that were
the real challenge for AI
“ Intelligence as Computation “
11. • How words (symbols) are assigned meanings
• A thing (‘referent’) can be referred to using
different words with different meanings
• Words in our minds are grounded
• Words in isolation are not grounded
The Symbol Grounding Problem
The middle door
The blue door
The door she came in through
12. • How to solve the problem of autonomous
machines (e.g. a robot) operating in a dynamic
environment
• Even very simple scenarios become complicated
to specify using logic
The Frame Problem
Icons made by Freepik from www.flaticon.com
Light: ON Light: OFF
Door: OPEN Door: CLOSED
13. The Frame Problem
INITIAL state: Door: CLOSED, Light: OFF
Action: OPEN DOOR
NEXT state: Door: OPEN AND Light: ON? OR Door: OPEN AND Light: OFF?
The problem is that specifying only
which conditions are changed by the
actions does not entail that all other
conditions are not changed!
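The door/light scenario above can be sketched in code. The action table, fluent names and values below are invented for illustration; the point is that an action rule listing only what it changes leaves every other condition undetermined unless a “frame axiom” says the rest stays the same:

```python
# Each action maps only to the fluents it explicitly changes.
effects = {"OPEN_DOOR": {"door": "OPEN"}}

def apply_naive(state, action):
    """Return only what the action rule entails - nothing else is known."""
    return dict(effects[action])  # the light's state has been 'lost'

def apply_with_frame_axiom(state, action):
    """Frame axiom: every fluent not mentioned by the action is unchanged."""
    new_state = dict(state)            # carry everything over...
    new_state.update(effects[action])  # ...then apply the listed effects
    return new_state

initial = {"door": "CLOSED", "light": "OFF"}
print(apply_naive(initial, "OPEN_DOOR"))             # door open, light unknown
print(apply_with_frame_axiom(initial, "OPEN_DOOR"))  # door open, light still off
```

In a dictionary-based simulation the frame axiom is one line; in a logical specification, writing “and nothing else changed” for every action/fluent pair is exactly what makes even simple scenarios explode in complexity.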
14. Rodney Brooks 1954 -
• “Elephants Don’t Play Chess” (1990)
• “Situated and Embodied AI”
• The Subsumption Architecture
• Integration of AI and Robotics
• These ideas spawned a lot of subfields in
AI / Robotics
“ Intelligence as Interaction between Body,
Brain and Environment “
Image(s) courtesy of Wikipedia
"the world is its own best model. It is always exactly up to date. It
always has every detail there is to be known. The trick is to sense it
appropriately and often enough”
A Subsumption Architecture
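The layering idea from the speaker notes (lower levels provide basic behaviour, subsumed by higher levels depending on sensory input) can be sketched as a priority loop. The layer names, sensor fields and action strings are invented for illustration:

```python
# Toy subsumption-style controller: layers are consulted from highest
# priority down; a higher layer that fires subsumes (overrides) the rest.

def avoid_obstacle(sensors):
    """Higher layer: react to the world as sensed right now."""
    return "TURN_AWAY" if sensors.get("obstacle_near") else None

def wander(sensors):
    """Lowest layer: always has a default behaviour."""
    return "MOVE_FORWARD"

LAYERS = [avoid_obstacle, wander]  # ordered highest priority first

def control(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(control({"obstacle_near": True}))
print(control({"obstacle_near": False}))
```

Note there is no internal world model anywhere: each layer decides directly from current sensor readings, matching the “world is its own best model” quote. Brooks’ real architecture wires layers as asynchronous finite-state machines; this sequential loop only approximates that.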
15. • The field became fragmented, with many different names for the same
research, e.g. ‘Computational Intelligence’ or ‘Cognitive
Systems’ (partly due to stigma from the AI Winter)
• Many techniques just became assimilated into regular
Computer Science (e.g. search)
• Increased computer power and data availability has allowed
many of the ‘Fifth generation’ dreams to become reality today
• Successful applications are generally domain-specific and have
required decades of prior research to bear fruit
• Not as much progress towards Artificial ‘General’ Intelligence,
since current techniques don’t address genuine human-level reasoning,
common-sense ability and adaptation
Since then…
16. • Lack of full understanding of what the brain does and how it does it
• Lack of a definitive metric for ‘intelligence’
• Difficult to define what objective is to be optimised. Is the brain even optimising all
the time? (Check out Gerd Gigerenzer’s work on Fast and Frugal heuristics)
• Brains are inside a body that has sensors and actuators!
• The brain is multi-modal. Although there are specific regions of the brain that deal
with vision, audition, speech, motor, the functions are not cleanly separable.
• The brain is extremely good at adaptation – most ML techniques work well in
stationary environments but cannot adapt very well if the environment changes (an
example is Computer Vision)
• The brain is an electro-chemical network so there is a mixture of global and local
learning. Most NN have global learning mechanisms
• The brain uses pulse computation or spikes. Most NN have constant activation
• The brain rewires itself constantly. Most NN have a fixed network structure where
only the weights on the connections change (note dropout in DNN)
• The brain does not start as a ‘blank sheet’, but is the product of evolution (note the
current interest in transfer learning and pretrained NN)
So why is it a struggle to build brains
with even the sophistication of a fly’s, let alone a human’s?
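The “pulse computation” point above can be made concrete with a leaky integrate-and-fire neuron, the simplest spiking model: the unit integrates its input and emits discrete spikes, unlike the constant activations of most artificial NNs. All constants below are arbitrary illustrative values, not fitted to biology:

```python
# Leaky integrate-and-fire neuron: membrane potential leaks each step,
# accumulates input current, and fires a spike on crossing a threshold.

def lif_run(inputs, threshold=1.0, leak=0.9):
    """Simulate one LIF neuron; return its spike train (0/1 per step)."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                     # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input still produces periodic spikes,
# because charge accumulates faster than it leaks away.
print(lif_run([0.4] * 10))
```

Information here lives in the *timing* of spikes rather than in a continuous activation value, which is one reason spiking networks are hard to train with standard gradient-based methods.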
17. So why is it a struggle to build brains
with even the sophistication of a fly’s, let alone a human’s?
“The brain is an infinite-dimensional network of networks of
genes, proteins, cells, synapses, and brain regions, all operating in
a dynamically changing cocktail of neurochemicals. Our
perceptions and movements, thoughts and feelings emerge as
electrical, chemical, and mechanical chain reactions explode and
weave through these networks. Because there is no scientific
evidence that we can ignore any of these reactions, the only way
to get close to the capabilities of the brain is to simulate or
emulate all of them. When that will happen depends on the level
of resolution that we need to capture all these reactions.”
Henry Markram (Founder/director of the Blue Brain Project and founder of
the Human Brain Project), Interview for IEEE Spectrum, June 2017.
18. • Like it or not, AI is already here! Like most technologies, once it
becomes useful it is assimilated into everyday life and becomes
invisible
• Concerns fall into these categories
• Ethical, e.g. Safety, privacy
• Economic, e.g. Taking jobs away from humans
• Existential, e.g. Machines will become smarter than us and
subjugate or eliminate us
• I see a conflict between the desire to make autonomous
systems that behave like humans but also to ensure that they
do not harm people, or make biased decisions, or show
discrimination
Should we be worried?
19. “Contrary to the more fantastic predictions for AI in the
popular press, the Study Panel found no cause for concern
that AI is an imminent threat to humankind. No machines
with self-sustaining long-term goals and intent have been
developed, nor are they likely to be developed in the near
future”
“the new jobs that will emerge are harder to imagine in
advance than the existing jobs that will likely be lost”
Should we be worried?
(Stanford One Hundred Year Study on Artificial Intelligence, 2016)
20. “There are quite a few people out there who’ve said that AI is an
existential threat: Stephen Hawking, astronomer Royal Martin
Rees, who has written a book about it, and they share a common
thread, in that: they don’t work in AI themselves. For those who
do work in AI, we know how hard it is to get anything to actually
work through product level.”
Should we be worried?
Rodney Brooks, interview for TechCrunch, July 2017.
21. “my biggest worry is about machines having too much power, not
about them being too smart. You could, for example, be president
of the United States and do a lot of damage, regardless of what
your IQ is”
Should we be worried?
Gary Marcus (Professor of psychology, New York University), Interview for IEEE
Spectrum, May 2017.
22. “Every technology since fire has had intertwined promise and
peril. I believe that our best strategy to keep AI safe and beneficial
is to essentially merge with it. We are already on that path.
Whether [intelligent] machines are inside or outside [the] body is
not a critical issue.”
Should we be worried?
Ray Kurzweil (Cofounder and chancellor, Singularity University), Interview for IEEE
Spectrum, May 2017
23. Large-scale machine learning
Deep Learning
Reinforcement learning
Robotics
Computer Vision
Natural Language Processing
Collaborative systems
Crowdsourcing and human computation
Algorithmic game theory and computational social choice
Internet of Things (IoT)
Neuromorphic computing
Hot Topics
(According to the Stanford One Hundred Year Study on Artificial Intelligence)
24. • Be wary about paying lots of money for anything!!
• You don’t need to read mathematical research papers either
• Most Udacity and Coursera videos are on YouTube
• Deep Learning TV YouTube
• Jeremy Howard’s Practical Deep Learning for Coders
http://course.fast.ai/
• Machinelearningmastery.com – Jason Brownlee’s blog. Highly
recommended for his tutorials; his eBooks are good value for
money
Resources
25. Peter Stone et al. (2016). Artificial Intelligence and Life in 2030.
One Hundred Year Study on Artificial Intelligence: Report of the
2015-2016 Study Panel, Stanford University, Stanford,
CA, September 2016. Available online
from http://ai100.stanford.edu/2016-report
Alan M. Turing (1950). Computing Machinery and Intelligence.
Mind 49: 433-460. Available online from
http://cogprints.org/499/1/turing.HTML
Rodney A. Brooks (1990). Elephants Don’t Play Chess.
Robotics and Autonomous Systems, 6: 3-15. Available online from
http://people.csail.mit.edu/brooks/papers/elephants.pdf
Selected references
McCarthy – The Dartmouth conference – co-organisers included Marvin Minsky and Claude Shannon; attendees included others who would go on to do important work, such as Arthur Samuel
E.g. games where the rules can be explicitly set down (chess), and computationally intensive problems that just require the kind of brute-force approach humans can’t perform. Ironically, these are the kinds of problems most humans find quite hard, and it’s the things that we find easy that were the real challenge for AI.
Example of an ungrounded word – any word in a foreign language that you don’t understand. You could look it up in a dictionary in that language but you still don’t understand the meaning because you don’t understand the language
Alan Turing did mention this point in his paper!!!!
Subsumption architecture:
Lower levels are ‘basic’ behaviour which can be subsumed by higher levels depending upon sensory input
Note the feedback loop from actuators to sensors
Note, however, that military applications are outside of the scope of this study group