THE AGE OF AI
How Artificial Intelligence Is Transforming Organizations

By Susan Etlinger, Analyst
Altimeter, a Prophet Company
January 31, 2017
Seemingly overnight, Artificial Intelligence (AI) has moved from a plot point in science fiction
movies to a core technology for companies such as Google, Facebook, Baidu, Microsoft, and
Amazon. But the idea of AI — of machines that can sense, classify, learn, reason, predict, and
interact — has been around for decades. Today, the combination of massive and available
datasets, inexpensive parallel computing, and advances in algorithms has made it possible
for machines to function in ways that were previously unthinkable.1
While the more obvious examples such as robotics, driverless cars, and intelligent agents such
as Siri and Alexa tend to dominate the news, artificial intelligence has much wider implications.
Gartner predicts that “by 2020, algorithms will positively alter the behavior of billions of global
workers.”2 Markets & Markets expects the AI market to reach $5.05B by 2020.3
This report lays out the current state of AI for business, describes primary and emerging use
cases, and states the risks, opportunities, and organizational considerations that businesses
are facing. It concludes with recommendations for companies thinking about applying AI to
their own organizations and a look at some of the business, legal, and technical trends that
are likely to shape the future.
TABLE OF CONTENTS

Executive Summary
What is Artificial Intelligence?
Use Cases for Artificial Intelligence
Implications and Recommendations
A Look at the Future
End Notes
About Us
WHAT IS ARTIFICIAL INTELLIGENCE?
AI means many things to many people, from TV and movies such as Blade
Runner and The Terminator series to HBO’s Westworld to Apple’s Siri and
Amazon’s Alexa to the driverless truck that completed its first commercial
delivery — a 143-mile beer run — in October 2016.4
But today’s reality is far from
the aspirational and often dystopian views of AI in popular culture. To understand
AI and its implications, it is important to start from a shared understanding of
what it isn’t, what it is today, and what it might become in the future.
WHAT AI ISN’T — AND WHAT IT IS
There are almost as many definitions of AI as there are people talking about it.5
The fundamental challenge of defining AI is that it is not possible simply
to translate human intelligence into digital form, because there is no
consensus about what human intelligence actually is.6
One theory of human intelligence, proposed in 1983 by developmental psychologist
Howard Gardner, argues that people display not one but multiple intelligences7
(see Figure 1).

“If we accept that there are different types of intelligence in the animal
and human world, then the same is true for the AI world. Hence trying to
define AI is as difficult as trying to define the notion of intelligence itself.”
— Prianka Srinivasan, Office of the CTO, Technology Vision & Strategy, HP
The meaning of human intelligence has occupied philosophers and psychologists
for centuries and will do so for centuries to come. But Howard Gardner’s model
offers a useful way to frame some of the differences between machine and human
intelligence and some of the types of intelligences that AI is best (and least)
equipped to handle.
Clearly, computers lack consciousness of the type we have seen in TV and film, but
they excel at logical-mathematical reasoning. With training, they can recognize
and interpret images, language, music, and spatial relationships. But can they be
creative? Fair? Empathetic? And, more to the point, do we want that? The notion
of the “uncanny valley,” or the point at which “lifelike” dolls and robots seem
creepy and disturbing, has been explored in more than 500 scientific papers to date.8

FIGURE 1: HOWARD GARDNER’S THEORY OF MULTIPLE INTELLIGENCES
Today, scientists, philosophers, and engineers are exploring these questions
and probing the boundaries of what AI can and should do. But, argues Gideon
Lewis-Kraus in his New York Times Magazine article, “The Great A.I. Awakening,”
“Artificial intelligence is not about building a mind; it’s about the improvement of
tools to solve problems.”9
It can be challenging to distinguish systems that use AI from those that do not,
especially if they are engineered to provide a realistic experience. What
differentiates AI from what we see in video games, for example, is that in video
games, algorithms dictate the characters’ behavior and players learn based on a
set of rules. With AI, however, algorithms change themselves based on what they
learn from new data.
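To make that distinction concrete, here is a minimal, purely illustrative Python sketch (not drawn from this report; all names are hypothetical): the rule-based agent's behavior is fixed at design time, while the learning agent rewrites its own parameters as labeled examples arrive.

```python
# Illustrative sketch: fixed rules vs. an algorithm that changes itself from data.

class RuleBasedAgent:
    """Behavior is fixed at design time: same input, same output, forever."""
    def act(self, enemy_distance: float) -> str:
        return "attack" if enemy_distance < 10 else "patrol"

class LearningAgent:
    """A tiny perceptron: its behavior changes as it sees labeled examples."""
    def __init__(self, lr: float = 0.1):
        self.weight, self.bias, self.lr = 0.0, 0.0, lr

    def predict(self, x: float) -> int:
        return 1 if self.weight * x + self.bias > 0 else 0

    def learn(self, x: float, label: int) -> None:
        # Perceptron update: shift parameters toward the observed outcome.
        error = label - self.predict(x)
        self.weight += self.lr * error * x
        self.bias += self.lr * error

agent = LearningAgent()
for x, label in [(2.0, 0), (9.0, 1), (3.0, 0), (12.0, 1)]:
    agent.learn(x, label)      # the "program" adjusts itself from data
print(agent.predict(10.0))     # behavior now reflects what was learned
```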
STRONG VS. WEAK AI
Another important distinction is between kinds of AI: strong
or general AI versus weak or narrow AI. A strong/general AI would replicate
humans’ general intelligence(s), while a weak/narrow AI focuses on a specific use
case. So far, we’ve only seen examples of strong/general AI in fiction and film, such
as 2001: A Space Odyssey, Blade Runner, Terminator, Black Mirror, and so on, in
which robots, androids, or simply disembodied voices display human reasoning,
emotion, and behavior.
Typically, when we hear warnings about the dangers of artificial intelligence from
technologists and futurists such as Raymond Kurzweil, Nick Bostrom, Stephen
Hawking, and others, it is strong AI that is being discussed.10
While the notion
of a “singularity” — that AI may one day outpace humanity’s ability to understand
and/or control it — is a critical issue for society to address, this report focuses on
more narrow and pragmatic use cases that are achievable today. It’s also important
to realize that machine learning, relatively speaking, is still in its infancy, so much
so that “real artificial intelligence does not quite exist yet,” says Pete Skomoroch,
CEO and Co-Founder of Skipflag.
Once we eliminate the more futuristic, aspirational, and contentious elements
associated with AI, we are left with today’s reality. Examples of narrow/weak AI
surround us every day, including Google search, recommendation engines,
chatbots, intelligent medical diagnostics, and so on. But we shouldn’t take the
term “narrow/weak” to imply inadequacy or a lack of value. Using machine
learning, advanced algorithms, and other computer science techniques, these
“narrow” examples of AI typically sense and process vast amounts of data and
can demonstrate their value economically or in more human terms, such as
quality of life.11
While “true” AI — the ability for machines to fully replicate human intelligence —
is aspirational, this report will nonetheless use the term “AI” to refer to systems
that employ machine learning and can sense, classify, learn, reason, predict, and
interact. It will focus on the kinds of intelligences that computers can, to varying
extents, replicate and the opportunities they present. Figure 2 illustrates the core
capabilities of artificial intelligence as it exists today.
EXAMPLES OF AI
There are many conflicting definitions of artificial intelligence (AI), ranging from futuristic visions of human-like
machine intelligence to more restrained definitions that refer to the ability of machines to self-program based
on new data. Today, AI commonly refers to systems that employ machine learning and can (see the sketch
after this sidebar):
• Collect and process signals via sensors or other methods;
• Classify, learn, reason, and predict possible outcomes; and
• Interact with people or other machines.
While there are plenty of experiments and early examples of AI, the majority today cluster around
three specific types of intelligence: Visual/Spatial, Auditory/Linguistic, and Motor Intelligence.
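As a deliberately tiny, hypothetical companion to the three capabilities listed in the sidebar above (none of these function names come from the report), the sketch below wires sensing, classification, and interaction into a single loop.

```python
# Illustrative only: a toy loop matching the three bullets above —
# (1) collect signals, (2) classify/predict, (3) interact. Hypothetical names.
import random

def collect_signal() -> float:
    """Stand-in for a sensor reading (camera, microphone, clickstream...)."""
    return random.uniform(0.0, 1.0)

def classify(signal: float, threshold: float) -> str:
    """Stand-in for a trained model's prediction."""
    return "anomaly" if signal > threshold else "normal"

def interact(label: str) -> None:
    """Stand-in for acting on the prediction (alert a person, move, reply)."""
    print(f"system response: {label}")

threshold = 0.8
for _ in range(5):
    interact(classify(collect_signal(), threshold))
```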
FIGURE 2: CURRENT CAPABILITIES OF ARTIFICIAL INTELLIGENCE (AI)

TYPES OF MACHINE INTELLIGENCE

• Visual/Spatial Intelligence: The ability to see and process the physical and digital world. Examples include computer vision and facial recognition.* Sample application: see and classify images based on objects, scenes, attributes, and emotion.

• Auditory/Linguistic Intelligence: The ability to listen selectively and communicate using written or spoken language. Examples include virtual personal assistants such as Alexa, Siri, Viv, and Cortana, as well as Natural Language Processing (NLP), machine translation, and chatbots. Sample application: communicate with users and answer questions via speech or written text.

• Cognitive Intelligence: The ability to learn, reason, predict, and respond. This is the key distinction between machines and rule-based systems: without it, machines simply respond to pre-defined inputs; with it, they can self-program based on new data. Sample application: analyze patient data, tests, and scans to help diagnose disease and recommend treatment.

• Motor Intelligence: The ability to move around and manipulate physical or virtual environments, or communicate using gestures. Examples include robots and gestural or adaptive interfaces. Sample application: combine data and analytics with reasoning to navigate and adapt to real-world environments.

* For a more detailed view of computer vision and its business applications, see Susan Etlinger, Altimeter Group, “Image Intelligence: Making Visual Content Predictive.”
A BRIEF HISTORY OF AI

Artificial Intelligence was first conceptualized in 1950 by Alan Turing and has seen a number of
innovations and false starts since then. In the past three or so years, innovation has accelerated
greatly. Following is a selection of some of the major milestones in AI during the past seven decades.
FIGURE 3: MAJOR MILESTONES IN AI

1950 – Alan Turing publishes “Computing Machinery and Intelligence,” which asks, “Can machines think?”
1956 – The term “artificial intelligence” is coined by John McCarthy at the Dartmouth Conference
1959 – Artificial Intelligence laboratory founded at MIT
1974 – The first AI winter; funding and interest decline
1975 – MYCIN, a system that diagnoses bacterial infections and recommends antibiotics, is developed
1989 – A NASA system is used to discover new classes of stars
1994 – First …
1997 – IBM’s Deep Blue beats world chess champion Garry Kasparov at chess
2002 – Amazon replaces human editors with an automated system
2011 – Apple releases Siri
2016 – Google’s AlphaGo defeats Lee Sedol, one of the world’s leading Go players
One common question about AI is why it has become so popular (many would say “hyped”)
so quickly. In reality, the idea of AI has existed since 1950, when Alan Turing (inventor of the
Turing test, and depicted in the 2014 film The Imitation Game) published a paper entitled
“Computing Machinery and Intelligence.”
Six years later, Stanford professor John McCarthy coined the term “artificial intelligence,”
and in 1959, MIT founded its Artificial Intelligence laboratory. The following decades saw
both promising advances (the first mobile robot, IBM Deep Blue’s victory over chess legend
Garry Kasparov) and dispiriting dry spells as interest in and capabilities of AI ebbed and
flowed (See Figure 3).
Given the number of false starts in the past several decades, it’s reasonable to wonder
why now is different. There are three key factors that distinguish today’s AI climate from
that of the past:12
• Massive and available datasets (also known as “Big Data”);
• Inexpensive parallel computation; and
• Improved algorithms.
The combination of these three factors has made it possible, finally, for AI to become
not just a wild idea or rarefied technology, but a commercial reality.
MACHINE LEARNING IN A NUTSHELL
To enable computers to reason, make predictions, and/or take actions, they need to be able
to learn without being explicitly programmed.13
In order to learn, they must be “trained”
using large amounts of data so they can classify things properly. There are two main types
and several subtypes of machine learning (see Figure 4). Each has
benefits and drawbacks related to scalability, precision (accuracy), and other factors, but
in all cases the algorithm learns from the data it is given.14
The amount and relevance of
training data is critical to the machine’s ability to learn and properly classify future data.
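As a concrete illustration of the training step described above (a sketch under assumptions, not the report's own example): assuming the scikit-learn library, the snippet below fits a classifier on a small set of labeled examples and then asks it to classify data it has never seen. The dataset and feature names are hypothetical.

```python
# A minimal supervised-learning sketch (illustrative; assumes scikit-learn).
# The model is never explicitly programmed with rules — it infers them from
# labeled training data, so data quantity and relevance are critical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical labeled data: [hours_of_use, support_tickets] -> churned (1) or not (0)
X = [[1, 5], [2, 4], [10, 0], [8, 1], [0, 6], [9, 0], [3, 3], [7, 1]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

# Hold out part of the data to test how well the learned model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)        # "training": learn parameters from examples

print(model.predict(X_test))       # predictions on data the model never saw
print(model.score(X_test, y_test)) # accuracy depends heavily on training data
```

The same pattern applies regardless of the algorithm chosen: the quality of the learned behavior is bounded by the amount and relevance of the training data, which is the point the paragraph above makes.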