2. Applications of AI
1. Marketing
2. Banking
3. Finance
4. Agriculture
5. Healthcare
6. Gaming
7. Space Exploration
8. Autonomous Vehicles
9. Chatbots
10. Artificial Creativity
3. Daily Applications of AI
1. Google’s AI-powered predictions (E.g.: Google Maps)
2. Ride-sharing applications (E.g.: Uber, Lyft)
3. AI Autopilot in Commercial Flights
4. Spam filters on E-mails
5. Plagiarism checkers and tools
6. Facial Recognition
7. Search recommendations
8. Voice-to-text features
9. Smart personal assistants (E.g.: Siri, Alexa)
10. Fraud protection and prevention
4. Robots in AI
❖ Assembly
• AI, along with advanced vision systems, can help in real-time course correction.
• It also helps a robot learn which path is best for a certain process while it is in operation.
❖ Customer Service
• AI-enabled robots are being used in a customer-service capacity in the retail and hospitality industries.
• These robots leverage natural language processing to interact with customers intelligently and in a human-like way.
• The more these systems interact with humans, the more they learn with the help of machine learning.
❖ Packaging
• AI enables quicker, cheaper, and more accurate packaging.
• It helps by saving the motions a robotic system makes and constantly refining them, making robotic systems easier to install and move.
❖ Open Source Robotics
• Robotic systems today are being sold as open-source systems with AI capabilities.
• In this way, users can teach robots to perform custom tasks based on a specific application.
• E.g.: small-scale agriculture
5. 1. What is Artificial Intelligence?
❖ Artificial intelligence (AI) is a wide-ranging branch of computer science concerned with building
smart machines capable of performing tasks that typically require human intelligence.
❖ AI is an interdisciplinary science with multiple approaches, but advancements in machine
learning and deep learning are creating a paradigm shift in virtually every sector of the tech
industry.
6. Thinking humanly | Thinking rationally
Acting humanly | Acting rationally
❖ The definitions on top are concerned with thought processes and reasoning, whereas the
ones on the bottom address behaviour.
❖ The definitions on the left measure success in terms of fidelity to human performance,
whereas the ones on the right measure against an ideal performance measure, called
rationality. A system is rational if it does the "right thing," given what it knows.
❖ A human-centred approach must be in part an empirical science, involving observations
of and hypotheses about human behaviour.
❖ A rationalist approach involves a combination of mathematics and engineering. The
various groups have both disparaged and helped each other.
7. 1.1 Acting humanly: The Turing Test approach
❖ In 1950, Alan Turing introduced a test to check whether a machine can think like a human; this test is
known as the Turing Test. In this test, Turing proposed that a computer can be said to be intelligent if it
can mimic human responses under specific conditions.
❖ The Turing Test was introduced in Turing's 1950 paper, "Computing Machinery and Intelligence," which
considered the question, "Can machines think?"
8. ❖ The Turing test is based on a party game, the "imitation game," with some modifications. The game
involves three players: one player is a computer, another is a human responder, and the third is a
human interrogator, who is isolated from the other two players and whose job is to find out which of
the two is the machine.
❖ Consider that Player A is a computer, Player B is a human, and Player C is the interrogator. The
interrogator is aware that one of the two is a machine, but needs to identify which, on the basis of
questions and their responses.
❖ The conversation between the players is via keyboard and screen, so the result does not depend
on the machine's ability to render words as speech.
❖ The test result depends not on the number of correct answers, but only on how closely the responses
resemble human answers. The computer is permitted to do everything possible to force a wrong
identification by the interrogator.
9. The questions and answers can be like:
Interrogator: Are you a computer?
Player A (Computer): No
Interrogator: Multiply two large numbers, such as 256896489 * 456725896
Player A: Pauses for a long time and gives a wrong answer.
In this game, if the interrogator is not able to identify which is the machine and which is the
human, then the computer passes the test successfully, and the machine is said to be intelligent
and able to think like a human.
"In 1991, the New York businessman Hugh Loebner announced a prize competition, offering
a $100,000 prize for the first computer to pass the Turing test. However, no AI program to
date has come close to passing an undiluted Turing test."
10. Chatbots that have attempted the Turing test:
ELIZA: ELIZA was a natural-language-processing computer program created by Joseph Weizenbaum. It was created to
demonstrate the possibility of communication between machines and humans. It was one of the first chatterbots to
attempt the Turing Test.
Parry: Parry was a chatterbot created by Kenneth Colby in 1972. Parry was designed to simulate a person with paranoid
schizophrenia (a common chronic mental disorder). Parry was described as "ELIZA with attitude." Parry was tested
using a variation of the Turing Test in the early 1970s.
Eugene Goostman: Eugene Goostman was a chatbot developed in Saint Petersburg in 2001. The bot has competed in a
number of Turing tests. In June 2012, at an event promoted as the largest-ever Turing test contest, Goostman won the
competition, convincing 29% of the judges that it was human. Goostman was presented as a 13-year-old virtual boy.
The Chinese Room Argument:
Many philosophers disagreed with the whole concept of artificial intelligence. The most famous argument on this list is
the "Chinese Room."
In 1980, John Searle presented the "Chinese Room" thought experiment in his paper "Minds, Brains, and
Programs," arguing against the validity of the Turing Test. According to his argument, "Programming a computer may
make it appear to understand a language, but it cannot produce real understanding of language or consciousness in a
computer."
He argued that machines such as ELIZA and Parry could pass the Turing test by manipulating keywords and symbols,
but they had no real understanding of language, so this cannot be described as a machine "thinking" in the way a
human does.
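Searle's point about keyword manipulation can be made concrete with a minimal sketch in the style of ELIZA. This is an illustration only, not Weizenbaum's original program; the rules and canned responses below are invented:

```python
import re

# ELIZA-style responder: match a keyword pattern, reflect the user's own
# words back in a template. No meaning is ever represented or understood.
RULES = [
    (r"\bI need (.*)", "Why do you need {0}?"),
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmother\b", "Tell me more about your family."),
]

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default reply when no keyword matches

print(respond("I need a holiday"))  # Why do you need a holiday?
print(respond("It rained today"))   # Please go on.
```

Every reply is pure string manipulation, which is exactly why Searle argued that passing such exchanges demonstrates no understanding.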
11. Features required for a machine to pass the Turing test:
Natural language processing: required to communicate with the interrogator in an ordinary
human language such as English.
Knowledge representation: to store and retrieve information during the test.
Automated reasoning: to use the previously stored information to answer questions.
Machine learning: to adapt to new circumstances and to detect generalized patterns.
Vision (for the total Turing test): to recognize the interrogator's actions and other objects during
the test.
Motor control (for the total Turing test): to act upon objects if requested.
12. 1.2 Thinking humanly: The cognitive modelling approach
- The program thinks like a human!
We need to get inside the actual workings of human minds. There are two ways:
❖ through introspection (trying to catch our own thoughts as they go by),
❖ or through psychological experiments.
❖ GPS, the "General Problem Solver"
❖ GPS is a procedure and program developed by Allen Newell, J. C. Shaw, and Herbert Simon.
❖ GPS attains an objective by using recursive search, applying rules to generate the alternatives at
each branch in the recursive expansion of possible sequences.
❖ GPS uses a procedure to measure the "distance" from the goal.
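GPS itself used means-ends analysis; as a rough illustration of search guided by a "distance from the goal" measure, consider this toy sketch. The states (integers), operators (add one, double), and distance function are all invented for the example and are not from the original program:

```python
import heapq

def solve(start: int, goal: int):
    """Greedy best-first search ordered by the distance |state - goal|."""
    # Frontier entries: (distance to goal, state, path taken so far)
    frontier = [(abs(start - goal), start, [start])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen or state > goal * 2:  # prune revisits and overshoots
            continue
        seen.add(state)
        for nxt in (state + 1, state * 2):     # the two applicable operators
            heapq.heappush(frontier, (abs(nxt - goal), nxt, path + [nxt]))
    return None

print(solve(2, 9))  # [2, 4, 8, 9]
```

At each step the state closest to the goal is expanded first, mirroring the idea of reducing the measured "distance" from the goal.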
13. 1.3 Thinking rationally: The logical approach
❖ Ensure that all actions performed by the computer are justifiable ("rational").
❖ Rational = conclusions are provable from inputs and prior knowledge.
❖ Problems:
❖ Representation of informal knowledge is difficult.
❖ It is hard to define "provable" plausible reasoning.
❖ Combinatorial explosion: not enough time or space to prove the desired conclusions.
Pipeline: facts and rules in formal logic → theorem prover.
14. 1.4 Acting rationally: The rational agent approach
❖ Rational behaviour: doing the right thing, i.e., that which is expected to maximize goal achievement,
given the available information.
❖ Agent: an agent is an entity that perceives and acts.
This course is about designing rational agents.
Abstractly, an agent is a function from percept histories to actions:
f : P* → A
For any given class of environments and tasks, we seek the agent (or class of agents) with the best
performance.
❖ Agent vs. program: design the best program for the given machine resources.
❖ A rational agent is one that acts to achieve the best outcome or, when there is uncertainty, the best
expected outcome.
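The mapping f : P* → A can be sketched as an ordinary function over percept histories. The percepts ("dirty"/"clean") and actions below are invented for illustration:

```python
# Sketch of an agent function f : P* -> A, taking a percept history
# (a tuple of percepts) and returning an action.
def agent_function(percept_history: tuple) -> str:
    if not percept_history:
        return "wait"
    # The choice may depend on the whole history observed to date;
    # this particular agent happens to use only the latest percept.
    return "clean" if percept_history[-1] == "dirty" else "move"

print(agent_function(("clean", "dirty")))  # clean
```

Note that the function's domain is the set of all percept sequences, which is why tabulating it (next slide) would in general require an infinite table.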
15. Rational Agents
❖ Adjust the amount of reasoning according to the available resources and the importance of
the result.
❖ This is one of the things that makes AI hard.
Spectrum of reasoning: very few resources → no thought ("reflexes"); moderate resources →
limited, approximate reasoning; lots of resources → careful, deliberate reasoning.
16. Foundations of AI
❖ Philosophy: logic, methods of reasoning, mind as physical system,
foundations of learning, language, rationality
❖ Mathematics: formal representation and proof, algorithms, computation,
(un)decidability, (in)tractability, probability
❖ Psychology: adaptation, phenomena of perception and motor control,
experimental techniques (psychophysics, etc.)
❖ Economics: formal theory of rational decisions
❖ Linguistics: knowledge representation, grammar
❖ Neuroscience: plastic physical substrate for mental activity
❖ Control theory: homeostatic systems, stability, simple optimal agent
designs
18. 3. The State of the Art
What can AI do today? A concise answer is difficult because there are so many activities in so
many subfields.
1. Robotic vehicles
2. Speech recognition
3. Autonomous planning and scheduling
4. Game playing
5. Spam fighting
6. Logistics planning
7. Robotics
8. Machine translation
19. 4. Agents and Environments
❖ An agent is anything that can be viewed as perceiving its environment through sensors and
acting upon that environment through actuators.
20. Intelligent Agents: Agents and Environments
❖ An agent is any entity that can be viewed
as perceiving its environment through
sensors and acting upon that environment
through actuators.
❖ A human agent has eyes, ears, and other
organs for sensors, and hands, legs, the
vocal tract, and so on for actuators.
❖ A robotic agent might have cameras and
infrared range finders for sensors and
various motors for actuators.
❖ A software agent (softbot) receives
keystrokes, file contents, and network
packets as sensory inputs and acts on the
environment by displaying on the screen,
writing files, and sending network packets.
❖ This course is about building rational
agents.
21. Agents and Environments
❖ We use the term percept to refer to the agent’s perceptual inputs at any
given instant.
❖ An agent’s percept sequence is the complete history of everything the
agent has ever perceived.
❖ In general, an agent’s choice of action at any given instant can depend
on the entire percept sequence observed to date, but not on anything it
hasn’t perceived.
❖ By specifying the agent’s choice of action for every possible percept
sequence, we have said more or less everything about the agent.
❖ Mathematically speaking, an agent’s behaviour is described by the
agent function that maps any given percept sequence to an action.
22. Agents and Environments
❖ Imagine tabulating the agent function that describes any given agent; for most agents,
this would be a very large table (infinite, in fact, unless we place a bound on the length
of percept sequences we want to consider).
❖ Given an agent to experiment with, we can, in principle, construct this table by trying out
all possible percept sequences and recording which actions the agent performs in response.
❖ The table is, of course, an external characterization of the agent.
❖ Internally, the agent function for an artificial agent will be implemented by an agent
program. It is important to keep these two ideas distinct.
❖ Abstractly, the agent function is f : P* → A.
❖ The agent function is an abstract mathematical description; the agent program is a
concrete implementation, running within some physical system.
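The tabulation idea can be sketched as a table-driven agent program, in the spirit of the textbook's TABLE-DRIVEN-AGENT. The entries below use the familiar two-square vacuum world and are truncated for brevity; a real table would be unmanageably large:

```python
# The table is indexed by the ENTIRE percept sequence seen so far,
# which is why it grows without bound for any nontrivial agent.
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

percepts = []  # the agent's percept sequence, appended to at every step

def table_driven_agent(percept):
    percepts.append(percept)
    return table.get(tuple(percepts), "NoOp")  # fall back when no entry exists

print(table_driven_agent(("A", "Clean")))  # Right
print(table_driven_agent(("B", "Dirty")))  # Suck
```

The table is the external characterization; `table_driven_agent` is the (here trivial) agent program that implements it.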
25. Good Behaviour: The Concept of Rationality
❖ A rational agent is one that does the right thing; conceptually speaking, every entry in the
table for the agent function is filled out correctly.
❖ Consider the consequences of the agent's behaviour:
agent → perceives environment → sequence of actions → sequence of environment states
❖ If the sequence of states is desirable, then the agent has performed well. This notion of desirability
is captured by a performance measure that evaluates any given sequence of environment
states.
❖ There is not one fixed performance measure for all tasks and agents.
❖ As a general rule, it is better to design performance measures according to what one
actually wants in the environment, rather than according to how one thinks the agent
should behave.
26. Rationality
What is rational at any given time depends on four things:
• The performance measure that defines the criterion of success.
• The agent’s prior knowledge of the environment.
• The actions that the agent can perform.
• The agent’s percept sequence to date.
27. Definition of a rational agent
❖ For each possible percept sequence, a rational agent should select an action that is
expected to maximize its performance measure, given the evidence provided by the
percept sequence and whatever built-in knowledge the agent has.
❖ Performance measure: an objective criterion for the success of an agent's behaviour.
❖ Is the vacuum-cleaner agent discussed earlier a rational agent?
❖ That depends on the performance measure, what is known about the environment, and what
sensors and actuators the agent has.
❖ Omniscient: knowing everything. Clairvoyant: perceiving beyond the sensors. Rationality
requires neither.
28. PEAS Representation
❖ PEAS is a model with which an AI agent is specified. When we define an AI agent
or rational agent, we can group its properties under the PEAS representation model. It is
made up of four terms:
o P: Performance measure
o E: Environment
o A: Actuators
o S: Sensors
❖ Here the performance measure is the objective criterion for the success of the agent's behaviour.
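As an example, the standard textbook PEAS description of an automated taxi driver can be written down as a simple record:

```python
# PEAS description of an automated taxi driver (after the usual
# textbook example), stored as a plain dictionary.
taxi_peas = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer", "keyboard"],
}

for letter, key in zip("PEAS", taxi_peas):
    print(f"{letter}: {key} -> {', '.join(taxi_peas[key])}")
```

Writing the four components out explicitly like this is the first step in specifying the task environment before designing the agent.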
31. Properties of Task Environments
❖ Fully observable vs. partially observable:
• A task environment is effectively fully observable if the sensors detect all aspects that are relevant to
the choice of action; relevance, in turn, depends on the performance measure. If not fully observable,
the environment is partially observable.
• A fully observable environment is convenient because the agent does not need to maintain internal
state to keep track of the history of the world.
• If the agent has no sensors at all, the environment is unobservable.
❖ Single-agent vs. multi-agent:
▪ If only one agent is involved in an environment and operates by itself, the environment is
called a single-agent environment.
▪ However, if multiple agents are operating in an environment, the environment is called a
multi-agent environment.
▪ The agent-design problems in a multi-agent environment are different from those in a single-agent
environment.
▪ For example, an agent solving a crossword puzzle by itself is clearly in a single-agent environment,
whereas an agent playing chess is in a two-agent environment.
32. ❖ Deterministic vs. stochastic:
• If an agent's current state and selected action completely determine the next state of the
environment, the environment is called deterministic.
• A stochastic environment is random in nature and cannot be determined completely by the agent.
• In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
❖ Episodic vs. sequential:
• In an episodic environment, there is a series of one-shot actions, and only the current percept is
required for the action, e.g., identifying defective parts on an assembly line.
• However, in a sequential environment, an agent requires memory of past actions to determine the
next best action, e.g., chess or taxi driving.
33. ❖ Static vs. dynamic:
• If the environment can change while the agent is deliberating, the environment is called dynamic;
otherwise it is called static.
• Static environments are easy to deal with because the agent does not need to keep looking at the world
while deciding on an action.
• In a dynamic environment, however, the agent needs to keep looking at the world before each action.
• Taxi driving is an example of a dynamic environment, whereas crossword puzzles are an example of a
static environment.
❖ Discrete vs. continuous:
• If there are a finite number of percepts and actions that can be performed in an environment, it is
called a discrete environment; otherwise it is called a continuous environment.
• A chess game is a discrete environment, as there is a finite number of moves that can be performed.
• A self-driving car operates in a continuous environment.
34. ❖ Known vs. unknown:
• Known and unknown are not actually features of an environment but describe the agent's state of
knowledge about it.
• In a known environment, the results of all actions are known to the agent, while in an unknown
environment the agent needs to learn how it works in order to act.
• It is quite possible for a known environment to be partially observable and for an unknown
environment to be fully observable.
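The dimensions above can be applied to the two running examples; the labels in this sketch follow the discussion in the text:

```python
# Classifying two familiar task environments along the dimensions above.
environments = {
    "crossword puzzle": {
        "observable": "fully", "agents": "single",
        "deterministic": True, "episodic": False,
        "static": True, "discrete": True,
    },
    "taxi driving": {
        "observable": "partially", "agents": "multi",
        "deterministic": False, "episodic": False,  # sequential, stochastic
        "static": False, "discrete": False,         # dynamic, continuous
    },
}

for name, props in environments.items():
    print(name, props)
```

Taxi driving is the hard case on every axis, which is why it is the standard example of a demanding task environment.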
36. Structure of Agents
The job of AI is to design an agent program that implements the agent function, i.e., the mapping
from percepts to actions.
The agent program runs on some sort of computing device with physical sensors and actuators:
Agent = Architecture + Program
Basic kinds of agent programs that embody the principles underlying almost all intelligent
systems:
1. Simple reflex agents
2. Model-based reflex agents
3. Goal-based agents
4. Utility-based agents
Each kind of agent program combines components in particular ways to generate actions.
37. a. Simple reflex agents
❖ Simple reflex agents are the simplest agents. These agents make decisions on the basis of the
current percept and ignore the rest of the percept history.
❖ These agents succeed only in a fully observable environment.
❖ The simple reflex agent does not consider any part of the percept history during its decision and
action process.
❖ The simple reflex agent works on the condition-action rule, which maps the current state to an
action. For example, a room-cleaner agent cleans only if there is dirt in the room.
❖ Problems with the simple reflex agent design approach:
∙ They have very limited intelligence.
∙ They have no knowledge of non-perceptual parts of the current state.
∙ The condition-action table is usually too big to generate and to store.
∙ They are not adaptive to changes in the environment.
❖ Escape from infinite loops is possible if the agent can randomize its actions.
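The condition-action idea can be sketched for the two-square vacuum world (locations "A" and "B", as in the familiar textbook example):

```python
# Simple reflex agent for the two-square vacuum world: the action depends
# only on the CURRENT percept (location, status), never on the history.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":  # condition-action rule: dirt -> Suck
        return "Suck"
    return "Right" if location == "A" else "Left"  # otherwise, move on

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because no history is kept, the agent works only while the percept alone fully determines the right action, i.e., in a fully observable environment.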
39. b. Model-based reflex agents
❖ A model-based reflex agent is one that uses its percept history and its internal memory to make
decisions via an internal "model" of the world around it.
❖ Knowledge of "how the world works," whether implemented in simple Boolean circuits or in
complete scientific theories, is called a model of the world.
o The model-based agent can work in a partially observable environment and track the situation.
o A model-based agent has two important components:
o Model: knowledge about "how things happen in the world," which is why it is called a model-based
agent.
o Internal state: a representation of the current state based on the percept history.
o These agents have the model ("knowledge of the world"), and they perform actions based on it.
o Updating the agent's state requires information about:
1. how the world evolves, and
2. how the agent's actions affect the world.
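A minimal sketch of the same vacuum agent with internal state. The "model" here is just a record of which squares are known to be clean; this is an illustration, not a general implementation:

```python
# Model-based reflex agent for the vacuum world: internal state remembers
# which squares are known to be clean, so the agent can stop when its
# model says the whole world is clean, even though it cannot see both
# squares at once (partial observability).
known_clean = set()  # internal state, maintained from the percept history

def model_based_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        known_clean.discard(location)  # model update: this square got dirty
        return "Suck"
    known_clean.add(location)          # model update: this square is clean
    if known_clean >= {"A", "B"}:
        return "NoOp"                  # model says everything is clean
    return "Right" if location == "A" else "Left"

print(model_based_vacuum_agent(("A", "Clean")))  # Right
print(model_based_vacuum_agent(("B", "Clean")))  # NoOp
```

The difference from the simple reflex agent is exactly the `known_clean` state: the same percept can now yield different actions depending on the tracked history.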
41. c. Goal-based agents
❖ A goal-based agent has the flexibility to adjust its actions with respect to successfully reaching a goal.
❖ A goal-based agent has an agenda, you might say: it operates based on a goal in front of it and makes
decisions based on how best to reach that goal.
❖ A goal-based agent operates as a search-and-planning function, meaning it targets the goal ahead and
finds the right actions to reach it.
o Knowledge of the current state of the environment is not always sufficient for an agent to decide
what to do.
o The agent needs to know its goal, which describes desirable situations.
o Goal-based agents expand the capabilities of the model-based agent by adding "goal" information.
o They choose their actions so as to achieve the goal.
o These agents may have to consider a long sequence of possible actions before deciding whether the
goal is achieved. Such consideration of different scenarios is called searching and planning, and it
makes an agent proactive.
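The search-and-planning behaviour can be sketched with a toy road map; the map, locations, and goal below are invented for illustration:

```python
from collections import deque

# A tiny road map: which locations are reachable from which.
roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def plan(start, goal):
    """Breadth-first search for a sequence of locations reaching the goal."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path  # the goal describes the desirable situation
        for nxt in roads[path[-1]]:
            frontier.append(path + [nxt])
    return None

print(plan("A", "D"))  # ['A', 'B', 'D']
```

The agent considers whole sequences of possible actions (paths) before committing to the first step, which is exactly the searching-and-planning behaviour described above.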
43. d. Utility-based agents
❖ These agents are similar to goal-based agents but add an extra component of utility
measurement, which distinguishes them by providing a measure of success at a given state.
❖ A utility-based agent acts based not only on goals but also on the best way to achieve the goal.
❖ The utility-based agent is useful when there are multiple possible alternatives and the agent has to
choose the best action to perform.
❖ The utility function maps each state to a real number that indicates how efficiently each action
achieves the goals.
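A sketch of utility-based choice among alternative plans; the plans, their attributes, and the utility function's weights are invented for illustration:

```python
# Two alternative plans that both reach the goal; the utility function
# decides which is the BEST way to reach it.
plans = {
    "highway": {"time_min": 20, "toll": 5},
    "back roads": {"time_min": 35, "toll": 0},
}

def utility(plan):
    # Maps a plan's outcome to a real number; higher is better.
    # This agent weights tolls heavily relative to travel time.
    return -(plan["time_min"] + 4 * plan["toll"])

best = max(plans, key=lambda name: utility(plans[name]))
print(best)  # back roads
```

A goal-based agent would regard both plans as equally successful; the utility function is what lets the agent prefer one over the other.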
45. e. Learning Agents
∙ A learning agent in AI is a type of agent that can learn from its past experiences; it has learning
capabilities.
∙ It starts out acting with basic knowledge and is then able to act and adapt automatically through
learning.
∙ A learning agent has four main conceptual components:
∙ Learning element: responsible for making improvements by learning from the environment.
∙ Critic: the learning element takes feedback from the critic, which describes how well the agent is
doing with respect to a fixed performance standard.
∙ Performance element: responsible for selecting external actions.
∙ Problem generator: responsible for suggesting actions that will lead to new and informative
experiences.
∙ Hence, learning agents are able to learn, analyze their performance, and look for new ways to
improve it.
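The four components can be wired together in a minimal toy loop. Everything here is invented for illustration: the "environment" is trivial, and only action "b" earns reward from the critic:

```python
# Minimal learning-agent loop naming the four components above.
values = {"a": 0.0, "b": 0.0}  # knowledge maintained by the learning element

for step in range(20):
    if step < 10:
        # Problem generator: suggest exploratory actions for new experience.
        action = "a" if step % 2 == 0 else "b"
    else:
        # Performance element: select the best-known external action.
        action = max(values, key=values.get)
    # Critic: feedback relative to a fixed performance standard.
    reward = 1.0 if action == "b" else 0.0
    # Learning element: improve the value estimate using the feedback.
    values[action] += 0.5 * (reward - values[action])

print(max(values, key=values.get))  # b
```

After the exploratory phase the agent has learned that "b" is rewarded and exploits that knowledge, illustrating how the components cooperate to improve performance over time.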
47. 7. How the components of agent programs work
❖ a) Atomic representation: each state of the world is a black box that has no internal structure. E.g.,
when finding a driving route, each state is a city. AI algorithms: search, games, Markov decision
processes, hidden Markov models, etc.
❖ b) Factored representation: each state has attribute-value properties. E.g., GPS location and the
amount of gas in the tank. AI algorithms: constraint satisfaction, propositional logic, planning,
machine learning, and Bayesian networks.
❖ c) Structured representation: relationships between the objects of a state can be explicitly expressed.
AI algorithms: relational databases, first-order logic, knowledge-based learning, natural language
understanding.
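The three representations can be contrasted on a small example; the cities, objects, and relations below are invented for illustration:

```python
# The same kind of world state in the three representations.

# a) Atomic: the state is an indivisible label with no internal structure.
atomic_state = "Bucharest"

# b) Factored: the state is a vector of attribute values.
factored_state = {"city": "Bucharest", "gps": (44.43, 26.10), "fuel_litres": 31.5}

# c) Structured: objects plus explicit relations between them, as in
#    first-order logic; relations here are tuples of object names.
structured_state = {
    "objects": {"truck1", "crate7", "Bucharest"},
    "relations": {("In", "truck1", "Bucharest"), ("Loaded", "crate7", "truck1")},
}

print(("In", "truck1", "Bucharest") in structured_state["relations"])  # True
```

Moving from (a) to (c), the representation exposes more internal structure, which is what progressively more expressive AI algorithms exploit.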
48. References
1. Stuart Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach", 3rd
Edition, Pearson Education, 2013.
2. Elaine Rich, Kevin Knight and Shivashankar B Nair, "Artificial Intelligence", 3rd
Edition, McGraw Hill Education Private Limited, India, 2017.