
Artificial Intelligence

(ITec4151)

Chapter 2- Intelligent Agents


Content
• Agents and environments
• Rationality
• PEAS (Performance measure,
Environment, Actuators, Sensors)
• Classes of environments
• Agent types

2
Agent
• An agent is anything that perceives its environment through
SENSORS and acts upon that environment through EFFECTORS.
• The agent is assumed to exist in an environment in which it
perceives and acts
Diagram of an agent

3
Agent
                      Human Agent           Physical Agent
Sensors               Eyes, Ears, Nose      Cameras, Scanners, Mic,
                                            infrared range sensors
Effectors/Actuators   Hands, Legs, Mouth    Various motors (artificial
                                            hand, artificial leg),
                                            Speakers

4
Intelligent Agent
• An intelligent agent is a program that can make
decisions or perform a service based on its
environment, user input and experiences.
• These programs can be used to autonomously
gather information on a regular, programmed
schedule or when prompted by the user in real
time.

5
• Examples of intelligent agents
• a robot that can
– Clean a house
– Cook when someone doesn't want to
– Wash clothes
– Cut hair
– Take notes when someone is in a meeting
• AI is the science of building software or physical agents
that act rationally with respect to a goal.

6
Types of Intelligent Agents
Software agents:
 Sometimes, the environment may not be the real world. E.g.
video games
 They are all artificial but very complex environments.
 Agents working in these environments are called software
agents (softbots), because all parts of the agent are software.
 A software agent operates within the confines of a
computer.
 It interacts with a software environment by issuing
commands and interpreting the environment's feedback.

7
Physical agents
 are robots that have the ability to move and act in the physical
world and can perceive and manipulate objects in that world,
possibly for responding to new perceptions.
Example: Vacuum-cleaner world
Perception: Clean or Dirty? Where is it?
Percept: the agent's perceptual input at any given instant
Percept sequence: the complete history of everything the agent has
ever perceived.
Actions: Move left, Move right, Suck, Do nothing
Function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
8
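The Reflex-Vacuum-Agent pseudocode translates directly into Python (a minimal sketch of the two-square world with locations A and B, following the slide's example):

```python
def reflex_vacuum_agent(location, status):
    """Simple reflex agent for the two-square vacuum world.

    location: "A" or "B"; status: "Dirty" or "Clean".
    Returns one of the actions from the slide: Suck, Right, Left.
    """
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

# The agent reacts only to the current percept; it keeps no history.
print(reflex_vacuum_agent("A", "Dirty"))  # Suck
print(reflex_vacuum_agent("A", "Clean"))  # Right
```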
Rational agent
• One that does the right thing
• A rational agent should always strive to "do the right thing", based
on what it can perceive and the actions it can perform.
– What does right thing mean? one that will cause the agent to be most
successful and is expected to maximize goal achievement, given the
available information
• A rational agent is not omniscient
– An Omniscient agent knows the actual outcome of its actions,
and can act accordingly, but in reality omniscience is
impossible.
– A rational agent takes actions with expected success, whereas an
omniscient agent takes actions that are 100% sure to succeed.
9
Example: Is the agent Rational?
• Alex was walking along the road to the bus station when he saw an old
friend across the street. There was no traffic.
• So, being rational, he started to cross the street.
• Meanwhile a big banner fell from above, and before he finished
crossing the road, he was crushed.
Was Alex irrational to cross the street?
• This points out that rationality is concerned with expected success,
given what has been perceived.
– Crossing the street was rational, because most of the time, the
crossing would be successful, and there was no way you could have
foreseen the falling banner.
– The example shows that we cannot blame an agent for failing to take
into account something it could not perceive, or for failing to take an
action that it is incapable of taking.
10
• In summary, what is rational at any given point depends on the PEAS
(Performance measure, Environment, Actuators, Sensors)
framework.
– Performance measure
• The performance measure that defines degrees of success of the agent
– Environment
• Knowledge: What an agent already knows about the environment
– Actuators – generating actions
• The actions that the agent can perform back to the environment
– Sensors – receiving percepts
• Perception: Everything that the agent has perceived so far
concerning the current scenario in the environment
• For each possible percept sequence, a rational agent should select
an action that is expected to maximize its performance measure,
given the evidence provided by the percept sequence and
whatever built-in knowledge the agent has.
11
Performance measure
 How do we decide whether an agent is successful or
not?
 Establish a standard of what it means to be successful in an
environment and use it to measure the performance
 A rational agent should do whatever action is expected to
maximize its performance measure, on the basis of the
evidence provided by the percept sequence and whatever
built-in knowledge the agent has.
 What is the performance measure for “crossing the
road”?
 What about “Chess Playing”?
12
Example: PEAS
 Consider the task of designing an automated taxi
driver agent:
 Performance measure: Safe, fast, legal, comfortable
trip, maximize profits
 Environment: Roads, other traffic, pedestrians,
customers
 Actuators: Artificial legs & hands, Speaker
 Sensors: Cameras, GPS, engine sensors, recorder
(microphone)
 Goal: driving safely from source to destination point

13
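The PEAS description of the taxi-driver agent above can be captured in a small data structure (a sketch; the `PEAS` class is a hypothetical illustration, with field values taken from the slide):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS description of a task environment."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The automated taxi driver from the slide:
taxi = PEAS(
    performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    environment=["roads", "other traffic", "pedestrians", "customers"],
    actuators=["artificial legs & hands", "speaker"],
    sensors=["cameras", "GPS", "engine sensors", "microphone"],
)
```

Writing the four components down explicitly like this is the first step in designing any agent: it fixes what "success" means before any behavior is implemented.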
Examples: Agents for Various Applications
Agent type    Percepts           Actions            Goals              Environment
Interactive   Typed words,       Print exercises,   Maximize           Set of
English       keyboard           suggestions,       student's score    students
tutor                            corrections        on test
Medical       Symptoms,          Questions, tests,  Healthy person,    Patients
diagnosis     patient's          treatments         minimize costs
system        answers
Part-picking  Pixels of varying  Pick up parts and  Place parts in     Conveyor belts
robot         intensity          sort into bins     correct bins       with parts

14
Assignment for next class (individual)
 Consider the need to design a “player agent”. It
may be a chess player, football player, tennis player,
basketball player, etc.
 Identify what it perceives, the actions it takes, and the
environment it interacts with.
 Identify the sensors, effectors, goals, environment
and performance measure that should be
integrated for the agent to be successful in its
operation.

15
Environments
 An environment is everything in the world which surrounds the
agent, but it is not a part of an agent itself.
 The environment is where the agent lives and operates; it provides
the agent with something to sense and act upon.
• Actions are done by the agent on the environment.
Environments provide percepts to an agent.
• Agent perceives and acts in an environment. Hence in order to
design a successful agent , the designer of the agent has to
understand the type of the environment it interacts with.

16
Classes of Environments:
An environment can have various features from the point of
view of an agent:
 Fully observable vs. Partially observable
 Deterministic vs. Stochastic
 Episodic vs. Sequential
 Static vs. Dynamic
 Discrete vs. Continuous
 Single agent vs. Multi-agent

17
Fully observable vs. partially observable
 Can the agent's sensors see the complete state of the
environment?
 If an agent has access to the complete state of the
environment, then the environment is accessible or fully
observable.
 All relevant information about the environment is
available for the agent to take the action.
 A fully observable environment is easy, as there is no need
to maintain internal state to keep track of the history of the
world.
 Example
 Chess-playing agent: the agent has complete knowledge of the
environment (the board).
18
Partially observable
o The relevant features of the environment are available
partially.
o An environment might be partially observable because
of noisy and inaccurate sensors, or because parts of the
state are simply missing from the sensor data.
Examples:
 A local dirt sensor of the cleaner cannot tell whether
other squares are clean or not
 Taxi driving is partially observable
 Poker: agents do not know the hands of their opponents

19
Deterministic vs. stochastic
 The environment is deterministic if the next state is
completely determined by
 The current state of the environment and the actions selected and
executed by the agent.
 Given the agent's action, how the environment changes is determined.
 If an element of uncertainty occurs, the environment is stochastic.
 It is difficult to decide how the environment changes given the agent's
action.
 In a deterministic, fully observable environment, the agent does not need to
worry about uncertainty.
Taxi driving is:
 Stochastic, because of unobservable aspects (noise, the unknown)
 Any example of deterministic? Poker, if the agent has access to the
cards of the opponent.
20
Episodic vs. Sequential
 Does the next “episode” or incident or event depend on the
actions taken in previous episodes?
 In an episodic environment, the agent's experience is divided
into "episodes".
 Subsequent episodes do not depend on the actions taken in
previous episodes.
 An episode = the agent's single pair of perception & action
 Each episode consists of the agent perceiving and then performing a
single action, and the choice of action in each episode depends only
on the episode itself.
 In a sequential environment the current decision could affect
all future decisions.
 Thus the agent needs to plan ahead.
 Taxi driving is sequential
 Any example of episodic? A part-picking robot
21
Static vs. Dynamic
 Can the world change while the agent is deliberating?
 A dynamic environment is always changing over time
by itself.
 If the environment can change while the agent is deliberating,
then we say the environment is dynamic for that agent
 E.g., the number of people in the street
 Taxi driving is dynamic
 Otherwise it is static: it can't change by itself.
 E.g., the destination

22
Discrete vs. Continuous
• Are the distinct percepts & actions limited or
unlimited?
– If there are a limited number of distinct, clearly
defined percepts and actions, we say the environment
is discrete.
– Otherwise it is continuous.
• Taxi driving is continuous
– Any example of discrete?
– E.g., a chess game

23
Single agent vs. Multi-agent
• If an agent operates by itself in an environment, it
is a single-agent environment, e.g. a part-picking
robot.
• If there are many agents working together, it is a
multi-agent environment, e.g. taxi driving, chess, …

24
Types of Agents
Agents can be grouped into five classes based on their
degree of perceived intelligence and capability.
These are given below:
 Simple reflex agents
 Model-based reflex agents
 Goal-based agents
 Utility-based agents
 Learning agents(reading Assignment)

25
Simple reflex agents
 It works by finding a rule whose condition matches the current
situation and then doing the action associated with that rule.
 They choose actions based only on the current percept.
 Simple, but of very limited intelligence.
 The action does not depend on the percept history, only on the
current percept.
 Therefore there are no memory requirements.
 Works only if the environment is fully observable.
Eg. Vacuum cleaner: if the room is dirty, clean; if not, stop.
A fly buzzing around a window or light.

26
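The rule-matching behavior described above can be sketched as a lookup over condition-action rules (a minimal sketch; the rule set is a hypothetical illustration using the vacuum world):

```python
# Condition-action rules: each condition is a predicate over the percept.
RULES = [
    (lambda p: p["status"] == "Dirty", "Suck"),
    (lambda p: p["location"] == "A", "Right"),
    (lambda p: p["location"] == "B", "Left"),
]

def simple_reflex_agent(percept):
    """Return the action of the first rule whose condition matches.

    No percept history is kept: the decision uses only the current percept.
    """
    for condition, action in RULES:
        if condition(percept):
            return action
    return "NoOp"  # no rule matched: do nothing
```

Because the agent consults nothing but the current percept, it can only behave correctly when the environment is fully observable, as the slide notes.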
Structure of a simple reflex agent

[Diagram] The agent's sensors perceive the environment ("what the
world is like now"); condition-action rules then determine "what
action I should do now", and the effectors act on the environment.
27
Model-Based Reflex Agent
• An agent that does not know the whole world but
knows the world partially.
• This is a reflex agent with internal state.
– It keeps track of the parts of the world it can't see now.
• It works by finding a rule whose condition matches
the current situation and the stored internal state.

28
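A model-based reflex agent can be sketched by adding internal state to the vacuum example (a sketch; the dictionary world model is a hypothetical illustration):

```python
def model_based_reflex_agent():
    """Reflex agent with internal state for the two-square vacuum world.

    The internal state remembers the status of squares the agent has
    seen, i.e. the part of the world it can't see now.
    """
    state = {}  # internal model: last known status of each location

    def agent(percept):
        # Update the model from the current percept.
        state[percept["location"]] = percept["status"]
        # Rules use both the current percept and the stored state:
        if percept["status"] == "Dirty":
            return "Suck"
        if len(state) == 2 and all(s == "Clean" for s in state.values()):
            return "NoOp"  # model says both squares are clean: stop
        return "Right" if percept["location"] == "A" else "Left"

    return agent
```

Unlike the simple reflex agent, this agent can stop once its internal state records that both squares are clean, even though it can only sense one square at a time.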
Structure of Model-Based reflex agent

[Diagram] The agent maintains an internal state, updated using
knowledge of how the world evolves and what its actions do. Sensors
tell it "what the world is like now"; condition-action rules then
determine "what action I should do now", and the effectors act on the
environment.
29
Goal based agents
• Choose actions that achieve the goal (an agent with
explicit goals)
• The agent's action depends upon its goal.
• Involves consideration of the future:
 Knowing about the current state of the environment is
not always enough to decide what to do.
– For example, at a road junction, the taxi can turn
left, right or go straight.
 The right decision depends on where the taxi is trying to
get to.
 The agent needs some sort of goal information, which
describes situations that are desirable. E.g. being at the
passenger's destination.
30
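The junction example above can be sketched as an agent that predicts the result of each action and picks one that brings it closer to its goal (a sketch; the grid world and transition model are hypothetical illustrations):

```python
# Goal-based agents consider the future: "what it will be like if I do
# action A", compared against the goal.

GOAL = (2, 2)  # e.g., the passenger's destination on a grid of streets

def result(state, action):
    """Hypothetical model of what the world will be like after an action."""
    x, y = state
    moves = {"left": (x - 1, y), "right": (x + 1, y),
             "up": (x, y + 1), "down": (x, y - 1)}
    return moves[action]

def goal_based_agent(state):
    """Choose the action whose predicted result is closest to the goal."""
    def distance_to_goal(action):
        nx, ny = result(state, action)
        return abs(nx - GOAL[0]) + abs(ny - GOAL[1])
    return min(["left", "right", "up", "down"], key=distance_to_goal)
```

The key difference from a reflex agent is the `result` model: the choice is driven by predicted future states, not by the current percept alone.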
Structure of a Goal-based agent

[Diagram] As in the model-based agent, internal state is maintained
using knowledge of how the world evolves and what the agent's actions
do. The agent additionally predicts "what it will be like if I do
action A" and compares the predicted states against its goals to
decide "what action I should do now", acting through the effectors.
31
Utility based agents
• Goals alone are not really enough to generate high-quality behavior.
– For example, there are many action sequences that will get the taxi to its
destination, thereby achieving the goal. Some are quicker, safer,
more reliable, or cheaper than others. We need to consider speed
and safety.
• When there are several goals that the agent can aim for, none
of which can be achieved with certainty, utility provides a
way in which the likelihood of success can be weighed
against the importance of the goals.
32
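The speed-versus-safety trade-off above can be sketched with a utility function and expected-utility maximization (a sketch; the candidate routes, probabilities, and utility weights are hypothetical illustrations):

```python
def utility(outcome):
    """How happy the agent is in a state: trades speed against risk."""
    return -2.0 * outcome["minutes"] - 10.0 * outcome["risk"]

def expected_utility(action):
    """Weigh each possible outcome's utility by its probability."""
    return sum(p * utility(o) for p, o in action["outcomes"])

def utility_based_agent(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=expected_utility)["name"]

# Two ways to reach the destination; both achieve the goal, but they
# differ in speed, reliability, and safety.
routes = [
    {"name": "highway", "outcomes": [(0.9, {"minutes": 10, "risk": 0.2}),
                                     (0.1, {"minutes": 40, "risk": 0.2})]},
    {"name": "back roads", "outcomes": [(1.0, {"minutes": 25, "risk": 0.0})]},
]
```

Where a goal-based agent only distinguishes goal states from non-goal states, the utility function ranks all outcomes, so uncertain but promising options can be weighed against safe but slow ones.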
Structure of a utility-based agent

[Diagram] On top of the goal-based design (internal state, how the
world evolves, what my actions do, "what it will be like if I do
action A"), a utility function rates "how happy I will be in such a
state"; the agent then selects "what action I should do now" and acts
through the effectors.
33
Learning Agents
 A learning agent in AI is an agent that can learn from
its past experiences; it has learning capabilities.
 It starts acting with basic knowledge and is then able to act and
adapt automatically through learning.

34
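A learning agent can be sketched as one that adjusts its choices from feedback (a minimal sketch; the reward-averaging scheme is a hypothetical illustration, not from the slides):

```python
class LearningAgent:
    """Agent that starts with no preferences and learns from rewards."""

    def __init__(self, actions):
        self.totals = {a: 0.0 for a in actions}  # accumulated reward
        self.counts = {a: 0 for a in actions}    # times each action tried

    def choose(self):
        """Try untried actions first, then prefer the best average reward."""
        untried = [a for a, c in self.counts.items() if c == 0]
        if untried:
            return untried[0]
        return max(self.totals, key=lambda a: self.totals[a] / self.counts[a])

    def learn(self, action, reward):
        """Update experience with the reward received for an action."""
        self.totals[action] += reward
        self.counts[action] += 1
```

This illustrates the two parts the definition names: acting from basic knowledge (`choose`) and adapting automatically from experience (`learn`).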
