ARTIFICIAL INTELLIGENCE || PART #3
Instructional Objectives
 Agents and Environment
 Intelligent agent
 Goals of Agents
 PEAS concept
 Properties of environment
 Classes of intelligent agents
 Applications of Intelligent agent
Agents
An agent is anything that can be viewed as perceiving its
environment through sensors, taking decisions accordingly, and
acting upon that environment through effectors.
 Operates in an environment.
 Perceives its environment through sensors.
 Acts upon its environment through actuators/effectors.
E.g.: Sometimes when you are leaving home, you feel the humidity and see
the sky full of clouds. You anticipate that it might rain, so you go back into
your house and pick up an umbrella.
Intelligent agents
Fundamental faculties of intelligence
Acting
Sensing
Understanding, Reasoning & Learning
Intelligent Agent = Architecture + Agent Program
where
Architecture is the machinery on which the agent executes.
Agent program is an implementation of the agent function.
Agent and Environment
Examples of agents
 Human Agents
• Sensors: eyes, ears, skin, taste buds, etc.
• Effectors: hands, fingers, legs, mouth.
 Robot Agents
• Sensors: camera, infrared range finders,
bumper, etc.
• Effectors: grippers, wheels, lights, speakers.
 Software Agents
• Sensors: scanners, keyboard, mouse, readers.
• Effectors: monitors, speakers, printers and files.
Goals of agents
 High Performance: performance should be maximized.
 Optimized Result: correct results through a short procedure.
 Rational Action: the right actions should be performed.
PEAS concept
PEAS: Performance, Environment,
Actuators and Sensors
Performance: the output we get from the agent
after processing.
Environment: all surrounding things and conditions.
Actuators: hardware or software devices through
which the agent performs actions.
Sensors: devices through which the agent observes and
perceives its environment.
PEAS Example
Self-Driving Car
Performance: speed, safety of the car and
passengers, time taken, and comfort of the user.
Environment: roads, pedestrians, crossings,
traffic signals, etc.
Actuators: steering, accelerator, brakes, horn,
music system, etc.
Sensors: camera, speedometer, GPS, odometer,
sonar, etc.
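As a rough illustration (not part of the original slides), a PEAS description can be written down as a simple data structure; the class name and fields below are my own, and the values simply restate the self-driving-car example above.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Hypothetical container for a PEAS description of a task environment."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# The self-driving-car example from the slide, restated as data.
self_driving_car = PEAS(
    performance=["speed", "safety of car and passengers", "time taken", "comfort"],
    environment=["roads", "pedestrians", "crossings", "traffic signals"],
    actuators=["steering", "accelerator", "brakes", "horn", "music system"],
    sensors=["camera", "speedometer", "GPS", "odometer", "sonar"],
)
print(self_driving_car.sensors)  # -> ['camera', 'speedometer', 'GPS', 'odometer', 'sonar']
```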
Structure of agents
A simple agent program can be defined mathematically
as an agent function f that maps every possible
percept sequence P* to an action A the agent
can perform (a small sketch follows below):
f : P* → A
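As a minimal sketch (assumed names of my own, not from the slides), the mapping f : P* → A can be written as a table-driven agent program; the percepts and actions echo the umbrella example earlier, and such a table becomes impractical to enumerate for any real percept space.

```python
# Illustrative only: a table-driven agent function f : P* -> A.
# Percept and action names are assumptions based on the umbrella example.
ACTION_TABLE = {
    ("humid", "cloudy sky"): "take umbrella",
    ("clear sky",): "leave as planned",
}

def agent_function(percept_sequence):
    """Map the sequence of percepts seen so far to an action (default: no-op)."""
    return ACTION_TABLE.get(tuple(percept_sequence), "no-op")

print(agent_function(["humid", "cloudy sky"]))  # -> "take umbrella"
```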
Agent's Environment
 The environment is that in which the agent operates.
 The environment is the surrounding world as it appears
from the point of view of the agent itself.
 The environment provided to the agent can be
artificial, very detailed, complex, or real.
Properties of Environment
Deterministic or Non-deterministic: If the next state of the
environment is completely determined by the current state and the
agent's action, it is deterministic; otherwise it is non-deterministic.
Static or Dynamic: A static environment does not change while the
agent is acting; otherwise it is dynamic.
 Single/Multiple agents: Whether other agents of the
same or another type are present.
 Observable/Partially observable: If the agent can
determine the complete state of the environment, it is observable; if
only part of the state is known, it is partially observable.
Properties of Environment
 Episodic/Sequential: Episodic means subsequent episodes do
not depend on the actions that occurred in previous episodes. In a
sequential environment, the agent engages in a series of connected episodes.
 Discrete/Continuous: If the number of distinct percepts and
actions is limited, the environment is discrete, otherwise it is
continuous.
 Accessible or Inaccessible: If the agent's sensory apparatus
can access the complete state of the environment, it is accessible;
otherwise it is inaccessible.
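One way to make these dimensions concrete (a sketch of my own, not from the slides) is to record them as a small data structure and classify two familiar task environments; the chess and taxi-driving classifications below follow the usual textbook treatment.

```python
from dataclasses import dataclass

@dataclass
class EnvironmentProperties:
    """Hypothetical record of the environment dimensions listed above."""
    observable: str        # "fully" or "partially"
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool
    multi_agent: bool

# Chess (without a clock): fully observable, deterministic, sequential, static, discrete, multi-agent.
chess = EnvironmentProperties("fully", True, False, True, True, True)
# Taxi driving: partially observable, non-deterministic, sequential, dynamic, continuous, multi-agent.
taxi_driving = EnvironmentProperties("partially", False, False, False, False, True)
```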
Classes of Intelligent Agents
 Simple reflex agents
 Model based reflex agents
 Goal based agents
 Utility based agents
 Learning agents
Simple reflex agents
 Simple reflex agents act only on the basis of the
current percept, ignoring the rest of the percept
history.
 The agent function is based on the
condition-action rule: if condition then action.
 Succeeds when the environment is fully observable
and only limited knowledge is required.
 Works on if-then conditions, e.g. game playing such as
Tic-tac-toe, Chess, etc. (a small sketch follows below).
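A minimal sketch of a condition-action agent (the vacuum-world percepts and rules here are my own illustration, not the slides' code): each rule fires on the current percept alone, with no percept history.

```python
def simple_reflex_agent(percept):
    """Condition-action rules applied to the current percept only (illustrative)."""
    location, status = percept            # e.g. ("A", "dirty") in a toy vacuum world
    if status == "dirty":
        return "suck"
    elif location == "A":
        return "move right"
    else:
        return "move left"

print(simple_reflex_agent(("A", "dirty")))   # -> "suck"
```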
Model-based reflex agents
 A model-based agent can handle a partially
observable environment and track the situation.
 The agent's knowledge about "how the world
evolves" is called a model of the world, hence the
name "model-based agent"; a self-driving car is an
example.
 Based on the current situation, it performs an action
(a small sketch follows below).
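A rough sketch (class and rule names are my assumptions, not from the slides) of how an internal model lets the agent act under partial observability: the agent keeps a state that it updates from each new percept before choosing an action.

```python
class ModelBasedReflexAgent:
    """Keeps an internal state so it can act in a partially observable world."""

    def __init__(self):
        self.state = {}          # the agent's best guess about the world
        self.last_action = None

    def update_state(self, percept):
        # "How the world evolves": merge the latest observation into the model.
        self.state.update(percept)

    def choose_action(self, percept):
        self.update_state(percept)
        # Condition-action rule applied to the modelled state, not the raw percept.
        if self.state.get("obstacle_ahead"):
            self.last_action = "brake"
        else:
            self.last_action = "drive forward"
        return self.last_action

agent = ModelBasedReflexAgent()
print(agent.choose_action({"obstacle_ahead": True}))   # -> "brake"
```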
Goal-based agents
 Goal-based agents expand on the capabilities of
model-based reflex agents by using "goal" information.
 Goal information describes situations that are desirable.
The agent chooses an action, among multiple
possibilities, that reaches a goal state.
 Search and planning are used to find
appropriate action sequences that achieve the agent's
goals (a small search sketch follows below).
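A tiny illustration of the search idea (the state graph, actions, and goal below are hypothetical, not from the slides): breadth-first search over the graph returns an action sequence that reaches the goal state.

```python
from collections import deque

# Hypothetical state graph: state -> {action: next_state}
GRAPH = {
    "home":   {"walk": "street"},
    "street": {"bus": "campus", "walk": "park"},
    "park":   {"walk": "campus"},
    "campus": {},
}

def plan(start, goal):
    """Breadth-first search for an action sequence that reaches the goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in GRAPH[state].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

print(plan("home", "campus"))  # -> ['walk', 'bus']
```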
Utility-based agents
 It has an extra component compared to a goal-based agent:
a "utility function", which provides a measure of
success in a given state.
 Such a model is useful when there are multiple
possible alternatives and the agent must select one
to get the best performance (a small sketch follows below).
 It is a performance measure of different world states
according to exactly how happy they would make
the agent.
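A minimal sketch of choosing among alternatives with a utility function (the actions, resulting states, and utility numbers are invented for illustration): each candidate outcome gets a score, and the agent picks the action whose outcome has the highest utility.

```python
# Hypothetical utilities of the states each action would lead to.
UTILITY = {
    "arrive fast but risky": 0.4,
    "arrive safely on time": 0.9,
    "arrive late": 0.2,
}

# Hypothetical action -> resulting state mapping.
ACTIONS = {
    "speed": "arrive fast but risky",
    "normal driving": "arrive safely on time",
    "detour": "arrive late",
}

def utility_based_choice(actions, utility):
    """Pick the action whose resulting state has the highest utility."""
    return max(actions, key=lambda a: utility[actions[a]])

print(utility_based_choice(ACTIONS, UTILITY))  # -> "normal driving"
```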
Learning agents
 Learning allows the agent to operate in initially
unknown environments.
 "Critic": it evaluates the performance of the agent and
gives feedback.
 "Learning element": it is responsible for making
improvements by learning from the critic's feedback.
 "Performance element": the main part, responsible
for selecting external actions based on the input
provided by the sensors and the learning element.
 "Problem generator": responsible for suggesting
actions that will lead to new and informative
experiences (a small sketch of the four parts follows below).
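These four components can be wired together in a very small loop; the sketch below is schematic, with made-up names and a toy reward signal, not an implementation from the slides.

```python
import random

class LearningAgent:
    """Schematic wiring of critic, learning element, performance element, problem generator."""

    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}   # knowledge the learning element improves

    def performance_element(self):
        # Selects external actions using what has been learned so far.
        return max(self.actions, key=lambda a: self.values[a])

    def problem_generator(self):
        # Suggests exploratory actions that lead to new, informative experiences.
        return random.choice(self.actions)

    def critic(self, reward):
        # Judges how well the agent is doing; here the feedback is just the reward itself.
        return reward

    def learning_element(self, action, feedback, rate=0.1):
        # Improves the stored action values using the critic's feedback.
        self.values[action] += rate * (feedback - self.values[action])

    def step(self, environment, explore=0.2):
        action = (self.problem_generator() if random.random() < explore
                  else self.performance_element())
        feedback = self.critic(environment(action))   # the environment returns a reward
        self.learning_element(action, feedback)
        return action

# Toy usage: the environment rewards the action "right".
agent = LearningAgent(["left", "right"])
for _ in range(50):
    agent.step(lambda a: 1.0 if a == "right" else 0.0)
print(agent.performance_element())   # usually -> "right"
```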
Applications of Intelligent Agents
1. Medical Diagnosis
• Performance measure: healthy patient, minimized cost.
• Environment: patient, hospital, staff.
• Actuators: tests, treatments.
• Sensors: keyboard (entry of symptoms), scanning machines.
2. Vacuum Cleaner
• Performance measure: cleanliness, efficiency, battery life, security.
• Environment: room, table, wood floor, carpet, various obstacles.
• Actuators: wheels, brushes, vacuum extractor.
• Sensors: camera, dirt detection sensor, cliff sensor, bump sensor, infrared wall sensor.
3. Part-picking Robot
• Performance measure: percentage of parts in correct bins.
• Environment: conveyor belt with parts, bins.
• Actuators: jointed arms, hand.
• Sensors: camera, joint angle sensors.