In this chapter, the book covers the concept and design of agents: entities that perceive their environment and choose actions based on what they perceive.
Agent and the environment
An agent is anything that can perceive its environment through sensors and act upon the environment through actuators.
Mathematically, an agent’s behavior is described by the agent function, which maps any given percept sequence to an action. A concrete implementation of the agent function is called the agent program.
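A minimal sketch of the distinction, using a hypothetical two-location world (locations "A" and "B", statuses "Dirty"/"Clean" — these details are invented for illustration, not taken from the notes): the agent function is given abstractly as a table from percept sequences to actions, and the agent program is the code that realizes it.

```python
def table_driven_agent_program(table):
    """Return an agent program implementing an agent function
    given as an explicit table: percept sequence -> action."""
    percepts = []  # the percept sequence observed so far

    def program(percept):
        percepts.append(percept)
        return table[tuple(percepts)]

    return program

# The agent function as a (partial) lookup table.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = table_driven_agent_program(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

The table grows exponentially with the length of the percept sequence, which is why practical agent programs compute the mapping instead of storing it.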
What is considered rational
Desirability is captured by a performance measure that evaluates any given sequence of environment states. Why environment states? Because if the agent were left to judge its own success, the measure would be subjective: the agent could simply convince itself that its performance is perfect.
It is better to design the performance measure according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.
A rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and the agent’s built-in knowledge.
Rationality depends on four things:
- Performance measure
- Agent’s prior knowledge
- Actions agent can perform
- Agent’s percept sequence to date
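The "expected to maximize" part can be sketched concretely: a rational agent picks the action with the highest expected performance under its current beliefs. The actions, probabilities, and scores below are invented for illustration.

```python
def rational_choice(actions, outcomes):
    """outcomes[a] is a list of (probability, performance_score)
    pairs describing the agent's beliefs about action a's results.
    Return the action with the highest expected performance."""
    def expected(a):
        return sum(p * score for p, score in outcomes[a])
    return max(actions, key=expected)

outcomes = {
    "Suck":  [(1.0, 10.0)],              # certain payoff: 10.0
    "Right": [(0.5, 12.0), (0.5, 0.0)],  # risky payoff: expectation 6.0
}
print(rational_choice(["Suck", "Right"], outcomes))  # Suck
```

Note that rationality maximizes *expected* performance given the percepts so far, not actual performance in hindsight: an action can be rational and still turn out badly.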
Sometimes the agent must take actions in order to “look around”. This is called information gathering: performing actions for the sake of modifying future percepts.
Our rational agents, by definition, are also required to learn from what they perceive. The agent’s initial configuration reflects its prior knowledge, but that knowledge is modified and augmented by incoming percepts.
Structure of agents
Agent = architecture + program.
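One way to picture this split, as a hedged sketch: the program is a function from a percept to an action, while the architecture (simulated here by a plain loop) supplies percepts to the program and executes the actions it returns. The reflex rules are invented for illustration.

```python
def reflex_vacuum_program(percept):
    """A simple reflex agent program: acts on the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run_architecture(program, percepts):
    """Stand-in for the architecture: feed percepts to the program
    and collect the actions it chooses."""
    return [program(p) for p in percepts]

actions = run_architecture(
    reflex_vacuum_program,
    [("A", "Dirty"), ("A", "Clean"), ("B", "Dirty")],
)
print(actions)  # ['Suck', 'Right', 'Suck']
```

The same architecture can run any program with this percept-to-action interface, which is the point of the separation.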