AI notes

Introduction

AI = crossroads between science and engineering.

Types of AI systems:

  • systems that think like humans: they imitate the human thinking process (cognitive models); the focus is on the internal process.

  • systems that act like humans: there is a conceptual gap with the previous category, which focused on the kind of internal mechanism the system uses to produce an answer. Here the relevant factor is the external performance the system exhibits; it is a more conventional approach to AI. E.g. if an entity shows performances that meet some requirements, regardless of its internal mechanism, we can call it AI.

    Imitation game

    The most famous definition of AI based on this approach is Turing's Imitation game (the Turing test):

    1st round:

    image-20210917095341699

    Basic Turing test setup: three participants, a woman, a man, and an interrogator. The man must pretend to be a woman, and the interrogator has to find out who is who. They can communicate only via text chat.

    2nd round:

    image-20210917095630746

    We substitute the man with a computer. The woman must make the interrogator think that she is the woman, and so must the computer. The interrogator now needs to distinguish between the computer and the woman.

    If we play both rounds several times and the interrogator is equally successful in both, the machine is intelligent. If the success rate in the first round is overwhelmingly higher, the machine fails the test.

    Note that the property of intelligence is attached to the machine only in a relative way: with a bad interrogator, the test result is polluted.

  • systems that think rationally: the system is able to reason in a formally correct way. This definition comes from the tradition of logic, which is used to represent the laws of thought. Since rationality is an ideal concept, it can be defined formally.

  • systems that act rationally: this course focuses on this last category. The other definitions are not wrong, but they are less suited to engineering purposes. It is based on rational agents that operate to reach a certain goal.

Agents

image-20210917103640536

Agents perceive the environment and decide what to do. Basic cycle: perceive, decide, act. This is a very general definition, since everything that fits this schema is an object of AI study. Example of agent: a thermostat. Agents can be anything, including software, and the environment can be anything (including virtual environments).

In practice, the designer of an agent develops an agent program that implements the agent function on a specific computing system.

The goal of AI is to design rational agents! But what is a rational agent? First we need to specify its behavior. An agent is specified by its agent function f(), which, given the sequence of percepts received by the agent so far, returns the action it has to perform at time t. Two consequences: we are assuming that time is discrete, and that in principle we have no constraints on the definition of such a function.
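The perceive-decide-act cycle with a discrete-time agent function f can be sketched as follows, using the thermostat example from above (a minimal sketch; the function names and the 20-degree threshold are our own assumptions, not from the notes):

```python
def thermostat_f(percepts):
    """Agent function f: full percept sequence so far -> action at time t.
    For this simple agent only the latest temperature reading matters."""
    latest_temp = percepts[-1]
    return "heat_on" if latest_temp < 20.0 else "heat_off"

def run(agent_f, readings):
    """Discrete-time loop: at each step the agent perceives, then decides."""
    percepts, actions = [], []
    for r in readings:
        percepts.append(r)                  # perceive
        actions.append(agent_f(percepts))   # decide/act
    return actions

print(run(thermostat_f, [18.5, 19.9, 21.0]))
# ['heat_on', 'heat_on', 'heat_off']
```

Note that f receives the whole percept history, even though this particular agent ignores everything but the last percept; more complex agents (see the architectures below) exploit the history.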

Agent formal definition:

A rational agent is an agent whose agent function makes it behave correctly. In practice, being rational means that for every possible percept sequence it chooses the action that maximizes the expected value of its performance measure, given its knowledge.

To be rational an agent must know something, either through perception of the environment or through built-in knowledge provided by its designer. That knowledge is updated by perception of the environment, which is the agent's most important source of knowledge.

Environment types

From an agent's point of view, an environment can be fully/partially observable, static/dynamic, discrete/continuous, and single/multi-agent.

E.g. in the game of chess, by looking at the board an agent can get the whole state of the game (fully observable). In other environments, the agent's perceptual power is not enough to get a whole picture of the state (e.g. soccer, where the ball can be hidden by some player, or poker, where opponents' cards are hidden).

A static environment changes only if the agent performs some action, while a dynamic environment can change even when the agent is doing nothing. This distinction is important since in a static environment the agent can take its time, while in a dynamic one the agent has to act quickly.

The discrete/continuous distinction concerns how the environment can be described (e.g. continuous environments can be described by differential equations).

Agent architecture types

We can sort agent architectures by complexity, from least to most complex. Note that each step in complexity fully incorporates the capabilities of the less complex agents.

  1. Simple Reflex Agent

    It is the baseline for our development. It interacts with the environment, perceiving it with its sensors, which give it a view of the environment. E.g. a device that wants to track light uses multiple sensors: it analyzes the sensor data and turns in the direction of the sensor that is receiving the most light.

    image-20210921142104045

    Example of a real-world simple reflex agent: the Roomba cleaning appliance. Early models moved randomly and used sensors to detect when they encountered obstacles.

  2. Reflex Agent with state

    More modern versions of the Roomba have an internal state to track where they are located in the room and to optimize their behaviour. This is achieved through models of the actions and of the environment.

    image-20210921142633171

  3. Goal-based Agent

    Key difference with the less complex agents: it is capable of thinking beyond the present moment to decide the best actions to take in order to achieve its goal:

    image-20210921144757971

    It has the planning capabilities necessary to reach its goal. Example: board-game bots, which can simulate future moves and/or scenarios in order to achieve their goal, in this case winning the game. Problem-solving agents are goal-based agents.

  4. Utility based agents

    A utility-based agent acts based not only on what the goal is, but on the best way to reach that goal. In short, it is the notion of usefulness (or utility) that distinguishes it from its counterparts.

    image-20210921145551131

    From the drawing we can see that it chooses the action that leads to the best expected utility, where the expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.

    From the book:

    Technically speaking, a rational utility-based agent chooses the action that maximizes the expected utility of the action outcomes—that is, the utility the agent expects to derive, on average, given the probabilities and utilities of each outcome.
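The expected-utility rule quoted above can be sketched in a few lines (the probabilities and utilities below are invented purely for illustration):

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action.
    EU(a) = sum over outcomes of P(outcome) * U(outcome)."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """A rational utility-based agent picks the action with maximal EU."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

actions = {
    "safe":  [(1.0, 5.0)],               # certain outcome, EU = 5.0
    "risky": [(0.5, 12.0), (0.5, 0.0)],  # uncertain, EU = 6.0
}
print(choose(actions))  # risky
```

Even though "risky" can yield utility 0, its average (expected) utility is higher, so a rational utility-based agent prefers it.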

Also note that all the agents presented above can improve their performance with learning.

Final goal of agents: build intelligent systems.

AI types recap

image-20210921151259643

Sorted from least powerful to most powerful.
