Foundations of Artificial Intelligence

Introduction

This course's exam covers a long list of search algorithms: strategies designed to tackle search problems, which are abstractions of real-world problems. More specifically, these problems have a state-space representation: a directed graph in which vertices are states and edges are actions. The main topics that will be introduced are:

  • Search strategies

  • Contraint satisfaction problems

  • Inference procedures for propositional logic

  • Classical planning

Search strategies

A state is a possible configuration of the problem, while an action is what allows us to move from one state to the next. The search for a solution in this context is a path in the state space from the initial state to a state that satisfies the goal test.

To search for a solution we have different types of algorithms, which differ in how they explore the state-space graph. A search strategy determines the ordering of nodes in the frontier: selecting a node amounts to ordering the frontier and taking its first node. This means that how we order and select nodes from the frontier is what distinguishes one search strategy from another. There are three types of search strategies:

  1. Uninformed search: also called blind search, these strategies use only the information contained in the problem formulation.

  2. Informed search: these strategies exploit specific knowledge that is not contained in the problem formulation. More specifically, they select a node from the frontier according to an evaluation function f(n), which provides an estimate of how “promising” a node is. There are different ways to define the evaluation function; we will focus on:

    • Greedy best-first search

    • A* search

  3. Adversarial search: these strategies are based on formulating search problems as zero-sum games. This means that there is more than one player, and each player's actions are meant to gain an advantage for himself and put his opponent at a disadvantage. Solving the game corresponds to finding the best action for MAX (or for MIN) in a given state.
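The idea that the frontier ordering is what distinguishes one strategy from another can be sketched as a single generic algorithm parameterized by a priority function f. This is an illustrative sketch, not code from the course: the function names and the (cost, state) successor format are assumptions. Passing f(n) = path cost gives uniform-cost search, f(n) = heuristic gives greedy best-first, and f(n) = cost + heuristic gives A*.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search. The priority function f(state, path_cost)
    determines the strategy: it is the only part that changes between
    uniform-cost, greedy best-first, and A* search."""
    # frontier entries: (priority, path_cost, state, path)
    frontier = [(f(start, 0), 0, start, [start])]
    explored = set()
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for step_cost, nxt in successors(state):
            if nxt not in explored:
                new_cost = cost + step_cost
                heapq.heappush(frontier, (f(nxt, new_cost), new_cost, nxt, path + [nxt]))
    return None  # no solution: the frontier is empty
```

For example, on a toy graph where A reaches C directly with cost 4 or via B with total cost 3, the uniform-cost ordering f = lambda s, g: g returns the cheaper path through B.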

Constraint satisfaction problems

Problem-solving agents are examples of goal-based agents, which work by:

  1. Formulating a problem

  2. Searching for a solution

  3. Executing the solution

Recall also that search algorithms compare states only by checking whether they are equal or not. When modeling a complex problem, it can be really useful to specialize the representation of the states. Constraint satisfaction problems, or CSPs, work by assigning to each variable a value such that all constraints involving that variable are satisfied.
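The idea of assigning each variable a value consistent with its constraints can be sketched with simple backtracking on a map-coloring instance. This is an illustrative sketch, not course code: the function name and the representation of constraints as a neighbor map with an implicit "not equal" constraint are assumptions.

```python
def backtracking_search(variables, domains, neighbors, assignment=None):
    """Backtracking search for a CSP where every pair of neighboring
    variables must take different values (e.g. map coloring)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # every variable has a consistent value
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # check consistency with the already-assigned neighbors of var
        if all(assignment.get(n) != value for n in neighbors[var]):
            assignment[var] = value
            result = backtracking_search(variables, domains, neighbors, assignment)
            if result is not None:
                return result
            del assignment[var]  # dead end: undo and try the next value
    return None  # no value works for var under the current assignment
```

On three mutually adjacent regions with three colors, the search returns an assignment in which every pair of neighbors gets a different color.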

Inference procedures for propositional logic

Propositional logic is used to perform semantic checks, such as entailment. More formally, an inference procedure aims at demonstrating that α ⊨ β by using an algorithm A.

Model checking

Algorithm A can check models, namely it can check whether, whenever α is true, β is true too.
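Model checking can be sketched by enumerating every possible truth assignment over the proposition symbols: α ⊨ β holds iff β is true in every model in which α is true. This is an illustrative sketch (the function name and the representation of sentences as Python predicates over a model dictionary are assumptions).

```python
from itertools import product

def entails(symbols, alpha, beta):
    """Model checking by truth-table enumeration: return True iff alpha
    entails beta, i.e. beta holds in every model where alpha holds.
    alpha and beta map a {symbol: bool} model to a bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):
            return False  # counterexample: a model of alpha that falsifies beta
    return True
```

For instance, p ∧ q entails p (every model making the conjunction true makes p true), while p alone does not entail q.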

Theorem proving

Algorithm A can build a demonstration, or a proof, leading from α to β, written α ⊢_A β:

  • Sound algorithm: everything it claims to prove is in fact entailed: α ⊢_A β ⟹ α ⊨ β

  • Complete algorithm: everything that is entailed can be proved: α ⊨ β ⟹ α ⊢_A β

Classical planning

How is search combined with a logic-based representation of states? This is where classical planning comes in handy.

In planning problems (also called classical planning, action planning, or task planning), states, actions, and goals are represented using logical sentences. The goal of planning is to find a sequence of (partially) ordered actions that reaches the goals.
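The combination of search and logical state representations can be sketched in a STRIPS-like style: a state is a set of ground atoms, and each action has preconditions plus add and delete lists. This is a hedged sketch, not the course's formalism: the tuple encoding of actions and the breadth-first strategy are assumptions made for illustration.

```python
from collections import deque

def forward_plan(initial, goal, actions):
    """Forward state-space search for a STRIPS-like planning problem.
    States are frozensets of ground atoms; each action is a tuple
    (name, preconditions, add_list, delete_list), all sets of atoms."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    explored = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:  # every goal atom holds in this state
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:  # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in explored:
                    explored.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # the goal is unreachable
```

With an initial state {at_home}, a goal {at_work}, and a single drive action that deletes at_home and adds at_work, the search returns the one-step plan [drive].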
