Foundations of Artificial Intelligence
Introduction
This course essentially consists of a long list of search algorithms: strategies designed to tackle search problems, which are abstractions of real-world problems. More specifically, these problems have a state-space representation: a directed graph in which vertices are states and edges are actions. The main topics that will be introduced are:
Search strategies
Constraint satisfaction problems
Inference procedures for propositional logic
Classical planning
Search strategies
A state is a possible configuration of the problem, while an action is what allows us to move from one state to the next. The search for a solution in this context is a path in the state space from the initial state to a state that satisfies the goal test.
To search for a solution we have different types of algorithms, which differ essentially in how they explore the state-space graph. A search strategy determines the ordering of nodes in the frontier: selecting a node amounts to deciding the ordering of the frontier and taking its first node. This means that how we select nodes from the frontier is what distinguishes one search strategy from another. There are three types of search strategies:
Uninformed search: also called blind search, they use only the information contained in the problem formulation.
Informed search: they exploit problem-specific knowledge that is not contained in the problem formulation. More specifically, they select a node from the frontier according to an evaluation function f(n), which provides an estimate of how "promising" a node is. There are different ways to define the evaluation function; we will focus on:
Greedy best-first search
A* search
Adversarial search: these are based on formulating search problems as zero-sum games. This means that there is more than one player, and each player's actions are meant to gain an advantage for themselves while putting the opponent at a disadvantage. Solving the game corresponds to finding the best action for MAX (or for MIN) in a given state.
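The idea that the evaluation function is the only difference between informed strategies can be made concrete. Below is a minimal sketch of a generic best-first search over a toy state-space graph (the graph, the step costs, and the heuristic values h are made up for illustration): passing f(n) = h(n) gives greedy best-first search, while f(n) = g(n) + h(n) gives A*.

```python
import heapq

def best_first_search(start, goal_test, successors, f):
    """Generic best-first search: the frontier is a priority queue
    ordered by the evaluation function f(state, g), where g is the
    path cost accumulated so far."""
    frontier = [(f(start, 0), 0, start, [start])]  # (priority, g, state, path)
    explored = set()
    while frontier:
        _, g, state, path = heapq.heappop(frontier)  # take the most promising node
        if goal_test(state):
            return path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (f(nxt, g + cost), g + cost, nxt, path + [nxt]))
    return None

# Toy state-space graph (hypothetical): vertices are states, edges are
# actions with a step cost.
graph = {'A': [('B', 1), ('C', 1)], 'B': [('D', 5)], 'C': [('D', 1)], 'D': []}
h = {'A': 2, 'B': 1, 'C': 1, 'D': 0}  # made-up heuristic estimates h(n)

greedy = best_first_search('A', lambda s: s == 'D', graph.__getitem__,
                           lambda s, g: h[s])      # f(n) = h(n)
astar = best_first_search('A', lambda s: s == 'D', graph.__getitem__,
                          lambda s, g: g + h[s])   # f(n) = g(n) + h(n)
print(greedy)  # ['A', 'B', 'D'] (cost 6: greedy is misled by the heuristic)
print(astar)   # ['A', 'C', 'D'] (cost 2: A* finds the cheaper path)
```

On this graph the two strategies disagree, which illustrates the point: the frontier data structure is identical, only the ordering induced by f changes.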
Constraint satisfaction problems
Problem-solving agents are examples of goal-based agents, which work by:
Formulating a problem
Searching the solution
Executing the solution
Recall also that search algorithms compare states only by checking whether they are equal or not. When modeling a complex problem, it can be really useful to specialize the representation of the states. Constraint satisfaction problems, or CSPs, work by assigning to each variable a value such that all constraints involving that variable are satisfied.
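The variable-assignment idea can be sketched with a simple backtracking search on a hypothetical map-colouring CSP (the regions WA, NT, SA and the "adjacent regions must differ" constraints are illustrative, not from the course material):

```python
def backtracking(assignment, variables, domains, constraints):
    """Simple backtracking sketch: extend the assignment one variable
    at a time, keeping only values that satisfy every constraint."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtracking(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]  # undo and try the next value
    return None

# Hypothetical 3-region map colouring: adjacent regions must differ.
variables = ['WA', 'NT', 'SA']
domains = {v: ['red', 'green', 'blue'] for v in variables}

def different(a, b):
    # constraint is satisfied when either variable is still unassigned
    # or the two assigned values differ
    return lambda asg: a not in asg or b not in asg or asg[a] != asg[b]

constraints = [different('WA', 'NT'), different('WA', 'SA'), different('NT', 'SA')]
solution = backtracking({}, variables, domains, constraints)
print(solution)  # {'WA': 'red', 'NT': 'green', 'SA': 'blue'}
```

Note how the goal test ("all variables assigned, all constraints satisfied") replaces plain state-equality checks: the structured representation of states is exactly what CSP algorithms exploit.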
Inference procedures for propositional logic
Propositional logic can be used to check semantic relations between sentences, such as entailment. More formally, an inference procedure aims at demonstrating that α⊨β by using an algorithm A.
Model checking
Algorithm A can check models, namely it can check whether, in every model where α is true, β is true too.
Theorem proving
Algorithm A can build a demonstration, or proof, leading from α to β, written α ⊢A β:
Sound algorithm: everything it claims to prove is in fact entailed: α ⊢A β ⟹ α ⊨ β
Complete algorithm: everything that is entailed can be proved: α ⊨ β ⟹ α ⊢A β
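Model checking can be sketched directly by truth-table enumeration: α ⊨ β holds iff β is true in every model where α is true. The sentences below (α = P ∧ Q, β = P ∨ Q) are illustrative examples, not from the course material:

```python
from itertools import product

def entails(alpha, beta, symbols):
    """Check alpha |= beta by enumerating all truth assignments:
    entailment holds iff beta is true in every model of alpha."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if alpha(model) and not beta(model):
            return False  # found a model of alpha that falsifies beta
    return True

# Example sentences over the symbols P and Q:
alpha = lambda m: m['P'] and m['Q']   # P ∧ Q
beta = lambda m: m['P'] or m['Q']     # P ∨ Q

print(entails(alpha, beta, ['P', 'Q']))  # True: P ∧ Q ⊨ P ∨ Q
print(entails(beta, alpha, ['P', 'Q']))  # False: the converse fails
```

This brute-force procedure is both sound and complete, but it enumerates 2^n models for n symbols, which is why smarter inference procedures matter.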
Classical planning
How is search combined with a logic-based representation of states? This is where classical planning comes in handy.
In planning problems (also called classical planning, action planning, or task planning), states, actions, and goals are represented using logical sentences. The goal of planning is to find a sequence of (partially) ordered actions that achieves the goals.
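A minimal sketch of this idea, assuming a hypothetical STRIPS-style blocks-world domain (the atoms and actions below are invented for illustration): a state is a set of ground atoms, each action has preconditions plus add and delete lists, and a plan is found by searching the state space.

```python
from collections import deque

# Hypothetical STRIPS-style actions: preconditions, add list, delete list.
actions = {
    'pick(A)':    {'pre': {'onTable(A)', 'handEmpty'},
                   'add': {'holding(A)'},
                   'del': {'onTable(A)', 'handEmpty'}},
    'stack(A,B)': {'pre': {'holding(A)'},
                   'add': {'on(A,B)', 'handEmpty'},
                   'del': {'holding(A)'}},
}

def plan(initial, goals):
    """Breadth-first search over states reached by applicable actions."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goals <= state:              # goal test: all goal atoms hold
            return steps
        for name, act in actions.items():
            if act['pre'] <= state:     # action applicable in this state?
                nxt = frozenset((state - act['del']) | act['add'])
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

result = plan({'onTable(A)', 'handEmpty'}, {'on(A,B)'})
print(result)  # ['pick(A)', 'stack(A,B)']
```

This shows the combination the section asks about: logical sentences (sets of atoms) represent states and goals, while an ordinary search algorithm produces the ordered sequence of actions.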