Problem solving by search
Many real-world problems can be formulated as search problems, meaning that a solution can be found by exploring different alternatives. The agents that solve them are goal-based agents.
Search problem configuration
A state space is constituted by:
Set of states S, and an initial state s0;
Function actions(s): given a state s, it returns the set of actions that are feasible from that specific state. It works by filtering the whole set of actions A and returning only the subset A′ ⊆ A of actions that are applicable from s;
Function result(s, a): given a state s and an action a applicable in s, it returns the state s′ reached after executing that action: ∀s, ∀a, result(s, a) = s′ ∈ S;
Function cost(s, a, s′): returns the cost of the action a that leads from state s to state s′.
A state space problem is constituted by the tuple (S, s0, G), where:
S: state space
s0: initial state
G: set of goal states
Note that we also have the goal test goal(s), which checks whether a given state s is a goal state (goal states are chosen when defining the problem): goal(s) is true if and only if s ∈ G.
Path: a sequence (s0, a0, s1, a1, ..., at−1, st) of states and actions leading to a goal state, where si+1 = result(si, ai) for every i ∈ {0, ..., t−1}.
Solution: (if it exists) sequence of actions leading to a goal state. Optimal if the path cost is minimized.
Path cost: the sum of the step costs along the path, i.e. cost(s0, a0, s1) + cost(s1, a1, s2) + ... + cost(st−1, at−1, st).
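To make these definitions concrete, here is a minimal sketch of how a search problem could be expressed in code. The class name SearchProblem and the helper path_cost are illustrative names introduced for this example, not part of any specific library.

```python
class SearchProblem:
    """Minimal interface mirroring the definitions above (illustrative sketch)."""

    def __init__(self, initial_state, goal_states):
        self.initial_state = initial_state   # s0
        self.goal_states = set(goal_states)  # G

    def actions(self, s):
        """Return the subset A' of actions applicable in state s."""
        raise NotImplementedError

    def result(self, s, a):
        """Return the state s' reached by executing action a in state s."""
        raise NotImplementedError

    def cost(self, s, a, s_next):
        """Step cost of going from s to s' via a (unitary by default)."""
        return 1

    def goal(self, s):
        """Goal test: True iff s is a goal state."""
        return s in self.goal_states


def path_cost(problem, states, actions):
    """Path cost = sum of the step costs along (s0, a0, s1, ..., at-1, st)."""
    return sum(problem.cost(s, a, s_next)
               for s, a, s_next in zip(states, actions, states[1:]))
```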
Basically, we start from s0. Then, iteratively, we perform actions while minimizing the cost: at each step we compare the costs of every possible action from the state we are in, we choose an action, we apply the result function, and we check whether the arriving state s′ is a goal state by executing the goal test. We repeat this until a goal state is reached. The state space can be represented with a directed graph in which vertices are states and arcs are actions:

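For instance, a tiny state space graph could be written as a dictionary that maps each state to the successor reached by each applicable action (the states s0...s4 and actions a1...a4 below are made up purely for illustration):

```python
# Toy state space as a directed graph: vertices are states, arcs are actions.
# state -> {action: successor_state}
state_space = {
    "s0": {"a1": "s1", "a2": "s2"},
    "s1": {"a3": "s3"},
    "s2": {"a3": "s3", "a4": "s4"},
    "s3": {},
    "s4": {},
}

def actions(s):
    return list(state_space[s].keys())

def result(s, a):
    return state_space[s][a]

print(actions("s0"))        # ['a1', 'a2']
print(result("s0", "a2"))   # 's2'
```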
The solution to a search problem is a path in the state space from the initial state to a state that satisfies the goal test. Keep in mind that:
The optimal solution is a solution with the lowest cost
The cost of a path (path cost) is the sum of the costs of the arcs that compose the path (step costs)
A problem could have no solution
It is also important to note that it is usually impossible to build an explicit representation of the entire state space, since the number of states grows exponentially with the complexity of the problem (e.g., generating the 24-puzzle state space would require more than 10^9 years). This is also a memory problem, since:
#states ≫ #atoms in the universe
This means that a problem-solving agent must find a solution by exploring only a small portion of the state space.
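As a rough sanity check of the 24-puzzle figure: roughly half of the 25! tile permutations are reachable, and even assuming (optimistically) a rate of about 10^8 generated states per second, enumerating them would take on the order of 10^9 years:

```python
from math import factorial

reachable_states = factorial(25) // 2   # ~7.8e24 configurations of the 24-puzzle
states_per_second = 10**8               # assumed generation rate (illustrative)
seconds_per_year = 60 * 60 * 24 * 365

years = reachable_states / (states_per_second * seconds_per_year)
print(f"{reachable_states:.2e} states -> ~{years:.1e} years to enumerate")
# ~7.76e24 states -> ~2.5e9 years, i.e. more than 10^9 years
```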
Moreover, search problems are typical of agents that operate in environments that are fully observable, static, discrete (a finite number of states and moves), and deterministic (the consequences of actions are predetermined).
More formally, a state space problem can be defined as the tuple (S, s0, G), with:
S, state space
s0 ∈ S, initial state
G ⊆ S, goal set
Building the search tree
Two steps:
Node generation
Create a node and connect it to the tree (to the frontier, which is the set of nodes awaiting expansion)
Node expansion/extraction
Take a node out of the frontier and generate all successors.
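A search node can be sketched as a small record holding the state, a link to the parent node, the action that generated it, and the cumulative path cost; the solution is then recovered by walking the parent links back to the root. The class below is only an illustrative sketch:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any
    parent: Optional["Node"] = None   # node this one was generated from
    action: Any = None                # action applied to the parent's state
    path_cost: float = 0.0            # cumulative cost from the root (s0)

    def solution(self):
        """Sequence of actions from the root to this node."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))
```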
Problem configuration examples
The eight puzzle

State: configuration of the 8 tiles on the 3x3 grid.
Search for a solution: sequence of states in the state space.
Functions:
actions(s) - takes the state s and returns the actions applicable in that state, i.e., the possible movements of the blank tile: {up, down, left, right}. Note that not all actions may be available, depending on where the blank is located in the puzzle: if it is in the bottom-right corner, as in the right-hand state of the preceding image, it can only move up or left. In total, we can have 8*4 possible state actions (an up/down/left/right movement for each tile).
result(s, a) - given a state s and an action a applicable in s, it returns the state s′ reached by executing a in s. s′ is a successor of s.
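One possible encoding of the eight puzzle, assuming the state is a tuple of 9 values read row by row, with 0 standing for the blank (this encoding is just a convenient choice, not the only one):

```python
# State: tuple of 9 tiles read row by row, 0 = blank. Example goal configuration:
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

MOVES = {"up": -3, "down": +3, "left": -1, "right": +1}

def actions(s):
    """Applicable movements of the blank tile in state s."""
    i = s.index(0)                     # position of the blank (0..8)
    row, col = divmod(i, 3)
    acts = []
    if row > 0: acts.append("up")
    if row < 2: acts.append("down")
    if col > 0: acts.append("left")
    if col < 2: acts.append("right")
    return acts

def result(s, a):
    """Successor state: swap the blank with the adjacent tile in direction a."""
    i = s.index(0)
    j = i + MOVES[a]
    new = list(s)
    new[i], new[j] = new[j], new[i]
    return tuple(new)

def goal(s):
    return s == GOAL
```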
The Eight-Queens Puzzle
The objective is to position 8 queens on a chessboard so that no two queens are in the same row, column, or diagonal:

State – All arrangements of 0, 1, 2, ..., 8 queens on the board – The state space contains 64 × 63 × ... × 57 ≈ 1.8×10^14 states
Initial state – 0 queens on the board
Functions:
actions() – All the possible ways to add one queen to an empty square
result() – …
Goal test – 8 queens are on the board, with no queens attacking each other
Step cost – Irrelevant, unitary
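Following the incremental formulation above (a state is any arrangement of 0 to 8 queens, and an action adds one queen to an empty square), a possible sketch is the following; the attack check is the usual same-row/column/diagonal test:

```python
from itertools import combinations

N = 8
SQUARES = [(r, c) for r in range(N) for c in range(N)]

def attacking(q1, q2):
    """True if two queens share a row, a column, or a diagonal."""
    (r1, c1), (r2, c2) = q1, q2
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def actions(state):
    """All ways of adding one queen to an empty square (state = frozenset of queens)."""
    return [sq for sq in SQUARES if sq not in state]

def result(state, square):
    return state | {square}

def goal(state):
    """8 queens on the board, with no two queens attacking each other."""
    return len(state) == N and not any(attacking(a, b) for a, b in combinations(state, 2))

initial_state = frozenset()   # 0 queens on the board
```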
Searching for solutions
Solutions for search problems are found by building a search tree from the state space graph. The solution to a search problem is found when we encounter a node that satisfies the goal conditions while descending the tree:

Note that:
A search tree is composed of search nodes
Each node corresponds to a path from the initial state to a state in the state space
Each state of the state space can correspond to multiple nodes, when that state can be reached from the initial state by following multiple paths
Note: If the states can be visited multiple times, then the search tree can be infinite, even if the state space is finite!
Frontier
The frontier is the set of nodes of the search tree that have been generated but not yet expanded. It is implemented with a priority queue (its size = number of nodes still to be evaluated).
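A cost-ordered frontier can be sketched with Python's heapq module; the insertion counter is only there to break ties between nodes with equal priority (a choice of this sketch, not a requirement):

```python
import heapq
from itertools import count

class Frontier:
    """Priority queue of nodes awaiting expansion, ordered by path cost."""

    def __init__(self):
        self._heap = []
        self._tie = count()          # tie-breaker for equal priorities

    def push(self, node, priority):
        heapq.heappush(self._heap, (priority, next(self._tie), node))

    def pop(self):
        """Remove and return the node with the lowest priority."""
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```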
Node expansion
The expansion of a node in the search tree involves two steps:
Apply actions(s) to the state s of node n to compute all the available actions; for each action a, compute the successor state s′ using result(s, a) and generate a child node for each successor state.
The newly generated nodes are inserted into the frontier.
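These two steps can be sketched as an expand function that yields one child node per applicable action, followed by the insertion of the children into the frontier; the Node record and the problem interface below mirror the illustrative sketches given earlier:

```python
from collections import namedtuple

# Minimal node record (state, link to parent, generating action, cumulative cost).
Node = namedtuple("Node", "state parent action path_cost")

def expand(problem, node):
    """Step 1: generate one child per action applicable in node.state."""
    for a in problem.actions(node.state):
        s_next = problem.result(node.state, a)
        yield Node(state=s_next,
                   parent=node,
                   action=a,
                   path_cost=node.path_cost + problem.cost(node.state, a, s_next))

# Step 2: the caller inserts the generated children into the frontier, e.g.:
#   for child in expand(problem, node):
#       frontier.push(child, child.path_cost)
```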
Tree search algorithm pseudo code:
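Below is a minimal Python sketch of the generic tree search loop, assuming a problem object that exposes the actions/result/cost/goal interface used throughout this section; ordering the frontier by path cost is just one possible strategy, and other searches differ only in how the frontier is ordered:

```python
import heapq
from itertools import count
from collections import namedtuple

Node = namedtuple("Node", "state parent action path_cost")

def tree_search(problem):
    """Generic tree search: expand frontier nodes until a goal node is extracted.

    Returns the sequence of actions of a solution, or None if the frontier
    empties out. Note: as observed above, without duplicate detection this
    can run forever on state spaces with cycles.
    """
    tie = count()
    root = Node(problem.initial_state, None, None, 0)
    frontier = [(0, next(tie), root)]            # priority queue of generated nodes

    while frontier:
        _, _, node = heapq.heappop(frontier)     # extraction
        if problem.goal(node.state):             # goal test on the chosen node
            actions = []
            while node.parent is not None:       # walk back to s0 to recover the path
                actions.append(node.action)
                node = node.parent
            return list(reversed(actions))
        for a in problem.actions(node.state):    # expansion: one child per action
            s_next = problem.result(node.state, a)
            child = Node(s_next, node, a,
                         node.path_cost + problem.cost(node.state, a, s_next))
            heapq.heappush(frontier, (child.path_cost, next(tie), child))

    return None                                  # no solution found
```

For example, wrapping the eight-puzzle functions sketched earlier in such a problem object would make tree_search return a list of blank moves such as ['up', 'left', ...].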