Artificial Intelligence/Search/Heuristic search/Astar Search

Search Algorithms in AI

Search techniques are general problem-solving methods. When a search problem has been formulated, with a set of states, a set of operators, an initial state, and a goal criterion, we can use search techniques to solve it (Pearl & Korf, 1987). Cawsey (1998) explains a search algorithm in simple terms: suppose you are trying to find your way around a small town that has many one-way streets. There are many routes from your initial state (A) to the target destination (B), but you do not know which one to take. In real life you can try as many as you like, although it might be a laborious process, and there are inconveniences along the way: once you have passed through a one-way street you may not be able to turn back, or you may get stuck in a park. This simplified picture of searching for a place shows that we can actually map out all the possible routes in such an example; even if it is exhausting, you will eventually find a way. In small-scale search problems like this one, simple search techniques are sufficient for a systematic search. Sometimes, however, we need to search systematically for a route on a much more complex map. How do we do that? Search algorithms are well suited to such problems. The problem could be a physical one, such as walking or driving from A to B, or an abstract one, such as finding the sequence of steps that proves a theorem from a set of facts.

A* Search Algorithm

Heuristic search makes use of the fact that most problem spaces provide some information that distinguishes among states in terms of their likelihood of leading to a goal. This information is called a heuristic evaluation function (Pearl & Korf, 1987). In other words, the goal of a heuristic search is to reduce the number of nodes searched in seeking a goal (Kopec & Marsland, 2001).

The most widely used form of best-first search is called A*, pronounced "A star". It is a heuristic search method used to minimize the search cost in a given problem (Bolc & Cytowski, 1992). It aims to find the least-cost path from a given initial node to a goal node, and it is an extension of the best-first search algorithm. Best-first search also tries to minimize the total cost of the search path, but the difference is that A* takes into account the cost accumulated from the start, not simply the local cost from the previously visited node. Best-first search will find a goal state in any predetermined problem space, but it cannot guarantee that it will choose the shortest path to the goal (Pearl & Korf, 1987). For instance, if there are two options to choose from, one a long way from the initial point but with a slightly shorter estimate of distance to the goal, and another very close to the initial state but with a slightly longer estimate of distance to the goal, best-first search will always expand the state with the shorter estimate first. The A* algorithm fixes this drawback of best-first search (Pearl & Korf, 1987).

In short, the A* algorithm searches possible routes from a starting point until it finds the shortest path or cheapest cost to a goal. Terms such as "shortest path" and "cheapest cost" are used here in a general sense; the exact term depends on the problem. In a map problem, for instance, the cost is a distance (Cawsey, 1998). A* may reduce the need to search all possible paths in a search space, and so produce a solution faster. A* evaluates nodes by combining g(n) and h(n). In the standard terminology used when talking about A*:

f(n) = g(n) + h(n)


The purpose of this equation is to obtain the lowest f score in a given problem, where n is the node currently being evaluated:

  • f(n) is the total estimated cost of the search path through node n,
  • g(n) is the actual lowest cost (for example, the shortest distance travelled) of the path found from the initial node to node n,
  • h(n) is the estimated cost (for example, distance) of the cheapest path from node n to a goal node; this term is also called the heuristic function or heuristic estimate.

At each step, the node with the lowest f value is chosen to be expanded next, until the goal node itself is chosen for expansion (Pearl & Korf, 1987). Whenever the heuristic function satisfies certain conditions, A* search is both complete and optimal (Russell & Norvig, 2003).
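
As a small illustration of how nodes are compared (the g and h values below are made up for this example, not taken from the cited sources), a few lines of Python pick the frontier node with the lowest f value:

frontier = {"X": (3, 4), "Y": (5, 1), "Z": (2, 6)}   # hypothetical (g, h) values per node

# A* expands the node with the lowest f = g + h: here Y, because 5 + 1 = 6
# beats X (3 + 4 = 7) and Z (2 + 6 = 8), even though Y lies farther from the start.
best = min(frontier, key=lambda n: sum(frontier[n]))
print(best)   # prints Y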


Characteristics of A* Search Algorithm

Admissibility. Strategies which guarantee an optimal solution, if one exists, are called admissible. Whether A* is admissible depends on its heuristic. If tree search is in use, the optimality of A* is straightforward to analyze (Russell & Norvig, 2003): in that case, A* is optimal if h(n) is an admissible heuristic, that is, one that never overestimates the cost of reaching the goal. More precisely, define

h*(n) = the true minimal cost to the goal from n.

A heuristic h is admissible if h(n) ≤ h*(n) for all states n.
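
For example, on a grid map where every move costs 1, the Manhattan distance never overestimates the remaining cost, because ignoring the obstacles can only shorten a route. A minimal sketch in Python (the (row, col) grid representation is an assumption made for this illustration):

def manhattan(n, goal):
    # n and goal are assumed to be (row, col) pairs on a grid where each move costs 1.
    # Ignoring obstacles can only lower the estimate, so h(n) <= h*(n) for every n,
    # which makes this heuristic admissible.
    return abs(n[0] - goal[0]) + abs(n[1] - goal[1])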

Convergence. A search strategy is convergent if it guarantees to find a path, a solution graph, or information about whether one exists (Bolc & Cytowski, 1992). If there is a solution, A* will always find it. The convergence properties of the A* search algorithm are satisfied for any network with a non-negative cost function, whether finite or infinite. For a finite network with a non-negative cost function, A* terminates either after finding a solution or after establishing that there is no solution, and so it is convergent. The number of acyclic paths in such a network is finite. A previously examined node, say w, is revisited only if the search finds a smaller cost to it than the previous one. The cost function is non-negative; therefore an edge can be re-examined only a finite number of times. Thus, A* is convergent (Bolc & Cytowski, 1992).


Problems with the A* Search Algorithm

According to Pearl & Korf (1987), the main shortcoming of A*, and of any best-first search, is its memory requirement. Because the entire open list must be saved, A* is space-limited in practice and is no more practical than breadth-first search. For large search spaces, A* will run out of memory. Fortunately, a great deal of memory can be saved by using Iterative Deepening A* (IDA*), sketched below.
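
A rough sketch of the IDA* idea follows, assuming a neighbors(n) callback that yields (successor, cost) pairs and an admissible heuristic h(n); these interfaces are assumptions for illustration, not taken from the cited sources. IDA* repeats a depth-first search bounded by f = g + h, raising the bound to the smallest f value that exceeded it on the previous pass, so only the current path has to be kept in memory.

import math

def ida_star(start, goal, neighbors, h):
    # Depth-first search limited by an f bound; returns "FOUND" on success,
    # otherwise the smallest f value that exceeded the bound.
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f
        if node == goal:
            return "FOUND"
        minimum = math.inf
        for succ, cost in neighbors(node):
            if succ in path:          # avoid cycles along the current path
                continue
            path.append(succ)
            result = search(path, g + cost, bound)
            if result == "FOUND":
                return "FOUND"
            minimum = min(minimum, result)
            path.pop()
        return minimum

    bound = h(start)
    path = [start]
    while True:
        result = search(path, 0, bound)
        if result == "FOUND":
            return path               # list of nodes from start to goal
        if result == math.inf:
            return None               # no solution exists
        bound = result                # raise the bound and search again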

Pseudo-code A*

(Anonymous, 2006)

create the open list of nodes, initially containing only our starting node
create the closed list of nodes, initially empty

while (we have not reached our goal)
{
    consider the best node in the open list (the node with the lowest f value)

    if (this node is the goal)
    {
        then we're done
    }
    else
    {
        move the current node to the closed list and consider all of its neighbors

        for (each neighbor)
        {
            if (this neighbor is in the closed list and our current g value is lower)
            {
                update the neighbor with the new, lower, g value
                change the neighbor's parent to our current node
                move the neighbor back to the open list
            }
            else if (this neighbor is in the open list and our current g value is lower)
            {
                update the neighbor with the new, lower, g value
                change the neighbor's parent to our current node
            }
            else if (this neighbor is not in either the open or closed list)
            {
                add the neighbor to the open list, set its g value, and set its parent to the current node
            }
        }
    }
}
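
The pseudo-code above can also be written as a short runnable program. The Python sketch below keeps the same structure but, instead of updating entries in the open list in place, it pushes a cheaper copy of a node and skips stale copies when they are popped. The graph interface, a neighbors(n) callback yielding (successor, cost) pairs and a heuristic h(n), is an assumption made for this illustration rather than part of the cited tutorial.

import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    # The open list is a heap of (f, tie_breaker, node) triples; the tie breaker
    # keeps the heap working even when nodes themselves are not comparable.
    tie = count()
    open_heap = [(h(start), next(tie), start)]
    g = {start: 0}                               # best known cost from the start
    parent = {start: None}
    closed = set()

    while open_heap:
        f, _, node = heapq.heappop(open_heap)    # best node: lowest f = g + h
        if node == goal:
            path = []                            # reconstruct the route by walking parents
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        if node in closed:
            continue                             # stale entry: a cheaper copy was already expanded
        closed.add(node)
        for succ, cost in neighbors(node):
            tentative_g = g[node] + cost
            if tentative_g < g.get(succ, float("inf")):
                g[succ] = tentative_g            # found a cheaper route to this neighbor
                parent[succ] = node
                heapq.heappush(open_heap, (tentative_g + h(succ), next(tie), succ))
    return None                                  # open list exhausted: no path exists

In a map problem, for instance, start and goal would be place names, neighbors(n) would list the streets leaving n together with their lengths, and h(n) could be the straight-line distance to the goal.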

Practical Applications of A*

A* is the most popular choice for path finding because it is fairly flexible and can be used in a wide range of contexts, such as games (for example, solving the 8-puzzle or finding a path on a game map).

Variations of A*

  • Bidirectional search
  • Iterative deepening
  • Beam search
  • Dynamic weighting
  • Bandwidth search
  • Dynamic A* and Lifelong Planning A*

References

  • Anonymous (2006). Path Finding Tutorial. Retrieved October 14, 2008, from http://wiki.gamegardens.com/Path_Finding_Tutorial#Pseudo-code_A.2A
  • Bolc, L., & Cytowski, J. (1992). Search Methods for Artificial Intelligence. London: Academic Press.
  • Cawsey, A. (1998). The Essence of Artificial Intelligence. Upper Saddle River, NJ: Prentice Hall.
  • Kopec, D., & Marsland, T. A. (2001). Search. Retrieved October 10, 2008, from http://www.cs.ualberta.ca/~tony/RecentPapers/Draft.5.2.pdf
  • Pearl, J., & Korf, R. E. (1987). Search techniques. Annual Review of Computer Science, 451-467.
  • Russell, S. J., & Norvig, P. (2003). Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall.