Book contents
- Frontmatter
- Contents
- Preface
- Acknowledgements
- 1 Introduction
- 2 Search Spaces
- 3 Blind Search
- 4 Heuristic Search
- 5 Stochastic Local Search
- 6 Algorithm A* and Variations
- 7 Problem Decomposition
- 8 Chess and Other Games
- 9 Automated Planning
- 10 Deduction as Search
- 11 Search in Machine Learning
- 12 Constraint Satisfaction
- Appendix: Algorithm and Pseudocode Conventions
- References
- Index
6 - Algorithm A* and Variations
Published online by Cambridge University Press: 30 April 2024
Summary
Finding a solution is one aspect of problem solving; executing it is another. In certain applications the cost of executing the solution is important: maintaining supplies to the International Space Station, for example, is a repetitive task, while sending a rocket to Jupiter is an infrequent one. Coming down to Earth, the manufacturing industry needs to manage its supplies, inventory, scheduling, and shipping of products. At home, the morning juggle of cooking, sending the kids off to school, and heading for the office after grabbing a coffee and a bite could also do with optimized processes.
In this chapter we look at the algorithm A* for finding optimal solutions. It is a heuristic search algorithm that guarantees an optimal solution. It does so by combining the goal seeking of best first search with a tendency to stay as close to the source as possible. We begin by looking at the algorithm branch & bound, which focuses only on the latter, before incorporating the heuristic function.
We revert to graph search for the study of algorithms that guarantee optimal solutions. The task is to find a shortest path in a graph from a start node to a goal node. We have already studied algorithms BFS and DFID in Chapter 3. The key idea there was to extend the partial path that was the shortest, and we begin with the same strategy, except that now we add weights to the edges of the graph. Without edge weights, the optimal or shortest path is the one with the fewest edges; with edge weights, it is the path whose edge weights sum to the least.
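This cheapest-partial-path-first strategy can be sketched in Python. The graph, node names, and edge weights below are invented for illustration; with non-negative weights, the first path popped that reaches the goal is a shortest one.

```python
import heapq

def branch_and_bound(graph, start, goal):
    """Extend the cheapest partial path first; with non-negative edge
    weights the first path to reach the goal is a shortest path."""
    frontier = [(0, [start])]          # (cost so far, partial path)
    best = {start: 0}                  # cheapest known cost to each node
    while frontier:
        cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        for nbr, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < best.get(nbr, float("inf")):
                best[nbr] = new_cost
                heapq.heappush(frontier, (new_cost, path + [nbr]))
    return None                        # goal unreachable

# A toy weighted graph: node -> [(neighbour, edge weight)]
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 6)],
    "B": [("G", 2)],
}
print(branch_and_bound(graph, "S", "G"))  # → (5, ['S', 'A', 'B', 'G'])
```

Note that the direct-looking extension S–B (cost 4) is popped after the partial path S–A–B (cost 3), which is why the returned path routes through A.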
The common theme continuing in our search algorithms is as follows:
Pick the best node from OPEN and extend it, till you pick the goal node.
The question that remains is the definition of ‘best’. In DFS, the deepest node is the best node. In BestFirstSearch, the node that appears to be closest to the goal is the best. In BFS, the node closest to the start node is the best. We begin by extending the idea behind breadth first search.
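The searches above differ only in how 'best' is scored. As an illustrative sketch (the graph, heuristic values, and function names here are invented for this example), a single priority-queue loop can realize all of them by swapping the key function:

```python
import heapq
from itertools import count

def generic_search(graph, start, goal, key):
    """One search loop; key(depth, node) decides which OPEN node is 'best'."""
    tie = count()                       # break equal keys first-in first-out
    frontier = [(key(0, start), next(tie), [start])]
    seen = {start}
    while frontier:
        _, _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                d = len(path)           # depth of the neighbour
                heapq.heappush(frontier, (key(d, nbr), next(tie), path + [nbr]))
    return None

graph = {"S": ["A", "B"], "A": ["C"], "B": ["G"], "C": ["G"]}
h = {"S": 3, "A": 2, "B": 1, "C": 1, "G": 0}   # invented heuristic estimates

bfs_key  = lambda depth, node: depth      # BFS: shallowest node is best
dfs_key  = lambda depth, node: -depth     # DFS: deepest node is best
best_key = lambda depth, node: h[node]    # BestFirstSearch: lowest h is best
```

With these keys, `generic_search(graph, "S", "G", bfs_key)` finds the shallowest goal via B, while `dfs_key` dives down through A and C first.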
We can generalize our common theme as follows. With every node N on OPEN, we associate a number that stands for the estimated cost of the final solution.
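In A*, that number is f(N) = g(N) + h(N), where g(N) is the cost of the path from the start to N and h(N) is a heuristic estimate of the remaining cost to the goal. A minimal Python sketch, with the graph and heuristic values invented for illustration:

```python
import heapq

def astar(graph, h, start, goal):
    """A*: always extend the OPEN node with the lowest f(N) = g(N) + h(N)."""
    frontier = [(h[start], 0, [start])]     # (f, g, partial path)
    best_g = {start: 0}                     # cheapest known g to each node
    while frontier:
        f, g, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return g, path
        for nbr, w in graph.get(node, []):
            g2 = g + w
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, path + [nbr]))
    return None

# Toy weighted graph and heuristic values (h never overestimates here)
graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 6)], "B": [("G", 2)]}
h = {"S": 4, "A": 4, "B": 2, "G": 0}
print(astar(graph, h, "S", "G"))  # → (5, ['S', 'A', 'B', 'G'])
```

Because h here never overestimates the true remaining cost, the first time the goal is picked from OPEN its path is guaranteed optimal; the chapter develops this property in detail.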
Search Methods in Artificial Intelligence, pp. 147–184. Publisher: Cambridge University Press. Print publication year: 2024.