## Suboptimal Map Pathfinding Algorithms

This demo illustrates several suboptimal path-planning algorithms, which are described below.

### Instructions

- Choose an algorithm
- Choose suboptimality bounds (if appropriate)
- Drag to select a path on the map on the left
- The plot on the right visualizes the h-value (x-axis) and g-value (y-axis) of each state on the open list, as well as the boundary of the priority function used to determine which state to expand next.
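As an illustration of that boundary, for Weighted A* (f = g + w*h, described below) the set of equal-priority states in the (h, g) plot is the straight line g = f0 - w*h. A minimal sketch, with illustrative names of my own (not from the demo):

```python
def wastar_boundary_g(h, w, f0):
    """g-value on the Weighted A* equal-priority line g + w*h = f0."""
    return f0 - w * h

# With weight w = 2 and current priority f0 = 10, a state with h = 3
# sits on the boundary when g = 4; states below the line have f < f0.
print(wastar_boundary_g(3, 2, 10))  # -> 4
```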


- A*: f = g + h

This is the classical best-first heuristic search algorithm, which weighs g and h evenly to find the shortest path.

- Weighted A*: f = g + w*h

This is a simple modification to A* that finds w-optimal solutions. Weighted A* does not need to re-open states to find w-optimal solutions.

- Weighted A* (XDP): f = (1/2w)[g + (2w-1)h + sqrt((g-h)^2 + 4wgh)]

XDP stands for convex downward parabola. This is a small modification to Weighted A*. In Weighted A*, the set of states with the same priority lies on a straight line; XDP changes this straight line to a parabola. XDP causes the best-first search to find near-optimal paths near the start and paths that are up to (2w-1)-suboptimal near the goal. Overall the paths found are still w-optimal, and re-openings are not required.

- Weighted A* (XUP): f = (1/2w)(g + h + sqrt((g+h)^2 + 4w(w-1)h^2))

XUP stands for convex upward parabola. XUP is similar to XDP, except that it looks for (2w-1)-optimal paths near the start and optimal paths near the goal.

- Weighted A* (pwXD): [h > g]: f = g + h; [h ≤ g]: f = (g + (2w-1)h)/w

pwXD is a piecewise downward curve that uses A* until h < g, after which it uses Weighted A* with a weight of 2w-1. It returns w-optimal solutions.

- Weighted A* (pwXU): [g < (2w-1)h]: f = g/(2w-1) + h; [g ≥ (2w-1)h]: f = (g + h)/w

pwXU is a piecewise upward curve that uses Weighted A* with a weight of 2w-1 initially, and then switches to A*. It returns w-optimal solutions.

- Weighted A* (AB): [g < K]: f = g*(K-γ)/K + h; [g ≥ K]: f = g + h + γ, where K = h(start, goal)

Phi_AB is a Phi (priority) function that produces a solution within a constant suboptimality bound. In this demo the bound is 25x the selected optimality bound. Phi_AB uses Weighted A* initially and then reverts to A* after the suboptimality bound is exhausted.

- A* epsilon: focal list f = h; open list f = g + h

A* epsilon introduced the idea of a focal list: the subset of states on the open list that have a given property. A* epsilon interleaves greedy search toward the goal with A* expansions to prove that the solution is w-optimal. (This implementation doesn't handle the focal list quite properly, but it takes a very similar approach.)

- Optimistic Search: optimistic: f = g + (2w-1)h; proof: f = g + h

Optimistic Search relies on the fact that Weighted A* often finds paths that are better than w-suboptimal. It searches with a larger weight to find a solution and then tries to prove that the solution is still w-optimal. Optimistic Search re-opens states when a shorter path is found. (Note that in the demo below you can choose the w-bound and exactly which bound to use for the optimistic search.)

- Dynamic Potential Search: f = (w*f_min - g(n))/h(n), where f_min is the minimum f-cost on the open list

Dynamic Potential Search orders its expansions dynamically based on the minimum f-cost on the open list. It often performs quite well, but not on grid domains, because it must re-open states when a shorter path is found, and the open list must be re-sorted when f_min changes.

- Improved Optimistic Search

Improved Optimistic Search (IOS) is very similar to Optimistic Search, except that (1) it doesn't re-open closed states in the optimistic portion of the search, (2) it has a better method of keeping track of improved solution quality, (3) it has better termination conditions, and (4) it can use the alternate XDP/XUP priority functions.

- Improved Optimistic Search (XDP)

This is IOS with the optimistic portion of the search using the XDP priority function.

- Improved Optimistic Search (XUP)

This is IOS with the optimistic portion of the search using the XUP priority function.
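The priority functions above can be collected into a small sketch (Python is my choice here; the function names are mine, not from the demo). A useful sanity check is that each of them reduces to g + h when w = 1:

```python
import math

def f_astar(g, h):
    # A*: f = g + h
    return g + h

def f_wastar(g, h, w):
    # Weighted A*: f = g + w*h
    return g + w * h

def f_xdp(g, h, w):
    # XDP: f = (1/2w)[g + (2w-1)h + sqrt((g-h)^2 + 4wgh)]
    return (g + (2 * w - 1) * h + math.sqrt((g - h) ** 2 + 4 * w * g * h)) / (2 * w)

def f_xup(g, h, w):
    # XUP: f = (1/2w)(g + h + sqrt((g+h)^2 + 4w(w-1)h^2))
    return (g + h + math.sqrt((g + h) ** 2 + 4 * w * (w - 1) * h ** 2)) / (2 * w)

def f_pwxd(g, h, w):
    # pwXD: A* while h > g, then a WA*-like term with weight 2w-1, scaled by 1/w
    return g + h if h > g else (g + (2 * w - 1) * h) / w

def f_pwxu(g, h, w):
    # pwXU: scaled-g region while g < (2w-1)h, then (g + h)/w
    return g / (2 * w - 1) + h if g < (2 * w - 1) * h else (g + h) / w
```

For example, with g = 5 and h = 3, every function above evaluates to 8 at w = 1, while f_wastar(5, 3, 2) is 11.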
