
A Visual Explanation of Jump Point Search

There are several related algorithms for finding the shortest path on a uniform-cost 2D grid. The A* algorithm is a common and straightforward optimization of breadth-first (Dijkstra’s) search. There are many extensions to this algorithm, including D*, HPA*, and Rectangular Symmetry Reduction, all of which seek to reduce the number of nodes that must be examined to find a best path.

The Jump Point Search algorithm, introduced by Daniel Harabor and Alban Grastien, is one such way of making pathfinding on a rectangular grid more efficient. In this post, I’ll do my best to explain it as clearly as I can without resorting to the underlying mathematical proofs as presented in the research papers. Instead, I’ll attempt to explain using diagrams and appeals to your intuition.

I am assuming familiarity with the A* search algorithm, and more generally, Dijkstra’s for pathfinding. For background information and an excellent explanation, see Amit’s introduction to A*.

For these examples, I’m assuming pathfinding on a regular square grid where horizontal and vertical movement costs 1 and diagonal movement costs √2.
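To make the later sketches concrete, here’s one way to represent such a grid. This is my own minimal setup (the post doesn’t prescribe one): a list of rows of booleans, indexed as grid[y][x], with True meaning walkable.

```python
from math import sqrt

STRAIGHT_COST = 1.0      # cost of a horizontal or vertical step
DIAGONAL_COST = sqrt(2)  # cost of a diagonal step

def walkable(grid, x, y):
    """True if (x, y) is on the map and not an obstacle."""
    return 0 <= y < len(grid) and 0 <= x < len(grid[0]) and grid[y][x]
```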

Try It Out

You can play with A* and JPS here. Click and drag anywhere to add obstacles, drag the green (start) and red (goal) nodes to move them, and click the “Find Path” button to find a best path.

[interactive svg diagram]

Path Expansion and Symmetry Reduction

During each iteration of A*, we expand our search in the best known direction. However, there are situations that can lead to some inefficiencies. One kind of inefficiency shows up in large open spaces. To show an example, let’s look at a rectangular grid.

[svg diagram]

An open grid, with a path from the green node to the red.

[svg diagram]

There are many equally valid best paths through this rectangular area, as Dijkstra’s breadth-first expansion illustrates. Each of these paths costs the same; the only difference is the order in which we choose to move diagonally or horizontally.

In the research paper, Harabor and Grastien call these “symmetric paths” because they’re effectively identical. Ideally, we could recognize this situation and ignore all but one of them.

Expanding Intelligently

The A* algorithm expands its search by doing the simplest thing possible: adding a node’s immediate neighbors to the set of what to examine next. What if we could look ahead a little bit, and skip over some nodes that we can intuit aren’t valuable to look at directly? We can try to identify situations where path symmetries are present, and ignore certain nodes as we expand our search.

Looking Ahead Horizontally and Vertically

[svg diagram]

First, let’s look at horizontal (and by extension, vertical) movement. On an open grid, let’s consider moving from the green node to the right. There are several assumptions we can make about this node’s immediate neighbors.

[svg diagram]

First, we can ignore the node we’re coming from, since we’ve already visited it. This is marked in grey.

[svg diagram]

Second, we can assume the nodes diagonally “behind” us have been reached via our parent, since those paths are shorter than going through us.

[svg diagram]

The nodes above and below could also have been reached more optimally from our parent. Going through us instead would have a cost of 2 rather than √2, so we can ignore them too.

[svg diagram]

The neighbors diagonally in front of us can be reached via our neighbors above and below. The path through us costs the same, so for the sake of simplicity we’ll assume the other way is preferable and ignore these nodes too.

[svg diagram]

This leaves us with only one node to examine: the one to our immediate right. We’ve already assumed that all our other neighbors are reached via alternate paths, so we can focus on this single neighbor only.

[svg diagram]

And that’s the trick: as long as the way is clear, we can jump ahead one more node to the right and repeat our examination without having to officially add the node to the open set.

But what does “the way is clear” mean? When are our assumptions incorrect, and when do we need to stop and take a closer look?

[svg diagram]

We made an assumption about the nodes diagonally to the right, that any equivalent path would pass through our neighbors above and below. But there is one situation where this cannot be true: if an obstacle above or below blocks the way.

Here, we have to stop and reexamine. Not only must we look at the node to our right, we’re also forced to look at the one diagonally upward from it. In the Jump Point Search paper, this is called a forced neighbor because we’re forced to consider it when we would have otherwise ignored it.

When we reach a node with a forced neighbor, we stop jumping rightward and add the current node to the open set for further examination and expansion later.

[svg diagram]

One final assumption: if the way is blocked as we jump to the right, we can safely disregard the entire jump. We’ve already assumed that paths above and below us are handled via other nodes, and we haven’t stopped because of a forced diagonal neighbor. Because we only care about what’s immediately to our right, an obstacle there means there’s nowhere else to go.
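Putting those rules together (keep stepping while the way is clear, stop at a forced neighbor, discard the jump when blocked), a straight-line jump might look like the sketch below. This is my own rendering, not the paper’s pseudocode: it handles horizontal and vertical jumps uniformly via a perpendicular direction (px, py), and it omits the goal check a full implementation needs.

```python
def jump_straight(grid, x, y, dx, dy):
    """Jump from (x, y) in a straight line; exactly one of dx, dy is
    nonzero. Returns a jump point (a node with a forced neighbor) or
    None if the jump runs into an obstacle or the edge of the map."""
    px, py = dy, dx  # perpendicular direction (e.g. up/down when moving right)
    while True:
        x, y = x + dx, y + dy
        if not walkable(grid, x, y):
            return None  # blocked: safely disregard the entire jump
        # Forced neighbor: a perpendicular cell is blocked, but the cell
        # diagonally ahead of it is open, so it's reachable only through us.
        if (not walkable(grid, x + px, y + py) and walkable(grid, x + dx + px, y + dy + py)) or \
           (not walkable(grid, x - px, y - py) and walkable(grid, x + dx - px, y + dy - py)):
            return (x, y)  # stop jumping; add this node to the open set
```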

Looking Ahead Diagonally

[svg diagram]

We can apply similar simplifying assumptions when moving diagonally. In this example we’re moving up and to the right.

[svg diagram]

The first assumption we can make here is that our neighbors to the left and below can be reached optimally from our parent via direct vertical or horizontal moves.

[svg diagram]

We can also assume the same about the nodes up and to the left and down and to the right. These can also be reached more efficiently via the neighbors to the left and below.

[svg diagram]

This leaves us with three neighbors to consider: the two above and to the right, and one diagonally in our original direction of travel.

[svg diagram]

Similar to the forced neighbors during horizontal movement, when an obstacle is present to our left or below, the neighbors diagonally up-and-left and down-and-right cannot be reached any other way but through us. These are the forced neighbors for diagonal movement.

How might we jump ahead when moving diagonally, when there are three neighbors to consider?

Two of our neighbors require vertical or horizontal movement. Since we know how to jump ahead in these directions, we can look there first, and if neither of those directions finds any nodes worth looking at, we can move one more step diagonally and do it again.
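In code, a diagonal jump can defer to the straight-line jump at each step. Again, a sketch under the same assumptions (reusing jump_straight from above, with no goal check, and with diagonal movement past corners allowed, as in the diagrams):

```python
def jump_diagonal(grid, x, y, dx, dy):
    """Jump from (x, y) diagonally; both dx and dy are nonzero."""
    while True:
        x, y = x + dx, y + dy
        if not walkable(grid, x, y):
            return None
        # Forced neighbor: an obstacle beside us exposes a diagonal cell
        # that can't be reached any other way but through us.
        if (not walkable(grid, x - dx, y) and walkable(grid, x - dx, y + dy)) or \
           (not walkable(grid, x, y - dy) and walkable(grid, x + dx, y - dy)):
            return (x, y)
        # Look horizontally and vertically first; if either straight jump
        # finds a node of interest, this node is itself a jump point.
        if jump_straight(grid, x, y, dx, 0) or jump_straight(grid, x, y, 0, dy):
            return (x, y)
```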

For example, here are several steps of a diagonal jump. The horizontal and vertical paths are considered before moving diagonally, until one of them finds a node that warrants further consideration. In this case, it’s the edge of the barrier, detected because it has a forced neighbor as we jump to the right.

[svg diagram]

First, we expand horizontally and vertically. Both jumps end in an obstacle (or the edge of the map), so we can move diagonally.

[svg diagram]

Once again both vertical and horizontal expansions are met with obstacles, so we move diagonally.

[svg diagram]

And a third time…

[svg diagram]

Finally, while the vertical expansion merely reaches the edge of the map, a jump to the right reveals a forced neighbor.

[svg diagram]

At this point we add our current node to the open set and continue with the next iteration of the A* algorithm.

Tying It Together

We’ve now considered ways of skipping most neighbors for a node when we’re traveling in a particular direction, and have also determined some rules about when we can jump ahead.

To tie this back into the A* algorithm, we’ll apply these “jumping ahead” steps whenever we examine a node in the open set. We’ll use its parent to determine the direction of travel, and skip ahead as far as we can. If we find a node of interest, we’ll ignore all the intermediary steps (since we’ve used our simplifying rules to skip over them) and add that new node to the open set.

Each node in the open set is then expanded based on the direction of its parent, following the same jump point expansion as before: look horizontally and vertically first, then move diagonally.
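Here’s a sketch of that expansion step, with the same caveats as before. It’s deliberately simplified: it only jumps along the natural directions of travel, leaving out the extra directions contributed by forced neighbors and the special case of the start node (which has no parent, so every direction is expanded).

```python
def jump(grid, x, y, dx, dy):
    """Dispatch a jump in direction (dx, dy)."""
    if dx != 0 and dy != 0:
        return jump_diagonal(grid, x, y, dx, dy)
    return jump_straight(grid, x, y, dx, dy)

def expand(grid, x, y, dx, dy):
    """Expand a node reached while traveling in direction (dx, dy):
    jump ahead in each pruned direction, returning the jump points
    found; these become new entries in the open set."""
    if dx != 0 and dy != 0:
        # Diagonal travel: the straight components first, then diagonal.
        directions = [(dx, 0), (0, dy), (dx, dy)]
    else:
        # Straight travel: keep going the same way.
        directions = [(dx, dy)]
    found = []
    for jdx, jdy in directions:
        hit = jump(grid, x, y, jdx, jdy)
        if hit is not None:
            found.append(hit)
    return found
```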

Here is an example sequence of path expansions, with the final path marked at the end.

[svg diagram]

Same example from earlier, but with a goal node marked.

[svg diagram]

Starting at the previously-identified node (the only one in the open set), we expand vertically and find nothing.

[svg diagram]

Jumping horizontally finds a node with a forced neighbor (highlighted in purple). We add this node to the open set.

[svg diagram]

Lastly, we expand diagonally, finding nothing because we bump into the edge of the map.

[svg diagram]

Next we examine the next best (or in this case, only) open node. Because we were moving horizontally when we reached this node, we continue to jump horizontally.

[svg diagram]

Because we also have a forced neighbor, we expand in that direction as well. Following the rules of diagonal jumps, we move diagonally, then look both vertically and horizontally.

[svg diagram]

Finding nothing, we move diagonally again.

[svg diagram]

This time, when we expand horizontally (nowhere to go) and vertically, we see the goal node. This is just as interesting as finding a node with a forced neighbor, so we add this node to the open set.

[svg diagram]

Finally, we expand the last open node, reaching the goal.

[svg diagram]

Skipping the last iteration of the algorithm (adding the goal itself to the open set only to recognize it as such), we’ve found (a) best path!

Notes

Update, April 2014

Thanks to Matthew Fearnley and others who suggested some improvements. Two things:

  1. The JPS algorithm is based on the assumption that accessing the contents of many points on a grid in a few iterations of A* is more efficient than maintaining a priority queue over many iterations of A*. Here, it’s O(1) versus O(n), though with a better queue implementation it’d be O(1) versus O(log n).
  2. I’ve updated the diagrams to distinguish between “examined” paths and paths established via the open/closed sets of the A* algorithm itself.