# Jump Point Search Explained

There are several related algorithms for finding the shortest path on a uniform-cost 2D grid. The A* algorithm is a common and straightforward extension of Dijkstra’s algorithm that uses a heuristic to guide the search toward the goal. There are many extensions to A*, including D*, HPA*, and Rectangular Symmetry Reduction, all of which seek to reduce the number of nodes that must be examined to find a best path.

The Jump Point Search algorithm, introduced by Daniel Harabor and Alban Grastien, is one such way of making pathfinding on a rectangular grid more efficient. In this post, I’ll do my best to explain it clearly without resorting to the underlying mathematical proofs presented in the research papers. Instead, I’ll rely on diagrams and appeals to your intuition.

I am assuming familiarity with the A* search algorithm, and more generally, Dijkstra’s for pathfinding. For background information and an excellent explanation, see Amit’s introduction to A*.

For these examples, I’m assuming pathfinding on a regular square grid where horizontal and vertical movement costs 1 and diagonal movement costs √2.
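As a concrete sketch of this movement model, here’s what the per-step cost and the matching “octile distance” heuristic might look like in Python (the `step_cost` and `octile` names are mine, not from the paper):

```python
import math

# Cost of one step between adjacent cells: 1 for horizontal or vertical
# moves, sqrt(2) for diagonal moves. (Illustrative helper, not from the paper.)
def step_cost(dx, dy):
    return math.sqrt(2) if dx != 0 and dy != 0 else 1.0

# The matching "octile distance" heuristic: the cheapest possible cost
# between two cells on this grid if there were no obstacles in the way.
def octile(ax, ay, bx, by):
    dx, dy = abs(ax - bx), abs(ay - by)
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)
```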

## Try It Out

You can play with A* and JPS here. Click and drag anywhere to add obstacles, drag the green (start) and red (goal) nodes to move them, and click the “Find Path” button to find a best path.

## Path Expansion and Symmetry Reduction

During each iteration of A*, we expand our search in the best known direction. However, some situations lead to inefficiency. One shows up in large open spaces, where many different sequences of steps between the same two points have identical cost. To see an example, let’s look at a rectangular grid.

In the research paper, Harabor and Grastien call these equivalent routes “symmetric paths” because they’re effectively identical. Ideally, we could recognize this situation and ignore all but one of them.

## Expanding Intelligently

The A* algorithm expands its search by doing the simplest thing possible: adding a node’s immediate neighbors to the set of nodes to examine next. What if we could look ahead a little and skip over some nodes that we can intuit aren’t valuable to examine directly? We can try to identify situations where path symmetries are present, and ignore certain nodes as we expand our search.

### Looking Ahead Horizontally and Vertically

When moving horizontally or vertically, we can keep stepping in the same direction, skipping the intermediate nodes, as long as the way is clear. But what does “the way is clear” mean? When are our assumptions incorrect, and when do we need to stop and take a closer look?

Here, we have to stop and reexamine. Not only must we look at the node to our right, we’re also forced to look at the one diagonally upward from it. In the Jump Point Search paper, this is called a **forced neighbor** because we’re forced to consider it when we would otherwise have ignored it.

When we reach a node with a forced neighbor, we stop jumping rightward and add the current node to the open set for further examination and expansion later.
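A minimal sketch of this rightward scan, assuming a hypothetical `walkable(x, y)` predicate that returns `False` for obstacles and out-of-bounds cells (the names and signature are illustrative, not the paper’s pseudocode):

```python
# Scan rightward (dx = 1) or leftward (dx = -1) from (x, y), assuming a
# hypothetical walkable(x, y) predicate. Returns a jump point to add to
# the open set, or None if the scan hits an obstacle or the map edge.
def jump_horizontal(walkable, x, y, dx, goal):
    while True:
        x += dx
        if not walkable(x, y):
            return None          # blocked: nothing worth looking at this way
        if (x, y) == goal:
            return (x, y)        # reaching the goal always warrants a stop
        # Forced-neighbor check: a cell beside us is blocked, but the cell
        # diagonally ahead of it is open, so paths through this node are
        # not symmetric to ours and we must stop and expand here.
        if (not walkable(x, y - 1) and walkable(x + dx, y - 1)) or \
           (not walkable(x, y + 1) and walkable(x + dx, y + 1)):
            return (x, y)
```

The vertical scan works the same way, with the roles of x and y swapped.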

### Looking Ahead Diagonally

How might we jump ahead when moving diagonally, when there are three neighbors to consider?

Two of our neighbors require vertical or horizontal movement. Since we already know how to jump ahead in those directions, we can look there first; if neither of those scans finds any node worth looking at, we can move one more step diagonally and do it again.

For example, here are several steps of a diagonal jump. The horizontal and vertical paths are considered before moving diagonally, until one of them finds a node that warrants further consideration. In this case, it’s the edge of the barrier, detected because it has a forced neighbor as we jump to the right.
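Under the same assumptions (a hypothetical `walkable(x, y)` predicate, names of my own choosing), the diagonal jump might be sketched like this, with `jump_straight` playing the role of the horizontal and vertical look-aheads described above:

```python
# Straight-line scan along one axis: direction is (dx, 0) or (0, dy).
def jump_straight(walkable, x, y, dx, dy, goal):
    while True:
        x, y = x + dx, y + dy
        if not walkable(x, y):
            return None
        if (x, y) == goal:
            return (x, y)
        if dx != 0:   # horizontal: forced neighbors appear above or below
            if (not walkable(x, y - 1) and walkable(x + dx, y - 1)) or \
               (not walkable(x, y + 1) and walkable(x + dx, y + 1)):
                return (x, y)
        else:         # vertical: forced neighbors appear left or right
            if (not walkable(x - 1, y) and walkable(x - 1, y + dy)) or \
               (not walkable(x + 1, y) and walkable(x + 1, y + dy)):
                return (x, y)

# Diagonal scan: check both straight directions before each diagonal step.
def jump_diagonal(walkable, x, y, dx, dy, goal):
    while True:
        x, y = x + dx, y + dy
        if not walkable(x, y):
            return None
        if (x, y) == goal:
            return (x, y)
        # If either straight scan turns up a jump point, the current node
        # itself must be expanded so the search can branch toward it.
        if jump_straight(walkable, x, y, dx, 0, goal) is not None or \
           jump_straight(walkable, x, y, 0, dy, goal) is not None:
            return (x, y)
```

A full implementation also has to decide how to treat corner-cutting and diagonal forced neighbors; this sketch ignores those details.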

## Tying It Together

We’ve now considered ways of skipping most neighbors for a node when we’re traveling in a particular direction, and have also determined some rules about when we can jump ahead.

To tie this back into the A* algorithm, we’ll apply these “jumping ahead” steps whenever we examine a node in the open set. We’ll use its parent to determine the direction of travel, and skip ahead as far as we can. If we find a node of interest, we’ll ignore all the intermediary steps (since we’ve used our simplifying rules to skip over them) and add that new node to the open set.

Each node in the open set is then expanded based on the direction of travel from its parent, following the same jump-point expansion as before: look horizontally and vertically first, then move diagonally.
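As a sketch of what expanding “based on the direction of travel” might look like, here is one way to compute the directions to scan from a node, given how we arrived at it. This is a hypothetical helper under the same `walkable(x, y)` assumption; a real implementation pairs each returned direction with the corresponding jump scan:

```python
# Directions in which to continue the search from (x, y), given that we
# arrived traveling (dx, dy). Follows the pruning idea from the paper:
# keep the natural neighbors plus any forced ones. (Illustrative sketch.)
def pruned_directions(walkable, x, y, dx, dy):
    dirs = []
    if dx != 0 and dy != 0:  # diagonal travel
        if walkable(x, y + dy):
            dirs.append((0, dy))
        if walkable(x + dx, y):
            dirs.append((dx, 0))
        if walkable(x + dx, y + dy):
            dirs.append((dx, dy))
        # Forced: a cell behind us on one axis is blocked.
        if not walkable(x - dx, y) and walkable(x - dx, y + dy):
            dirs.append((-dx, dy))
        if not walkable(x, y - dy) and walkable(x + dx, y - dy):
            dirs.append((dx, -dy))
    elif dx != 0:            # horizontal travel
        if walkable(x + dx, y):
            dirs.append((dx, 0))
        if not walkable(x, y - 1) and walkable(x + dx, y - 1):
            dirs.append((dx, -1))
        if not walkable(x, y + 1) and walkable(x + dx, y + 1):
            dirs.append((dx, 1))
    else:                    # vertical travel
        if walkable(x, y + dy):
            dirs.append((0, dy))
        if not walkable(x - 1, y) and walkable(x - 1, y + dy):
            dirs.append((-1, dy))
        if not walkable(x + 1, y) and walkable(x + 1, y + dy):
            dirs.append((1, dy))
    return dirs
```

The start node has no parent, so it’s expanded in all eight directions.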

Here is an example sequence of path expansions, with the final path marked at the end.

## Notes

- The code used to create these SVG diagrams and interactive pathfinding is available on GitHub under the jps-explained project.
- The original research paper by Daniel Harabor and Alban Grastien introduces the JPS algorithm formally and serves as an excellent reference when implementing the algorithm for yourself. Several other related papers are at Mr. Harabor’s website.
- Mr. Harabor’s original blog post series introducing Rectangular Symmetry Reduction and Jump Point Search starts with this post on his blog.
- Amit’s A* Pages are an excellent introduction to A* and pathfinding algorithms.
- Xueqiao Xu has created a JavaScript pathfinding library called PathFinding.js, which includes JPS and also has an interactive demo that directly inspired the visualizations and diagrams you find here.
- I implemented JPS in poorly-written Clojure as part of a project experimenting with pathfinding and visualization.

## Update, April 2014

Thanks to Matthew Fearnley and others who suggested some improvements. Two things:

- The JPS algorithm is based on the assumption that accessing the contents of many points on a grid during a few iterations of A* is more efficient than maintaining a priority queue over many iterations of A*. Here, each grid access is O(1), while each operation on a naive list-based queue is O(n); with a binary-heap queue, it would be O(1) versus O(log n).
- I’ve updated the diagrams to distinguish between “examined” paths versus paths established via the open/closed sets from the A* algorithm itself.
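For the first point above, the contrast between the two open-set implementations can be sketched like this (illustrative only):

```python
import heapq

# Naive open set: an unsorted list. Finding and removing the minimum
# both take a full O(n) scan.
open_list = [(7, "a"), (3, "b"), (5, "c")]
best = min(open_list)            # O(n)
open_list.remove(best)           # O(n)

# Heap-based open set: each pop of the minimum is O(log n).
open_heap = [(7, "a"), (3, "b"), (5, "c")]
heapq.heapify(open_heap)         # O(n), done once
best = heapq.heappop(open_heap)  # O(log n) per pop, yields (3, "b")
```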