We study the impact of different priority queues on the performance of Dijkstra's single-source shortest-paths (SSSP) algorithm. We consider only general priority queues that can handle any key type (integer, floating point, etc.); the only exception is the DIMACS Challenge SSSP code, which we use as a benchmark and which handles only integer distance values.
Our experiments focused on the following:
1. We study the performance of two variants of Dijkstra's algorithm: the well-known version that uses a priority queue supporting the Decrease-Key operation, and another that uses a basic priority queue supporting only Insert and Delete-Min. For the latter type of priority queue we include several for which high-performance code is available, such as the bottom-up binary heap, the aligned 4-ary heap, and the sequence heap.
2. We study the performance of versions of Dijkstra's algorithm designed for flat memory relative to versions that try to be cache-efficient. To this end, we mainly study how the performance of Dijkstra's algorithm varies with the cache-efficiency of the priority queue used, both in-core and out-of-core. We also study the performance of an implementation of Dijkstra's algorithm that achieves a modest amount of additional cache-efficiency on undirected graphs through the use of two cache-efficient priority queues; this is theoretically the most cache-efficient implementation of Dijkstra's algorithm currently known.
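To make the first variant concrete, the following is a minimal sketch (in Python, on a small hypothetical graph) of Dijkstra's algorithm using only Insert and Delete-Min: instead of Decrease-Key, each distance improvement is inserted as a fresh queue entry, and stale entries are discarded when popped.

```python
import heapq

def dijkstra_no_decrease_key(graph, source):
    """Dijkstra's SSSP using a basic priority queue (Insert/Delete-Min only).

    Rather than decreasing a key in place, every tentative-distance
    improvement pushes a new entry; outdated entries are skipped when
    popped ("lazy deletion").
    graph: dict mapping node -> list of (neighbor, edge_weight) pairs.
    """
    dist = {source: 0}
    settled = set()
    pq = [(0, source)]               # Insert the source with distance 0
    while pq:
        d, u = heapq.heappop(pq)     # Delete-Min
        if u in settled:
            continue                 # stale entry: u already settled
        settled.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if v not in dist or nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))  # Insert instead of Decrease-Key
    return dist

# Example on a small directed graph (hypothetical data):
g = {'s': [('a', 2), ('b', 7)], 'a': [('b', 3), ('c', 8)], 'b': [('c', 1)]}
print(dijkstra_no_decrease_key(g, 's'))  # → {'s': 0, 'a': 2, 'b': 5, 'c': 6}
```

The trade-off this sketch illustrates: the queue may hold up to one entry per edge rather than one per vertex, but it needs no handles into the heap, which is what lets high-performance Insert/Delete-Min structures be used unmodified.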
Overall, our results show that, in most cases, using a standard priority queue without the Decrease-Key operation yields better performance than using one with it; that cache-efficient priority queues improve the performance of Dijkstra's algorithm, both in-core and out-of-core, on current processors; and that the dual-priority-queue version of Dijkstra's algorithm carries a significant constant-factor overhead, and is hence quite slow in-core, although it performs by far the best on sparse graphs out-of-core.