All-Pairs Shortest Paths
Material
- Slides: pre-class (without solutions)
- Slides: with solutions & annotations
Lecture Recap
The final topic of this course is the All-Pairs Shortest Paths (APSP) problem. We saw two algorithms that outperform the naive approach of running Bellman-Ford $n$ times ($O(n^2m)$), depending on whether the graph is dense or sparse.
Floyd-Warshall
This DP approach is best suited for dense graphs.
- How: We iteratively compute $d_{uv}^k$, which represents the shortest path from $u$ to $v$ using only intermediate vertices from the set $\{1, \dots, k\}$. The recurrence $d_{uv}^k = \min(d_{uv}^{k-1}, d_{uk}^{k-1} + d_{kv}^{k-1})$ simply checks if routing through the newly available vertex $k$ is faster than the best path found using only vertices $\{1, \dots, k-1\}$.
- Runtime: $O(n^3)$. It is easy to implement and can detect negative cycles: after termination, $d_{ii} < 0$ for some vertex $i$ exactly when a negative cycle passes through $i$.
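The recurrence above translates almost directly into code. A minimal sketch (the edge-list input format and 0-indexed vertices are my choices, not from the lecture):

```python
from math import inf

def floyd_warshall(n, edges):
    """All-pairs shortest paths on n vertices; edges is a list of
    (u, v, w) triples with 0-indexed vertices u, v and weight w."""
    d = [[inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)  # keep the cheapest parallel edge
    # Allow vertex k as an intermediate, one new vertex per outer round.
    for k in range(n):
        for u in range(n):
            for v in range(n):
                if d[u][k] + d[k][v] < d[u][v]:
                    d[u][v] = d[u][k] + d[k][v]
    # A negative diagonal entry certifies a negative cycle through that vertex.
    has_negative_cycle = any(d[i][i] < 0 for i in range(n))
    return d, has_negative_cycle
```

Note that the two superscript layers $d^{k-1}$ and $d^k$ can share one matrix: when computing round $k$, entries $d_{uk}$ and $d_{kv}$ are the same in both layers, so updating in place is safe and keeps the space at $O(n^2)$.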
Johnson
The best choice for sparse graphs: it effectively runs Dijkstra $n$ times, even when edge weights are negative (as long as there is no negative cycle).
- How: We use a reweighting technique where new weights are $\hat c_{uv} = c_{uv} + h(u) - h(v)$. This preserves shortest paths because the new path weight is a telescoping sum: intermediate potentials cancel out, shifting every path between $u$ and $v$ by the exact same constant ($h(u) - h(v)$).
- We proved that if no negative cycles exist, a valid potential $h$ always exists: add a super-source $s$ with a weight-$0$ edge to every vertex and set $h(v)$ to the shortest-path distance from $s$ to $v$. These distances satisfy $h(v) \le h(u) + c_{uv}$ for every edge, so every reweighted edge $\hat c_{uv}$ is nonnegative. This is exactly what motivates Johnson's algorithm.
- Runtime: $O(n(m+n) \log n)$. On sparse graphs ($m \approx n$), this is significantly faster than Floyd-Warshall.
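Putting the pieces together, Johnson's algorithm is Bellman-Ford once (for the potentials $h$), then a reweighted Dijkstra per source, then an undo of the constant shift. A sketch, assuming the same edge-list format as above; initializing all $h$ values to $0$ simulates the weight-$0$ edges from the super-source:

```python
import heapq
from math import inf

def johnson(n, edges):
    """All-pairs shortest paths via reweighting; raises on a negative cycle."""
    # Bellman-Ford from an implicit super-source: starting every h at 0 is
    # equivalent to adding weight-0 edges from a new source to all vertices.
    h = [0] * n
    changed = True
    for _ in range(n + 1):
        changed = False
        for u, v, w in edges:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
                changed = True
        if not changed:
            break
    if changed:
        raise ValueError("graph contains a negative cycle")

    # Reweight: every edge becomes nonnegative since h(v) <= h(u) + c_uv.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    def dijkstra(src):
        dist = [inf] * n
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist[u]:
                continue  # stale queue entry
            for v, w in adj[u]:
                if du + w < dist[v]:
                    dist[v] = du + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    # Undo the telescoping shift: every u->v path moved by h(u) - h(v).
    d = []
    for u in range(n):
        du = dijkstra(u)
        d.append([du[v] - h[u] + h[v] if du[v] < inf else inf
                  for v in range(n)])
    return d
```

The binary-heap Dijkstra here costs $O((m+n)\log n)$ per source, matching the $O(n(m+n)\log n)$ total stated above.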
Matrix Multiplication
We also saw that finding shortest paths is equivalent to matrix multiplication over the $(\min, +)$ semiring, where $\min$ plays the role of addition and $+$ the role of multiplication. Fast matrix multiplication (like Strassen's) does not apply directly to distances because the semiring lacks additive inverses, but fast Boolean matrix multiplication does let us solve Transitive Closure (reachability) in roughly $O(n^{2.37})$.
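To make the semiring view concrete, here is a sketch of the $(\min, +)$ product and the repeated-squaring APSP it enables, giving $O(n^3 \log n)$ overall (slower than Floyd-Warshall, but it is exactly this product that fast matrix multiplication cannot speed up; the adjacency-matrix input convention is my assumption):

```python
from math import inf

def min_plus_product(A, B):
    """(A o B)[i][j] = min over k of A[i][k] + B[k][j],
    i.e. matrix multiplication with (+, *) replaced by (min, +)."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_squaring(D):
    """D is the one-step distance matrix (D[i][i] = 0, inf if no edge).
    Squaring doubles the allowed path length, so ceil(log2(n-1))
    products reach all shortest paths (assuming no negative cycles)."""
    n = len(D)
    steps = 1
    while steps < n - 1:
        D = min_plus_product(D, D)
        steps *= 2
    return D
```

For Transitive Closure one replaces the entries by booleans and $(\min, +)$ by $(\lor, \land)$, and that Boolean product can be carried out with fast matrix multiplication.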