
ETC Conference Papers 2022

A fast path-based algorithm for traffic assignment on large networks

Seminar
Day 1 (7 Sep 2022), Session 2, TRAFFIC ASSIGNMENT, 14:00 - 16:00

Status
Accepted, documents submitted

Submitted by / Abstract owner
Arne Schneck

Authors
Arne Schneck

Short abstract
We present a fast algorithm for classical static user equilibrium traffic assignment. It outperforms previously published algorithms and is suitable for large networks.

Abstract
We present a new path-based algorithm for the static user equilibrium traffic assignment problem (UE-TAP). Despite the trend towards more sophisticated traffic models with new modes of transport, classical static traffic assignment remains the workhorse of most models used in practice. At the same time, models are getting larger, which increases the need for practical solution methods that can handle them in reasonable time.

Our algorithm combines several new ideas with some recently published improvements to path-based and link-based methods for traffic assignment. On publicly available networks it is faster than any published algorithm we know of, it can achieve high-precision solutions, and it can handle networks with millions of nodes on a typical desktop computer. In our presentation, we share the details of our method.

Path-based algorithms for traffic assignment typically alternate between two phases: a column generation phase, in which new paths found by a shortest-path search are added, and an equilibration phase, in which flow is shifted between paths and paths that no longer carry flow are removed.
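In rough outline, this alternation can be sketched in Python as follows; all names are placeholders for the phases just described, not the implementation in PTV Visum:

    def solve_user_equilibrium(generate_columns, equilibrate, relative_gap,
                               target_gap=1e-4, max_iterations=100):
        # Alternate the two phases until the convergence target is met.
        # The three callables are placeholders for the phases described above.
        gap = float("inf")
        for iteration in range(max_iterations):
            generate_columns()    # add newly found shortest paths to the path sets
            equilibrate()         # shift flow between stored paths, drop unused ones
            gap = relative_gap()  # read-only convergence measure
            if gap <= target_gap:
                break
        return gap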

For the column generation phase, we use the recently developed customizable contraction hierarchies (CCH). CCH is a speed-up technique for shortest-path search: it requires a preprocessing phase, but on large networks the subsequent searches are one to two orders of magnitude faster than with the classical Dijkstra algorithm. The shortest-path searches are run in parallel over the origin zones.
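To illustrate only the parallel structure of this phase, the following minimal Python sketch runs one read-only one-to-all search per origin concurrently; a plain Dijkstra search stands in for the CCH queries, and the helper names are ours, not the paper's:

    import heapq
    from concurrent.futures import ThreadPoolExecutor

    def one_to_all_search(adjacency, origin):
        # Toy one-to-all Dijkstra; in the actual method a CCH query takes this role.
        dist, pred = {origin: 0.0}, {origin: None}
        heap = [(0.0, origin)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue
            for succ, cost in adjacency.get(node, ()):
                nd = d + cost
                if nd < dist.get(succ, float("inf")):
                    dist[succ], pred[succ] = nd, node
                    heapq.heappush(heap, (nd, succ))
        return dist, pred    # pred encodes the shortest-path tree of this origin

    def search_all_origins(adjacency, origins):
        # The searches only read the network, so they can run concurrently.
        # (In pure Python the GIL limits the speed-up; native code does not.)
        with ThreadPoolExecutor() as pool:
            return list(pool.map(lambda o: one_to_all_search(adjacency, o), origins))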

In order to handle large networks on typical desktop computers, we employ a path compression scheme: instead of storing individual paths, we store path trees in a compressed form. The advantage is that shared parts of paths starting at the same origin zone are stored only once, which reduces memory usage.
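One simple way to realise such prefix sharing is to store each path as its last link plus a reference to the stored path it extends; this is only an illustrative sketch of the idea, not the compressed path trees used in our implementation:

    class PathStore:
        # Prefix-sharing storage for the paths of one origin zone (illustrative).
        # Entry i is (last_link, parent_index); walking parents back to entry 0
        # reconstructs the path, so common prefixes are stored only once.
        def __init__(self):
            self._entries = [(None, -1)]    # entry 0: empty path at the origin

        def extend(self, prefix_index, link):
            # Append one link to the stored path `prefix_index`; return the new index.
            self._entries.append((link, prefix_index))
            return len(self._entries) - 1

        def links(self, index):
            # Reconstruct the full link sequence of the path stored at `index`.
            result = []
            while index > 0:
                link, index = self._entries[index]
                result.append(link)
            return result[::-1]

Two paths that leave the origin on the same links then share those entries and differ only in their tails.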

The algorithmic design of our equilibration phase is based on two key observations. On the one hand, the actual equilibration procedure, which shifts volume between paths, is relatively expensive and not straightforward to parallelize. On the other hand, convergence is dominated by only a small share of poorly converged origin-destination (OD) pairs, and as long as no flows are changed, the identification of these poorly converged OD pairs can be executed in parallel.

Our equilibration phase therefore alternates between two subphases: first, a multi-stage parallel filtering procedure identifies the poorly converged OD pairs of a block of origin zones; then these OD pairs are equilibrated sequentially.
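A minimal sketch of this filter-then-equilibrate pattern, with placeholder callables and without the multi-stage structure of the actual filter, could look like this:

    from concurrent.futures import ThreadPoolExecutor

    def equilibrate_block(od_pairs, od_gap, equilibrate_od, tolerance):
        # od_gap(od) is a read-only convergence measure, so it may be evaluated
        # concurrently; equilibrate_od(od) changes flows and therefore runs
        # one OD pair at a time. Both callables are placeholders.
        with ThreadPoolExecutor() as pool:
            gaps = list(pool.map(od_gap, od_pairs))
        poorly_converged = [od for od, gap in zip(od_pairs, gaps) if gap > tolerance]
        for od in poorly_converged:
            equilibrate_od(od)
        return len(poorly_converged)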

For the actual equilibration of an OD pair, various methods have been suggested in the literature, but the differences between them are not dramatic. We propose yet another method. Firstly, we shift flow from the currently longest path to the currently shortest path of an OD pair. This idea is not new, but we use an accelerated implementation: we determine the parts that are used by only one of the two paths (paired alternative segments) and shift volume only along these usually much shorter segments. Secondly, to determine the amount of flow to shift, we perform a line search, but instead of the usually suggested bisection method, we use the Anderson-Björck variant of the false position method, which typically needs only a few iterations to converge. It does not require derivatives of the link travel time functions, which are not always available in practice.
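The Anderson-Björck iteration itself is standard; a minimal Python sketch is given below, where f would be the travel time difference between the two alternative segments as a function of the shifted flow. The bracketing interval [a, b] and all names are ours for illustration, not the PTV Visum code:

    def anderson_bjorck(f, a, b, tol=1e-9, max_iterations=100):
        # Root of f in [a, b] by the Anderson-Björck variant of false position.
        # Needs f(a) and f(b) with opposite signs; no derivatives of f are used.
        fa, fb = f(a), f(b)
        if fa * fb > 0.0:
            raise ValueError("f(a) and f(b) must bracket a root")
        for _ in range(max_iterations):
            c = b - fb * (b - a) / (fb - fa)    # false position (secant) point
            fc = f(c)
            if abs(fc) <= tol or abs(b - a) <= tol:
                return c
            if fc * fb < 0.0:
                a, fa = b, fb                   # the bracket becomes [b, c]
            else:
                # Endpoint a is retained again: scale its function value
                # (Anderson-Björck factor, with the Illinois factor 0.5 as fallback).
                m = 1.0 - fc / fb
                fa *= m if m > 0.0 else 0.5
            b, fb = c, fc
        return b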

Our algorithm has been implemented in the software package PTV Visum 2022, with further improvements in version 2023. On the publicly available Chicago Regional network, we currently reach a practically relevant gap of 1e-4 in 21 seconds on a high-end machine with 64 cores (1e-6: 31 seconds, 1e-8: 64 seconds) and in 36 seconds on an older desktop computer with 6 cores (1e-6: 57 seconds, 1e-8: 108 seconds). A national network with 2.7 million nodes and 20,680 zones converges to a gap of 1e-4 in 47 minutes on a high-end machine.

Programme committee
Transport Models