Multi-Agent Path-Finding (MAPF) Benchmarks
This page is part of Nathan Sturtevant's Moving AI Lab.
This page is focused on benchmark maps and problems for multi-agent path-finding.
A wide body of researchers uses gridworld domains as benchmarks. The goal of this page is to collect
benchmark problems and maps that can be broadly used and referenced for comparison and testing purposes.
Browse and download the MAPF Benchmark Sets
- Important: Please append your results running on these benchmarks to the benchmark page on mapf.info. This will help the community track improvements in solvers over time.
- 8/30/19: Added new maze map with 1x1 corridors to benchmark set
- 11/26/19: Added 4 new warehouse maps to benchmark set
- If you are interested in comparing your work to other published results, there is a wiki at http://www.mapf.info/ dedicated to this purpose.
- There are 25 benchmark sets of each of the two types described below (50 in total) for each of the maps.
- Each benchmark file has a list of start/goal locations. The intention is that one would add one agent at a time until an algorithm cannot solve the resulting problem within a given time/memory limit.
- One set of benchmarks has problems that are generated purely at random, capped at 1000 problems per file. The individual problems in these files tend to be longer.
- The other set of benchmarks has problems that are evenly distributed into buckets of 10 problems sharing the same value of (optimal length / 4). These files contain an even mix of short and long problems.
- TODO: Release the same instances as PDDL for use by planners (both optimal and satisficing). This will help compare the performance of general-purpose and problem-specific solvers.
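As a minimal sketch of how the benchmark files above might be consumed, the following Python code parses a scenario (.scen) file, assuming the "version 1" format used on this site (one header line, then nine whitespace-separated fields per entry: bucket, map name, map width, map height, start x, start y, goal x, goal y, optimal length). It also sketches the add-one-agent-at-a-time evaluation loop described above; the `solve` callable is hypothetical and stands in for whatever MAPF solver is being tested.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScenarioEntry:
    """One start/goal pair from a .scen benchmark file."""
    bucket: int
    map_name: str
    map_width: int
    map_height: int
    start_x: int
    start_y: int
    goal_x: int
    goal_y: int
    optimal_length: float

def load_scenario(path: str) -> List[ScenarioEntry]:
    """Parse a .scen file, assuming the 'version 1' format:
    one header line, then 9 whitespace-separated fields per entry."""
    entries: List[ScenarioEntry] = []
    with open(path) as f:
        f.readline()  # skip the header line, e.g. "version 1"
        for line in f:
            fields = line.split()
            if len(fields) < 9:
                continue  # skip blank/malformed lines
            entries.append(ScenarioEntry(
                bucket=int(fields[0]),
                map_name=fields[1],
                map_width=int(fields[2]),
                map_height=int(fields[3]),
                start_x=int(fields[4]),
                start_y=int(fields[5]),
                goal_x=int(fields[6]),
                goal_y=int(fields[7]),
                optimal_length=float(fields[8]),
            ))
    return entries

def max_solved_agents(entries: List[ScenarioEntry],
                      solve: Callable[[List[ScenarioEntry], float], bool],
                      limit_seconds: float = 30.0) -> int:
    """Add one agent at a time until `solve` (user-supplied,
    hypothetical) fails; return the largest number solved."""
    for n in range(1, len(entries) + 1):
        if not solve(entries[:n], limit_seconds):
            return n - 1
    return len(entries)
```

The loop reports the standard metric for these benchmarks: the maximum number of agents an algorithm can handle on a given scenario within the time/memory limit.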
If you would like to use these benchmarks in a paper, please cite the following paper which has an overview of the benchmarks:
@inproceedings{stern2019mapf,
title={Multi-Agent Pathfinding: Definitions, Variants, and Benchmarks},
author={Roni Stern and Nathan R. Sturtevant and Ariel Felner and Sven Koenig and Hang Ma and Thayne T. Walker and Jiaoyang Li and Dor Atzmon and Liron Cohen and T. K. Satish Kumar and Eli Boyarski and Roman Bartak},
booktitle={Symposium on Combinatorial Search (SoCS)},
year={2019},
pages={151--158}
}
This material is based upon work supported by the National Science Foundation under Grant No. 1815660.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the
author(s) and do not necessarily reflect the views of the National Science Foundation.
All data is made available under the Open Data Commons Attribution License