DIPY Benchmarks
Benchmark DIPY with Airspeed Velocity (ASV) to measure the speed and performance of DIPY functions.
Prerequisites
Make sure you have the required tools installed:
pip install spin asv virtualenv
Getting Started
DIPY benchmarking uses spin, which handles building DIPY and running ASV
automatically. You do not need to manually install a development version of
DIPY into your current Python environment.
Running Benchmarks
To run all available benchmarks, navigate to the root DIPY directory and run:
spin bench
This builds DIPY and runs all benchmarks in the benchmarks/benchmarks/
directory. Each benchmark is run multiple times to measure execution time
distribution; be patient, this can take a while.
For quick local testing (each benchmark runs only once, timings less accurate):
spin bench --quick
To run benchmarks from a specific module, such as bench_segment.py:
spin bench -t bench_segment
To run a specific benchmark class, such as BenchQuickbundles:
spin bench -t bench_segment.BenchQuickbundles
To run benchmarks matching a pattern directly with ASV:
cd benchmarks/
asv run --dry-run --show-stderr --python=same --quick -b "bench.*Segment"
Comparing Results
To compare benchmark results between the current branch and master:
spin bench --compare
spin bench --compare master
spin bench --compare master HEAD
To compare a specific benchmark only:
spin bench -t bench_segment --compare
To save results for future comparisons and view them in a browser:
cd benchmarks/
asv run -n -e --python=same
asv publish
asv preview
Continuous Integration
Benchmarks run automatically on every push and pull request via the
Benchmarks / Linux CI check (see .github/workflows/benchmark.yml).
The CI workflow:
Installs DIPY with all dependencies
Sets single-threaded environment variables for reliable timings
Runs asv run against the current commit
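To reproduce the CI's single-threaded timing environment locally, you can export the usual thread-limiting variables before running the benchmarks. The exact set lives in .github/workflows/benchmark.yml; the variables below are the common ones and are an assumption here:

```shell
# Pin common threading libraries to a single thread so benchmark
# timings are not skewed by parallelism (assumed variable set;
# check .github/workflows/benchmark.yml for the authoritative list).
export OMP_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
# ...then run the benchmarks in this shell, e.g.: spin bench --quick
```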
Note
Benchmark results are not yet published to a public dashboard. Contributions to set up ASV gh-pages publishing are welcome!
Contributing
Want to add or improve a benchmark? Here's how:
Fork and clone the DIPY repository.
Create a new branch:
git checkout -b bench/my-new-benchmark
Add your benchmark in benchmarks/benchmarks/. Follow the naming convention bench_<module>.py (e.g., bench_tracking.py).
Test your benchmark locally:
spin bench -t bench_mymodule --quick
Open a pull request with a description of what you are benchmarking and why it is useful to track performance.
Writing Benchmarks
See the ASV documentation for full details.
Key guidelines:
The benchmark suite must be importable across multiple DIPY versions.
Benchmark parameters must not depend on which DIPY version is installed.
Keep individual benchmark runtimes reasonable (a few seconds at most).
Use ASV's time_ prefix for timing benchmarks and mem_ for memory usage.
Prepare large arrays and fixtures in setup() rather than in time_ methods, so setup cost is not included in the timing.
Avoid benchmarks that require network access or large file downloads.
Example benchmark:
class BenchMyFunction:
    def setup(self):
        # Imports are kept inside methods so the suite stays importable
        # across DIPY versions; data loading happens here, outside the timing.
        from dipy.data import get_fnames
        from dipy.io.streamline import load_tractogram

        fname = get_fnames(name="fornix")
        self.streamlines = load_tractogram(
            fname, "same", bbox_valid_check=False
        ).streamlines

    def time_my_function(self):
        from dipy.segment.clustering import QuickBundles

        qb = QuickBundles(threshold=10.0)
        qb.cluster(self.streamlines)
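The guidelines above also mention mem_ benchmarks and keeping parameters version-independent. A minimal sketch of a parameterized benchmark with both a time_ and a mem_ method, using only NumPy with hypothetical names (not part of the DIPY suite), might look like:

```python
import numpy as np


class BenchArrayOps:
    # ASV runs each benchmark once per parameter value; these sizes
    # are illustrative and do not depend on the DIPY version installed.
    params = [1_000, 100_000]
    param_names = ["n"]

    def setup(self, n):
        # Built in setup() so allocation cost is excluded from timings.
        self.data = np.random.default_rng(42).standard_normal(n)

    def time_sort(self, n):
        # time_ methods are timed; they should not return anything.
        np.sort(self.data)

    def mem_sorted_copy(self, n):
        # mem_ benchmarks return an object; ASV reports its size.
        return np.sort(self.data)
```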
Embrace the Speed!
You are all set to benchmark DIPY with ASV. Happy benchmarking! π