dipy
Diffusion Imaging in Python
For more information, please visit http://dipy.org
Subpackages
align -- Registration, streamline alignment, volume resampling
boots -- Bootstrapping algorithms
core -- Spheres, gradient tables
core.geometry -- Spherical geometry, coordinate and vector manipulation
core.meshes -- Point distributions on the sphere
data -- Small testing datasets
denoise -- Denoising algorithms
direction -- Manage peaks and tracking
io -- Loading/saving of dpy datasets
reconst -- Signal reconstruction modules (tensor, spherical harmonics, diffusion spectrum, etc.)
segment -- Tractography segmentation
sims -- MRI phantom signal simulation
tracking -- Tractography, metrics for streamlines
viz -- Visualization and GUIs
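As a quick orientation, the sketch below imports a few of these subpackages. It assumes a standard dipy installation; the specific modules shown (dipy.core.gradients, dipy.reconst.dti, dipy.tracking.utils) are illustrative and may vary between dipy versions.
>>> import dipy                                      #doctest: +SKIP
>>> from dipy.core.gradients import gradient_table   #doctest: +SKIP
>>> from dipy.reconst import dti                      #doctest: +SKIP
>>> from dipy.tracking import utils                   #doctest: +SKIP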
Utilities
test -- Run unittests
__version__ -- Dipy version
bench -- Run benchmarks for module using nose.
setup_test -- Set numpy print options to “legacy” for new versions of numpy
test -- Run tests for module using nose.
bench
dipy.bench(label='fast', verbose=1, extra_argv=None)
Run benchmarks for module using nose.
Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional
    Identifies the benchmarks to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:
    ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.
    ‘full’ - fast (as above) and slow benchmarks as in the ‘no -A’ option to nosetests - this is the same as ‘’.
    None or ‘’ - run all tests.
    attribute_identifier - string passed directly to nosetests as ‘-A’.
verbose : int, optional
    Verbosity value for benchmark outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
    List with any extra arguments to pass to nosetests.
Returns
success : bool
    Returns True if running the benchmarks works, False if an error occurred.
Notes
Benchmarks are like tests, but have names starting with “bench” instead of “test”, and can be found under the “benchmarks” sub-directory of the module.
Each NumPy module exposes bench in its namespace to run all benchmarks for it.
Examples
>>> success = np.lib.bench() #doctest: +SKIP
Running benchmarks for numpy.lib
...
using 562341 items:
unique:
0.11
unique1d:
0.11
ratio: 1.0
nUnique: 56230 == 56230
...
OK

>>> success #doctest: +SKIP
True
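The same entry point is exposed at the top level of the dipy package; the call below is a sketch that assumes nose and the dipy benchmark suite are installed (per the Returns section, success is True when the benchmarks run without error).
>>> import dipy                              #doctest: +SKIP
>>> success = dipy.bench('fast', verbose=1)  #doctest: +SKIP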
setup_test
dipy.setup_test()
Set numpy print options to “legacy” for new versions of numpy
If imported into a file, pytest will run this before any doctests.
References
https://github.com/numpy/numpy/commit/710e0327687b9f7653e5ac02d222ba62c657a718
https://github.com/numpy/numpy/commit/734b907fc2f7af6e40ec989ca49ee6d87e21c495
https://github.com/nipy/nibabel/pull/556
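The effect is to switch numpy back to its pre-1.14 array-printing style so that doctest output stays stable across numpy versions; a minimal sketch of the kind of call involved (assuming numpy >= 1.14, where the legacy option was introduced) is:
>>> import numpy as np
>>> np.set_printoptions(legacy='1.13')  #doctest: +SKIP
>>> np.array([0.1])                     #doctest: +SKIP
array([ 0.1])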
test
dipy.test(label='fast', verbose=1, extra_argv=None, doctests=False, coverage=False, raise_warnings=None, timer=False)
Run tests for module using nose.
Parameters
label : {‘fast’, ‘full’, ‘’, attribute identifier}, optional
    Identifies the tests to run. This can be a string to pass to the nosetests executable with the ‘-A’ option, or one of several special values. Special values are:
    ‘fast’ - the default - which corresponds to the nosetests -A option of ‘not slow’.
    ‘full’ - fast (as above) and slow tests as in the ‘no -A’ option to nosetests - this is the same as ‘’.
    None or ‘’ - run all tests.
    attribute_identifier - string passed directly to nosetests as ‘-A’.
verbose : int, optional
    Verbosity value for test outputs, in the range 1-10. Default is 1.
extra_argv : list, optional
    List with any extra arguments to pass to nosetests.
doctests : bool, optional
    If True, run doctests in module. Default is False.
coverage : bool, optional
    If True, report coverage of NumPy code. Default is False. (This requires the coverage module).
raise_warnings : None, str or sequence of warnings, optional
    This specifies which warnings to configure as ‘raise’ instead of being shown once during the test execution. Valid strings are:
    “develop” : equals (Warning,)
    “release” : equals (), do not raise on any warnings.
timer : bool or int, optional
    Timing of individual tests with nose-timer (which needs to be installed). If True, time tests and report on all of them. If an integer (say N), report timing results for the N slowest tests.
Returns
result : object
    Returns the result of running the tests as a nose.result.TextTestResult object.
Notes
Each NumPy module exposes test in its namespace to run all tests for it. For example, to run all tests for numpy.lib:
>>> np.lib.test() #doctest: +SKIP
Examples
>>> result = np.lib.test() #doctest: +SKIP
Running unit tests for numpy.lib
...
Ran 976 tests in 3.933s

OK

>>> result.errors #doctest: +SKIP
[]
>>> result.knownfail #doctest: +SKIP
[]
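For dipy itself the call is analogous; the sketch below assumes nose is installed and uses the doctests flag described above. An empty errors list would indicate a clean run.
>>> import dipy                                      #doctest: +SKIP
>>> result = dipy.test(label='fast', doctests=True)  #doctest: +SKIP
>>> result.errors                                    #doctest: +SKIP
[]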