workflows#

Module: workflows.align#

ResliceFlow([output_strategy, mix_names, ...])

SlrWithQbxFlow([output_strategy, mix_names, ...])

ImageRegistrationFlow([output_strategy, ...])

The registration workflow allows the user to use only one type of registration (such as center of mass or rigid body registration only).

ApplyTransformFlow([output_strategy, ...])

SynRegistrationFlow([output_strategy, ...])

MotionCorrectionFlow([output_strategy, ...])

The Motion Correction workflow allows the user to align the volumes of a DWI dataset.

BundleWarpFlow([output_strategy, mix_names, ...])

check_dimensions(static, moving)

Check the dimensions of the input images.

Module: workflows.base#

IntrospectiveArgumentParser([prog, usage, ...])

get_args_default(func)

none_or_dtype(dtype)

Check None presence before type casting.

Module: workflows.cli#

run()

Run scripts located in pyproject.toml.

Module: workflows.combined_workflow#

CombinedWorkflow([output_strategy, ...])

Module: workflows.denoise#

Patch2SelfFlow([output_strategy, mix_names, ...])

NLMeansFlow([output_strategy, mix_names, ...])

LPCAFlow([output_strategy, mix_names, ...])

MPPCAFlow([output_strategy, mix_names, ...])

GibbsRingingFlow([output_strategy, ...])

Module: workflows.docstring_parser#

This was taken directly from the file docscrape.py of the numpydoc package.

Copyright (C) 2008 Stefan van der Walt <stefan@mentat.za.net>, Pauli Virtanen <pav@iki.fi>

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

  2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Reader(data)

A line-based string reader.

NumpyDocString(docstring[, config])

dedent_lines(lines)

Deindent a list of lines maximally

Module: workflows.flow_runner#

get_level(lvl)

Transforms the logging level passed on the commandline into a proper logging level name.

run_flow(flow)

Wraps the process of building an argparser that reflects the workflow that we want to run along with some generic parameters like logging, force and output strategies.

Module: workflows.io#

IoInfoFlow([output_strategy, mix_names, ...])

FetchFlow([output_strategy, mix_names, ...])

SplitFlow([output_strategy, mix_names, ...])

ConcatenateTractogramFlow([output_strategy, ...])

ConvertSHFlow([output_strategy, mix_names, ...])

ConvertTensorsFlow([output_strategy, ...])

ConvertTractogramFlow([output_strategy, ...])

Module: workflows.mask#

MaskFlow([output_strategy, mix_names, ...])

Module: workflows.multi_io#

IOIterator([output_strategy, mix_names])

Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command)

common_start(sa, sb)

Return the longest common substring from the beginning of sa and sb.

slash_to_under(dir_str)

connect_output_paths(inputs, out_dir, out_files)

Generate a list of output files paths based on input files and output strategies.

concatenate_inputs(multi_inputs)

Concatenate list of inputs.

basename_without_extension(fname)

io_iterator(inputs, out_dir, fnames[, ...])

Create an IOIterator from the parameters.

io_iterator_(frame, fnc[, output_strategy, ...])

Create an IOIterator using introspection.

Module: workflows.nn#

EVACPlusFlow([output_strategy, mix_names, ...])

Module: workflows.reconst#

ReconstMAPMRIFlow([output_strategy, ...])

ReconstDtiFlow([output_strategy, mix_names, ...])

ReconstDsiFlow([output_strategy, mix_names, ...])

ReconstCSDFlow([output_strategy, mix_names, ...])

ReconstCSAFlow([output_strategy, mix_names, ...])

ReconstDkiFlow([output_strategy, mix_names, ...])

ReconstIvimFlow([output_strategy, ...])

ReconstRUMBAFlow([output_strategy, ...])

Module: workflows.segment#

MedianOtsuFlow([output_strategy, mix_names, ...])

RecoBundlesFlow([output_strategy, ...])

LabelsBundlesFlow([output_strategy, ...])

Module: workflows.stats#

SNRinCCFlow([output_strategy, mix_names, ...])

BundleAnalysisTractometryFlow([...])

LinearMixedModelsFlow([output_strategy, ...])

BundleShapeAnalysis([output_strategy, ...])

buan_bundle_profiles(model_bundle_folder, ...)

Applies statistical analysis on bundles and saves the results in a directory specified by out_dir.

Module: workflows.tracking#

LocalFiberTrackingPAMFlow([output_strategy, ...])

PFTrackingPAMFlow([output_strategy, ...])

Module: workflows.viz#

HorizonFlow([output_strategy, mix_names, ...])

Module: workflows.workflow#

Workflow([output_strategy, mix_names, ...])

ResliceFlow#

class dipy.workflows.align.ResliceFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, new_vox_size, order=1, mode='constant', cval=0, num_processes=1, out_dir='', out_resliced='resliced.nii.gz')#

Reslice data with the new voxel resolution defined by new_vox_size.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

new_vox_size : variable float

New voxel size.

order : int, optional

Order of interpolation, from 0 to 5, for resampling/reslicing: 0 is nearest-neighbor interpolation, 1 is trilinear, and so on. If you do not want any smoothing, 0 is the option you need.

mode : string, optional

Points outside the boundaries of the input are filled according to the given mode: 'constant', 'nearest', 'reflect' or 'wrap'.

cval : float, optional

Value used for points outside the boundaries of the input if mode='constant'.

num_processes : int, optional

Split the calculation to a pool of children processes. This only applies to 4D data arrays. Default is 1. If < 0 the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

out_dir : string, optional

Output directory. (default current directory)

out_resliced : string, optional

Name of the resliced dataset to be saved.
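
A minimal usage sketch from Python (the file name and output directory are placeholders), reslicing to 2 mm isotropic voxels:

>>> from dipy.workflows.align import ResliceFlow
>>> ResliceFlow().run('dwi.nii.gz', [2.0, 2.0, 2.0], out_dir='resliced_out')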

SlrWithQbxFlow#

class dipy.workflows.align.SlrWithQbxFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(static_files, moving_files, x0='affine', rm_small_clusters=50, qbx_thr=(40, 30, 20, 15), num_threads=None, greater_than=50, less_than=250, nb_pts=20, progressive=True, out_dir='', out_moved='moved.trk', out_affine='affine.txt', out_stat_centroids='static_centroids.trk', out_moving_centroids='moving_centroids.trk', out_moved_centroids='moved_centroids.trk')#

Streamline-based linear registration.

For efficiency we apply the registration on cluster centroids and remove small clusters.

Parameters#

static_files : string

moving_files : string

x0 : string, optional

Rigid, similarity or affine transformation model.

rm_small_clusters : int, optional

Remove clusters that have fewer streamlines than rm_small_clusters.

qbx_thr : variable int, optional

Thresholds for QuickBundlesX.

num_threads : int, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of the OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus |num_threads + 1| is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.

greater_than : int, optional

Keep streamlines that have length greater than this value.

less_than : int, optional

Keep streamlines that have length less than this value.

nb_pts : int, optional

Number of points for discretizing each streamline.

progressive : boolean, optional

out_dir : string, optional

Output directory. (default current directory)

out_moved : string, optional

Filename of moved tractogram.

out_affine : string, optional

Filename of affine for SLR transformation.

out_stat_centroids : string, optional

Filename of static centroids.

out_moving_centroids : string, optional

Filename of moving centroids.

out_moved_centroids : string, optional

Filename of moved centroids.

Notes#

The order of operations is the following. First, streamlines that are too short or too long are removed. Second, the tractogram, or a random selection of it, is clustered with QuickBundlesX. Then SLR [Garyfallidis15] is applied.

References#

[Garyfallidis15]

Garyfallidis et al., "Robust and efficient linear registration of white-matter fascicles in the space of streamlines", NeuroImage, 117, 124-140, 2015.

[Garyfallidis14]

Garyfallidis et al., "Direct native-space fiber bundle alignment for group comparisons", ISMRM, 2014.

[Garyfallidis17]

Garyfallidis et al., "Recognition of white matter bundles using local and global streamline-based registration and clustering", NeuroImage, 2017.
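
A usage sketch (the .trk file names are placeholders), invoking the flow directly from Python:

>>> from dipy.workflows.align import SlrWithQbxFlow
>>> SlrWithQbxFlow().run('target.trk', 'subject.trk', x0='affine', out_dir='slr_out')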

ImageRegistrationFlow#

class dipy.workflows.align.ImageRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

The registration workflow allows the user to use only one type of registration (such as center of mass or rigid body registration only).

Alternatively, a registration can be done in a progressive manner. For example, using affine registration with progressive set to True will involve center of mass, translation, rigid body and full affine registration, whereas with progressive set to False the registration will include only center of mass and affine registration. Progressive registration is slower but improves the quality.

This can be controlled by using the progressive flag (True by default).

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(static_image_files, moving_image_files, transform='affine', nbins=32, sampling_prop=None, metric='mi', level_iters=(10000, 1000, 100), sigmas=(3.0, 1.0, 0.0), factors=(4, 2, 1), progressive=True, save_metric=False, out_dir='', out_moved='moved.nii.gz', out_affine='affine.txt', out_quality='quality_metric.txt')#

Parameters#

static_image_files : string

Path to the static image file.

moving_image_files : string

Path to the moving image file.

transform : string, optional

com: center of mass, trans: translation, rigid: rigid body, rigid_isoscaling: rigid body + isotropic scaling, rigid_scaling: rigid body + scaling, affine: full affine including translation, rotation, shearing and scaling.

nbins : int, optional

Number of bins to discretize the joint and marginal PDF.

sampling_prop : int, optional

Number ([0-100]) of voxels for calculating the PDF. 'None' implies all voxels.

metric : string, optional

Similarity metric for gathering mutual information.

level_iters : variable int, optional

The number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale and level_iters[-1] to the finest.

sigmas : variable floats, optional

Custom smoothing parameter to build the scale space (one parameter for each scale).

factors : variable floats, optional

Custom scale factors to build the scale space (one factor for each scale).

progressive : boolean, optional

Enable/Disable the progressive registration.

save_metric : boolean, optional

If true, the quality assessment metric is saved in 'quality_metric.txt'.

out_dir : string, optional

Directory to save the transformed image and the affine matrix (default current directory).

out_moved : string, optional

Name for the saved transformed image.

out_affine : string, optional

Name for the saved affine matrix.

out_quality : string, optional

Name of the file containing the saved quality metric.
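
A usage sketch (image file names are placeholders) for a progressive rigid body registration:

>>> from dipy.workflows.align import ImageRegistrationFlow
>>> ImageRegistrationFlow().run('b0_static.nii.gz', 't1_moving.nii.gz', transform='rigid', progressive=True)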

ApplyTransformFlow#

class dipy.workflows.align.ApplyTransformFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(static_image_files, moving_image_files, transform_map_file, transform_type='affine', out_dir='', out_file='transformed.nii.gz')#

Parameters#

static_image_files : string

Path of the static image file.

moving_image_files : string

Path of the moving image(s). It can be a single image or a folder containing multiple images.

transform_map_file : string

For the affine case, it should be a text (*.txt) file containing the affine matrix. For the diffeomorphic case, it should be a nifti file containing the mapping displacement field in each voxel, with shape (x, y, z, 3, 2).

transform_type : string, optional

Select the transformation type to apply, either 'affine' or 'diffeomorphic'.

out_dir : string, optional

Directory to save the transformed files (default current directory).

out_file : string, optional

Name of the transformed file. It is recommended to use the flag --mix-names to prevent the output files from being overwritten.
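
For example, applying a previously computed affine (file names are placeholders):

>>> from dipy.workflows.align import ApplyTransformFlow
>>> ApplyTransformFlow().run('b0_static.nii.gz', 't1_moving.nii.gz', 'affine.txt', transform_type='affine')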

SynRegistrationFlow#

class dipy.workflows.align.SynRegistrationFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(static_image_files, moving_image_files, prealign_file='', inv_static=False, level_iters=(10, 10, 5), metric='cc', mopt_sigma_diff=2.0, mopt_radius=4, mopt_smooth=0.0, mopt_inner_iter=0, mopt_q_levels=256, mopt_double_gradient=True, mopt_step_type='', step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, out_dir='', out_warped='warped_moved.nii.gz', out_inv_static='inc_static.nii.gz', out_field='displacement_field.nii.gz')#

Parameters#

static_image_files : string

Path of the static image file.

moving_image_files : string

Path to the moving image file.

prealign_file : string, optional

The text file containing pre-alignment information via an affine matrix.

inv_static : boolean, optional

Apply the inverse mapping to the static image.

level_iters : variable int, optional

The number of iterations at each level of the gaussian pyramid.

metric : string, optional

The metric to be used. Available metrics: cc (Cross Correlation), ssd (Sum Squared Difference), em (Expectation-Maximization).

mopt_sigma_diff : float, optional

Metric option applied on Cross Correlation (CC). The standard deviation of the Gaussian smoothing kernel to be applied to the update field at each iteration.

mopt_radius : int, optional

Metric option applied on Cross Correlation (CC). The radius of the squared (cubic) neighborhood at each voxel to be considered to compute the cross correlation.

mopt_smooth : float, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation-Maximization (EM). Smoothness parameter; the larger the value the smoother the deformation field. (default 1.0 for EM, 4.0 for SSD)

mopt_inner_iter : int, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation-Maximization (EM). The number of iterations to be performed at each level of the multi-resolution Gauss-Seidel optimization algorithm (this is not the number of steps per Gaussian pyramid level; that parameter must be set for the optimizer, not the metric). Default 5 for EM, 10 for SSD.

mopt_q_levels : int, optional

Metric option applied on Expectation-Maximization (EM). Number of quantization levels. (Default: 256 for EM)

mopt_double_gradient : bool, optional

Metric option applied on Expectation-Maximization (EM). If True, the gradient of the expected static image under the moving modality will be added to the gradient of the moving image and, similarly, the gradient of the expected moving image under the static modality will be added to the gradient of the static image.

mopt_step_type : string, optional

Metric option applied on Sum Squared Difference (SSD) and Expectation-Maximization (EM). The optimization schedule to be used in the multi-resolution Gauss-Seidel optimization algorithm (not used if Demons Step is selected). Possible values: 'gauss_newton', 'demons'. Default: 'gauss_newton' for EM, 'demons' for SSD.

step_length : float, optional

The length of the maximum displacement vector of the update displacement field at each iteration.

ss_sigma_factor : float, optional

Parameter of the scale-space smoothing kernel. For example, the std. dev. of the kernel will be factor*(2^i) in the isotropic case, where i = 0, 1, ..., n_scales is the scale.

opt_tol : float, optional

The optimization will stop when the estimated derivative of the energy profile w.r.t. time falls below this threshold.

inv_iter : int, optional

The number of iterations to be performed by the displacement field inversion algorithm.

inv_tol : float, optional

The displacement field inversion algorithm will stop iterating when the inversion error falls below this threshold.

out_dir : string, optional

Directory to save the transformed files (default current directory).

out_warped : string, optional

Name of the warped file.

out_inv_static : string, optional

Name of the file to save the static image after applying the inverse mapping.

out_field : string, optional

Name of the file to save the diffeomorphic map.
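
A usage sketch (file names are placeholders), pre-aligning with a stored affine and using the CC metric:

>>> from dipy.workflows.align import SynRegistrationFlow
>>> SynRegistrationFlow().run('b0_static.nii.gz', 't1_moving.nii.gz', prealign_file='affine.txt', metric='cc')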

MotionCorrectionFlow#

class dipy.workflows.align.MotionCorrectionFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

The Motion Correction workflow allows the user to align the volumes of a DWI dataset to correct for between-volume motion.

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

run(input_files, bvalues_files, bvectors_files, b0_threshold=50, bvecs_tol=0.01, out_dir='', out_moved='moved.nii.gz', out_affine='affine.txt')#

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that the b-vectors are unit vectors.

out_dir : string, optional

Directory to save the transformed image and the affine matrix (default current directory).

out_moved : string, optional

Name for the saved transformed image.

out_affine : string, optional

Name for the saved affine matrix.
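
A usage sketch (file names are placeholders):

>>> from dipy.workflows.align import MotionCorrectionFlow
>>> MotionCorrectionFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec', out_dir='moco')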

BundleWarpFlow#

class dipy.workflows.align.BundleWarpFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(static_file, moving_file, dist=None, alpha=0.3, beta=20, max_iter=15, affine=True, out_dir='', out_linear_moved='linearly_moved.trk', out_nonlinear_moved='nonlinearly_moved.trk', out_warp_transform='warp_transform.npy', out_warp_kernel='warp_kernel.npy', out_dist='distance_matrix.npy', out_matched_pairs='matched_pairs.npy')#

BundleWarp: streamline-based nonlinear registration.

BundleWarp is a nonrigid registration method for deformable registration of white matter tracts.

Parameters#

static_file : string

Path to the static (reference) .trk file.

moving_file : string

Path to the moving (target to be registered) .trk file.

dist : string, optional

Path to the precalculated distance matrix file.

alpha : float, optional

Represents the trade-off between regularizing the deformation and having points match very closely. A lower value of alpha means higher deformation. It is denoted by λ in the BundleWarp paper. NOTE: setting alpha <= 0.01 will result in a highly deformable registration that could extremely modify the original anatomy of the moving bundle. (default 0.3)

beta : int, optional

Represents the strength of the interaction between points (Gaussian kernel size). (default 20)

max_iter : int, optional

Maximum number of iterations for the deformation process in the ml-CPD method. (default 15)

affine : boolean, optional

If False, use rigid registration as the starting point. (default True)

out_dir : string, optional

Output directory. (default current directory)

out_linear_moved : string, optional

Filename of linearly moved bundle.

out_nonlinear_moved : string, optional

Filename of nonlinearly moved (warped) bundle.

out_warp_transform : string, optional

Filename of warp transformations generated by BundleWarp.

out_warp_kernel : string, optional

Filename of regularization gaussian kernel generated by BundleWarp.

out_dist : string, optional

Filename of MDF distance matrix.

out_matched_pairs : string, optional

Filename of matched pairs; streamline correspondences between the two bundles.

References#

[Chandio2023]

Chandio et al. “BundleWarp, streamline-based nonlinear registration of white matter tracts.” bioRxiv (2023): 2023-01.
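
A usage sketch with placeholder bundle file names:

>>> from dipy.workflows.align import BundleWarpFlow
>>> BundleWarpFlow().run('atlas_AF_L.trk', 'subject_AF_L.trk', alpha=0.3, beta=20)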

check_dimensions#

dipy.workflows.align.check_dimensions(static, moving)#

Check the dimensions of the input images.

Parameters#

static : 2D or 3D array

The image to be used as reference during optimization.

moving : 2D or 3D array

The image to be used as "moving" during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by "pre-aligning" the moving image towards the static using an affine transformation given by the 'starting_affine' matrix.

IntrospectiveArgumentParser#

class dipy.workflows.base.IntrospectiveArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=(), formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)#

Bases: ArgumentParser

__init__(prog=None, usage=None, description=None, epilog=None, parents=(), formatter_class=<class 'argparse.RawTextHelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='resolve', add_help=True)#

Augmenting the argument parser to allow automatic creation of arguments from workflows

Parameters#

prog : None

The name of the program. (default: sys.argv[0])

usage : None

A usage message. (default: auto-generated from arguments)

description : str

A description of what the program does.

epilog : str

Text following the argument descriptions.

parents : list

Parsers whose arguments should be copied into this one.

formatter_class : obj

HelpFormatter class for printing help messages.

prefix_chars : str

Characters that prefix optional arguments.

fromfile_prefix_chars : None

Characters that prefix files containing additional arguments.

argument_default : None

The default value for all arguments.

conflict_handler : str

String indicating how to handle conflicts.

add_help : bool

Add a -h/--help option.

add_description()#
add_epilogue()#
add_sub_flow_args(sub_flows)#

Take an array of workflow objects and use introspection to extract the parameters, types and docstrings of their run method. Only the optional input parameters are extracted for these as they are treated as sub workflows.

Parameters#

sub_flows : array of dipy.workflows.workflow.Workflow

Workflows to inspect.

Returns#

sub_flow_optionals : dictionary of all sub workflow optional parameters

add_workflow(workflow)#

Take a workflow object and use introspection to extract the parameters, types and docstrings of its run method. Then add these parameters to the current argparser's own params to parse. If the workflow is of type combined_workflow, the optional input parameters of its sub workflows will also be added.

Parameters#

workflow : dipy.workflows.workflow.Workflow

Workflow from which to infer parameters.

Returns#

sub_flow_optionals : dictionary of all sub workflow optional parameters

get_flow_args(args=None, namespace=None)#

Return the parsed arguments as a dictionary that will be used as a workflow’s run method arguments.

property optional_parameters#
property output_parameters#
property positional_parameters#
show_argument(dest)#
update_argument(*args, **kargs)#

get_args_default#

dipy.workflows.base.get_args_default(func)#

none_or_dtype#

dipy.workflows.base.none_or_dtype(dtype)#

Check None presence before type casting.

run#

dipy.workflows.cli.run()#

Run scripts located in pyproject.toml.

CombinedWorkflow#

class dipy.workflows.combined_workflow.CombinedWorkflow(output_strategy='append', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='append', mix_names=False, force=False, skip=False)#

Workflow that combines multiple workflows. The workflows combined together are referred to as sub flows in this class.

get_optionals(flow, **kwargs)#

Returns the sub flow’s optional arguments merged with those passed as params in kwargs.

get_sub_runs()#

Returns a list of tuples (sub flow name, sub flow run method, sub flow short name) to be used in the sub flow parameters extraction.

run_sub_flow(flow, *args, **kwargs)#

Runs the sub flow with the optional parameters passed via the command line. This is a convenience method to make sub flow running more intuitive on the concrete CombinedWorkflow side.

set_sub_flows_optionals(opts)#

Sets the self._optionals variable with all sub flow arguments that were passed in the commandline.

Patch2SelfFlow#

class dipy.workflows.denoise.Patch2SelfFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bval_files, model='ols', b0_threshold=50, alpha=1.0, verbose=False, patch_radius=0, b0_denoising=True, clip_negative_vals=False, shift_intensity=True, out_dir='', out_denoised='dwi_patch2self.nii.gz')#

Workflow for Patch2Self denoising method.

It applies Patch2Self denoising on each file found by 'globbing' input_files and bval_files. It saves the results in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bval_files : string

bval file associated with the diffusion data.

model : string or initialized linear model object, optional

This will determine the algorithm used to solve the set of linear equations underlying this model. If it is a string it needs to be one of the following: {'ols', 'ridge', 'lasso'}. Otherwise, it can be an object that inherits from dipy.optimize.SKLearnLinearSolver or an object with a similar interface from Scikit-Learn: sklearn.linear_model.LinearRegression, sklearn.linear_model.Lasso or sklearn.linear_model.Ridge and other objects that inherit from sklearn.base.RegressorMixin. Default: 'ols'.

b0_threshold : int, optional

Threshold for considering volumes as b0.

alpha : float, optional

Regularization parameter; only for the ridge regression model.

verbose : bool, optional

Show progress of Patch2Self and time taken.

patch_radius : variable int, optional

The radius of the local patch to be taken around each voxel.

b0_denoising : bool, optional

Skips denoising b0 volumes if set to False.

clip_negative_vals : bool, optional

Sets negative values after denoising to 0 using np.clip.

shift_intensity : bool, optional

Shifts the distribution of intensities per volume to give non-negative values.

out_dir : string, optional

Output directory. (default current directory)

out_denoised : string, optional

Name of the resulting denoised volume. (default: dwi_patch2self.nii.gz)

References#

[Fadnavis20]

S. Fadnavis, J. Batson, E. Garyfallidis, Patch2Self: Denoising Diffusion MRI with Self-supervised Learning, Advances in Neural Information Processing Systems 33 (2020)
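
A usage sketch (file names are placeholders):

>>> from dipy.workflows.denoise import Patch2SelfFlow
>>> Patch2SelfFlow().run('dwi.nii.gz', 'dwi.bval', model='ols')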

NLMeansFlow#

class dipy.workflows.denoise.NLMeansFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, sigma=0, patch_radius=1, block_radius=5, rician=True, out_dir='', out_denoised='dwi_nlmeans.nii.gz')#

Workflow wrapping the nlmeans denoising method.

It applies nlmeans denoising on each file found by 'globbing' input_files and saves the results in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

sigma : float, optional

Sigma parameter to pass to the nlmeans algorithm.

patch_radius : int, optional

Patch size is 2 x patch_radius + 1.

block_radius : int, optional

Block size is 2 x block_radius + 1.

rician : bool, optional

If True the noise is estimated as Rician, otherwise Gaussian noise is assumed.

out_dir : string, optional

Output directory. (default current directory)

out_denoised : string, optional

Name of the resulting denoised volume.

References#

[Descoteaux08]

Descoteaux, Maxime and Wiest-Daesslé, Nicolas and Prima, Sylvain and Barillot, Christian and Deriche, Rachid. Impact of Rician Adapted Non-Local Means Filtering on HARDI, MICCAI 2008.
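
A usage sketch with a placeholder file name and the documented defaults:

>>> from dipy.workflows.denoise import NLMeansFlow
>>> NLMeansFlow().run('dwi.nii.gz', sigma=0, patch_radius=1, block_radius=5)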

LPCAFlow#

class dipy.workflows.denoise.LPCAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, sigma=0, b0_threshold=50, bvecs_tol=0.01, patch_radius=2, pca_method='eig', tau_factor=2.3, out_dir='', out_denoised='dwi_lpca.nii.gz')#

Workflow wrapping LPCA denoising method.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

sigma : float, optional

Standard deviation of the noise estimated from the data. Default 0: the sigma value is estimated with the Manjon 2013 algorithm [3].

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that the b-vectors are unit vectors.

patch_radius : int, optional

The radius of the local patch to be taken around each voxel (in voxels). For example, for a patch radius of 2, and assuming the input image is a 3D image, the denoising will take place in blocks of 5x5x5 voxels.

pca_method : string, optional

Use either eigenvalue decomposition ('eig') or singular value decomposition ('svd') for principal component analysis. The default method is 'eig', which is faster. However, occasionally 'svd' might be more accurate.

tau_factor : float, optional

Thresholding of PCA eigenvalues is done by nulling out eigenvalues that are smaller than:

\[\tau = (\tau_{factor} \sigma)^2\]

tau_factor can be changed to adjust the relationship between the noise standard deviation and the threshold tau. If tau_factor is set to None, it will be automatically calculated using the Marcenko-Pastur distribution [2].

out_dir : string, optional

Output directory. (default current directory)

out_denoised : string, optional

Name of the resulting denoised volume.

References#

Fieremans E, 2016. Denoising of Diffusion MRI using random matrix theory. Neuroimage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016

mapping using random matrix theory. Magnetic Resonance in Medicine. doi: 10.1002/mrm.26059.

Diffusion Weighted Image Denoising Using Overcomplete Local PCA. PLoS ONE 8(9): e73021. https://doi.org/10.1371/journal.pone.0073021
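
A usage sketch (file names are placeholders):

>>> from dipy.workflows.denoise import LPCAFlow
>>> LPCAFlow().run('dwi.nii.gz', 'dwi.bval', 'dwi.bvec', pca_method='eig')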

MPPCAFlow#

class dipy.workflows.denoise.MPPCAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, patch_radius=2, pca_method='eig', return_sigma=False, out_dir='', out_denoised='dwi_mppca.nii.gz', out_sigma='dwi_sigma.nii.gz')#

Workflow wrapping Marcenko-Pastur PCA denoising method.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

patch_radius : variable int, optional

The radius of the local patch to be taken around each voxel (in voxels). For example, for a patch radius of 2, and assuming the input image is a 3D image, the denoising will take place in blocks of 5x5x5 voxels.

pca_method : string, optional

Use either eigenvalue decomposition ('eig') or singular value decomposition ('svd') for principal component analysis. The default method is 'eig', which is faster. However, occasionally 'svd' might be more accurate.

return_sigma : bool, optional

If true, a noise standard deviation estimate based on the Marcenko-Pastur distribution is returned [2].

out_dir : string, optional

Output directory. (default current directory)

out_denoised : string, optional

Name of the resulting denoised volume.

out_sigma : string, optional

Name of the resulting sigma volume.

References#

[1] Veraart, J., Novikov, D.S., Christiaens, D., Ades-aron, B., Sijbers, J., Fieremans, E., 2016. Denoising of diffusion MRI using random matrix theory. NeuroImage 142:394-406. doi: 10.1016/j.neuroimage.2016.08.016

[2] Veraart, J., Fieremans, E., Novikov, D.S., 2016. Diffusion MRI noise mapping using random matrix theory. Magnetic Resonance in Medicine. doi: 10.1002/mrm.26059
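
A usage sketch (the file name is a placeholder):

>>> from dipy.workflows.denoise import MPPCAFlow
>>> MPPCAFlow().run('dwi.nii.gz', patch_radius=2, return_sigma=True)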

GibbsRingingFlow#

class dipy.workflows.denoise.GibbsRingingFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, slice_axis=2, n_points=3, num_processes=1, out_dir='', out_unring='dwi_unring.nii.gz')#

Workflow for applying Gibbs Ringing method.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

slice_axis : int, optional

Data axis corresponding to the number of acquired slices. Can be 0, 1 or 2; for example, a value of 2 would mean the third axis.

n_points : int, optional

Number of neighbour points to access local TV (see note).

num_processes : int or None, optional

Split the calculation to a pool of children processes. Only applies to 3D or 4D data arrays. Default is 1. If < 0 the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

out_dir : string, optional

Output directory. (default current directory)

out_unring : string, optional

Name of the resulting denoised volume.

References#

Neto Henriques, R., 2018. Advanced Methods for Diffusion MRI Data Analysis and their Application to the Healthy Ageing Brain (Doctoral thesis). https://doi.org/10.17863/CAM.29356

Kellner, E., Dhital, B., Kiselev, V.G., Reisert, M. Gibbs-ringing artifact removal based on local subvoxel-shifts. Magn Reson Med. 2016. doi: 10.1002/mrm.26054
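
A usage sketch (the file name is a placeholder):

>>> from dipy.workflows.denoise import GibbsRingingFlow
>>> GibbsRingingFlow().run('dwi.nii.gz', slice_axis=2, num_processes=-1)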

Reader#

class dipy.workflows.docstring_parser.Reader(data)#

Bases: object

A line-based string reader.

__init__(data)#
data : str

String with lines separated by '\n'.

eof()#
is_empty()#
peek(n=0)#
read()#
read_to_condition(condition_func)#
read_to_next_empty_line()#
read_to_next_unindented_line()#
reset()#
seek_next_non_empty_line()#

NumpyDocString#

class dipy.workflows.docstring_parser.NumpyDocString(docstring, config={})#

Bases: object

__init__(docstring, config={})#

dedent_lines#

dipy.workflows.docstring_parser.dedent_lines(lines)#

Deindent a list of lines maximally

get_level#

dipy.workflows.flow_runner.get_level(lvl)#

Transforms the logging level passed on the commandline into a proper logging level name.

run_flow#

dipy.workflows.flow_runner.run_flow(flow)#

Wraps the process of building an argparser that reflects the workflow that we want to run along with some generic parameters like logging, force and output strategies. The resulting parameters are then fed to the workflow’s run method.
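
As a sketch of the typical entry-point pattern (MedianOtsuFlow stands in for any concrete workflow):

>>> from dipy.workflows.flow_runner import run_flow
>>> from dipy.workflows.segment import MedianOtsuFlow
>>> if __name__ == "__main__":
...     run_flow(MedianOtsuFlow())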

IoInfoFlow#

class dipy.workflows.io.IoInfoFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, b0_threshold=50, bvecs_tol=0.01, bshell_thr=100, reference=None)#

Provides useful information about different files used in medical imaging. Any number of input files can be provided. The program identifies the type of file by its extension.

Parameters#

input_files : variable string

Any number of Nifti1, bvals or bvecs files.

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol, i.e. that the b-vectors are unit vectors.

bshell_thr : float, optional

Threshold for distinguishing b-values in different shells.

reference : string, optional

Reference anatomy for tck/vtk/fib/dpy files (.nii or .nii.gz supported).

FetchFlow#

class dipy.workflows.io.FetchFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

static get_fetcher_datanames()#

Gets available dataset and function names.

Returns#

available_data: dict

Available dataset and function names.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

static load_module(module_path)#

Load / reload an external module.

Parameters#

module_path: string

the path to the module relative to the main script

Returns#

module: module object

run(data_names, out_dir='')#

Download files to folder and check their md5 checksums.

To see all available datasets, please type “list” in data_names.

Parameters#

data_names : variable string

Any number of dataset names.

out_dir : string, optional

Output directory. (default current directory)
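
For example (stanford_hardi is one of the available dataset names):

>>> from dipy.workflows.io import FetchFlow
>>> FetchFlow().run(['list'])  # print every available dataset name
>>> FetchFlow().run(['stanford_hardi'], out_dir='data')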

SplitFlow#

class dipy.workflows.io.SplitFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, vol_idx=0, out_dir='', out_split='split.nii.gz')#

Splits the input 4D file and extracts the required 3D volume.

Parameters#

input_files : variable string

Any number of Nifti1 files.

vol_idx : int, optional

Index of the 3D volume to extract.

out_dir : string, optional

Output directory. (default current directory)

out_split : string, optional

Name of the resulting split volume.
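
A usage sketch (file names are placeholders), extracting the first volume:

>>> from dipy.workflows.io import SplitFlow
>>> SplitFlow().run('dwi.nii.gz', vol_idx=0, out_split='b0.nii.gz')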

ConcatenateTractogramFlow#

class dipy.workflows.io.ConcatenateTractogramFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(tractogram_files, reference=None, delete_dpv=False, delete_dps=False, delete_groups=False, check_space_attributes=True, preallocation=False, out_dir='', out_extension='trx', out_tractogram='concatenated_tractogram')#

Concatenate multiple tractograms into one.

Parameters#

tractogram_files : variable string

The stateful tractogram filenames to concatenate.

reference : string, optional

Reference anatomy for tck/vtk/fib/dpy files (.nii or .nii.gz supported).

delete_dpv : bool, optional

Delete dpv keys that do not exist in all the provided TrxFiles.

delete_dps : bool, optional

Delete dps keys that do not exist in all the provided TrxFiles.

delete_groups : bool, optional

Delete all the groups that currently exist in the TrxFiles.

check_space_attributes : bool, optional

Verify that dimensions and size of data are similar between all the TrxFiles.

preallocation : bool, optional

The preallocated TrxFile has already been generated and is the first element in trx_list. (Note: delete_groups must be set to True as well.)

out_dir : string, optional

Output directory. (default current directory)

out_extension : string, optional

Extension of the resulting tractogram.

out_tractogram : string, optional

Name of the resulting tractogram.

ConvertSHFlow#

class dipy.workflows.io.ConvertSHFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, out_dir='', out_file='sh_convert_dipy_mrtrix_out.nii.gz')#

Converts SH basis representation between DIPY and MRtrix3 formats. Because this conversion is equal to its own inverse, it can be used to convert in either direction: DIPY to MRtrix3 or vice versa.

Parameters#

input_files : string

Path to the input files. This path may contain wildcards to process multiple inputs at once.

out_dir : string, optional

Where the resulting file will be saved. (default '')

out_file : string, optional

Name of the result file to be saved. (default 'sh_convert_dipy_mrtrix_out.nii.gz')

ConvertTensorsFlow#

class dipy.workflows.io.ConvertTensorsFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(tensor_files, from_format='mrtrix', to_format='dipy', out_dir='.', out_tensor='converted_tensor')#

Converts tensor representation between different formats.

Parameters#

tensor_files : variable string

Any number of tensor files.

from_format : string, optional

Format of the input tensor files. Valid options are 'dipy', 'mrtrix', 'ants', 'fsl'.

to_format : string, optional

Format of the output tensor files. Valid options are 'dipy', 'mrtrix', 'ants', 'fsl'.

out_dir : string, optional

Output directory. (default current directory)

out_tensor : string, optional

Name of the resulting tensor file.

ConvertTractogramFlow#

class dipy.workflows.io.ConvertTractogramFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, reference=None, pos_dtype='float32', offsets_dtype='uint32', out_dir='', out_tractogram='converted_tractogram.trk')#

Converts tractogram between different formats.

Parameters#

input_files : variable string

Any number of tractogram files.

reference : string, optional

Reference anatomy for tck/vtk/fib/dpy files (.nii or .nii.gz supported).

pos_dtype : string, optional

Data type of the tractogram points, used for vtk files.

offsets_dtype : string, optional

Data type of the tractogram offsets, used for vtk files.

out_dir : string, optional

Output directory. (default current directory)

out_tractogram : string, optional

Name of the resulting tractogram.

MaskFlow#

class dipy.workflows.mask.MaskFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, a combined workflow with DTI reconstruction and CSD reconstruction might end up with two b0_threshold parameters. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, lb, ub=inf, out_dir='', out_mask='mask.nii.gz')#

Workflow for creating a binary mask.

Parameters#

input_files : string

Path to image to be masked.

lb : float

Lower bound value.

ub : float, optional

Upper bound value.

out_dir : string, optional

Output directory. (default current directory)

out_mask : string, optional

Name of the masked file.
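
For example, thresholding an FA map at 0.2 (file names are placeholders):

>>> from dipy.workflows.mask import MaskFlow
>>> MaskFlow().run('fa.nii.gz', 0.2, out_mask='fa_mask.nii.gz')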

IOIterator#

class dipy.workflows.multi_io.IOIterator(output_strategy='absolute', mix_names=False)#

Bases: object

Create output filenames that work nicely with multiple input files from multiple directories (processing multiple subjects with one command)

Use information from input files, out_dir and out_fnames to generate correct outputs which can come from long lists of multiple or single inputs.

__init__(output_strategy='absolute', mix_names=False)#
create_directories()#
create_outputs()#
file_existence_check(args)#
set_inputs(*args)#
set_out_dir(out_dir)#
set_out_fnames(*args)#
set_output_keys(*args)#

common_start#

dipy.workflows.multi_io.common_start(sa, sb)#

Return the longest common substring from the beginning of sa and sb.

slash_to_under#

dipy.workflows.multi_io.slash_to_under(dir_str)#

connect_output_paths#

dipy.workflows.multi_io.connect_output_paths(inputs, out_dir, out_files, output_strategy='absolute', mix_names=True)#

Generate a list of output files paths based on input files and output strategies.

Parameters#

inputs : array

List of input paths.

out_dir : string

The output directory.

out_files : array

List of output files.

output_strategy : string, optional

Which strategy to use to generate the output paths:

  • ‘append’: Add out_dir to the path of the input.

  • ‘prepend’: Add the input path directory tree to out_dir.

  • ‘absolute’: Put directly in out_dir.

mix_names : bool, optional

Whether or not to prepend a string composed of a mix of the input names to the final output name.

Returns#

A list of output file paths.

concatenate_inputs#

dipy.workflows.multi_io.concatenate_inputs(multi_inputs)#

Concatenate list of inputs.

basename_without_extension#

dipy.workflows.multi_io.basename_without_extension(fname)#

io_iterator#

dipy.workflows.multi_io.io_iterator(inputs, out_dir, fnames, output_strategy='absolute', mix_names=False, out_keys=None)#

Create an IOIterator from the parameters.

Parameters#

inputs : array

List of input files.

out_dir : string

Output directory.

fnames : array

File names of all outputs to be created.

output_strategy : string, optional

Controls the behavior of the IOIterator for output paths.

mix_names : bool, optional

Whether or not to append a mix of input names at the beginning.

out_keys : list, optional

Output parameter names.

Returns#

Properly instantiated IOIterator object.
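A hedged sketch, assuming input strings may be globs and that the iterator yields one tuple of matched input and output paths per input file:

>>> from dipy.workflows.multi_io import io_iterator
>>> io_it = io_iterator(["sub-*/dwi.nii.gz"], "out", ["brain_mask.nii.gz"])
>>> for dwi_path, mask_path in io_it:
...     print(dwi_path, "->", mask_path)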

io_iterator_#

dipy.workflows.multi_io.io_iterator_(frame, fnc, output_strategy='absolute', mix_names=False)#

Create an IOIterator using introspection.

Parameters#

frame : frameobject

Contains the info about the current local variable values.

fnc : function

The function to inspect.

output_strategy : string

Controls the behavior of the IOIterator for output paths.

mix_names : bool

Whether or not to append a mix of input names at the beginning.

Returns#

Properly instantiated IOIterator object.

EVACPlusFlow#

class dipy.workflows.nn.EVACPlusFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, save_masked=False, out_dir='', out_mask='brain_mask.nii.gz', out_masked='dwi_masked.nii.gz')#

Extract brain using EVAC+.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

save_masked : bool, optional

Save mask.

out_dir : string, optional

Output directory. (default current directory)

out_mask : string, optional

Name of the mask volume to be saved.

out_masked : string, optional

Name of the masked volume to be saved.

References#

[Park2022]

Park, J.S., Fadnavis, S., & Garyfallidis, E. (2022). EVAC+: Multi-scale V-net with Deep Feature CRF Layers for Brain Extraction.
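A minimal usage sketch (the T1 file name is a placeholder):

>>> from dipy.workflows.nn import EVACPlusFlow
>>> EVACPlusFlow().run("t1.nii.gz", save_masked=True,
...                    out_mask="brain_mask.nii.gz")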

ReconstMAPMRIFlow#

class dipy.workflows.reconst.ReconstMAPMRIFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(data_files, bvals_files, bvecs_files, small_delta, big_delta, b0_threshold=50.0, laplacian=True, positivity=True, bval_threshold=2000, save_metrics=(), laplacian_weighting=0.05, radial_order=6, out_dir='', out_rtop='rtop.nii.gz', out_lapnorm='lapnorm.nii.gz', out_msd='msd.nii.gz', out_qiv='qiv.nii.gz', out_rtap='rtap.nii.gz', out_rtpp='rtpp.nii.gz', out_ng='ng.nii.gz', out_perng='perng.nii.gz', out_parng='parng.nii.gz')#

Workflow for fitting the MAPMRI model (with optional Laplacian regularization). Generates rtop, lapnorm, msd, qiv, rtap, rtpp, non-Gaussianity (ng), parallel ng, and perpendicular ng maps from the input files provided by data_files, and saves the nifti files to an output directory specified by out_dir.

In order for the MAPMRI workflow to work as intended, either laplacian or positivity (or both) must be set to True.

Parameters#

data_files : string

Path to the input volume.

bvals_files : string

Path to the bval files.

bvecs_files : string

Path to the bvec files.

small_delta : float

Small delta value used in generation of the gradient table of the provided bval and bvec.

big_delta : float

Big delta value used in generation of the gradient table of the provided bval and bvec.

b0_threshold : float, optional

Threshold used to find b0 volumes.

laplacian : bool, optional

Regularize using the Laplacian of the MAP-MRI basis.

positivity : bool, optional

Constrain the propagator to be positive.

bval_threshold : float, optional

Sets the b-value threshold to be used in the scale factor estimation. In order for the estimated non-Gaussianity to have meaning, this value should be set to a lower value (b < 2000 s/mm^2) such that the scale factors are estimated on signal points that reasonably represent the spins at Gaussian diffusion.

save_metrics : variable string, optional

List of metrics to save. Possible values: rtop, laplacian_signal, msd, qiv, rtap, rtpp, ng, perng, parng.

laplacian_weighting : float, optional

Weighting value used in fitting the MAPMRI model in the Laplacian and both model types.

radial_order : unsigned int, optional

Even value used to set the order of the basis.

out_dir : string, optional

Output directory. (default current directory)

out_rtop : string, optional

Name of the rtop to be saved.

out_lapnorm : string, optional

Name of the norm of the Laplacian signal to be saved.

out_msd : string, optional

Name of the msd to be saved.

out_qiv : string, optional

Name of the qiv to be saved.

out_rtap : string, optional

Name of the rtap to be saved.

out_rtpp : string, optional

Name of the rtpp to be saved.

out_ng : string, optional

Name of the non-Gaussianity to be saved.

out_perng : string, optional

Name of the perpendicular non-Gaussianity to be saved.

out_parng : string, optional

Name of the parallel non-Gaussianity to be saved.
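A minimal usage sketch; the delta values below are placeholders and must match the acquisition (in seconds):

>>> from dipy.workflows.reconst import ReconstMAPMRIFlow
>>> ReconstMAPMRIFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                         small_delta=0.0129, big_delta=0.0218,
...                         save_metrics=["rtop", "msd"])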

ReconstDtiFlow#

class dipy.workflows.reconst.ReconstDtiFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_fitted_tensor(data, mask, bval, bvec, b0_threshold=50, bvecs_tol=0.01, fit_method='WLS', optional_args=None)#
classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, fit_method='WLS', b0_threshold=50, bvecs_tol=0.01, sigma=None, save_metrics=None, out_dir='', out_tensor='tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz', nifti_tensor=True)#

Workflow for tensor reconstruction and for computing DTI metrics using Weighted Least Squares. Performs a tensor reconstruction on the files by ‘globing’ input_files and saves the DTI metrics in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once.

fit_method : string, optional

Can be one of the following:

  • ‘WLS’ for weighted least squares

  • ‘LS’ or ‘OLS’ for ordinary least squares

  • ‘NLLS’ for non-linear least-squares

  • ‘RT’ or ‘restore’ or ‘RESTORE’ for RESTORE robust tensor fitting

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Threshold used to check that norm(bvec) = 1 +/- bvecs_tol.

sigma : float, optional

An estimate of the variance. [5] recommend to use 1.5267 * std(background_noise), where background_noise is estimated from some part of the image known to contain no signal (only noise).

save_metrics : variable string, optional

List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval.

out_dir : string, optional

Output directory. (default current directory)

out_tensor : string, optional

Name of the tensors volume to be saved. By default, this will be saved following the nifti standard: with the tensor elements as Dxx, Dxy, Dyy, Dxz, Dyz, Dzz on the last (5th) dimension of the volume (shape: (i, j, k, 1, 6)). If nifti_tensor is False, this will be saved in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension.

out_fa : string, optional

Name of the fractional anisotropy volume to be saved.

out_ga : string, optional

Name of the geodesic anisotropy volume to be saved.

out_rgb : string, optional

Name of the color fa volume to be saved.

out_md : string, optional

Name of the mean diffusivity volume to be saved.

out_ad : string, optional

Name of the axial diffusivity volume to be saved.

out_rd : string, optional

Name of the radial diffusivity volume to be saved.

out_mode : string, optional

Name of the mode volume to be saved.

out_evec : string, optional

Name of the eigenvectors volume to be saved.

out_eval : string, optional

Name of the eigenvalues to be saved.

nifti_tensor : bool, optional

Whether the tensor is saved in the standard nifti format or in an alternate format that is used by other software (e.g., FSL): a 4-dimensional volume (shape (i, j, k, 6)) with Dxx, Dxy, Dxz, Dyy, Dyz, Dzz on the last dimension.
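A minimal usage sketch (file names are placeholders):

>>> from dipy.workflows.reconst import ReconstDtiFlow
>>> ReconstDtiFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                      "brain_mask.nii.gz", save_metrics=["fa", "md"])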

References#

ReconstDsiFlow#

class dipy.workflows.reconst.ReconstDsiFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, qgrid_size=17, r_start=2.1, r_end=6.0, r_step=0.2, filter_width=32, normalize_peaks=False, extract_pam_values=False, parallel=False, num_processes=None, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz')#

Diffusion Spectrum Imaging (DSI) reconstruction workflow.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once.

qgrid_size : int, optional

Has to be an odd number. Sets the size of the q-space grid. For example, if qgrid_size is 17 then the shape of the grid will be (17, 17, 17).

r_start : float, optional

The ODF is sampled radially in the PDF. This parameter sets where the sampling should start.

r_end : float, optional

Radial endpoint of the ODF sampling.

r_step : float, optional

Step size of the ODF sampling from r_start to r_end.

filter_width : float, optional

Strength of the Hanning filter.

normalize_peaks : bool, optional

Whether to normalize the peaks.

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

parallel : bool, optional

Whether to use parallelization in peak-finding during the calibration procedure.

num_processes : int, optional

If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0, the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

out_dir : string, optional

Output directory. (default current directory)

out_pam : string, optional

Name of the peaks volume to be saved.

out_shm : string, optional

Name of the spherical harmonics volume to be saved.

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved.

out_peaks_values : string, optional

Name of the peaks values volume to be saved.

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved.
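A minimal usage sketch (file names are placeholders; DSI requires a full q-space acquisition):

>>> from dipy.workflows.reconst import ReconstDsiFlow
>>> ReconstDsiFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                      "brain_mask.nii.gz", extract_pam_values=True)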

ReconstCSDFlow#

class dipy.workflows.reconst.ReconstCSDFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radii=10, fa_thr=0.7, frf=None, extract_pam_values=False, sh_order=8, odf_to_sh_order=8, parallel=False, num_processes=None, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')#

Constrained spherical deconvolution.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Tolerance used to check that the bvecs are unit vectors.

roi_center : variable int, optional

Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3].

roi_radii : int or array-like, optional

Radii of the cuboid ROI in voxels.

fa_thr : float, optional

FA threshold for calculating the response function.

frf : variable float, optional

Fiber response function. Can be, for example, inputted as 15 4 4 (from the command line) or [15, 4, 4] (from a Python script) to be converted to float and multiplied by 10**-4. If None, the fiber response function will be computed automatically.

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

sh_order : int, optional

Spherical harmonics order (l) used in the CSD fit.

odf_to_sh_order : int, optional

Spherical harmonics order (l) used for peak_from_model to compress the ODF to spherical harmonics coefficients.

parallel : bool, optional

Whether to use parallelization in peak-finding during the calibration procedure.

num_processes : int, optional

If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0, the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

out_dir : string, optional

Output directory. (default current directory)

out_pam : string, optional

Name of the peaks volume to be saved.

out_shm : string, optional

Name of the spherical harmonics volume to be saved.

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved.

out_peaks_values : string, optional

Name of the peaks values volume to be saved.

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved.

out_gfa : string, optional

Name of the generalized FA volume to be saved.
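A minimal usage sketch; the frf values follow the 15 4 4 example above:

>>> from dipy.workflows.reconst import ReconstCSDFlow
>>> ReconstCSDFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                      "brain_mask.nii.gz", frf=[15, 4, 4],
...                      extract_pam_values=True)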

References#

ReconstCSAFlow#

class dipy.workflows.reconst.ReconstCSAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, sh_order=6, odf_to_sh_order=8, b0_threshold=50.0, bvecs_tol=0.01, extract_pam_values=False, parallel=False, num_processes=None, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')#

Constant Solid Angle.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

sh_order : int, optional

Spherical harmonics order (l) used in the CSA fit.

odf_to_sh_order : int, optional

Spherical harmonics order (l) used for peak_from_model to compress the ODF to spherical harmonics coefficients.

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Threshold used so that norm(bvec) = 1.

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

parallel : bool, optional

Whether to use parallelization in peak-finding during the calibration procedure.

num_processes : int, optional

If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0, the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

out_dir : string, optional

Output directory. (default current directory)

out_pam : string, optional

Name of the peaks volume to be saved.

out_shm : string, optional

Name of the spherical harmonics volume to be saved.

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved.

out_peaks_values : string, optional

Name of the peaks values volume to be saved.

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved.

out_gfa : string, optional

Name of the generalized FA volume to be saved.
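A minimal usage sketch (file names are placeholders):

>>> from dipy.workflows.reconst import ReconstCSAFlow
>>> ReconstCSAFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                      "brain_mask.nii.gz")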

References#

ReconstDkiFlow#

class dipy.workflows.reconst.ReconstDkiFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_fitted_tensor(data, mask, bval, bvec, b0_threshold=50, fit_method='WLS', optional_args=None)#
classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, fit_method='WLS', b0_threshold=50.0, sigma=None, save_metrics=None, out_dir='', out_dt_tensor='dti_tensors.nii.gz', out_fa='fa.nii.gz', out_ga='ga.nii.gz', out_rgb='rgb.nii.gz', out_md='md.nii.gz', out_ad='ad.nii.gz', out_rd='rd.nii.gz', out_mode='mode.nii.gz', out_evec='evecs.nii.gz', out_eval='evals.nii.gz', out_dk_tensor='dki_tensors.nii.gz', out_mk='mk.nii.gz', out_ak='ak.nii.gz', out_rk='rk.nii.gz')#

Workflow for Diffusion Kurtosis reconstruction and for computing DKI metrics. Performs a DKI reconstruction on the files by ‘globing’ input_files and saves the DKI metrics in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

fit_method : string, optional

Can be one of the following:

  • ‘OLS’ or ‘ULLS’ for ordinary least squares

  • ‘WLS’ or ‘UWLLS’ for weighted ordinary least squares

b0_threshold : float, optional

Threshold used to find b0 volumes.

sigma : float, optional

An estimate of the variance. [3] recommend to use 1.5267 * std(background_noise), where background_noise is estimated from some part of the image known to contain no signal (only noise).

save_metrics : variable string, optional

List of metrics to save. Possible values: fa, ga, rgb, md, ad, rd, mode, tensor, evec, eval.

out_dir : string, optional

Output directory. (default current directory)

out_dt_tensor : string, optional

Name of the DTI tensors volume to be saved.

out_dk_tensor : string, optional

Name of the DKI tensors volume to be saved.

out_fa : string, optional

Name of the fractional anisotropy volume to be saved.

out_ga : string, optional

Name of the geodesic anisotropy volume to be saved.

out_rgb : string, optional

Name of the color fa volume to be saved.

out_md : string, optional

Name of the mean diffusivity volume to be saved.

out_ad : string, optional

Name of the axial diffusivity volume to be saved.

out_rd : string, optional

Name of the radial diffusivity volume to be saved.

out_mode : string, optional

Name of the mode volume to be saved.

out_evec : string, optional

Name of the eigenvectors volume to be saved.

out_eval : string, optional

Name of the eigenvalues to be saved.

out_mk : string, optional

Name of the mean kurtosis to be saved.

out_ak : string, optional

Name of the axial kurtosis to be saved.

out_rk : string, optional

Name of the radial kurtosis to be saved.
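A minimal usage sketch (file names are placeholders):

>>> from dipy.workflows.reconst import ReconstDkiFlow
>>> ReconstDkiFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                      "brain_mask.nii.gz", save_metrics=["fa", "rd"])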

References#

ReconstIvimFlow#

class dipy.workflows.reconst.ReconstIvimFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_fitted_ivim(data, mask, bval, bvec, b0_threshold=50)#
classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, split_b_D=400, split_b_S0=200, b0_threshold=0, save_metrics=None, out_dir='', out_S0_predicted='S0_predicted.nii.gz', out_perfusion_fraction='perfusion_fraction.nii.gz', out_D_star='D_star.nii.gz', out_D='D.nii.gz')#

Workflow for Intra-voxel Incoherent Motion reconstruction and for computing IVIM metrics. Performs an IVIM reconstruction on the files by ‘globing’ input_files and saves the IVIM metrics in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once. (default: No mask used)

split_b_D : int, optional

Value to split the bvals to estimate D for the two-stage process of fitting.

split_b_S0 : int, optional

Value to split the bvals to estimate S0 for the two-stage process of fitting.

b0_threshold : int, optional

Threshold value for the b0 bval.

save_metrics : variable string, optional

List of metrics to save. Possible values: S0_predicted, perfusion_fraction, D_star, D.

out_dir : string, optional

Output directory. (default current directory)

out_S0_predicted : string, optional

Name of the S0 signal estimate to be saved.

out_perfusion_fraction : string, optional

Name of the estimated volume fractions to be saved.

out_D_star : string, optional

Name of the estimated pseudo-diffusion parameter to be saved.

out_D : string, optional

Name of the estimated diffusion parameter to be saved.

References#

[Stejskal65]

Stejskal, E. O.; Tanner, J. E. (1 January 1965). “Spin Diffusion Measurements: Spin Echoes in the Presence of a Time-Dependent Field Gradient”. The Journal of Chemical Physics 42 (1): 288. Bibcode: 1965JChPh..42..288S. doi:10.1063/1.1695690.

[LeBihan84]

Le Bihan, Denis, et al. “Separation of diffusion and perfusion in intravoxel incoherent motion MR imaging.” Radiology 168.2 (1988): 497-505.
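A minimal usage sketch (file names are placeholders; note that b0_threshold defaults to 0 for this workflow):

>>> from dipy.workflows.reconst import ReconstIvimFlow
>>> ReconstIvimFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                       "brain_mask.nii.gz",
...                       save_metrics=["perfusion_fraction", "D"])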

ReconstRUMBAFlow#

class dipy.workflows.reconst.ReconstRUMBAFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, bvalues_files, bvectors_files, mask_files, b0_threshold=50.0, bvecs_tol=0.01, roi_center=None, roi_radii=10, fa_thr=0.7, extract_pam_values=False, sh_order_max=8, odf_to_sh_order=8, parallel=True, num_processes=None, gm_response=0.0008, csf_response=0.003, n_iter=600, recon_type='smf', n_coils=1, R=1, voxelwise=True, use_tv=False, sphere_name='repulsion724', verbose=False, relative_peak_threshold=0.5, min_separation_angle=25, npeaks=5, out_dir='', out_pam='peaks.pam5', out_shm='shm.nii.gz', out_peaks_dir='peaks_dirs.nii.gz', out_peaks_values='peaks_values.nii.gz', out_peaks_indices='peaks_indices.nii.gz', out_gfa='gfa.nii.gz')#

Reconstruct the local fiber orientations using the Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) [1] model. The fiber response function (FRF) is computed using the single-shell, single-tissue model, and the voxel-wise fitting procedure is used for RUMBA-SD.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

bvalues_files : string

Path to the bvalues files. This path may contain wildcards to use multiple bvalues files at once.

bvectors_files : string

Path to the bvectors files. This path may contain wildcards to use multiple bvectors files at once.

mask_files : string

Path to the input masks. This path may contain wildcards to use multiple masks at once.

b0_threshold : float, optional

Threshold used to find b0 volumes.

bvecs_tol : float, optional

Tolerance used to check that the bvecs are unit vectors.

roi_center : variable int, optional

Center of ROI in data. If center is None, it is assumed that it is the center of the volume with shape data.shape[:3].

roi_radii : int or array-like, optional

Radii of the cuboid ROI in voxels.

fa_thr : float, optional

FA threshold to compute the WM response function.

extract_pam_values : bool, optional

Whether or not to save pam volumes as single nifti files.

sh_order_max : int, optional

Spherical harmonics order (l) used in the fit.

odf_to_sh_order : int, optional

Spherical harmonics order (l) used for peak_from_model to compress the ODF to spherical harmonics coefficients.

parallel : bool, optional

Whether to use parallelization in peak-finding during the calibration procedure.

num_processes : int, optional

If parallel is True, the number of subprocesses to use (default multiprocessing.cpu_count()). If < 0, the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

gm_response : float, optional

Mean diffusivity for the GM compartment. If None, then the grey matter volume fraction is not computed.

csf_response : float, optional

Mean diffusivity for the CSF compartment. If None, then the CSF volume fraction is not computed.

n_iter : int, optional

Number of iterations for fODF estimation. Must be a positive int.

recon_type : str, optional

MRI reconstruction method type: spatial matched filter (smf) or sum-of-squares (sos). SMF reconstruction generates Rician noise while SoS reconstruction generates Noncentral Chi noise.

n_coils : int, optional

Number of coils in the MRI scanner; only relevant in SoS reconstruction. Must be a positive int. Default: 1

R : int, optional

Acceleration factor of the acquisition. For SIEMENS, R = iPAT factor. For GE, R = ASSET factor. For PHILIPS, R = SENSE factor. Typical values are 1 or 2. Must be a positive integer.

voxelwise : bool, optional

If true, performs a voxelwise fit. If false, performs a global fit on the entire brain at once. The global fit requires a 4D brain volume in fit.

use_tv : bool, optional

If true, applies total variation regularization. This only takes effect in a global fit (voxelwise is set to False). TV can only be applied to 4D brain volumes with no singleton dimensions.

sphere_name : str, optional

Sphere name on which to reconstruct the fODFs.

verbose : bool, optional

If true, logs updates on the estimated signal-to-noise ratio after each iteration. This only takes effect in a global fit (voxelwise is set to False).

relative_peak_threshold : float, optional

Only return peaks greater than relative_peak_threshold * m, where m is the largest peak.

min_separation_angle : float, optional

The minimum distance between directions. If two peaks are too close, only the larger of the two is returned.

npeaks : int, optional

Maximum number of peaks returned for a given voxel.

out_dir : string, optional

Output directory. (default current directory)

out_pam : string, optional

Name of the peaks volume to be saved.

out_shm : string, optional

Name of the spherical harmonics volume to be saved.

out_peaks_dir : string, optional

Name of the peaks directions volume to be saved.

out_peaks_values : string, optional

Name of the peaks values volume to be saved.

out_peaks_indices : string, optional

Name of the peaks indices volume to be saved.

out_gfa : string, optional

Name of the generalized FA volume to be saved.
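A minimal usage sketch (file names are placeholders; voxelwise fitting is the default):

>>> from dipy.workflows.reconst import ReconstRUMBAFlow
>>> ReconstRUMBAFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                        "brain_mask.nii.gz", extract_pam_values=True)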

References#

MedianOtsuFlow#

class dipy.workflows.segment.MedianOtsuFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(input_files, save_masked=False, median_radius=2, numpass=5, autocrop=False, vol_idx=None, dilate=None, out_dir='', out_mask='brain_mask.nii.gz', out_masked='dwi_masked.nii.gz')#

Workflow wrapping the median_otsu segmentation method.

Applies median_otsu segmentation on each file found by ‘globing’ input_files and saves the results in a directory specified by out_dir.

Parameters#

input_files : string

Path to the input volumes. This path may contain wildcards to process multiple inputs at once.

save_masked : bool, optional

Save mask.

median_radius : int, optional

Radius (in voxels) of the applied median filter.

numpass : int, optional

Number of passes of the median filter.

autocrop : bool, optional

If True, the masked input_volumes will also be cropped using the bounding box defined by the masked data. For example, if diffusion images are of 1x1x1 (mm^3) or higher resolution, auto-cropping could reduce their size in memory and speed up some of the analysis.

vol_idx : variable int, optional

1D array representing indices of axis=-1 of a 4D input_volume. From the command line use something like 3 4 5 6. From script use something like [3, 4, 5, 6]. This input is required for 4D volumes.

dilate : int, optional

Number of iterations for binary dilation.

out_dir : string, optional

Output directory. (default current directory)

out_mask : string, optional

Name of the mask volume to be saved.

out_masked : string, optional

Name of the masked volume to be saved.
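For a 4D DWI input, vol_idx is required; a minimal sketch (placeholder file names):

>>> from dipy.workflows.segment import MedianOtsuFlow
>>> MedianOtsuFlow().run("dwi.nii.gz", vol_idx=[0, 1, 2],
...                      save_masked=True)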

RecoBundlesFlow#

class dipy.workflows.segment.RecoBundlesFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(streamline_files, model_bundle_files, greater_than=50, less_than=1000000, no_slr=False, clust_thr=15.0, reduction_thr=15.0, reduction_distance='mdf', model_clust_thr=2.5, pruning_thr=8.0, pruning_distance='mdf', slr_metric='symmetric', slr_transform='similarity', slr_matrix='small', refine=False, r_reduction_thr=12.0, r_pruning_thr=6.0, no_r_slr=False, out_dir='', out_recognized_transf='recognized.trk', out_recognized_labels='labels.npy')#

Recognize bundles.

Parameters#

streamline_files : string

The path of streamline files where you want to recognize bundles.

model_bundle_files : string

The path of model bundle files.

greater_than : int, optional

Keep streamlines that have length greater than this value in mm.

less_than : int, optional

Keep streamlines that have length less than this value in mm.

no_slr : bool, optional

Don’t enable local Streamline-based Linear Registration.

clust_thr : float, optional

MDF distance threshold for all streamlines.

reduction_thr : float, optional

Reduce search space by this value (mm).

reduction_distance : string, optional

Reduction distance type; can be mdf or mam.

model_clust_thr : float, optional

MDF distance threshold for the model bundles.

pruning_thr : float, optional

Pruning after matching.

pruning_distance : string, optional

Pruning distance type; can be mdf or mam.

slr_metric : string, optional

Options are None, symmetric, asymmetric or diagonal.

slr_transform : string, optional

Transformation allowed: translation, rigid, similarity or scaling.

slr_matrix : string, optional

Options are ‘nano’, ‘tiny’, ‘small’, ‘medium’, ‘large’, ‘huge’.

refine : bool, optional

Enable refinement of the recognized bundle.

r_reduction_thr : float, optional

Reduce search space by this value (mm) during the refine step.

r_pruning_thr : float, optional

Pruning after matching during the refine step.

no_r_slr : bool, optional

Don’t enable local Streamline-based Linear Registration during the refine step.

out_dir : string, optional

Output directory. (default current directory)

out_recognized_transf : string, optional

Recognized bundle in the space of the model bundle.

out_recognized_labels : string, optional

Indices of the recognized bundle in the original tractogram.

References#

[Garyfallidis17]

Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.

[Chandio2020]

Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, FC., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
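A minimal usage sketch (the tractogram and model bundle paths are placeholders):

>>> from dipy.workflows.segment import RecoBundlesFlow
>>> RecoBundlesFlow().run("whole_brain.trk", "AF_L_model.trk",
...                       out_recognized_transf="AF_L_recognized.trk")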

LabelsBundlesFlow#

class dipy.workflows.segment.LabelsBundlesFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(streamline_files, labels_files, out_dir='', out_bundle='recognized_orig.trk')#

Extract bundles using existing indices (labels).

Parameters#

streamline_files : string

The path of streamline files where you want to recognize bundles.

labels_files : string

The path of the label files.

out_dir : string, optional

Output directory. (default current directory)

out_bundle : string, optional

Recognized bundle in the space of the model bundle.

References#

[Garyfallidis17]

Garyfallidis et al. Recognition of white matter bundles using local and global streamline-based registration and clustering, Neuroimage, 2017.
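A minimal usage sketch (file names are placeholders; the labels file holds streamline indices):

>>> from dipy.workflows.segment import LabelsBundlesFlow
>>> LabelsBundlesFlow().run("whole_brain.trk", "AF_L_labels.npy",
...                         out_bundle="AF_L_orig.trk")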

SNRinCCFlow#

class dipy.workflows.stats.SNRinCCFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(data_files, bvals_files, bvecs_files, mask_file, bbox_threshold=(0.6, 1, 0, 0.1, 0, 0.1), out_dir='', out_file='product.json', out_mask_cc='cc.nii.gz', out_mask_noise='mask_noise.nii.gz')#

Compute the signal-to-noise ratio in the corpus callosum.

Parameters#

data_files : string

Path to the dwi.nii.gz file. This path may contain wildcards to process multiple inputs at once.

bvals_files : string

Path of bvals.

bvecs_files : string

Path of bvecs.

mask_file : string

Path of a brain mask file.

bbox_threshold : variable float, optional

Threshold for the bounding box, values separated with commas, e.g. [0.6, 1, 0, 0.1, 0, 0.1].

out_dir : string, optional

Where the resulting file will be saved. (default current directory)

out_file : string, optional

Name of the result file to be saved.

out_mask_cc : string, optional

Name of the CC mask volume to be saved.

out_mask_noise : string, optional

Name of the mask noise volume to be saved.
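A minimal usage sketch (file names are placeholders):

>>> from dipy.workflows.stats import SNRinCCFlow
>>> SNRinCCFlow().run("dwi.nii.gz", "dwi.bval", "dwi.bvec",
...                   "brain_mask.nii.gz")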

BundleAnalysisTractometryFlow#

class dipy.workflows.stats.BundleAnalysisTractometryFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(model_bundle_folder, subject_folder, no_disks=100, out_dir='')#

Workflow of bundle analytics.

Applies statistical analysis on bundles of subjects and saves the results in a directory specified by out_dir.

Parameters#

model_bundle_folder : string

Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.

subject_folder : string

Path to the input subject folder. This path may contain wildcards to process multiple inputs at once.

no_disks : integer, optional

Number of disks into which the bundle is divided.

out_dir : string, optional

Output directory. (default current directory)

References#

[Chandio2020]

Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, FC., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).
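A minimal usage sketch (the folder names are placeholders):

>>> from dipy.workflows.stats import BundleAnalysisTractometryFlow
>>> BundleAnalysisTractometryFlow().run("model_bundles", "subjects",
...                                     out_dir="buan_out")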

LinearMixedModelsFlow#

class dipy.workflows.stats.LinearMixedModelsFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_metric_name(path)#

Splits the path string and returns the name of the anatomical measure (e.g. fa), the bundle name (e.g. AF_L) and the bundle name combined with the metric name (e.g. AF_L_fa).

Parameters#

path : string

Path to the input metric files. This path may contain wildcards to process multiple inputs at once.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(h5_files, no_disks=100, out_dir='')#

Workflow of Linear Mixed Models.

Applies Linear Mixed Models on bundles of subjects and saves the results in a directory specified by out_dir.

Parameters#

h5_files : string

Path to the input metric files. This path may contain wildcards to process multiple inputs at once.

no_disks : integer, optional

Number of disks into which the bundle is divided.

out_dir : string, optional

Output directory. (default current directory)

save_lmm_plot(plot_file, title, bundle_name, x, y)#

Saves LMM plot with segment/disk number on x-axis and -log10(pvalues) on y-axis in out_dir folder.

Parameters#

plot_file : string

Path to the plot file. This path may contain wildcards to process multiple inputs at once.

title : string

Title for the plot.

bundle_name : string

Name of the bundle.

x : list

List containing the segment/disk number for the x-axis.

y : list

List containing -log10(pvalues) per segment/disk number for the y-axis.

BundleShapeAnalysis#

class dipy.workflows.stats.BundleShapeAnalysis(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(subject_folder, clust_thr=(5, 3, 1.5), threshold=6, out_dir='')#

Workflow of bundle analytics.

Applies bundle shape similarity analysis on bundles of subjects and saves the results in a directory specified by out_dir.

Parameters#

subject_folder : string

Path to the input subject folder. This path may contain wildcards to process multiple inputs at once.

clust_thr : variable float, optional

List of bundle clustering thresholds used in QuickBundlesX.

threshold : float, optional

Bundle shape similarity threshold.

out_dir : string, optional

Output directory. (default current directory)

References#

[Chandio2020]

Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, FC., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).

buan_bundle_profiles#

dipy.workflows.stats.buan_bundle_profiles(model_bundle_folder, bundle_folder, orig_bundle_folder, metric_folder, group_id, subject, no_disks=100, out_dir='')#

Applies statistical analysis on bundles and saves the results in a directory specified by out_dir.

Parameters#

model_bundle_folder : string

Path to the input model bundle files. This path may contain wildcards to process multiple inputs at once.

bundle_folder : string

Path to the input bundle files in common space. This path may contain wildcards to process multiple inputs at once.

orig_bundle_folder : string

Path to the input bundle files in native space. This path may contain wildcards to process multiple inputs at once.

metric_folder : string

Path to the input dti metric and/or peak files. It will be used as the metric for statistical analysis of bundles.

group_id : integer

Group the subject belongs to: either 0 for control or 1 for patient.

subject : string

Subject id, e.g. 10001.

no_disks : integer, optional

Number of disks into which the bundle is divided.

out_dir : string, optional

Output directory. (default current directory)

References#

[Chandio2020]

Chandio, B.Q., Risacher, S.L., Pestilli, F., Bullock, D., Yeh, FC., Koudoro, S., Rokem, A., Harezlak, J., and Garyfallidis, E. Bundle analytics, a computational framework for investigating the shapes and profiles of brain pathways across populations. Sci Rep 10, 17149 (2020).

LocalFiberTrackingPAMFlow#

class dipy.workflows.tracking.LocalFiberTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(pam_files, stopping_files, seeding_files, use_binary_mask=False, stopping_thr=0.2, seed_density=1, step_size=0.5, tracking_method='eudx', pmf_threshold=0.1, max_angle=30.0, out_dir='', out_tractogram='tractogram.trk', save_seeds=False)#

Workflow for Local Fiber Tracking.

This workflow uses a saved peaks and metrics (PAM) file as input.

Parameters#

pam_files : string

Path to the peaks and metrics files. This path may contain wildcards to process multiple files at once.

stopping_files : string

Path to images (e.g. FA) used for the stopping criterion for tracking.

seeding_files : string

A binary image showing where we need to seed for tracking.

use_binary_mask : bool, optional

If True, uses a binary stopping criterion. If the provided stopping_files are not binary, stopping_thr will be used to binarize the images.

stopping_thr : float, optional

Threshold applied to the stopping volume’s data to identify where tracking has to stop.

seed_density : int, optional

Number of seeds per dimension inside the voxel. For example, a seed_density of 2 means 8 regularly distributed points in the voxel, and a seed_density of 1 means 1 point at the center of the voxel.

step_size : float, optional

Step size (in mm) used for tracking.

tracking_method : string, optional

Select direction getter strategy:

  • “eudx” (uses the peaks saved in the pam_files)

  • “deterministic” or “det” for deterministic tracking (uses the sh saved in the pam_files, default)

  • “probabilistic” or “prob” for probabilistic tracking (uses the sh saved in the pam_files)

  • “closestpeaks” or “cp” for ClosestPeaks tracking (uses the sh saved in the pam_files)

pmf_threshold : float, optional

Threshold for ODF functions.

max_angle : float, optional

Maximum angle between streamline segments (range [0, 90]).

out_dir : string, optional

Output directory. (default current directory)

out_tractogram : string, optional

Name of the tractogram file to be saved.

save_seeds : bool, optional

If true, save the seeds associated to their streamline in the ‘data_per_streamline’ Tractogram dictionary using ‘seeds’ as the key.

References#

Garyfallidis, University of Cambridge, PhD thesis 2012. Amirbekian, University of California San Francisco, PhD thesis 2017.
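A minimal usage sketch, assuming a PAM file produced by one of the reconstruction workflows above (file names are placeholders):

>>> from dipy.workflows.tracking import LocalFiberTrackingPAMFlow
>>> LocalFiberTrackingPAMFlow().run("peaks.pam5", "fa.nii.gz",
...                                 "seed_mask.nii.gz",
...                                 tracking_method="det")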

PFTrackingPAMFlow#

class dipy.workflows.tracking.PFTrackingPAMFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command line parameters, avoiding the trouble of having subworkflow parameters with the same name.

For example, a combined workflow with dti reconstruction and csd reconstruction might end up with the b0_threshold parameter. Using short names, we will have dti.b0_threshold and csd.b0_threshold available.

Returns the class name by default, but it is strongly advised to set it to something shorter and easier to write on the command line.

run(pam_files, wm_files, gm_files, csf_files, seeding_files, step_size=0.2, seed_density=1, pmf_threshold=0.1, max_angle=20.0, pft_back=2, pft_front=1, pft_count=15, out_dir='', out_tractogram='tractogram.trk', save_seeds=False, min_wm_pve_before_stopping=0)#

Workflow for Particle Filtering Tracking.

This workflow uses a saved peaks and metrics (PAM) file as input.

Parameters#

pam_files : string

Path to the peaks and metrics files. This path may contain wildcards to use multiple masks at once.

wm_files : string

Path to white matter partial volume estimate for tracking (CMC).

gm_files : string

Path to grey matter partial volume estimate for tracking (CMC).

csf_files : string

Path to cerebrospinal fluid partial volume estimate for tracking (CMC).

seeding_files : string

A binary image showing where we need to seed for tracking.

step_size : float, optional

Step size (in mm) used for tracking.

seed_density : int, optional

Number of seeds per dimension inside each voxel. For example, a seed_density of 2 means 8 regularly distributed points per voxel (2 per dimension), and a seed_density of 1 means a single point at the center of the voxel.

pmf_threshold : float, optional

Threshold for ODF functions.

max_angle : float, optional

Maximum angle between streamline segments (range [0, 90]).

pft_back : float, optional

Distance in mm to back track before starting the particle filtering tractography. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist.

pft_front : float, optional

Distance in mm to run the particle filtering tractography after the back-track distance. The total particle filtering tractography distance is equal to back_tracking_dist + front_tracking_dist.

pft_count : int, optional

Number of particles to use in the particle filter.

out_dir : string, optional

Output directory. (default current directory)

out_tractogram : string, optional

Name of the tractogram file to be saved.

save_seeds : bool, optional

If True, save the seeds associated with their streamlines in the ‘data_per_streamline’ Tractogram dictionary, using ‘seeds’ as the key.

min_wm_pve_before_stopping : int, optional

Minimum white matter pve (1 - stopping_criterion.include_map - stopping_criterion.exclude_map) to reach before allowing the tractography to stop.

References#

Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. Towards quantitative connectivity analysis: reducing tractography biases. NeuroImage, 98, 266-278, 2014.
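A minimal sketch of invoking this flow from Python, with hypothetical input files (the partial volume estimate maps would typically come from a tissue segmentation):

```python
from dipy.workflows.tracking import PFTrackingPAMFlow

# Hypothetical inputs: a PAM file, WM/GM/CSF partial volume estimate
# maps, and a binary seed mask.
flow = PFTrackingPAMFlow()
flow.run("peaks.pam5", "wm_pve.nii.gz", "gm_pve.nii.gz",
         "csf_pve.nii.gz", "seed_mask.nii.gz",
         step_size=0.2, max_angle=20.0, pft_count=15,
         out_dir="pft_out", out_tractogram="pft_tractogram.trk")
```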

HorizonFlow#

class dipy.workflows.viz.HorizonFlow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: Workflow

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, in a combined workflow with DTI and CSD reconstruction, both sub-workflows might end up with a b0_threshold parameter. Using short names, dti.b0_threshold and csd.b0_threshold are available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter that is easier to write on the command line.

run(input_files, cluster=False, rgb=False, cluster_thr=15.0, random_colors=None, length_gt=0, length_lt=1000, clusters_gt=0, clusters_lt=100000000, native_coords=False, stealth=False, emergency_header='icbm_2009a', bg_color=(0, 0, 0), disable_order_transparency=False, buan=False, buan_thr=0.5, buan_highlight=(1, 0, 0), roi_images=False, roi_colors=(1, 0, 0), out_dir='', out_stealth_png='tmp.png')#

Interactive medical visualization - Invert the Horizon!

Interact with any number of .trk, .tck or .dpy tractograms and .nii or .nii.gz anatomy files. Cluster streamlines on loading.

Parameters#

input_files : variable string

cluster : bool, optional

Enable QuickBundlesX clustering.

rgb : bool, optional

Enable the color image (rgb only, alpha channel will be ignored).

cluster_thr : float, optional

Distance threshold used for clustering, in mm. The default value is 15.0; for small animal brains you may need to use something smaller, such as 2.0. For this parameter to be active, cluster must be enabled.

random_colors : variable str, optional

Given multiple tractograms and/or ROIs, each tractogram and/or ROI will be shown with a different color. If no value is provided, both the tractograms and the ROIs will get a different random color generated from a distinguishable colormap. If the effect should only be applied to one of the two types, use the options ‘tracts’ and ‘rois’ for the tractograms and the ROIs respectively.

length_gt : float, optional

Clusters with average length greater than length_gt amount in mm will be shown.

length_lt : float, optional

Clusters with average length less than length_lt amount in mm will be shown.

clusters_gt : int, optional

Clusters with size greater than clusters_gt will be shown.

clusters_lt : int, optional

Clusters with size less than clusters_lt will be shown.

native_coords : bool, optional

Show results in native coordinates.

stealth : bool, optional

Do not use interactive mode; just save the figure.

emergency_header : str, optional

If no anatomy reference is provided, an emergency header is used. Current options are ‘icbm_2009a’ and ‘icbm_2009c’.

bg_color : variable float, optional

Define the background color of the scene. Colors can be defined with 1 or 3 values and should be in the range [0, 1].

disable_order_transparency : bool, optional

Use depth peeling to sort transparent objects. If True, this also enables anti-aliasing.

buan : bool, optional

Enables BUAN framework visualization.

buan_thr : float, optional

Highlight segments of the bundle whose p-values are less than this threshold.

buan_highlight : variable float, optional

Define the bundle highlight area color. Colors can be defined with 1 or 3 values and should be in the range [0, 1]. For example, (1, 0, 0) means red.

roi_images : bool, optional

Displays binary images as contours.

roi_colors : variable float, optional

Define the color for the ROI images. Colors can be defined with 1 or 3 values and should be in the range [0, 1]. For example, (1, 0, 0) means red.

out_dir : str, optional

Output directory. (default current directory)

out_stealth_png : str, optional

Filename of saved picture.

References#

[Horizon_ISMRM19]

Garyfallidis E., M-A. Cote, B.Q. Chandio, S. Fadnavis, J. Guaje, R. Aggarwal, E. St-Onge, K.S. Juneja, S. Koudoro, D. Reagan, DIPY Horizon: fast, modular, unified and adaptive visualization, Proceedings of: International Society of Magnetic Resonance in Medicine (ISMRM), Montreal, Canada, 2019.
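A quick sketch of launching Horizon from Python, with hypothetical file names; input_files accepts any mix of tractograms and anatomy files:

```python
from dipy.workflows.viz import HorizonFlow

# Hypothetical inputs: a tractogram and the anatomy it was computed on.
flow = HorizonFlow()
flow.run(["tractogram.trk", "t1.nii.gz"],
         cluster=True, cluster_thr=15.0)
```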

Workflow#

class dipy.workflows.workflow.Workflow(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Bases: object

__init__(output_strategy='absolute', mix_names=False, force=False, skip=False)#

Initialize the basic workflow object.

This object takes care of any workflow operation that is common to all the workflows. Every new workflow should extend this class.

get_io_iterator()#

Create an iterator for IO.

Use a couple of inspection tricks to build an IOIterator from the previous frame (values of local variables and other context) and the run method’s docstring.

classmethod get_short_name()#

Return a short name for the workflow, used to subdivide.

The short name is used by CombinedWorkflows and the argparser to subdivide the command-line parameters, avoiding the trouble of having sub-workflow parameters with the same name.

For example, in a combined workflow with DTI and CSD reconstruction, both sub-workflows might end up with a b0_threshold parameter. Using short names, dti.b0_threshold and csd.b0_threshold are available instead.

Returns the class name by default, but it is strongly advised to set it to something shorter that is easier to write on the command line.
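A minimal sketch of overriding this in a hypothetical subclass:

```python
from dipy.workflows.workflow import Workflow

class MyReconFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        # Used by CombinedWorkflow and the argparser to namespace
        # parameters, e.g. myrecon.b0_threshold on the command line.
        return "myrecon"
```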

get_sub_runs()#

Return no sub-runs, since this is a simple workflow.

manage_output_overwrite()#

Check if a file will be overwritten upon processing the inputs.

If it is bound to happen, an action is taken depending on self._force_overwrite (or --force via the command line). A log message is output regardless of the outcome to tell the user what happened.

run(*args, **kwargs)#

Execute the workflow.

Since this is an abstract class, an exception is raised if this code is reached (i.e. run() is not implemented in the child class, or is called directly on this class).
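Putting these pieces together, a new workflow subclasses Workflow, documents its run() parameters in numpydoc style (the docstring drives the generated argparser), and uses get_io_iterator() to pair inputs with outputs. The flow below is a hypothetical sketch, not part of DIPY:

```python
import nibabel as nib
import numpy as np

from dipy.workflows.workflow import Workflow

class ThresholdMaskFlow(Workflow):
    @classmethod
    def get_short_name(cls):
        return "thresholdmask"

    def run(self, input_files, threshold=0.5,
            out_dir="", out_mask="mask.nii.gz"):
        """Binarize images with a simple intensity threshold.

        Parameters
        ----------
        input_files : string
            Path to the input images. May contain wildcards.
        threshold : float, optional
            Intensity value used to binarize the data.
        out_dir : string, optional
            Output directory. (default current directory)
        out_mask : string, optional
            Name of the saved mask file.
        """
        # get_io_iterator() pairs each matched input file with its
        # output path, honoring out_dir, mix_names and --force.
        io_it = self.get_io_iterator()
        for in_path, out_path in io_it:
            img = nib.load(in_path)
            mask = (img.get_fdata() > threshold).astype(np.uint8)
            nib.save(nib.Nifti1Image(mask, img.affine), out_path)
```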