align#

Bunch(**kwds)

Module: align._public#

Registration API: simplified API for registration of MRI data and of streamlines.

syn_registration(moving, static, *[, ...])

Register a 2D/3D source image (moving) to a 2D/3D target image (static).

register_dwi_to_template(dwi, gtab, *[, ...])

Register DWI data to a template through the B0 volumes.

write_mapping(mapping, fname)

Write out a syn registration mapping to a nifti file.

read_mapping(disp, domain_img, codomain_img, *)

Read a syn registration mapping from a nifti file.

resample(moving, static, *[, moving_affine, ...])

Resample an image (moving) from one space to another (static).

affine_registration(moving, static, *[, ...])

Find the affine transformation between two 3D images.

center_of_mass(moving, static, *[, ...])

Implements a center of mass transform.

translation(moving, static, *[, ...])

Implements a translation transform.

rigid(moving, static, *[, moving_affine, ...])

Implements a rigid transform.

rigid_isoscaling(moving, static, *[, ...])

Implements a rigid isoscaling transform.

rigid_scaling(moving, static, *[, ...])

Implements a rigid scaling transform.

affine(moving, static, *[, moving_affine, ...])

Implements an affine transform.

register_series(series, ref, *[, pipeline, ...])

Register a series to a reference image.

register_dwi_series(data, gtab, *[, affine, ...])

Register a DWI series to the mean of the B0 images in that series.

motion_correction(data, gtab, *[, affine, ...])

Apply a motion correction to a DWI dataset (Between-Volumes Motion correction)

streamline_registration(moving, static, *[, ...])

Register two collections of streamlines ('bundles') to each other.

Module: align.bundlemin#

determine_num_threads(num_threads)

Determine the effective number of threads to be used for OpenMP calls

distance_matrix_mdf(streamlines_a, streamlines_b)

Minimum direct flipped distance matrix between two streamline sets

Module: align.cpd#

Note#

This file is copied (possibly with major modifications) from the sources of the pycpd project - siavashk/pycpd. It remains licensed as the rest of PyCPD (MIT license as of October 2010).

See the COPYING file distributed along with the PyCPD package for the copyright and license terms.

DeformableRegistration(X, Y, *args[, ...])

Deformable point cloud registration.

gaussian_kernel(X, beta[, Y])

low_rank_eigen(G, num_eig)

Calculate num_eig eigenvectors and eigenvalues of gaussian matrix G.

initialize_sigma2(X, Y)

Initialize the variance (sigma2).

lowrankQS(G, beta, num_eig, *[, eig_fgt])

Calculate eigenvectors and eigenvalues of gaussian matrix G.

Module: align.crosscorr#

Utility functions used by the Cross Correlation (CC) metric

compute_cc_backward_step_2d(grad_moving, ...)

Gradient of the CC Metric w.r.t.

compute_cc_backward_step_3d(grad_moving, ...)

Gradient of the CC Metric w.r.t.

compute_cc_forward_step_2d(grad_static, ...)

Gradient of the CC Metric w.r.t.

compute_cc_forward_step_3d(grad_static, ...)

Gradient of the CC Metric w.r.t.

precompute_cc_factors_2d(static, moving, radius)

Precomputations to quickly compute the gradient of the CC Metric.

precompute_cc_factors_2d_test(static, ...)

Precomputations to quickly compute the gradient of the CC Metric.

precompute_cc_factors_3d(static, moving, radius)

Precomputations to quickly compute the gradient of the CC Metric.

precompute_cc_factors_3d_test(static, ...)

Precomputations to quickly compute the gradient of the CC Metric.

Module: align.expectmax#

compute_em_demons_step_2d(delta_field, ...)

Demons step for EM metric in 2D

compute_em_demons_step_3d(delta_field, ...)

Demons step for EM metric in 3D

compute_masked_class_stats_2d(mask, v, ...)

Computes the mean and std.

compute_masked_class_stats_3d(mask, v, ...)

Computes the mean and std.

quantize_positive_2d(v, num_levels)

Quantize a 2D image to num_levels quantization levels.

quantize_positive_3d(v, num_levels)

Quantize a 3D volume to num_levels quantization levels.

Module: align.imaffine#

Affine image registration module consisting of the following classes:

AffineMap: encapsulates the necessary information to perform affine transforms between two domains, defined by a static and a moving image. The domain of the transform is the set of points in the static image’s grid, and the codomain is the set of points in the moving image. When we call the transform method, AffineMap maps each point x of the domain (static grid) to the codomain (moving grid) and interpolates the moving image at that point to obtain the intensity value to be placed at x in the resulting grid. The transform_inverse method performs the opposite operation, mapping points in the codomain to points in the domain.

ParzenJointHistogram: computes the marginal and joint distributions of intensities of a pair of images, using Parzen windows [1] with a cubic spline kernel, as proposed by Mattes et al. [2]. It also computes the gradient of the joint histogram w.r.t. the parameters of a given transform.

MutualInformationMetric: computes the value and gradient of the mutual information metric the way Optimizer needs them. That is, given a set of transform parameters, it will use ParzenJointHistogram to compute the value and gradient of the joint intensity histogram evaluated at the given parameters, and evaluate the value and gradient of the histogram’s mutual information.

AffineRegistration: runs the multi-resolution registration, putting all the pieces together. It needs to create the scale space of the images and run the multi-resolution registration by using the Metric and the Optimizer at each level of the Gaussian pyramid. At each level, it will set up the metric to compute the value and gradient of the metric for the input images at different levels of smoothing.


AffineInversionError

AffineInvalidValuesError

AffineMap(affine, *[, domain_grid_shape, ...])

MutualInformationMetric(*[, nbins, ...])

AffineRegistration(*[, metric, level_iters, ...])

transform_centers_of_mass(static, ...)

Transformation to align the center of mass of the input images.

transform_geometric_centers(static, ...)

Transformation to align the geometric center of the input images.

transform_origins(static, static_grid2world, ...)

Transformation to align the origins of the input images.

Module: align.imwarp#

Classes and functions for Symmetric Diffeomorphic Registration

DiffeomorphicMap(dim, disp_shape, *[, ...])

DiffeomorphicRegistration(*[, metric])

SymmetricDiffeomorphicRegistration(metric, *)

mult_aff(A, B)

Returns the matrix product A.dot(B) considering None as the identity

get_direction_and_spacings(affine, dim)

Extracts the rotational and spacing components from a matrix

Module: align.metrics#

Metrics for Symmetric Diffeomorphic Registration

SimilarityMetric(dim)

CCMetric(dim, *[, sigma_diff, radius])

EMMetric(dim, *[, smooth, inner_iter, ...])

SSDMetric(dim, *[, smooth, inner_iter, ...])

v_cycle_2d(n, k, delta_field, ...[, depth])

Multi-resolution Gauss-Seidel solver using V-type cycles

v_cycle_3d(n, k, delta_field, ...[, depth])

Multi-resolution Gauss-Seidel solver using V-type cycles

Module: align.parzenhist#

ParzenJointHistogram(nbins)

compute_parzen_mi(joint, joint_gradient, ...)

Computes the mutual information and its gradient (if requested)

cubic_spline(x)

Evaluates the cubic spline at a set of values

cubic_spline_derivative(x)

Evaluates the cubic spline derivative at a set of values

sample_domain_regular(k, shape, grid2world)

Take floor(total_voxels/k) samples from a (2D or 3D) grid

Module: align.reslice#

reslice(data, affine, zooms, new_zooms, *[, ...])

Reslice data with new voxel resolution defined by new_zooms.

Module: align.scalespace#

ScaleSpace(image, num_levels, *[, ...])

IsotropicScaleSpace(image, factors, sigmas, *)

Module: align.streamlinear#

StreamlineDistanceMetric(*[, num_threads])

BundleMinDistanceMetric(*[, num_threads])

Bundle-based Minimum Distance aka BMD.

BundleMinDistanceMatrixMetric(*[, num_threads])

Bundle-based Minimum Distance aka BMD

BundleMinDistanceAsymmetricMetric(*[, ...])

Asymmetric Bundle-based Minimum distance.

BundleSumDistanceMatrixMetric(*[, num_threads])

Bundle-based Sum Distance aka BMD

JointBundleMinDistanceMetric(*[, num_threads])

Bundle-based Minimum Distance for joint optimization.

StreamlineLinearRegistration(*[, metric, ...])

StreamlineRegistrationMap(matopt, xopt, ...)

JointStreamlineRegistrationMap(xopt, fopt, ...)

bundle_sum_distance(t, static, moving, *[, ...])

MDF distance optimization function (SUM).

bundle_min_distance(t, static, moving)

MDF-based pairwise distance optimization function (MIN).

bundle_min_distance_fast(t, static, moving, ...)

MDF-based pairwise distance optimization function (MIN).

bundle_min_distance_asymmetric_fast(t, ...)

MDF-based pairwise distance optimization function (MIN).

remove_clusters_by_size(clusters[, min_size])

progressive_slr(static, moving, metric, x0, ...)

Progressive SLR.

slr_with_qbx(static, moving, *[, x0, ...])

Utility function for registering large tractograms.

groupwise_slr(bundles, *[, x0, tol, ...])

Function to perform unbiased groupwise bundle registration

get_unique_pairs(n_bundle, *[, pairs])

Make unique pairs from n_bundle bundles.

compose_matrix44(t, *[, dtype])

Compose a 4x4 transformation matrix.

decompose_matrix44(mat, *[, size])

Given a 4x4 homogeneous matrix return the parameter vector.

Module: align.streamwarp#

average_bundle_length(bundle)

Find average Euclidean length of the bundle in mm.

find_missing(lst, cb)

Find unmatched streamline indices in moving bundle.

bundlewarp(static, moving, *[, dist, alpha, ...])

Register two bundles using nonlinear method.

bundlewarp_vector_filed(moving_aligned, ...)

Calculate vector fields.

bundlewarp_shape_analysis(moving_aligned, ...)

Calculate bundle shape difference profile.

Module: align.sumsqdiff#

Utility functions used by the Sum of Squared Differences (SSD) metric

compute_energy_ssd_2d(delta_field)

Sum of squared differences between two 2D images

compute_energy_ssd_3d(delta_field)

Sum of squared differences between two 3D volumes

compute_residual_displacement_field_ssd_2d(...)

The residual displacement field to be fit on the next iteration

compute_residual_displacement_field_ssd_3d(...)

The residual displacement field to be fit on the next iteration

compute_ssd_demons_step_2d(delta_field, ...)

Demons step for 2D SSD-driven registration

compute_ssd_demons_step_3d(delta_field, ...)

Demons step for 3D SSD-driven registration

iterate_residual_displacement_field_ssd_2d(...)

One iteration of a large linear system solver for 2D SSD registration

iterate_residual_displacement_field_ssd_3d(...)

One iteration of a large linear system solver for 3D SSD registration

solve_2d_symmetric_positive_definite(A, y, det)

Solves a 2-variable symmetric positive-definite linear system

solve_3d_symmetric_positive_definite(g, y, tau)

Solves a 3-variable symmetric positive-definite linear system

Module: align.transforms#

Module: align.vector_fields#

compose_vector_fields_2d(d1, d2, ...)

Computes the composition of two 2D displacement fields

compose_vector_fields_3d(d1, d2, ...)

Computes the composition of two 3D displacement fields

create_circle(nrows, ncols, radius)

Create a binary 2D image where pixel values are 1 iff their distance to the center of the image is less than or equal to radius.

create_harmonic_fields_2d(nrows, ncols, b, m)

Creates an invertible 2D displacement field

create_harmonic_fields_3d(nslices, nrows, ...)

Creates an invertible 3D displacement field

create_random_displacement_2d(from_shape, ...)

Creates a random 2D displacement 'exactly' mapping points of two grids

create_random_displacement_3d(from_shape, ...)

Creates a random 3D displacement 'exactly' mapping points of two grids

create_sphere(nslices, nrows, ncols, radius)

Create a binary 3D image where voxel values are 1 iff their distance to the center of the image is less than or equal to radius.

downsample_displacement_field_2d(field)

Down-samples the input 2D vector field by a factor of 2.

downsample_displacement_field_3d(field)

Down-samples the input 3D vector field by a factor of 2

downsample_scalar_field_2d(field)

Down-samples the input 2D image by a factor of 2

downsample_scalar_field_3d(field)

Down-samples the input volume by a factor of 2

gradient(img, img_world2grid, img_spacing, ...)

Gradient of an image in physical space

invert_vector_field_fixed_point_2d(d, ...[, ...])

Computes the inverse of a 2D displacement field

invert_vector_field_fixed_point_3d(d, ...[, ...])

Computes the inverse of a 3D displacement field

is_valid_affine(M, dim)

reorient_vector_field_2d(d, affine)

Linearly transforms all vectors of a 2D displacement field

reorient_vector_field_3d(d, affine)

Linearly transforms all vectors of a 3D displacement field

resample_displacement_field_2d(field, ...)

Resamples a 2D vector field to a custom target shape

resample_displacement_field_3d(field, ...)

Resamples a 3D vector field to a custom target shape

simplify_warp_function_2d(d, affine_idx_in, ...)

Simplifies a nonlinear warping function combined with an affine transform

simplify_warp_function_3d(d, affine_idx_in, ...)

Simplifies a nonlinear warping function combined with an affine transform

sparse_gradient(img, img_world2grid, ...)

Gradient of an image in physical space

transform_2d_affine(image, ref_shape[, affine])

Transforms a 2D image by an affine transform with bilinear interp.

transform_2d_affine_nn(image, ref_shape[, ...])

Transforms a 2D image by an affine transform with NN interpolation

transform_3d_affine(volume, ref_shape, affine)

Transforms a 3D volume by an affine transform with trilinear interp.

transform_3d_affine_nn(volume, ref_shape[, ...])

Transforms a 3D volume by an affine transform with NN interpolation

warp_2d(image, d1[, affine_idx_in, ...])

Warps a 2D image using bilinear interpolation

warp_2d_nn(image, d1[, affine_idx_in, ...])

Warps a 2D image using nearest neighbor interpolation

warp_3d(volume, d1[, affine_idx_in, ...])

Warps a 3D volume using trilinear interpolation

warp_3d_nn(volume, d1[, affine_idx_in, ...])

Warps a 3D volume using nearest-neighbor interpolation

warp_coordinates_2d(points, d1, in2world, ...)

warp_coordinates_3d(points, d1, in2world, ...)

Bunch#

class dipy.align.Bunch(**kwds)[source]#

Bases: object

syn_registration#

dipy.align._public.syn_registration(moving, static, *, moving_affine=None, static_affine=None, step_length=0.25, metric='CC', dim=3, level_iters=None, prealign=None, **metric_kwargs)[source]#

Register a 2D/3D source image (moving) to a 2D/3D target image (static).

Parameters:
moving, static : array or nib.Nifti1Image or str.

Either as a 2D/3D array or as a nifti image object, or as a string containing the full path to a nifti file.

moving_affine, static_affine : 4x4 array, optional.

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

metric : string, optional

The metric to be optimized. One of ‘CC’, ‘EM’ or ‘SSD’. Default: ‘CC’ => CCMetric.

dim: int (either 2 or 3), optional

The dimensions of the image domain. Default: 3

level_iters : list of int, optional

the number of iterations at each level of the Gaussian Pyramid (the length of the list defines the number of pyramid levels to be used). Default: [10, 10, 5].

metric_kwargs : dict, optional

Parameters for initialization of the metric object. If not provided, uses the default settings of each metric.

Returns:
warped_moving : ndarray

The data in moving, warped towards the static data.

forward : ndarray (…, 3)

The vector field describing the forward warping from the source to the target.

backward : ndarray (…, 3)

The vector field describing the backward warping from the target to the source.
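A minimal usage sketch, assuming this function is re-exported from dipy.align like the rest of the Registration API, and using hypothetical file paths; per the Returns section above, three values are unpacked:

    # Sketch: SyN registration of a moving volume onto a static volume.
    from dipy.align import syn_registration

    warped, forward, backward = syn_registration(
        "moving.nii.gz",            # hypothetical path
        "static.nii.gz",            # hypothetical path
        metric="CC",                # the default CCMetric
        level_iters=[10, 10, 5],    # the default pyramid schedule
    )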

register_dwi_to_template#

dipy.align._public.register_dwi_to_template(dwi, gtab, *, dwi_affine=None, template=None, template_affine=None, reg_method='syn', **reg_kwargs)[source]#

Register DWI data to a template through the B0 volumes.

Parameters:
dwi : 4D array, nifti image or str

Containing the DWI data, or full path to a nifti file with DWI.

gtab : GradientTable or sequence of strings

The gradients associated with the DWI data, or a sequence with (fbval, fbvec), full paths to bvals and bvecs files.

dwi_affine : 4x4 array, optional

An affine transformation associated with the DWI. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

template : 3D array, nifti image or str

Containing the data for the template, or full path to a nifti file with the template data.

template_affine : 4x4 array, optional

An affine transformation associated with the template. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

reg_method : str, optional

One of “syn” or “aff”, designating which registration method is used: either syn_registration() or affine_registration(). Default: “syn”.

reg_kwargs : key-word arguments for syn_registration() or affine_registration()

Returns:
warped_b0 : ndarray

b0 volume warped to the template.

mapping : DiffeomorphicMap or ndarray

If reg_method is “syn”, a DiffeomorphicMap class instance that can be used to transform between the two spaces. Otherwise, if reg_method is “aff”, a 4x4 matrix encoding the affine transform.

Notes

This function assumes that the DWI data is already internally registered. See register_dwi_series().
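A short sketch, with hypothetical file paths and assuming re-export from dipy.align:

    # Sketch: register DWI data to a template via the B0 volumes.
    from dipy.align import register_dwi_to_template

    warped_b0, mapping = register_dwi_to_template(
        "dwi.nii.gz",
        ("dwi.bval", "dwi.bvec"),     # full paths to bvals/bvecs
        template="template.nii.gz",
        reg_method="syn",
    )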

write_mapping#

dipy.align._public.write_mapping(mapping, fname)[source]#

Write out a syn registration mapping to a nifti file.

Parameters:
mapping : DiffeomorphicMap

Registration mapping derived from syn_registration()

fname : str

Full path to the nifti file storing the mapping

Notes

The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in data[i, j, k, :, 0] and the backward mapping in each voxel is in data[i, j, k, :, 1].

read_mapping#

dipy.align._public.read_mapping(disp, domain_img, codomain_img, *, prealign=None)[source]#

Read a syn registration mapping from a nifti file.

Parameters:
disp : str or Nifti1Image

A file or image containing the mapping displacement field in each voxel. Shape (x, y, z, 3, 2).

domain_img : str or Nifti1Image

codomain_img : str or Nifti1Image
Returns:
A DiffeomorphicMap object.

Notes

See write_mapping() for the data format expected.
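A round-trip sketch, assuming re-export from dipy.align; `mapping` is a DiffeomorphicMap from syn_registration and the paths are hypothetical:

    # Sketch: save a SyN mapping and load it back.
    from dipy.align import write_mapping, read_mapping

    write_mapping(mapping, "mapping.nii.gz")
    mapping2 = read_mapping("mapping.nii.gz",
                            "static.nii.gz",    # domain image
                            "moving.nii.gz")    # codomain image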

resample#

dipy.align._public.resample(moving, static, *, moving_affine=None, static_affine=None, between_affine=None)[source]#

Resample an image (moving) from one space to another (static).

Parameters:
moving : array, nifti image or str

Containing the data for the moving object, or full path to a nifti file with the moving data.

static : array, nifti image or str

Containing the data for the static object, or full path to a nifti file with the static data.

moving_affine : 4x4 array, optional

An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

static_affine : 4x4 array, optional

An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

between_affine: 4x4 array, optional

If an additional affine is needed between the two spaces. Default: identity (no additional registration).

Returns:
A Nifti1Image class instance with the data from the moving object resampled into the space of the static object.
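A sketch with toy arrays (identity affines assumed for illustration), again assuming re-export from dipy.align:

    # Sketch: resample a moving image onto the static image's grid.
    import numpy as np
    from dipy.align import resample

    moving = np.zeros((32, 32, 32))
    static = np.zeros((64, 64, 64))
    out_img = resample(moving, static,
                       moving_affine=np.eye(4),
                       static_affine=np.eye(4))
    data = out_img.get_fdata()   # data now lives on the static grid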

affine_registration#

dipy.align._public.affine_registration(moving, static, *, moving_affine=None, static_affine=None, pipeline=None, starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)[source]#

Find the affine transformation between two 3D images. Alternatively, find the combination of several linear transformations.

Parameters:
moving : array, nifti image or str

Containing the data for the moving object, or full path to a nifti file with the moving data.

static : array, nifti image or str

Containing the data for the static object, or full path to a nifti file with the static data.

moving_affine : 4x4 array, optional

An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

static_affine : 4x4 array, optional

An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

pipeline : list of str, optional

Sequence of transforms to use in the gradual fitting. Default: gradual fit of the full affine (executed from left to right): ["center_of_mass", "translation", "rigid", "affine"] Alternatively, any other combination of the following registration methods might be used: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine.

starting_affine: 4x4 array, optional

Initial guess for the transformation between the spaces. Default: identity.

metric : str, optional.

Currently only supports ‘MI’ for MutualInformationMetric.

level_iters : sequence, optional

AffineRegistration key-word argument: the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale and level_iters[-1] to the finest. By default, a 3-level scale space with an iteration sequence equal to [10000, 1000, 100] will be used.

sigmas : sequence of floats, optional

AffineRegistration key-word argument: custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].

factors : sequence of floats, optional

AffineRegistration key-word argument: custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].

ret_metric : boolean, optional

Set it to True to return the value of the optimized coefficients and the optimization quality metric.

moving_mask : array, shape (S', R', C') or (R', C'), optional

moving image mask that defines which pixels in the moving image are used to calculate the mutual information.

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

nbins : int, optional

MutualInformationMetric key-word argument: the number of bins to be used for computing the intensity histograms. The default is 32.

sampling_proportion : None or float in interval (0, 1], optional

MutualInformationMetric key-word argument: There are two types of sampling: dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0,1] then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None (dense sampling).

Returns:
resampled : array with the moving data resampled to the static space after computing the affine transformation.

final_affine : the affine 4x4 associated with the transformation.

xopt : the value of the optimized coefficients (returned only if ret_metric is True).

fopt : the value of the optimization quality metric (returned only if ret_metric is True).

Notes

Performs a gradual registration between the two inputs, using a pipeline that gradually approximates the final registration. If the final default step (affine) is omitted, the resulting affine may not have all 12 degrees of freedom adjusted.
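A sketch of a gradual fit, with hypothetical paths and assuming re-export from dipy.align; with the default ret_metric=False, two values are returned:

    # Sketch: multi-stage affine registration.
    from dipy.align import affine_registration

    xformed, affine_mat = affine_registration(
        "moving.nii.gz",
        "static.nii.gz",
        pipeline=["center_of_mass", "translation", "rigid", "affine"],
        level_iters=[10000, 1000, 100],   # the defaults
        sigmas=[3.0, 1.0, 0.0],
        factors=[4, 2, 1],
    )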

center_of_mass#

dipy.align._public.center_of_mass(moving, static, *, moving_affine=None, static_affine=None, pipeline=['center_of_mass'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a center of mass transform. Based on affine_registration().

translation#

dipy.align._public.translation(moving, static, *, moving_affine=None, static_affine=None, pipeline=['translation'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a translation transform. Based on affine_registration().

rigid#

dipy.align._public.rigid(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid transform. Based on affine_registration().

rigid_isoscaling#

dipy.align._public.rigid_isoscaling(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid_isoscaling'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid isoscaling transform. Based on affine_registration().

rigid_scaling#

dipy.align._public.rigid_scaling(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid_scaling'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid scaling transform. Based on affine_registration().

affine#

dipy.align._public.affine(moving, static, *, moving_affine=None, static_affine=None, pipeline=['affine'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements an affine transform. Based on affine_registration().
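Each of these wrappers is affine_registration() with the pipeline fixed to a single step, so they share its parameters and return values. A minimal sketch (hypothetical paths):

    # Sketch: rigid-only alignment via the convenience wrapper.
    from dipy.align import rigid

    xformed, rigid_mat = rigid("moving.nii.gz", "static.nii.gz")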

register_series#

dipy.align._public.register_series(series, ref, *, pipeline=None, series_affine=None, ref_affine=None, static_mask=None)[source]#

Register a series to a reference image.

Parameters:
series : 4D array or nib.Nifti1Image class instance or str

The data is 4D with the last dimension separating different 3D volumes

ref : int or 3D array or nib.Nifti1Image class instance or str

If this is an int, it is the index of the reference image within the series. Otherwise, it is an array of data to register to (in which case a ref_affine is required), or a nifti img, or the full path to a file containing one.

pipeline : sequence, optional

Sequence of transforms to do for each volume in the series. Default: (executed from left to right): [center_of_mass, translation, rigid, affine]

series_affine, ref_affine : 4x4 arrays, optional.

The affine. If provided, this input will over-ride the affine provided together with the nifti img or file.

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xformed, affines : a 4D array with the transformed data, and a (4, 4, n) array with the 4x4 matrices associated with each of the volumes of the input moving data that were used to transform it into register with the static data.
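A brief sketch, with a hypothetical path and assuming re-export from dipy.align:

    # Sketch: register all volumes of a series to its first volume.
    from dipy.align import register_series

    xformed, affines = register_series("series.nii.gz", 0)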

register_dwi_series#

dipy.align._public.register_dwi_series(data, gtab, *, affine=None, b0_ref=0, pipeline=None, static_mask=None)[source]#

Register a DWI series to the mean of the B0 images in that series, which are themselves first registered to the first B0 volume.

Parameters:
data : 4D array or nibabel Nifti1Image class instance or str

Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.

gtab : a GradientTable class instance or tuple of strings

If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).

affine : 4x4 array, optional.

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

b0_ref : int, optional.

Which b0 volume to use as reference. Default: 0

pipeline : list of callables, optional.

The transformations to perform in sequence (from left to right): Default: [center_of_mass, translation, rigid, affine]

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xform_img, affine_array : a Nifti1Image containing the registered data, using the affine of the original data, and a list containing the affine transforms associated with each of the volumes.

motion_correction#

dipy.align._public.motion_correction(data, gtab, *, affine=None, b0_ref=0, pipeline=['center_of_mass', 'translation', 'rigid', 'affine'], static_mask=None)#

Apply a motion correction to a DWI dataset (Between-Volumes Motion correction)

Parameters:
data : 4D array or nibabel Nifti1Image class instance or str

Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.

gtab : a GradientTable class instance or tuple of strings

If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).

affine : 4x4 array, optional.

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

b0_ref : int, optional.

Which b0 volume to use as reference. Default: 0

pipeline : list of callables, optional.

The transformations to perform in sequence (from left to right): Default: [center_of_mass, translation, rigid, affine]

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xform_img, affine_array : a Nifti1Image containing the registered data, using the affine of the original data, and a list containing the affine transforms associated with each of the volumes.
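A sketch with hypothetical paths, assuming re-export from dipy.align:

    # Sketch: between-volumes motion correction of a DWI dataset.
    from dipy.align import motion_correction

    xform_img, affine_array = motion_correction(
        "dwi.nii.gz", ("dwi.bval", "dwi.bvec"), b0_ref=0
    )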

streamline_registration#

dipy.align._public.streamline_registration(moving, static, *, n_points=100, native_resampled=False)[source]#

Register two collections of streamlines (‘bundles’) to each other.

Parameters:
moving, static : lists of 3 by n, or str

The two bundles to be registered. Given either as lists of arrays with 3D coordinates, or strings containing full paths to these files.

n_points : int, optional

How many points to resample to. Default: 100.

native_resampled : bool, optional

Whether to return the moving bundle in the original space, but resampled in the static space to n_points.

Returns:
aligned : list

Streamlines from the moving group, moved to be closely matched to the static group.

matrix : array (4, 4)

The affine transformation that takes us from ‘moving’ to ‘static’
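A toy sketch with synthetic bundles (the translation offset is illustrative), assuming re-export from dipy.align:

    # Sketch: affinely align a shifted copy of a bundle back to the original.
    import numpy as np
    from dipy.align import streamline_registration

    rng = np.random.default_rng(0)
    static_bundle = [np.cumsum(rng.standard_normal((20, 3)), axis=0)
                     for _ in range(10)]
    moving_bundle = [s + np.array([5.0, 0.0, 0.0]) for s in static_bundle]

    aligned, matrix = streamline_registration(moving_bundle, static_bundle,
                                              n_points=100)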

determine_num_threads#

dipy.align.bundlemin.determine_num_threads(num_threads)#

Determine the effective number of threads to be used for OpenMP calls

  • For num_threads = None: if the OMP_NUM_THREADS environment variable is set, return that value; otherwise, return the maximum number of cores retrieved by openmp.omp_get_num_procs().

  • For num_threads > 0, return this value.

  • For num_threads < 0, return the maximal number of threads minus |num_threads + 1|. In particular num_threads = -1 will use as many threads as there are available cores on the machine.

  • For num_threads = 0 a ValueError is raised.

Parameters:
num_threads : int or None

Desired number of threads to be used.
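A small sketch of the conventions above (assuming the function returns the resolved count as an int):

    # Sketch: resolving the effective OpenMP thread count.
    from dipy.align.bundlemin import determine_num_threads

    n = determine_num_threads(-1)   # as many threads as available cores
    # determine_num_threads(0) would raise a ValueError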

distance_matrix_mdf#

dipy.align.bundlemin.distance_matrix_mdf(streamlines_a, streamlines_b)#

Minimum direct flipped distance matrix between two streamline sets

All streamlines need to have the same number of points

Parameters:
streamlines_a : sequence of streamlines as arrays, [(N, 3) .. (N, 3)]

streamlines_b : sequence of streamlines as arrays, [(N, 3) .. (N, 3)]

Returns:
DM : array, shape (len(streamlines_a), len(streamlines_b))

distance matrix
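A toy sketch; every streamline has the same number of points (3 here), as required:

    # Sketch: MDF distance matrix between two small streamline sets.
    import numpy as np
    from dipy.align.bundlemin import distance_matrix_mdf

    bundle_a = [np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], dtype=np.float64)]
    bundle_b = [np.array([[0, 1, 0], [1, 1, 0], [2, 1, 0]], dtype=np.float64),
                np.array([[2, 0, 0], [1, 0, 0], [0, 0, 0]], dtype=np.float64)]

    dm = distance_matrix_mdf(bundle_a, bundle_b)   # shape (1, 2)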

DeformableRegistration#

class dipy.align.cpd.DeformableRegistration(X, Y, *args, sigma2=None, alpha=None, beta=None, low_rank=False, num_eig=100, max_iterations=None, tolerance=None, w=None, **kwargs)[source]#

Bases: object

Deformable point cloud registration.

Attributes:
X: numpy array

NxD array of target points.

Y: numpy array

MxD array of source points.

TY: numpy array

MxD array of transformed source points.

sigma2: float (positive)

Initial variance of the Gaussian mixture model.

N: int

Number of target points.

M: int

Number of source points.

D: int

Dimensionality of source and target points

iteration: int

The current iteration throughout registration.

max_iterations: int

Registration will terminate once the algorithm has taken this many iterations.

tolerance: float (positive)

Registration will terminate once the difference between consecutive objective function values falls within this tolerance.

w: float (between 0 and 1)

Contribution of the uniform distribution to account for outliers. Valid values span 0 (inclusive) and 1 (exclusive).

q: float

The objective function value that represents the misalignment between source and target point clouds.

diff: float (positive)

The absolute difference between the current and previous objective function values.

P: numpy array

MxN array of probabilities. P[m, n] represents the probability that the m-th source point corresponds to the n-th target point.

Pt1: numpy array

Nx1 column array. Multiplication result between the transpose of P and a column vector of all 1s.

P1: numpy array

Mx1 column array. Multiplication result between P and a column vector of all 1s.

Np: float (positive)

The sum of all elements in P.

alpha: float (positive)

Represents the trade-off between the goodness of maximum likelihood fit and regularization.

beta: float (positive)

Width of the Gaussian kernel.

low_rank: bool

Whether to use low rank approximation.

num_eig: int

Number of eigenvectors to use in lowrank calculation.

Methods

expectation()

Compute the expectation step of the EM algorithm.

get_registration_parameters()

Return the current estimate of the deformable transformation parameters.

iterate()

Perform one iteration of the EM algorithm.

maximization()

Compute the maximization step of the EM algorithm.

register(*[, callback])

Perform the EM registration.

transform_point_cloud(*[, Y])

Update a point cloud using the new estimate of the deformable transformation.

update_transform()

Calculate a new estimate of the deformable transformation.

update_variance()

Update the variance of the mixture model.

expectation()[source]#

Compute the expectation step of the EM algorithm.

get_registration_parameters()[source]#

Return the current estimate of the deformable transformation parameters.

Returns:
self.G: numpy array

Gaussian kernel matrix.

self.W: numpy array

Deformable transformation matrix.

iterate()[source]#

Perform one iteration of the EM algorithm.

maximization()[source]#

Compute the maximization step of the EM algorithm.

register(*, callback=<function DeformableRegistration.<lambda>>)[source]#

Perform the EM registration.

Parameters:
callback: function

A function that will be called after each iteration. Can be used to visualize the registration process.

Returns:
self.TY: numpy array

MxD array of transformed source points.

registration_parameters:

The returned parameters depend on the registration method used (see get_registration_parameters()).

transform_point_cloud(*, Y=None)[source]#

Update a point cloud using the new estimate of the deformable transformation.

Parameters:
Y: numpy array, optional

Array of points to transform. Use this to predict on a new set of points not used to run the initial registration. If None, self.Y is used.

Returns:
If Y is None, returns None. Otherwise, returns the transformed Y.

update_transform()[source]#

Calculate a new estimate of the deformable transformation. See Eq. 22 of https://arxiv.org/pdf/0905.2635.pdf.

update_variance()[source]#

Update the variance of the mixture model.

This uses the new estimate of the deformable transformation. See the update rule for sigma2 in Eq. 23 of https://arxiv.org/pdf/0905.2635.pdf.
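A toy sketch: registering a jittered copy of a random point cloud (sizes and noise level are illustrative):

    # Sketch: deformable CPD registration of two point clouds.
    import numpy as np
    from dipy.align.cpd import DeformableRegistration

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 3))                # target, N x D
    Y = X + 0.05 * rng.standard_normal((50, 3))     # source, M x D

    reg = DeformableRegistration(X, Y)
    TY, (G, W) = reg.register()   # transformed source + (G, W) parameters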

gaussian_kernel#

dipy.align.cpd.gaussian_kernel(X, beta, Y=None)[source]#

low_rank_eigen#

dipy.align.cpd.low_rank_eigen(G, num_eig)[source]#

Calculate num_eig eigenvectors and eigenvalues of gaussian matrix G.

Enables lower dimensional solving.

initialize_sigma2#

dipy.align.cpd.initialize_sigma2(X, Y)[source]#

Initialize the variance (sigma2).

Parameters:
X: numpy array

NxD array of points for target.

Y: numpy array

MxD array of points for source.

Returns:
sigma2: float

Initial variance.

lowrankQS#

dipy.align.cpd.lowrankQS(G, beta, num_eig, *, eig_fgt=False)[source]#

Calculate eigenvectors and eigenvalues of gaussian matrix G.

!!! This function is a placeholder for implementing the fast gauss transform. It is not yet implemented. !!!

Parameters:
G: numpy array

Gaussian kernel matrix.

beta: float

Width of the Gaussian kernel.

num_eig: int

Number of eigenvectors to use in lowrank calculation of G

eig_fgt: bool

If True, use fast gauss transform method to speed up.

compute_cc_backward_step_2d#

dipy.align.crosscorr.compute_cc_backward_step_2d(grad_moving, factors, radius)#

Gradient of the CC Metric w.r.t. the backward transformation.

Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) [3] w.r.t. the displacement associated to the static image (‘backward’ step) as in Avants et al. [4].

Parameters:
grad_moving : array, shape (R, C, 2)

the gradient of the moving image

factors : array, shape (R, C, 5)

the precomputed cross correlation terms obtained via precompute_cc_factors_2d

Returns:
out : array, shape (R, C, 2)

the gradient of the cross correlation metric with respect to the displacement associated to the static image

energy : the cross correlation energy (data term) at this iteration


compute_cc_backward_step_3d#

dipy.align.crosscorr.compute_cc_backward_step_3d(grad_moving, factors, radius)#

Gradient of the CC Metric w.r.t. the backward transformation.

Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) [3] w.r.t. the displacement associated to the static volume (‘backward’ step) as in Avants et al. [4].

Parameters:
grad_moving : array, shape (S, R, C, 3)

the gradient of the moving volume

factors : array, shape (S, R, C, 5)

the precomputed cross correlation terms obtained via precompute_cc_factors_3d

radius : int

the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels.

Returns:
out : array, shape (S, R, C, 3)

the gradient of the cross correlation metric with respect to the displacement associated to the static volume

energy : the cross correlation energy (data term) at this iteration


compute_cc_forward_step_2d#

dipy.align.crosscorr.compute_cc_forward_step_2d(grad_static, factors, radius)#

Gradient of the CC Metric w.r.t. the forward transformation.

Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) [3] w.r.t. the displacement associated to the moving image (‘forward’ step) as in Avants et al. [4].

Parameters:
grad_static : array, shape (R, C, 2)

the gradient of the static image

factors : array, shape (R, C, 5)

the precomputed cross correlation terms obtained via precompute_cc_factors_2d

Returns:
out : array, shape (R, C, 2)

the gradient of the cross correlation metric with respect to the displacement associated to the moving image

energy : the cross correlation energy (data term) at this iteration

Notes

Currently, the gradient of the static image is not being used, but some authors suggest that symmetrizing the gradient by including both the moving and static gradients may improve the registration quality. We are leaving this parameter as a placeholder for future investigation.


compute_cc_forward_step_3d#

dipy.align.crosscorr.compute_cc_forward_step_3d(grad_static, factors, radius)#

Gradient of the CC Metric w.r.t. the forward transformation.

Computes the gradient of the Cross Correlation metric for symmetric registration (SyN) [3] w.r.t. the displacement associated to the moving volume (‘forward’ step) as in Avants et al.[4].

Parameters:
grad_static : array, shape (S, R, C, 3)

the gradient of the static volume

factors : array, shape (S, R, C, 5)

the precomputed cross correlation terms obtained via precompute_cc_factors_3d

radius : int

the radius of the neighborhood used for the CC metric when computing the factors. The returned vector field will be zero along a boundary of width radius voxels.

Returns:
out : array, shape (S, R, C, 3)

the gradient of the cross correlation metric with respect to the displacement associated to the moving volume

energy : the cross correlation energy (data term) at this iteration


precompute_cc_factors_2d#

dipy.align.crosscorr.precompute_cc_factors_2d(static, moving, radius)#

Precomputations to quickly compute the gradient of the CC Metric.

Pre-computes the separate terms of the cross correlation metric [3] and image norms at each voxel considering a neighborhood of the given radius to efficiently compute the gradient of the metric with respect to the deformation field [4].

Parameters:
static : array, shape (R, C)

the static volume, which also defines the reference registration domain

moving : array, shape (R, C)

the moving volume (notice that both images must already be in a common reference domain, i.e. the same R, C)

radius : the radius of the neighborhood (square of (2*radius + 1)^2 voxels)
Returns:
factors : array, shape (R, C, 5)

the precomputed cross correlation terms:

- factors[:,:,0] : static minus its mean value along the neighborhood
- factors[:,:,1] : moving minus its mean value along the neighborhood
- factors[:,:,2] : sum of the pointwise products of static and moving
  along the neighborhood
- factors[:,:,3] : sum of sq. values of static along the neighborhood
- factors[:,:,4] : sum of sq. values of moving along the neighborhood


precompute_cc_factors_2d_test#

dipy.align.crosscorr.precompute_cc_factors_2d_test(static, moving, radius)#

Precomputations to quickly compute the gradient of the CC Metric.

This version of precompute_cc_factors_2d is for testing purposes; it directly computes the local cross-correlation without any optimization.

precompute_cc_factors_3d#

dipy.align.crosscorr.precompute_cc_factors_3d(static, moving, radius, num_threads=None)#

Precomputations to quickly compute the gradient of the CC Metric.

Pre-computes the separate terms of the cross correlation metric and image norms at each voxel considering a neighborhood of the given radius to efficiently compute the gradient of the metric with respect to the deformation field [5], [3], [4].

Parameters:
static : array, shape (S, R, C)

the static volume, which also defines the reference registration domain

moving : array, shape (S, R, C)

the moving volume (notice that both images must already be in a common reference domain, i.e. the same S, R, C)

radius : the radius of the neighborhood (cube of (2 * radius + 1)^3 voxels)
Returns:
factors : array, shape (S, R, C, 5)

the precomputed cross correlation terms:

- factors[:,:,:,0] : static minus its mean value along the neighborhood
- factors[:,:,:,1] : moving minus its mean value along the neighborhood
- factors[:,:,:,2] : sum of the pointwise products of static and moving
  along the neighborhood
- factors[:,:,:,3] : sum of sq. values of static along the neighborhood
- factors[:,:,:,4] : sum of sq. values of moving along the neighborhood


precompute_cc_factors_3d_test#

dipy.align.crosscorr.precompute_cc_factors_3d_test(static, moving, radius)#

Precomputations to quickly compute the gradient of the CC Metric.

This version of precompute_cc_factors_3d is for testing purposes; it directly computes the local cross-correlation factors without any optimization, so it is less error-prone than the accelerated version.

compute_em_demons_step_2d#

dipy.align.expectmax.compute_em_demons_step_2d(delta_field, sigma_sq_field, gradient_moving, sigma_sq_x, out)#

Demons step for EM metric in 2D

Computes the demons step [6] for SSD-driven registration (eq. 4 in [6]) using the EM algorithm [7] to handle multi-modality images.

In this case, \(\sigma_i\) in eq. 4 of [6] is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel.

Parameters:
delta_field : array, shape (R, C)

contains, at each pixel, the difference between the moving image (warped under the current deformation s(. , .) ) J and the static image I: delta_field[i,j] = J(s(i,j)) - I(i,j). The order is important, changing to delta_field[i,j] = I(i,j) - J(s(i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the ‘gradient_moving’ passed corresponds to the gradient of the static image

sigma_sq_field : array, shape (R, C)

contains, at each pixel (i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[i,j] (which must have been previously quantized)

gradient_moving : array, shape (R, C, 2)

the gradient of the moving image

sigma_sq_x : float

parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of [6]

out : array, shape (R, C, 2)

the resulting demons step will be written to this array

Returns:
demons_step : array, shape (R, C, 2)

the demons step to be applied for updating the current displacement field

energy : float

the current em energy (before applying the returned demons_step)


compute_em_demons_step_3d#

dipy.align.expectmax.compute_em_demons_step_3d(delta_field, sigma_sq_field, gradient_moving, sigma_sq_x, out)#

Demons step for EM metric in 3D

Computes the demons step [6] for SSD-driven registration (eq. 4 in [6]) using the EM algorithm [7] to handle multi-modality images.

In this case, \(\sigma_i\) in eq. 4 of [6] is estimated using the EM algorithm, while in the original version of diffeomorphic demons it is estimated by the difference between the image values at each pixel.

Parameters:
delta_field : array, shape (S, R, C)

contains, at each pixel, the difference between the moving image (warped under the current deformation s ) J and the static image I: delta_field[k,i,j] = J(s(k,i,j)) - I(k,i,j). The order is important, changing to delta_field[k,i,j] = I(k,i,j) - J(s(k,i,j)) yields the backward demons step warping the static image towards the moving, which may not be the intended behavior unless the ‘gradient_moving’ passed corresponds to the gradient of the static image

sigma_sq_field : array, shape (S, R, C)

contains, at each pixel (k, i, j), the estimated variance (not std) of the hidden variable associated to the intensity at static[k,i,j] (which must have been previously quantized)

gradient_moving : array, shape (S, R, C, 3)

the gradient of the moving image

sigma_sq_x : float

parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of [6].

out : array, shape (S, R, C, 3)

the resulting demons step will be written to this array

Returns:
demons_step : array, shape (S, R, C, 3)

the demons step to be applied for updating the current displacement field

energy : float

the current em energy (before applying the returned demons_step)


compute_masked_class_stats_2d#

dipy.align.expectmax.compute_masked_class_stats_2d(mask, v, num_labels, labels)#

Computes the mean and std. for each quantization level.

Computes the mean and standard deviation of the intensities in ‘v’ for each corresponding label in ‘labels’. In other words, for each label L, it computes the mean and standard deviation of the intensities in ‘v’ at pixels whose label in ‘labels’ is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels.

Parameters:
mask : array, shape (R, C)

the mask of pixels that will be taken into account for computing the statistics. All zero pixels in mask will be ignored

v : array, shape (R, C)

the image which the statistics will be computed from

num_labels : int

the number of different labels in ‘labels’ (equal to the number of hidden variables in the EM metric)

labels : array, shape (R, C)

the label assigned to each pixel

Returns:
means : array, shape (num_labels,)

means[i], 0<=i<num_labels will be the mean intensity in v of all voxels labeled i, or 0 if no voxels are labeled i

variances : array, shape (num_labels,)

variances[i], 0<=i<num_labels will be the variance of the intensities in v of all voxels labeled i, or infinite if less than 2 voxels are labeled i.

compute_masked_class_stats_3d#

dipy.align.expectmax.compute_masked_class_stats_3d(mask, v, num_labels, labels)#

Computes the mean and std. for each quantization level.

Computes the mean and standard deviation of the intensities in ‘v’ for each corresponding label in ‘labels’. In other words, for each label L, it computes the mean and standard deviation of the intensities in ‘v’ at voxels whose label in ‘labels’ is L. This is used by the EM metric to compute statistics for each hidden variable represented by the labels.

Parameters:
mask : array, shape (S, R, C)

the mask of voxels that will be taken into account for computing the statistics. All zero voxels in mask will be ignored

v : array, shape (S, R, C)

the volume which the statistics will be computed from

num_labels : int

the number of different labels in ‘labels’ (equal to the number of hidden variables in the EM metric)

labels : array, shape (S, R, C)

the label assigned to each voxel

Returns:
means : array, shape (num_labels,)

means[i], 0<=i<num_labels will be the mean intensity in v of all voxels labeled i, or 0 if no voxels are labeled i

variances : array, shape (num_labels,)

variances[i], 0<=i<num_labels will be the variance of the intensities in v of all voxels labeled i, or infinite if less than 2 voxels are labeled i.

quantize_positive_2d#

dipy.align.expectmax.quantize_positive_2d(v, num_levels)#

Quantize a 2D image to num_levels quantization levels.

Quantizes the input image at num_levels intensity levels considering <=0 as a special value. Those input pixels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels-1 uniform quantization levels.

The following are undefined, and raise a ValueError:

* Quantizing at zero levels because at least one level must be assigned
* Quantizing at one level because positive values should be assigned a
  level different from the special level 0 (at least 2 levels are needed)
Parameters:
v : array, shape (R, C)

the image to be quantized

num_levels : int

the number of levels

Returns:
out : array, shape (R, C), same shape as v

the quantized image

levels : array, shape (num_levels,)

the quantization values: levels[0]=0, and levels[i] is the mid-point of the interval of intensities that are assigned to quantization level i, i=1, …, num_levels-1.

hist : array, shape (num_levels,)

histogram: the number of pixels that were assigned to each quantization level

quantize_positive_3d#

dipy.align.expectmax.quantize_positive_3d(v, num_levels)#

Quantize a 3D volume to num_levels quantization levels.

Quantize the input volume at num_levels intensity levels considering <=0 as a special value. Those input voxels <=0, and only those, will be assigned a quantization level of 0. The positive values are divided into the remaining num_levels-1 uniform quantization levels.

The following are undefined, and raise a ValueError:

* Quantizing at zero levels because at least one level must be assigned
* Quantizing at one level because positive values should be assigned a
  level different from the special level 0 (at least 2 levels are needed)
Parameters:
v : array, shape (S, R, C)

the volume to be quantized

num_levels : int

the number of levels

Returns:
out : array, shape (S, R, C), same shape as v

the quantized volume

levels : array, shape (num_levels,)

the quantization values: levels[0]=0, and levels[i] is the mid-point of the interval of intensities that are assigned to quantization level i, i=1, …, num_levels-1.

hist : array, shape (num_levels,)

histogram: the number of voxels that were assigned to each quantization level
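A small sketch with a random positive volume (sizes are illustrative):

    # Sketch: quantize a positive 3D volume into 16 levels.
    import numpy as np
    from dipy.align.expectmax import quantize_positive_3d

    v = np.random.rand(8, 8, 8) * 100.0
    out, levels, hist = quantize_positive_3d(v, 16)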

AffineInversionError#

class dipy.align.imaffine.AffineInversionError[source]#

Bases: Exception

Attributes:
args

Methods

add_note

Exception.add_note(note) -- add a note to the exception

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

AffineInvalidValuesError#

class dipy.align.imaffine.AffineInvalidValuesError[source]#

Bases: Exception

Attributes:
args

Methods

add_note

Exception.add_note(note) -- add a note to the exception

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

AffineMap#

class dipy.align.imaffine.AffineMap(affine, *, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)[source]#

Bases: object

Methods

get_affine()

Return the value of the transformation, not a reference.

set_affine(affine)

Set the affine transform (operating in physical space).

transform(image, *[, interpolation, ...])

Transform the input image from co-domain to domain space.

transform_inverse(image, *[, interpolation, ...])

Transform the input image from domain to co-domain space.

get_affine()[source]#

Return the value of the transformation, not a reference.

Returns:
affine : ndarray

Copy of the transform, not a reference.

set_affine(affine)[source]#

Set the affine transform (operating in physical space).

Also sets self.affine_inv - the inverse of affine, or None if there is no inverse.

Parameters:
affine : array, shape (dim + 1, dim + 1)

the matrix representing the affine transform operating in physical space. The domain and co-domain information remains unchanged. If None, then self represents the identity transformation.

transform(image, *, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)[source]#

Transform the input image from co-domain to domain space.

By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image : 2D or 3D array

the image to be transformed

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shape : sequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_only : Boolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform.

Returns:
transformed : array

the transformed image, sampled at the requested grid, with shape sampling_grid_shape or self.codomain_shape.

transform_inverse(image, *, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)[source]#

Transform the input image from domain to co-domain space.

By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image2D or 3D array

the image to be transformed

interpolationstring, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2worldarray, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shapesequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2worldarray, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_onlyBoolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the codomain grid of this transform.

Returns:
transformedarray

the transformed image, sampled at the requested grid, with shape sampling_grid_shape or self.codomain_shape.
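To make these conventions concrete, here is a minimal sketch (synthetic arrays; the 5 mm translation is an illustrative placeholder) that builds an AffineMap and resamples a codomain image onto the domain grid and back:

>>> import numpy as np
>>> from dipy.align.imaffine import AffineMap
>>> static = np.zeros((32, 32, 32))       # defines the domain grid
>>> moving = np.random.rand(34, 30, 32)   # codomain image to be resampled
>>> affine = np.eye(4)
>>> affine[0, 3] = 5.0                    # 5 mm shift along the first axis, in physical space
>>> affine_map = AffineMap(affine,
...                        domain_grid_shape=static.shape,
...                        domain_grid2world=np.eye(4),
...                        codomain_grid_shape=moving.shape,
...                        codomain_grid2world=np.eye(4))
>>> resampled = affine_map.transform(moving)            # sampled on the domain grid
>>> restored = affine_map.transform_inverse(resampled)  # sampled on the codomain grid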

MutualInformationMetric#

class dipy.align.imaffine.MutualInformationMetric(*, nbins=32, sampling_proportion=None)[source]#

Bases: object

Methods

distance(params)

Numeric value of the negative Mutual Information.

distance_and_gradient(params)

Numeric value of the metric and its gradient at given parameters.

gradient(params)

Numeric value of the metric's gradient at the given parameters.

setup(transform, static, moving, *[, ...])

Prepare the metric to compute intensity densities and gradients.

distance(params)[source]#

Numeric value of the negative Mutual Information.

We need to change the sign so we can use standard minimization algorithms.

Parameters:
paramsarray, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mifloat

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

distance_and_gradient(params)[source]#

Numeric value of the metric and its gradient at given parameters.

Parameters:
paramsarray, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mifloat

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

neg_mi_gradarray, shape (n,)

the gradient of the negative Mutual Information

gradient(params)[source]#

Numeric value of the metric’s gradient at the given parameters.

Parameters:
paramsarray, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
gradarray, shape (n,)

the gradient of the negative Mutual Information

setup(transform, static, moving, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, static_mask=None, moving_mask=None)[source]#

Prepare the metric to compute intensity densities and gradients.

The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of static and moving.

Parameters:
transform: instance of Transform

the transformation with respect to whose parameters the gradient must be computed

staticarray, shape (S, R, C) or (R, C)

static image

movingarray, shape (S’, R’, C’) or (R’, C’)

moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.

static_grid2worldarray (dim+1, dim+1), optional

the grid-to-space transform of the static image. The default is None, implying the transform is the identity.

moving_grid2worldarray (dim+1, dim+1)

the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1.

starting_affinearray, shape (dim+1, dim+1), optional

the pre-aligning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no pre-alignment is performed. If a pre-alignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no pre-alignment is performed.

static_maskarray, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

moving_maskarray, shape (S’, R’, C’) or (R’, C’), optional

moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
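The setup/evaluate cycle, as a hedged sketch (random volumes are placeholders; TranslationTransform3D and its get_identity_parameters() method are assumed from the dipy.align.transforms Transform API):

>>> import numpy as np
>>> from dipy.align.imaffine import MutualInformationMetric
>>> from dipy.align.transforms import TranslationTransform3D
>>> rng = np.random.default_rng(0)
>>> static = rng.random((20, 20, 20))
>>> moving = rng.random((20, 20, 20))
>>> transform = TranslationTransform3D()
>>> metric = MutualInformationMetric(nbins=32)  # dense sampling (sampling_proportion=None)
>>> metric.setup(transform, static, moving)
>>> params0 = transform.get_identity_parameters()
>>> neg_mi = metric.distance(params0)       # negative MI at the identity
>>> neg_mi_grad = metric.gradient(params0)  # gradient w.r.t. the 3 translation parameters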

AffineRegistration#

class dipy.align.imaffine.AffineRegistration(*, metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)[source]#

Bases: object

Methods

optimize(static, moving, transform, params0, *)

Start the optimization process.

optimize(static, moving, transform, params0, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False, static_mask=None, moving_mask=None)[source]#

Start the optimization process.

Parameters:
static2D or 3D array

the image to be used as reference during optimization.

moving2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

transforminstance of Transform

the transformation with respect to whose parameters the gradient must be computed

params0array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

static_grid2worldarray, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving_grid2worldarray, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

starting_affinestring, or matrix, or None, optional
If string:

‘mass’: align centers of gravity

‘voxel-origin’: align physical coordinates of voxel (0, 0, 0)

‘centers’: align physical coordinates of central voxels

If matrix:

array, shape (dim+1, dim+1).

If None:

Start from identity.

ret_metricboolean, optional

if True, also returns the quantities measuring the similarity between the images (default False): the optimal transform parameters (xopt) and the value of the similarity metric at those parameters (fopt).

static_maskarray, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

moving_maskarray, shape (S’, R’, C’) or (R’, C’), optional

moving image mask that defines which pixels in the moving image are used to calculate the mutual information.

Returns:
affine_mapinstance of AffineMap

the resulting affine transformation

xopt: optimal parameters

the optimal parameters (translation, rotation, shear, etc.)

fopt: similarity metric

the value of the function at the optimal parameters.
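A typical coarse-to-fine run chains two calls, seeding the second with the first result; a sketch with synthetic data (level_iters, sigmas and factors must have one entry per pyramid level):

>>> import numpy as np
>>> from dipy.align.imaffine import AffineRegistration, MutualInformationMetric
>>> from dipy.align.transforms import TranslationTransform3D, AffineTransform3D
>>> rng = np.random.default_rng(0)
>>> static = rng.random((24, 24, 24))
>>> moving = rng.random((24, 24, 24))
>>> affreg = AffineRegistration(metric=MutualInformationMetric(nbins=32),
...                             level_iters=[100, 10],  # iterations per level
...                             sigmas=[1.0, 0.0],      # smoothing per level
...                             factors=[2, 1])         # subsampling per level
>>> translation = affreg.optimize(static, moving, TranslationTransform3D(),
...                               None, starting_affine='mass')
>>> affine_map = affreg.optimize(static, moving, AffineTransform3D(),
...                              None, starting_affine=translation.affine)
>>> warped = affine_map.transform(moving)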

transform_centers_of_mass#

dipy.align.imaffine.transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the center of mass of the input images.

Parameters:
staticarray, shape (S, R, C)

static image

static_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

movingarray, shape (S, R, C)

moving image

moving_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_mapinstance of AffineMap

the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image

transform_geometric_centers#

dipy.align.imaffine.transform_geometric_centers(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the geometric center of the input images.

By the “geometric center” of a volume we mean the physical coordinates of its central voxel.

Parameters:
staticarray, shape (S, R, C)

static image

static_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

movingarray, shape (S, R, C)

moving image

moving_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_mapinstance of AffineMap

the affine transformation (translation only, in this case) aligning the geometric center of the moving image towards the one of the static image

transform_origins#

dipy.align.imaffine.transform_origins(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the origins of the input images.

By the “origin” of a volume we mean the physical coordinates of voxel (0,0,0).

Parameters:
staticarray, shape (S, R, C)

static image

static_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

movingarray, shape (S, R, C)

moving image

moving_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_mapinstance of AffineMap

the affine transformation (translation only, in this case) aligning the origin of the moving image towards the one of the static image
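All three helpers return an AffineMap whose translation-only affine can seed a subsequent registration, e.g. via the starting_affine argument above; a minimal sketch with synthetic grids:

>>> import numpy as np
>>> from dipy.align.imaffine import transform_centers_of_mass
>>> rng = np.random.default_rng(0)
>>> static = rng.random((20, 20, 20))
>>> moving = rng.random((20, 20, 20))
>>> moving_g2w = np.eye(4)
>>> moving_g2w[:3, 3] = [5.0, 0.0, 0.0]   # moving image shifted 5 mm in world space
>>> c_of_mass = transform_centers_of_mass(static, np.eye(4), moving, moving_g2w)
>>> moved = c_of_mass.transform(moving)   # moving resampled onto the static grid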

DiffeomorphicMap#

class dipy.align.imwarp.DiffeomorphicMap(dim, disp_shape, *, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)[source]#

Bases: object

Methods

allocate()

Creates a zero displacement field

compute_inversion_error()

Inversion error of the displacement fields

expand_fields(expand_factors, new_shape)

Expands the displacement fields from current shape to new_shape

get_backward_field()

Deformation field to transform an image in the backward direction

get_forward_field()

Deformation field to transform an image in the forward direction

get_simplified_transform()

Constructs a simplified version of this Diffeomorphic Map

interpret_matrix(obj)

Try to interpret obj as a matrix

inverse()

Inverse of this DiffeomorphicMap instance

shallow_copy()

Shallow copy of this DiffeomorphicMap instance

transform(image, *[, interpolation, ...])

Warps an image in the forward direction

transform_inverse(image, *[, interpolation, ...])

Warps an image in the backward direction

transform_points(points, *[, coord2world, ...])

Warp the list of points in the forward direction.

transform_points_inverse(points, *[, ...])

Warp the list of points in the backward direction.

warp_endomorphism(phi)

Composition of this DiffeomorphicMap with a given endomorphism

allocate()[source]#

Creates a zero displacement field

Creates a zero displacement field (the identity transformation).

compute_inversion_error()[source]#

Inversion error of the displacement fields

Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.

Returns:
residualarray, shape (R, C) or (S, R, C)

the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism)

statsarray, shape (3,)

statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation

Notes

Since the forward and backward displacement fields have the same discretization, the final composition is given by

comp[i] = forward[i + Dinv * backward[i]]

where Dinv is the space-to-grid transformation of the displacement fields
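As a quick sanity check after registration (sketch; mapping stands for a DiffeomorphicMap produced by an optimizer such as SymmetricDiffeomorphicRegistration below):

>>> residual, stats = mapping.compute_inversion_error()  # doctest: +SKIP
>>> print(f"max={stats[0]:.3g} mean={stats[1]:.3g} std={stats[2]:.3g}")  # doctest: +SKIP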

expand_fields(expand_factors, new_shape)[source]#

Expands the displacement fields from current shape to new_shape

Up-samples the discretization of the displacement fields to be of new_shape shape.

Parameters:
expand_factorsarray, shape (dim,)

the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.

new_shapearray, shape (dim,)

the shape of the arrays holding the up-sampled discretization

get_backward_field()[source]#

Deformation field to transform an image in the backward direction

Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).

get_forward_field()[source]#

Deformation field to transform an image in the forward direction

Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).

get_simplified_transform()[source]#

Constructs a simplified version of this Diffeomorphic Map

The simplified version incorporates the pre-align transform, as well as the domain and codomain affine transforms, into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_grid2world will be None (denoting identity) in the resulting diffeomorphic map.

interpret_matrix(obj)[source]#

Try to interpret obj as a matrix

Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.

Parameters:
objobject

any object

Returns:
objobject

None if obj is None or the ‘identity’ string; the same object if obj is a numpy array.

inverse()[source]#

Inverse of this DiffeomorphicMap instance

Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.

Returns:
invDiffeomorphicMap object

the inverse of this diffeomorphic map.

shallow_copy()[source]#

Shallow copy of this DiffeomorphicMap instance

Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)

Returns:
new_mapDiffeomorphicMap object

the shallow copy of this diffeomorphic map

transform(image, *, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)[source]#

Warps an image in the forward direction

Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).

Parameters:
imagearray, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the forward direction

interpolationstring, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2gridarray, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shapearray, shape (dim,)

the number of slices, rows and columns of the desired warped image

out_grid2world: the transformation bringing voxel coordinates of the warped image to physical space

Returns:
warpedarray, shape = out_shape or self.codomain_shape if None

the warped image under this transformation in the forward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

transform_inverse(image, *, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)[source]#

Warps an image in the backward direction

Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)

Parameters:
imagearray, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the backward direction

interpolationstring, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2gridarray, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shapearray, shape (dim,)

the number of slices, rows, and columns of the desired warped image

out_grid2world: the transformation bringing voxel coordinates of the warped image to physical space

Returns:
warpedarray, shape = out_shape or self.codomain_shape if None

warped image under this transformation in the backward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

transform_points(points, *, coord2world=None, world2coord=None)[source]#

Warp the list of points in the forward direction.

Applies this diffeomorphic map to the list of points (or streamlines) given by points. We assume that the points’ coordinates are mapped to world coordinates by applying the coord2world affine transform. The warped coordinates are given in world coordinates unless world2coord affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system.

Parameters:
pointsarray, shape (N, dim) or Streamlines object
coord2worldarray, shape (dim+1, dim+1), optional

affine matrix mapping points to world coordinates

world2coordarray, shape (dim+1, dim+1), optional

affine matrix mapping world coordinates to points

transform_points_inverse(points, *, coord2world=None, world2coord=None)[source]#

Warp the list of points in the backward direction.

Applies this diffeomorphic map to the list of points (or streamlines) given by points. We assume that the points’ coordinates are mapped to world coordinates by applying the coord2world affine transform. The warped coordinates are given in world coordinates unless world2coord affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system.

Parameters:
pointsarray, shape (N, dim) or Streamlines object
coord2worldarray, shape (dim+1, dim+1), optional

affine matrix mapping points to world coordinates

world2coordarray, shape (dim+1, dim+1), optional

affine matrix mapping world coordinates to points
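A sketch of warping point coordinates (mapping is again assumed to be a fitted DiffeomorphicMap; points are voxel coordinates on a grid with grid-to-world matrix vox2world):

>>> import numpy as np
>>> points = np.array([[10.0, 12.0, 8.0],
...                    [11.0, 12.0, 9.0]])  # (N, 3) voxel coordinates
>>> vox2world = np.eye(4)
>>> warped_world = mapping.transform_points(points, coord2world=vox2world)  # doctest: +SKIP
>>> warped_vox = mapping.transform_points(points, coord2world=vox2world,
...                                       world2coord=np.linalg.inv(vox2world))  # doctest: +SKIP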

warp_endomorphism(phi)[source]#

Composition of this DiffeomorphicMap with a given endomorphism

Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the pre-aligning matrix of phi is None or identity).

Parameters:
phiDiffeomorphicMap object

the endomorphism to be warped by this diffeomorphic map

Returns:
composition: the composition of this diffeomorphic map with the endomorphism given as input

Notes

The problem with our current representation of a DiffeomorphicMap is that the set of diffeomorphisms that can be represented this way (a pre-aligning matrix followed by a non-linear endomorphism given as a displacement field) is not closed under the composition operation.

Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ mid-point algorithm (SyN) are much simpler.

DiffeomorphicRegistration#

class dipy.align.imwarp.DiffeomorphicRegistration(*, metric=None)[source]#

Bases: object

Methods

get_map()

Returns the resulting diffeomorphic map after optimization

optimize()

Starts the metric optimization

set_level_iters(level_iters)

Sets the number of iterations at each pyramid level

abstract get_map()[source]#

Returns the resulting diffeomorphic map after optimization

abstract optimize()[source]#

Starts the metric optimization

This is the main function each specialized class derived from this must implement. Upon completion, the deformation field must be available from the forward transformation model.

set_level_iters(level_iters)[source]#

Sets the number of iterations at each pyramid level

Establishes the maximum number of iterations to be performed at each level of the Gaussian pyramid, similar to ANTS.

Parameters:
level_iterslist

the number of iterations at each level of the Gaussian pyramid. level_iters[0] corresponds to the finest level, level_iters[n-1] the coarsest, where n is the length of the list

SymmetricDiffeomorphicRegistration#

class dipy.align.imwarp.SymmetricDiffeomorphicRegistration(metric, *, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)[source]#

Bases: DiffeomorphicRegistration

Methods

get_map()

Return the resulting diffeomorphic map.

optimize(static, moving, *[, ...])

Starts the optimization

set_level_iters(level_iters)

Sets the number of iterations at each pyramid level

update(current_displacement, ...)

Composition of the current displacement field with the given field

get_map()[source]#

Return the resulting diffeomorphic map.

Returns the DiffeomorphicMap registering the moving image towards the static image.

optimize(static, moving, *, static_grid2world=None, moving_grid2world=None, prealign=None)[source]#

Starts the optimization

Parameters:
staticarray, shape (S, R, C) or (R, C)

the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.

movingarray, shape (S, R, C) or (R, C)

the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix

static_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the static image

moving_grid2worldarray, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the moving image

prealignarray, shape (dim+1, dim+1)

the affine transformation (operating on the physical space) pre-aligning the moving image towards the static

Returns:
static_to_refDiffeomorphicMap object

the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).
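Putting the pieces together, a minimal SyN sketch using the CCMetric defined below in align.metrics (synthetic volumes; in practice prealign would typically be affine_map.affine from an earlier affine registration):

>>> import numpy as np
>>> from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
>>> from dipy.align.metrics import CCMetric
>>> rng = np.random.default_rng(0)
>>> static = rng.random((32, 32, 32))
>>> moving = rng.random((32, 32, 32))
>>> sdr = SymmetricDiffeomorphicRegistration(CCMetric(3), level_iters=[10, 5])
>>> mapping = sdr.optimize(static, moving,
...                        static_grid2world=np.eye(4),
...                        moving_grid2world=np.eye(4),
...                        prealign=None)
>>> warped_moving = mapping.transform(moving)           # moving -> static space
>>> warped_static = mapping.transform_inverse(static)   # static -> moving space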

update(current_displacement, new_displacement, disp_world2grid, time_scaling)[source]#

Composition of the current displacement field with the given field

Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).

Parameters:
current_displacementarray, shape (R’, C’, 2) or (S’, R’, C’, 3)

the displacement field defining where to interpolate new_displacement

new_displacementarray, shape (R, C, 2) or (S, R, C, 3)

the displacement field to be warped by current_displacement

disp_world2gridarray, shape (dim+1, dim+1)

the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)

time_scalingfloat

scaling factor applied to new_displacement. The effect may be interpreted as moving current_displacement along a fraction (time_scaling) of new_displacement.

Returns:
updatedarray, shape (the same as new_displacement)

the warped displacement field

mean_norm: the mean norm of all vectors in current_displacement

mult_aff#

dipy.align.imwarp.mult_aff(A, B)[source]#

Returns the matrix product A.dot(B) considering None as the identity

Parameters:
Aarray, shape (n,k)
Barray, shape (k,m)
Returns:
The matrix product A.dot(B). If either input matrix is None, it is treated as the identity matrix. If both matrices are None, None is returned.
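The None-as-identity convention in a short sketch:

>>> import numpy as np
>>> from dipy.align.imwarp import mult_aff
>>> A = np.diag([2.0, 2.0, 2.0, 1.0])
>>> B = np.eye(4)
>>> mult_aff(A, B).shape    # ordinary product A.dot(B)
(4, 4)
>>> mult_aff(A, None) is A  # None acts as the identity
True
>>> mult_aff(None, None) is None
True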

get_direction_and_spacings#

dipy.align.imwarp.get_direction_and_spacings(affine, dim)[source]#

Extracts the rotational and spacing components from a matrix

Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis; that is what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to re-orient the gradients to be given in physical space coordinates.

Parameters:
affinearray, shape (k, k), k = 3, 4

the matrix transforming grid coordinates to physical space.

Returns:
directionarray, shape (k-1, k-1)

the rotational component of the input matrix

spacingsarray, shape (k-1,)

the scaling component (voxel size) of the matrix
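For an axis-aligned scaling matrix the decomposition is easy to verify (sketch):

>>> import numpy as np
>>> from dipy.align.imwarp import get_direction_and_spacings
>>> affine = np.diag([2.0, 3.0, 4.0, 1.0])  # anisotropic voxels, no rotation
>>> direction, spacings = get_direction_and_spacings(affine, 3)
>>> # spacings -> the per-axis voxel sizes [2., 3., 4.];
>>> # direction -> the rotational part (identity orientation here)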

SimilarityMetric#

class dipy.align.metrics.SimilarityMetric(dim)[source]#

Bases: object

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_forward()

Computes one step bringing the moving image towards the static.

free_iteration()

Releases the resources no longer needed by the metric

get_energy()

Numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

abstract compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the backward update field to register the static image towards the moving image in a gradient-based optimization algorithm

abstract compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm

abstract free_iteration()[source]#

Releases the resources no longer needed by the metric

This method is called by the RegistrationOptimizer after the required iterations have been computed (forward and / or backward) so that the SimilarityMetric can safely delete any data it computed as part of the initialization

abstract get_energy()[source]#

Numerical value assigned by this metric to the current image pair

Must return the numeric value of the similarity between the given static and moving images

abstract initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

This method will be called before any compute_forward or compute_backward call; this allows the metric to pre-compute any useful information for speeding up the update computations. This initialization was needed in ANTS because the updates are called once per voxel. In Python this is impractical, though.

set_levels_above(levels)[source]#

Informs the metric how many pyramid levels are above the current one

Informs this metric of the number of pyramid levels above the current one. The metric may change its behavior (e.g. the number of inner iterations) accordingly.

Parameters:
levelsint

the number of levels above the current Gaussian Pyramid level

set_levels_below(levels)[source]#

Informs the metric how many pyramid levels are below the current one

Informs this metric of the number of pyramid levels below the current one. The metric may change its behavior (e.g. the number of inner iterations) accordingly.

Parameters:
levelsint

the number of levels below the current Gaussian Pyramid level

set_moving_image(moving_image, moving_affine, moving_spacing, moving_direction)[source]#

Sets the moving image being compared against the static one.

Sets the moving image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations

Parameters:
moving_imagearray, shape (R, C) or (S, R, C)

the moving image

set_static_image(static_image, static_affine, static_spacing, static_direction)[source]#

Sets the static image being compared against the moving one.

Sets the static image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations

Parameters:
static_imagearray, shape (R, C) or (S, R, C)

the static image

use_moving_image_dynamics(original_moving_image, transformation)[source]#

This is called by the optimizer just after setting the moving image

This method allows the metric to compute any useful information from knowing how the current moving image was generated (as the transformation of an original moving image). This method is called by the optimizer just after it sets the moving image. Transformation will be an instance of DiffeomorphicMap or None if the original_moving_image equals self.moving_image.

Parameters:
original_moving_imagearray, shape (R, C) or (S, R, C)

original image from which the current moving image was generated

transformationDiffeomorphicMap object

the transformation that was applied to the original image to generate the current moving image

use_static_image_dynamics(original_static_image, transformation)[source]#

This is called by the optimizer just after setting the static image.

This method allows the metric to compute any useful information from knowing how the current static image was generated (as the transformation of an original static image). This method is called by the optimizer just after it sets the static image. Transformation will be an instance of DiffeomorphicMap or None if the original_static_image equals self.static_image.

Parameters:
original_static_imagearray, shape (R, C) or (S, R, C)

original image from which the current static image was generated

transformationDiffeomorphicMap object

the transformation that was applied to the original image to generate the current static image

CCMetric#

class dipy.align.metrics.CCMetric(dim, *, sigma_diff=2.0, radius=4)[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_forward()

Computes one step bringing the moving image towards the static.

free_iteration()

Frees the resources allocated during initialization

get_energy()

Numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

free_iteration()[source]#

Frees the resources allocated during initialization

get_energy()[source]#

Numerical value assigned by this metric to the current image pair

Returns the Cross Correlation (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also pre-computes the image gradients in the physical space by re-orienting the gradients in the voxel space using the corresponding affine transformations.

EMMetric#

class dipy.align.metrics.EMMetric(dim, *, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_demons_step(*[, forward_step])

Demons step for EM metric

compute_forward()

Computes one step bringing the moving image towards the static.

compute_gauss_newton_step(*[, forward_step])

Computes the Gauss-Newton energy minimization step

free_iteration()

Frees the resources allocated during initialization

get_energy()

The numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image.

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_demons_step(*, forward_step=True)[source]#

Demons step for EM metric

Parameters:
forward_stepboolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Demons step

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm

compute_gauss_newton_step(*, forward_step=True)[source]#

Computes the Gauss-Newton energy minimization step

Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by Bruhn and Weickert[8].

Parameters:
forward_stepboolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Newton step

References

free_iteration()[source]#

Frees the resources allocated during initialization

get_energy()[source]#

The numerical value assigned by this metric to the current image pair

Returns the EM (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.

use_moving_image_dynamics(original_moving_image, transformation)[source]#

This is called by the optimizer just after setting the moving image.

EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)

Parameters:
original_moving_imagearray, shape (R, C) or (S, R, C)

the original moving image from which the current moving image was generated, the current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.

transformationDiffeomorphicMap object

the transformation that was applied to the original_moving_image to generate the current moving image

use_static_image_dynamics(original_static_image, transformation)[source]#

This is called by the optimizer just after setting the static image.

EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)

Parameters:
original_static_imagearray, shape (R, C) or (S, R, C)

the original static image from which the current static image was generated, the current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).

transformationDiffeomorphicMap object

the transformation that was applied to the original_static_image to generate the current static image

SSDMetric#

class dipy.align.metrics.SSDMetric(dim, *, smooth=4, inner_iter=10, step_type='demons')[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_demons_step(*[, forward_step])

Demons step for SSD metric

compute_forward()

Computes one step bringing the moving image towards the static.

compute_gauss_newton_step(*[, forward_step])

Computes the Gauss-Newton energy minimization step

free_iteration()

Nothing to free for the SSD metric

get_energy()

The numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the updated displacement field to be used for registration of the static image towards the moving image

compute_demons_step(*, forward_step=True)[source]#

Demons step for SSD metric

Computes the demons step proposed by Vercauteren et al.[6] for the SSD metric.

Parameters:
forward_stepboolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Demons step

References

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

compute_gauss_newton_step(*, forward_step=True)[source]#

Computes the Gauss-Newton energy minimization step

Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.

Parameters:
forward_stepboolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape = static_image.shape + (3,)

if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step

free_iteration()[source]#

Nothing to free for the SSD metric

get_energy()[source]#

The numerical value assigned by this metric to the current image pair

Returns the Sum of Squared Differences (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the gradient of the input images to be used in the computation of the forward and backward steps.

v_cycle_2d#

dipy.align.metrics.v_cycle_2d(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0)[source]#

Multi-resolution Gauss-Seidel solver using V-type cycles

Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert[8].

Parameters:
nint

number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)

kint

the number of iterations at each multi-resolution level

delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigma_sq_fieldarray, shape (R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (R, C, 2)

the gradient of the moving image

targetarray, shape (R, C, 2)

right-hand side of the linear system to be solved in Bruhn and Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter, the larger its value the smoother the displacement field

displacementarray, shape (R, C, 2)

the displacement field to start the optimization from

Returns:
energy: the energy of the EM (or SSD, if sigma_sq_field[…] == 1) metric at this iteration

References

v_cycle_3d#

dipy.align.metrics.v_cycle_3d(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0)[source]#

Multi-resolution Gauss-Seidel solver using V-type cycles

Multi-resolution Gauss-Seidel solver: solves the linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert[8].

Parameters:
nint

number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)

kint

the number of iterations at each multi-resolution level

delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigma_sq_fieldarray, shape (S, R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (S, R, C, 3)

the gradient of the moving image

targetarray, shape (S, R, C, 3)

right-hand side of the linear system to be solved in Bruhn and Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter, the larger its value the smoother the displacement field

displacementarray, shape (S, R, C, 3)

the displacement field to start the optimization from

Returns:
energy: the energy of the EM (or SSD, if sigma_sq_field[…] == 1) metric at this iteration

References

ParzenJointHistogram#

class dipy.align.parzenhist.ParzenJointHistogram(nbins)#

Bases: object

Methods

bin_index(xnorm)

Bin index associated with the given normalized intensity

bin_normalize_moving(x)

Maps intensity x to the range covered by the moving histogram

bin_normalize_static(x)

Maps intensity x to the range covered by the static histogram

setup(static, moving[, smask, mmask])

Compute histogram settings to store the PDF of input images

update_gradient_dense(theta, transform, ...)

Computes the Gradient of the joint PDF w.r.t.

update_gradient_sparse(theta, transform, ...)

Computes the Gradient of the joint PDF w.r.t.

update_pdfs_dense(static, moving[, smask, mmask])

Computes the Probability Density Functions of two images

update_pdfs_sparse(sval, mval)

Computes the Probability Density Functions from a set of samples

bin_index(xnorm)#

Bin index associated with the given normalized intensity

The return value is an integer in [padding, nbins - 1 - padding]

Parameters:
xnormfloat

intensity value normalized to the range covered by the histogram

Returns:
binint

the bin index associated with the given normalized intensity

bin_normalize_moving(x)#

Maps intensity x to the range covered by the moving histogram

If the input intensity is in [self.mmin, self.mmax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]

Parameters:
xfloat

the intensity to be normalized

Returns:
xnormfloat

normalized intensity to the range covered by the moving histogram

bin_normalize_static(x)#

Maps intensity x to the range covered by the static histogram

If the input intensity is in [self.smin, self.smax] then the normalized intensity will be in [self.padding, self.nbins - self.padding]

Parameters:
xfloat

the intensity to be normalized

Returns:
xnormfloat

normalized intensity to the range covered by the static histogram

setup(static, moving, smask=None, mmask=None)#

Compute histogram settings to store the PDF of input images

Parameters:
staticarray

static image

movingarray

moving image

smaskarray

mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to smask=ones_like(static)

mmaskarray

mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, the behaviour is equivalent to mmask=ones_like(moving)

update_gradient_dense(theta, transform, static, moving, grid2world, mgradient, smask=None, mmask=None)#

Computes the Gradient of the joint PDF w.r.t. transform parameters

Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.

The gradient is stored in self.joint_grad.

Parameters:
thetaarray, shape (n,)

parameters of the transformation to compute the gradient from

transforminstance of Transform

the transformation with respect to whose parameters the gradient must be computed

staticarray, shape (S, R, C)

static image

movingarray, shape (S, R, C)

moving image

grid2worldarray, shape (4, 4)

we assume that both images have already been sampled at a common grid. This transform must map voxel coordinates of this common grid to physical coordinates of its corresponding voxel in the moving image. For example, if the moving image was sampled on the static image’s grid (this is the typical setting) using an aligning matrix A, then

grid2world = A.dot(static_affine)

where static_affine is the transformation mapping the static image’s grid coordinates to physical space.

mgradientarray, shape (S, R, C, 3)

the gradient of the moving image

smaskarray, shape (S, R, C), optional

mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.

mmaskarray, shape (S, R, C), optional

mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). The default is None, indicating all voxels are considered.

update_gradient_sparse(theta, transform, sval, mval, sample_points, mgradient)#

Computes the Gradient of the joint PDF w.r.t. transform parameters

Computes the vector of partial derivatives of the joint histogram w.r.t. each transformation parameter.

The lists of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting gradient corresponds to the paired intensities according to the alignment at the moment the images were sampled.

The gradient is stored in self.joint_grad.

Parameters:
thetaarray, shape (n,)

parameters to compute the gradient at

transforminstance of Transform

the transformation with respect to whose parameters the gradient must be computed

svalarray, shape (m,)

sampled intensities from the static image at sample_points

mvalarray, shape (m,)

sampled intensities from the moving image at sample_points

sample_pointsarray, shape (m, 3)

coordinates (in physical space) of the points the images were sampled at

mgradientarray, shape (m, 3)

the gradient of the moving image at the sample points

update_pdfs_dense(static, moving, smask=None, mmask=None)#

Computes the Probability Density Functions of two images

The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.

Parameters:
staticarray, shape (S, R, C)

static image

movingarray, shape (S, R, C)

moving image

smaskarray, shape (S, R, C)

mask of static object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(static) is used as mask.

mmaskarray, shape (S, R, C)

mask of moving object being registered (a binary array with 1’s inside the object of interest and 0’s along the background). If None, ones_like(moving) is used as mask.

update_pdfs_sparse(sval, mval)#

Computes the Probability Density Functions from a set of samples

The lists of intensities sval and mval are assumed to be sampled from the static and moving images, respectively, at the same physical points. Of course, the images may not be perfectly aligned at the moment the sampling was performed. The resulting distributions correspond to the paired intensities according to the alignment at the moment the images were sampled.

The joint PDF is stored in self.joint. The marginal distributions corresponding to the static and moving images are computed and stored in self.smarginal and self.mmarginal, respectively.

Parameters:
svalarray, shape (n,)

sampled intensities from the static image at the sampled points

mvalarray, shape (n,)

sampled intensities from the moving image at the sampled points
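A sketch of the dense workflow (synthetic float64 volumes, which the underlying Cython routines expect; the joint and marginal distributions land on the attributes named above):

>>> import numpy as np
>>> from dipy.align.parzenhist import ParzenJointHistogram
>>> rng = np.random.default_rng(0)
>>> static = rng.random((16, 16, 16)).astype(np.float64)
>>> moving = rng.random((16, 16, 16)).astype(np.float64)
>>> hist = ParzenJointHistogram(32)
>>> hist.setup(static, moving)
>>> hist.update_pdfs_dense(static, moving)
>>> hist.joint.shape   # joint PDF over 32 x 32 intensity bins
(32, 32)
>>> hist.smarginal.shape, hist.mmarginal.shape
((32,), (32,))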

compute_parzen_mi#

dipy.align.parzenhist.compute_parzen_mi(joint, joint_gradient, smarginal, mmarginal, mi_gradient)#

Computes the mutual information and its gradient (if requested)

Parameters:
jointarray, shape (nbins, nbins)

the joint intensity distribution

joint_gradientarray, shape (nbins, nbins, n)

the gradient of the joint distribution w.r.t. the transformation parameters

smarginalarray, shape (nbins,)

the marginal intensity distribution of the static image

mmarginalarray, shape (nbins,)

the marginal intensity distribution of the moving image

mi_gradientarray, shape (n,)

the buffer in which to write the gradient of the mutual information. If None, the gradient is not computed
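For reference, the quantity computed from these distributions is the standard mutual information; writing $P$ for joint, $P_s$ for smarginal and $P_m$ for mmarginal:

$$ \mathrm{MI} = \sum_{i,j} P(i,j)\,\log\frac{P(i,j)}{P_s(i)\,P_m(j)} $$

The gradient written into mi_gradient is obtained from joint_gradient via the chain rule.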

cubic_spline#

dipy.align.parzenhist.cubic_spline(x)#

Evaluates the cubic spline at a set of values

Parameters:
xarray, shape (n)

input values

cubic_spline_derivative#

dipy.align.parzenhist.cubic_spline_derivative(x)#

Evaluates the cubic spline derivative at a set of values

Parameters:
xarray, shape (n)

input values

sample_domain_regular#

dipy.align.parzenhist.sample_domain_regular(k, shape, grid2world, sigma=0.25, rng=None)#

Take floor(total_voxels/k) samples from a (2D or 3D) grid

The sampling is made by taking all pixels whose index (in lexicographical order) is a multiple of k. Each selected point is slightly perturbed by adding a realization of a normally distributed random variable and then mapped to physical space by the given grid-to-space transform.

The lexicographical order of the pixels in a grid of shape (a, b, c) is defined by assigning to each voxel position (i, j, k) the integer index

F((i, j, k)) = i * (b * c) + j * (c) + k

and sorting increasingly by this index.

Parameters:
kint

the sampling rate, as described before

shapearray, shape (dim,)

the shape of the grid to be sampled

grid2worldarray, shape (dim+1, dim+1)

the grid-to-space transform

sigmafloat

the standard deviation of the Normal random distortion to be applied to the sampled points

Returns:
samplesarray, shape (total_pixels//k, dim)

the matrix whose rows are the sampled points

Examples

>>> import numpy as np
>>> from dipy.align.parzenhist import sample_domain_regular
>>> shape = np.array((10, 10), dtype=np.int32)
>>> sigma = 0
>>> dim = len(shape)
>>> grid2world = np.eye(dim+1)
>>> n = shape[0]*shape[1]
>>> k = 2
>>> samples = sample_domain_regular(k, shape, grid2world, sigma)
>>> (samples.shape[0], samples.shape[1]) == (n//k, dim)
True
>>> isamples = np.array(samples, dtype=np.int32)
>>> indices = (isamples[:, 0] * shape[1] + isamples[:, 1])
>>> len(set(indices)) == len(indices)
True
>>> (indices%k).sum()
0

reslice#

dipy.align.reslice.reslice(data, affine, zooms, new_zooms, *, order=1, mode='constant', cval=0, num_processes=1)[source]#

Reslice data with new voxel resolution defined by new_zooms.

Parameters:
dataarray, shape (I,J,K) or (I,J,K,N)

3d volume or 4d volume with datasets

affinearray, shape (4,4)

mapping from voxel coordinates to world coordinates

zoomstuple, shape (3,)

voxel size for (i,j,k) dimensions

new_zoomstuple, shape (3,)

new voxel size for (i,j,k) after resampling

orderint, from 0 to 5

order of interpolation for resampling/reslicing: 0 for nearest-neighbor interpolation, 1 for trilinear, and so on. If you don’t want any smoothing, 0 is the option you need.

modestring (‘constant’, ‘nearest’, ‘reflect’ or ‘wrap’)

Points outside the boundaries of the input are filled according to the given mode.

cvalfloat

Value used for points outside the boundaries of the input if mode=’constant’.

num_processesint, optional

Split the calculation to a pool of children processes. This only applies to 4D data arrays. Default is 1. If < 0 the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

Returns:
data2array, shape (I,J,K) or (I,J,K,N)

data resampled to the new voxel size

affine2array, shape (4,4)

new affine for the resampled image

Examples

>>> from dipy.io.image import load_nifti
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> f_name = get_fnames(name="aniso_vox")
>>> data, affine, zooms = load_nifti(f_name, return_voxsize=True)
>>> data.shape == (58, 58, 24)
True
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True

ScaleSpace#

class dipy.align.scalespace.ScaleSpace(image, num_levels, *, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)[source]#

Bases: object

Methods

get_affine(level)

Voxel-to-space transformation at a given level.

get_affine_inv(level)

Space-to-voxel transformation at a given level.

get_domain_shape(level)

Shape the sub-sampled image must have at a particular level.

get_expand_factors(from_level, to_level)

Ratio of voxel size from pyramid level from_level to to_level.

get_image(level)

Smoothed image at a given level.

get_scaling(level)

Adjustment factor for input-spacing to reflect voxel sizes at level.

get_sigmas(level)

Smoothing parameters used at a given level.

get_spacing(level)

Spacings the sub-sampled image must have at a particular level.

print_level(level)

Prints properties of a pyramid level.

get_affine(level)[source]#

Voxel-to-space transformation at a given level.

Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get affine transform from

Returns:
the affine (voxel-to-space) transform at the requested resolution
or None if an invalid level was requested
get_affine_inv(level)[source]#

Space-to-voxel transformation at a given level.

Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the inverse transform from

Returns:
the inverse (space-to-voxel) transform at the requested resolution or
None if an invalid level was requested
get_domain_shape(level)[source]#

Shape the sub-sampled image must have at a particular level.

Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the sub-sampled shape from

Returns:
the sub-sampled shape at the requested resolution or None if an
invalid level was requested
get_expand_factors(from_level, to_level)[source]#

Ratio of voxel size from pyramid level from_level to to_level.

Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).

Parameters:
from_levelint, 0 <= from_level < L, (L = number of resolutions)

the resolution to expand voxels from

to_levelint, 0 <= to_level < from_level

the resolution to expand voxels to

Returns:
factorsarray, shape (k,), k = 2, 3

the expand factors (a scalar for each voxel dimension)

get_image(level)[source]#

Smoothed image at a given level.

Returns the smoothed image at the requested level in the Scale Space.

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the smooth image from

Returns:
the smooth image at the requested resolution or None if an invalid
level was requested
get_scaling(level)[source]#

Adjustment factor for input-spacing to reflect voxel sizes at level.

Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the scalings from

Returns:
the scaling factors from the original spacing to the spacings at the
requested level
get_sigmas(level)[source]#

Smoothing parameters used at a given level.

Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the smoothing parameters from

Returns:
the smoothing parameters at the requested level
get_spacing(level)[source]#

Spacings the sub-sampled image must have at a particular level.

Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to get the sub-sampled shape from

Returns:
the spacings (voxel sizes) at the requested resolution or None if an
invalid level was requested
print_level(level)[source]#

Prints properties of a pyramid level.

Prints the properties of a level of this scale space to standard output

Parameters:
levelint, 0 <= level < L, (L = number of resolutions)

the scale space level to be printed
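
As a usage illustration (a sketch under assumed defaults, not from the original docs), one can build a scale space for a 3D volume and query the properties of its coarsest level:

import numpy as np
from dipy.align.scalespace import ScaleSpace

image = np.random.rand(64, 64, 40)            # synthetic 3D volume
ss = ScaleSpace(image, 3, image_grid2world=np.eye(4),
                input_spacing=np.ones(3))
print(ss.get_domain_shape(2))  # shape the level-2 image must have
print(ss.get_spacing(2))       # voxel sizes at level 2
print(ss.get_sigmas(2))        # smoothing parameters used at level 2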

IsotropicScaleSpace#

class dipy.align.scalespace.IsotropicScaleSpace(image, factors, sigmas, *, image_grid2world=None, input_spacing=None, mask0=False)[source]#

Bases: ScaleSpace

Methods

get_affine(level)

Voxel-to-space transformation at a given level.

get_affine_inv(level)

Space-to-voxel transformation at a given level.

get_domain_shape(level)

Shape the sub-sampled image must have at a particular level.

get_expand_factors(from_level, to_level)

Ratio of voxel size from pyramid level from_level to to_level.

get_image(level)

Smoothed image at a given level.

get_scaling(level)

Adjustment factor for input-spacing to reflect voxel sizes at level.

get_sigmas(level)

Smoothing parameters used at a given level.

get_spacing(level)

Spacings the sub-sampled image must have at a particular level.

print_level(level)

Prints properties of a pyramid level.

StreamlineDistanceMetric#

class dipy.align.streamlinear.StreamlineDistanceMetric(*, num_threads=None)[source]#

Bases: object

Methods

distance(xopt)

calculate distance for current set of parameters.

setup

abstract distance(xopt)[source]#

calculate distance for current set of parameters.

abstract setup(static, moving)[source]#

BundleMinDistanceMetric#

class dipy.align.streamlinear.BundleMinDistanceMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance aka BMD.

This is the cost function used by the StreamlineLinearRegistration.

See [9] for further details about the metric.

Methods

setup(static, moving)

distance(xopt)

References

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector.

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

Notes

Call this after the object is instantiated and before distance.
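
A short usage sketch (illustrative, not from the original docstring): set the metric up and evaluate the cost at the identity rigid transform, a 6-parameter zero vector.

import numpy as np
from dipy.align.streamlinear import BundleMinDistanceMetric

rng = np.random.default_rng(0)
static = [rng.random((20, 3)) * 10 for _ in range(15)]   # 20 points per line
moving = [s + np.array([2.0, 0.0, 0.0]) for s in static]

metric = BundleMinDistanceMetric()
metric.setup(static, moving)
print(metric.distance(np.zeros(6)))  # BMD cost at the identity transform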

BundleMinDistanceMatrixMetric#

class dipy.align.streamlinear.BundleMinDistanceMatrixMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance aka BMD

This is the cost function used by the StreamlineLinearRegistration

Methods

setup(static, moving)

distance(xopt)

Notes

The difference with BundleMinDistanceMetric is that this creates the entire distance matrix and therefore requires more memory.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

Notes

Call this after the object is instantiated and before distance.

num_threads is not used in this class. Use BundleMinDistanceMetric for a faster, threaded and less memory-hungry metric.

BundleMinDistanceAsymmetricMetric#

class dipy.align.streamlinear.BundleMinDistanceAsymmetricMetric(*, num_threads=None)[source]#

Bases: BundleMinDistanceMetric

Asymmetric Bundle-based Minimum distance.

This is a cost function that can be used by the StreamlineLinearRegistration class.

Methods

distance(xopt)

Distance calculated from this Metric.

setup(static, moving)

Setup static and moving sets of streamlines.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector

BundleSumDistanceMatrixMetric#

class dipy.align.streamlinear.BundleSumDistanceMatrixMetric(*, num_threads=None)[source]#

Bases: BundleMinDistanceMatrixMetric

Bundle-based Sum Distance.

This is a cost function that can be used by the StreamlineLinearRegistration class.

Methods

setup(static, moving)

distance(xopt)

Notes

The difference with BundleMinDistanceMatrixMetric is that it uses the sum of the distance matrix and not the sum of mins.

distance(xopt)[source]#

Distance calculated from this Metric

Parameters:
xoptsequence

List of affine parameters as a 1D vector

JointBundleMinDistanceMetric#

class dipy.align.streamlinear.JointBundleMinDistanceMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance for joint optimization.

This cost function is used by the StreamlineLinearRegistration class when running halfway streamline linear registration for unbiased groupwise bundle registration and atlasing.

It computes the BMD distance after moving both static and moving bundles to a halfway space in between both.

Methods

setup(static, moving)

distance(xopt)

Notes

In this metric both static and moving bundles are treated equally (i.e., there is no static reference bundle as both are intended to move). The naming convention is kept for consistency.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector. These affine parameters are used to derive the corresponding halfway transformation parameters for each bundle.

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Set of streamlines

movingstreamlines

Set of streamlines

Notes

Call this after the object is instantiated and before distance. num_threads is not used in this class.

StreamlineLinearRegistration#

class dipy.align.streamlinear.StreamlineLinearRegistration(*, metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)[source]#

Bases: object

Methods

optimize(static, moving, *[, mat])

Find the minimum of the provided metric.

optimize(static, moving, *, mat=None)[source]#

Find the minimum of the provided metric.

Parameters:
staticstreamlines

Reference or fixed set of streamlines.

movingstreamlines

Moving set of streamlines.

matarray

Transformation (4, 4) matrix to start the registration from. mat is applied to moving. Default is None, which means that the initial transformation is generated by shifting the centers of the moving and static streamline sets to the origin.

Returns:
mapStreamlineRegistrationMap

StreamlineRegistrationMap#

class dipy.align.streamlinear.StreamlineRegistrationMap(matopt, xopt, fopt, matopt_history, funcs, iterations)[source]#

Bases: object

Methods

transform(moving)

Transform moving streamlines to the static.

transform(moving)[source]#

Transform moving streamlines to the static.

Parameters:
movingstreamlines
Returns:
movedstreamlines

Notes

All this does is apply self.matrix to the input streamlines.

JointStreamlineRegistrationMap#

class dipy.align.streamlinear.JointStreamlineRegistrationMap(xopt, fopt, matopt_history, funcs, iterations)[source]#

Bases: object

Methods

transform(static, moving)

Transform both static and moving bundles to the halfway space.

transform(static, moving)[source]#

Transform both static and moving bundles to the halfway space.

All this does is apply self.matrix1 and self.matrix2 to the static and moving bundles, respectively.

Parameters:
staticstreamlines
movingstreamlines
Returns:
staticstreamlines
movingstreamlines

bundle_sum_distance#

dipy.align.streamlinear.bundle_sum_distance(t, static, moving, *, num_threads=None)[source]#

MDF distance optimization function (SUM).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tndarray

t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticlist

Static streamlines

movinglist

Moving streamlines. These will be transformed to align with the static streamlines

num_threadsint, optional

Number of threads. If -1 then all available threads will be used.

Returns:
cost: float

bundle_min_distance#

dipy.align.streamlinear.bundle_min_distance(t, static, moving)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tndarray

t is a vector of affine transformation parameters with size at least 6. If size is 6, t is interpreted as translation + rotation. If size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticlist

Static streamlines

movinglist

Moving streamlines.

Returns:
cost: float

bundle_min_distance_fast#

dipy.align.streamlinear.bundle_min_distance_fast(t, static, moving, block_size, *, num_threads=None)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tarray

1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticarray

Array of shape (N*M, 3) containing all the points of the static streamlines, with the order of streamlines intact, where N is the number of streamlines and M is the number of points per streamline.

movingarray

Array of shape (K*M, 3) containing all the points of the moving streamlines, with the order of streamlines intact, where K is the number of streamlines and M is the number of points per streamline.

block_sizeint

Number of points per streamline. All streamlines in static and moving should have the same number of points M.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error.

Returns:
cost: float

Notes

This is a faster implementation of bundle_min_distance, which requires that all the points of each streamline be allocated into an ndarray (of shape N*M by 3, with N the number of streamlines and M the number of points per streamline). This can be done by calling dipy.tracking.streamline.unlist_streamlines.
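
Following the note above, a hedged sketch of the expected call pattern (the synthetic data are assumptions):

import numpy as np
from dipy.tracking.streamline import unlist_streamlines
from dipy.align.streamlinear import bundle_min_distance_fast

rng = np.random.default_rng(0)
static = [rng.random((20, 3)) for _ in range(10)]   # 20 points per streamline
moving = [s + 1.0 for s in static]
s_pts, _ = unlist_streamlines(static)   # (10*20, 3) array, order intact
m_pts, _ = unlist_streamlines(moving)
print(bundle_min_distance_fast(np.zeros(6), s_pts, m_pts, 20))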

bundle_min_distance_asymmetric_fast#

dipy.align.streamlinear.bundle_min_distance_asymmetric_fast(t, static, moving, block_size)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tarray

1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticarray

Array of shape (N*M, 3) containing all the points of the static streamlines, with the order of streamlines intact, where N is the number of streamlines and M is the number of points per streamline.

movingarray

Array of shape (K*M, 3) containing all the points of the moving streamlines, with the order of streamlines intact, where K is the number of streamlines and M is the number of points per streamline.

block_sizeint

Number of points per streamline. All streamlines in static and moving should have the same number of points M.

Returns:
cost: float

remove_clusters_by_size#

dipy.align.streamlinear.remove_clusters_by_size(clusters, min_size=0)[source]#

progressive_slr#

dipy.align.streamlinear.progressive_slr(static, moving, metric, x0, bounds, *, method='L-BFGS-B', verbose=False, num_threads=None)[source]#

Progressive SLR.

This is a utility function that allows, for example, affine registration to be performed using Streamline-based Linear Registration (SLR) [10] by starting with translation first, then rigid, then similarity, then scaling, and finally affine.

Similarly, if, for example, you want to perform a rigid registration, you start with translation first. This progressive strategy can help with finding the optimal parameters of the final transformation.

Parameters:
staticStreamlines

Static streamlines.

movingStreamlines

Moving streamlines.

metricStreamlineDistanceMetric

Distance metric for registration optimization.

x0string

Could be any of ‘translation’, ‘rigid’, ‘similarity’, ‘scaling’, ‘affine’

boundsarray

Boundaries of registration parameters. See variable DEFAULT_BOUNDS for example.

methodstring

‘L-BFGS-B’ or ‘Powell’ optimizers can be used. Default is ‘L-BFGS-B’.

verbosebool, optional.

If True, log messages.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.

References

slr_with_qbx#

dipy.align.streamlinear.slr_with_qbx(static, moving, *, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=(40, 30, 20, 15), nb_pts=20, progressive=True, rng=None, num_threads=None)[source]#

Utility function for registering large tractograms.

For efficiency, we apply the registration on cluster centroids and remove small clusters.

See [9], [10] and [11] for details about the methods involved.

Parameters:
staticStreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

x0str, optional.

rigid, similarity or affine transformation model

rm_small_clustersint, optional

Remove clusters that have fewer than rm_small_clusters streamlines.

maxiterint, optional

Maximum number of iterations to perform.

select_randomint, optional.

If not None, select this many streamlines at random from the tractogram to apply clustering to.

verbosebool, optional

If True, logs information about optimization.

greater_thanint, optional

Keep streamlines that have length greater than this value.

less_thanint, optional

Keep streamlines that have length less than this value.

qbx_thrvariable int

Thresholds for QuickBundlesX.

nb_ptsint, optional

Number of points for discretizing each streamline.

progressiveboolean, optional

True to enable progressive registration.

rngnp.random.Generator

If None, a random generator is created inside the function.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.

Notes

The order of operations is the following. First, short or long streamlines are removed. Second, the tractogram (or a random selection of it) is clustered with QuickBundles. Then SLR [10] is applied.

References
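
A hedged usage sketch (the four-value return pattern and the synthetic tractograms are assumptions; the noise level is chosen so the streamlines survive the default greater_than/less_than length filtering):

import numpy as np
from dipy.tracking.streamline import Streamlines
from dipy.align.streamlinear import slr_with_qbx

rng = np.random.default_rng(0)
# Straight-ish synthetic streamlines roughly 100-140 mm long.
base = [np.column_stack([np.linspace(0, 99, 100),
                         np.zeros(100),
                         np.zeros(100)]) + rng.normal(0, 0.5, (100, 3))
        for _ in range(60)]
static = Streamlines(base)
moving = Streamlines([s + np.array([5.0, 3.0, -2.0]) for s in static])

moved, transform, centroids_static, centroids_moving = slr_with_qbx(
    static, moving, x0='affine', rm_small_clusters=2)
print(transform)   # the optimal 4x4 affine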

groupwise_slr#

dipy.align.streamlinear.groupwise_slr(bundles, *, x0='affine', tol=0, max_iter=20, qbx_thr=(4,), nb_pts=20, select_random=10000, verbose=False, rng=None)[source]#

Function to perform unbiased groupwise bundle registration

All bundles are moved to the same space by iteratively applying halfway streamline linear registration in pairs. With each iteration, bundles get closer to each other until the procedure converges and there is no more improvement.

See [9], [10] and [11].

Parameters:
bundleslist

List with streamlines of the bundles to be registered.

x0str, optional

rigid, similarity or affine transformation model.

tolfloat, optional

Tolerance value to be used to assume convergence.

max_iterint, optional

Maximum number of iterations. Depending on the number of bundles to be registered this may need to be larger.

qbx_thrvariable int, optional

Thresholds for QuickBundles, used to cluster streamlines and reduce computational time. If None, no clustering is performed. Higher values cluster streamlines into a smaller number of centroids.

nb_ptsint, optional

Number of points for discretizing each streamline.

select_randomint, optional

Maximum number of streamlines for each bundle. If None, all the streamlines are used.

verbosebool, optional

If True, logs information.

rngnp.random.Generator

If None, a random generator is created inside the function.

References

get_unique_pairs#

dipy.align.streamlinear.get_unique_pairs(n_bundle, *, pairs=None)[source]#

Make unique pairs from n_bundle bundles.

The function allows a previous pair assignment to be passed in so that the new pairs are different.

Parameters:
n_bundleint

Number of bundles to be matched in pairs.

pairsarray, optional

array containing the indexes of previous pairs.

compose_matrix44#

dipy.align.streamlinear.compose_matrix44(t, *, dtype=<class 'numpy.float64'>)[source]#

Compose a 4x4 transformation matrix.

Parameters:
tndarray

This is a 1D vector of affine transformation parameters with size at least 3. If the size is 3, t is interpreted as translation. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If the size is 9, t is interpreted as translation + rotation + anisotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

Returns:
Tndarray

Homogeneous transformation matrix of size 4x4.

decompose_matrix44#

dipy.align.streamlinear.decompose_matrix44(mat, *, size=12)[source]#

Given a 4x4 homogeneous matrix return the parameter vector.

Parameters:
matarray

Homogeneous 4x4 transformation matrix

sizeint

Size of the output vector. 3, for translation, 6 for rigid, 7 for similarity, 9 for scaling and 12 for affine. Default is 12.

Returns:
tndarray

One dimensional ndarray of 3, 6, 7, 9 or 12 affine parameters.
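
A round-trip sketch (the degree convention assumed here for the rotation parameters is an assumption worth verifying against your DIPY version):

import numpy as np
from dipy.align.streamlinear import compose_matrix44, decompose_matrix44

t = np.array([10.0, -5.0, 2.0, 30.0, 0.0, 0.0])  # translation + rotation
T = compose_matrix44(t)                           # 4x4 homogeneous matrix
print(T.shape)                                    # (4, 4)
print(decompose_matrix44(T, size=6))              # recovers ~t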

average_bundle_length#

dipy.align.streamwarp.average_bundle_length(bundle)[source]#

Find average Euclidean length of the bundle in mm.

Parameters:
bundleStreamlines

Bundle whose average length is to be calculated.

Returns:
float

Average Euclidean length of bundle in mm.

find_missing#

dipy.align.streamwarp.find_missing(lst, cb)[source]#

Find unmatched streamline indices in moving bundle.

Parameters:
lstList

List of integers containing all the streamlines indices in moving bundle.

cbList

List of integers containing streamline indices of the moving bundle that were not matched to any streamline in static bundle.

Returns:
list

List containing unmatched streamlines from moving bundle

bundlewarp#

dipy.align.streamwarp.bundlewarp(static, moving, *, dist=None, alpha=0.5, beta=20, max_iter=15, affine=True)[source]#

Register two bundles using nonlinear method.

See [12] for further details about the method.

Parameters:
staticStreamlines

Reference/fixed bundle.

movingStreamlines

Target bundle that will be moved/registered to match the static bundle.

distfloat, optional

Precomputed distance matrix.

alphafloat, optional

Represents the trade-off between regularizing the deformation and having points match very closely. A lower value of alpha allows larger deformations.

betaint, optional

Represents the strength of the interaction between points (Gaussian kernel size).

max_iterint, optional

Maximum number of iterations for deformation process in ml-CPD method.

affineboolean, optional

If False, use rigid registration as starting point.

Returns:
deformed_bundleStreamlines

Nonlinearly moved bundle (warped bundle)

moving_alignedStreamlines

Linearly moved bundle (affinely moved)

distnp.ndarray

Float array containing distance between moving and static bundle

matched_pairsnp.ndarray

Int array containing streamline correspondences between two bundles

warpnp.ndarray

Nonlinear warp map generated by BundleWarp

References
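
An illustrative sketch using the documented five return values (the synthetic bundles and the 20-point resampling are assumptions):

import numpy as np
from dipy.tracking.streamline import Streamlines, set_number_of_points
from dipy.align.streamwarp import bundlewarp

rng = np.random.default_rng(1)
base = [np.column_stack([np.linspace(0, 80, 30),
                         10 * np.sin(np.linspace(0, np.pi, 30)),
                         np.zeros(30)]) + rng.normal(0, 0.5, (30, 3))
        for _ in range(25)]
static = Streamlines(set_number_of_points(base, nb_points=20))
moving = Streamlines([s + np.array([4.0, -3.0, 2.0]) for s in static])

deformed, moving_aligned, dist, matched_pairs, warp = bundlewarp(
    static, moving, alpha=0.5, beta=20)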

bundlewarp_vector_filed#

dipy.align.streamwarp.bundlewarp_vector_filed(moving_aligned, deformed_bundle)[source]#

Calculate vector fields.

The vector field is computed as the difference between each streamline point in the deformed and linearly aligned bundles.

Parameters:
moving_alignedStreamlines

Linearly (affinely) moved bundle

deformed_bundleStreamlines

Nonlinearly (warped) bundle

Returns:
offsetsList

Vector field modules

directionsList

Unitary vector directions

colorsList

Colors for bundle warping field vectors. Colors follow the convention used in DTI-derived maps (e.g. color FA) [13].

References

bundlewarp_shape_analysis#

dipy.align.streamwarp.bundlewarp_shape_analysis(moving_aligned, deformed_bundle, *, no_disks=10, plotting=False)[source]#

Calculate bundle shape difference profile.

Bundle shape difference analysis using magnitude from BundleWarp displacements and BUAN.

Depending on the number of points in a streamline and the number of segments requested, multiple points may be considered for the computation of a given segment; a segment may contain information from a single point; or some segments may not contain information from any points. In the latter case, the segment will contain an np.nan value. The point-to-segment mapping is defined by assignment_map(): for each segment index, the point information at the matching index positions, as returned by assignment_map(), is considered for the computation.

Parameters:
moving_alignedStreamlines

Linearly (affinely) moved bundle

deformed_bundleStreamlines

Nonlinearly (warped) moved bundle

no_disksint, optional

Number of segments to be created along the length of the bundle

plottingBoolean, optional

Plot bundle shape profile

Returns:
shape_profilenp.ndarray

Float array containing bundlewarp displacement magnitudes along the length of the bundle

stdvnp.ndarray

Float array containing standard deviations

compute_energy_ssd_2d#

dipy.align.sumsqdiff.compute_energy_ssd_2d(delta_field)#

Sum of squared differences between two 2D images

Computes the Sum of Squared Differences between the static and moving image. Those differences are given by delta_field

Parameters:
delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

Returns:
energyfloat

the SSD energy at this iteration

Notes

The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient (of the data term) and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this re-projection onto the space of diffeomorphisms, a more precise energy computation including the regularization term would be of little use. Instead, convergence is checked considering the data-term energy only and detecting oscillations in the energy profile.

compute_energy_ssd_3d#

dipy.align.sumsqdiff.compute_energy_ssd_3d(delta_field)#

Sum of squared differences between two 3D volumes

Computes the Sum of Squared Differences between the static and moving volumes. Those differences are given by delta_field

Parameters:
delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

Returns:
energyfloat

the SSD energy at this iteration

Notes

The numeric value of the energy is used only to detect convergence. This function returns only the energy corresponding to the data term (excluding the energy corresponding to the regularization term) because the Greedy-SyN algorithm is an unconstrained gradient descent algorithm in the space of diffeomorphisms: in each iteration it makes a step along the negative smoothed gradient (of the data term) and then makes sure the resulting diffeomorphisms are invertible using an explicit inversion algorithm. Since it is not clear how to reflect the energy corresponding to this re-projection onto the space of diffeomorphisms, a more precise energy computation including the regularization term would be of little use. Instead, convergence is checked considering the data-term energy only and detecting oscillations in the energy profile.

compute_residual_displacement_field_ssd_2d#

dipy.align.sumsqdiff.compute_residual_displacement_field_ssd_2d(delta_field, sigmasq_field, gradient_field, target, lambda_param, d, residual)#

The residual displacement field to be fit on the next iteration

Computes the residual displacement field corresponding to the current displacement field in the Multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [8].

Parameters:
delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigmasq_fieldarray, shape (R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (R, C, 2)

the gradient of the moving image

targetarray, shape (R, C, 2)

right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter in the objective function

darray, shape (R, C, 2)

the current displacement field to compute the residual from

residualarray, shape (R, C, 2)

the displacement field to put the residual to

Returns:
residualarray, shape (R, C, 2)

the residual displacement field. If residual was None as input, then a new field is returned; otherwise the same array is returned

References

compute_residual_displacement_field_ssd_3d#

dipy.align.sumsqdiff.compute_residual_displacement_field_ssd_3d(delta_field, sigmasq_field, gradient_field, target, lambda_param, disp, residual)#

The residual displacement field to be fit on the next iteration

Computes the residual displacement field corresponding to the current displacement field (given by ‘disp’) in the Multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [8].

Parameters:
delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigmasq_fieldarray, shape (S, R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (S, R, C, 3)

the gradient of the moving image

targetarray, shape (S, R, C, 3)

right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter in the objective function

disparray, shape (S, R, C, 3)

the current displacement field to compute the residual from

residualarray, shape (S, R, C, 3)

the displacement field to put the residual to

Returns:
residualarray, shape (S, R, C, 3)

the residual displacement field. If residual was None as input, then a new field is returned; otherwise the same array is returned

References

compute_ssd_demons_step_2d#

dipy.align.sumsqdiff.compute_ssd_demons_step_2d(delta_field, gradient_moving, sigma_sq_x, out)#

Demons step for 2D SSD-driven registration

Computes the demons step for SSD-driven registration (eq. 4 in [8]).

Parameters:
delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

gradient_movingarray, shape (R, C, 2)

the gradient of the moving image

sigma_sq_xfloat

parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al. [6].

outarray, shape (R, C, 2)

if None, a new array will be created to store the demons step. Otherwise the provided array will be used.

Returns:
demons_steparray, shape (R, C, 2)

the demons step to be applied for updating the current displacement field

energyfloat

the current ssd energy (before applying the returned demons_step)

References
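
A numerical sketch of one demons step (the random image pair and the finite-difference gradient are assumptions):

import numpy as np
from dipy.align.sumsqdiff import compute_ssd_demons_step_2d

rng = np.random.default_rng(0)
static = rng.random((32, 32))
moving = rng.random((32, 32))
delta = static - moving                        # the 'derivative w.r.t. time'
grad = np.stack(np.gradient(moving), axis=-1)  # (32, 32, 2) moving-image gradient
step, energy = compute_ssd_demons_step_2d(delta, grad, 1.0, None)
print(step.shape, energy)                      # (32, 32, 2), SSD energy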

compute_ssd_demons_step_3d#

dipy.align.sumsqdiff.compute_ssd_demons_step_3d(delta_field, gradient_moving, sigma_sq_x, out)#

Demons step for 3D SSD-driven registration

Computes the demons step for SSD-driven registration (eq. 4 in [8]).

Parameters:
delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

gradient_movingarray, shape (S, R, C, 3)

the gradient of the moving image

sigma_sq_xfloat

parameter controlling the amount of regularization. It corresponds to \(\sigma_x^2\) in algorithm 1 of Vercauteren et al. [6].

outarray, shape (S, R, C, 3)

if None, a new array will be created to store the demons step. Otherwise the provided array will be used.

Returns:
demons_steparray, shape (S, R, C, 3)

the demons step to be applied for updating the current displacement field

energyfloat

the current ssd energy (before applying the returned demons_step)

References

iterate_residual_displacement_field_ssd_2d#

dipy.align.sumsqdiff.iterate_residual_displacement_field_ssd_2d(delta_field, sigmasq_field, grad, target, lambda_param, displacement_field)#

One iteration of a large linear system solver for 2D SSD registration

Performs one iteration at one level of the Multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [8].

Parameters:
delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigmasq_fieldarray, shape (R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradarray, shape (R, C, 2)

the gradient of the moving image

targetarray, shape (R, C, 2)

right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter of the objective function

displacement_fieldarray, shape (R, C, 2)

current displacement field to start the iteration from

Returns:
max_displacementfloat

the norm of the maximum change in the displacement field after the iteration

References

iterate_residual_displacement_field_ssd_3d#

dipy.align.sumsqdiff.iterate_residual_displacement_field_ssd_3d(delta_field, sigmasq_field, grad, target, lambda_param, disp)#

One iteration of a large linear system solver for 3D SSD registration

Performs one iteration at one level of the Multi-resolution Gauss-Seidel solver proposed by Bruhn and Weickert [8].

Parameters:
delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigmasq_fieldarray, shape (S, R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradarray, shape (S, R, C, 3)

the gradient of the moving image

targetarray, shape (S, R, C, 3)

right-hand side of the linear system to be solved in Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter of the objective function

disparray, shape (S, R, C, 3)

the displacement field to start the optimization from

Returns:
max_displacementfloat

the norm of the maximum change in the displacement field after the iteration

References

solve_2d_symmetric_positive_definite#

dipy.align.sumsqdiff.solve_2d_symmetric_positive_definite(A, y, det)#

Solves a 2-variable symmetric positive-definite linear system

Solves the symmetric positive-definite linear system \(Mx = y\) given by:

M = [[A[0], A[1]],
     [A[1], A[2]]]
Parameters:
Aarray, shape (3,)

the array containing the entries of the symmetric 2x2 matrix

yarray, shape (2,)

right-hand side of the system to be solved

Returns:
outarray, shape (2,)

the array the output will be stored in
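
A quick numerical check sketch (the packing convention is taken from the M layout shown above; the test values are assumptions):

import numpy as np
from dipy.align.sumsqdiff import solve_2d_symmetric_positive_definite

A = np.array([4.0, 1.0, 3.0])      # packs M = [[4, 1], [1, 3]]
y = np.array([1.0, 2.0])
det = A[0] * A[2] - A[1] * A[1]    # determinant of M
x = solve_2d_symmetric_positive_definite(A, y, det)
M = np.array([[A[0], A[1]], [A[1], A[2]]])
print(np.allclose(M @ np.asarray(x), y))   # expected: True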

solve_3d_symmetric_positive_definite#

dipy.align.sumsqdiff.solve_3d_symmetric_positive_definite(g, y, tau)#

Solves a 3-variable symmetric positive-definite linear system

Solves the symmetric semi-positive-definite linear system \(Mx = y\) given by \(M = (g g^{T} + \tau I)\).

Parameters:
garray, shape (3,)

the vector in the outer product above

yarray, shape (3,)

right-hand side of the system to be solved

taudouble

\(\tau\) in \(M = (g g^{T} + \tau I)\)

Returns:
outarray, shape (3,)

the array the output will be stored in

is_singularint

1 if M is singular, otherwise 0

AffineTransform2D#

class dipy.align.transforms.AffineTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

AffineTransform3D#

class dipy.align.transforms.AffineTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidIsoScalingTransform2D#

class dipy.align.transforms.RigidIsoScalingTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidIsoScalingTransform3D#

class dipy.align.transforms.RigidIsoScalingTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidScalingTransform2D#

class dipy.align.transforms.RigidScalingTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidScalingTransform3D#

class dipy.align.transforms.RigidScalingTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidTransform2D#

class dipy.align.transforms.RigidTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RigidTransform3D#

class dipy.align.transforms.RigidTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RotationTransform2D#

class dipy.align.transforms.RotationTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

RotationTransform3D#

class dipy.align.transforms.RotationTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

ScalingTransform2D#

class dipy.align.transforms.ScalingTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

ScalingTransform3D#

class dipy.align.transforms.ScalingTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

Transform#

class dipy.align.transforms.Transform#

Bases: object

Base class (contract) for all transforms for affine image registration. Each transform must define the following (fast, nogil) methods:

  1. _jacobian(theta, x, J): receives a parameter vector theta, a point x, and a matrix J with shape (dim, len(theta)). It must write in J the Jacobian of the transform with parameters theta evaluated at x.

  2. _get_identity_parameters(theta): receives a vector theta whose length is the number of parameters of the transform and sets in theta the values that define the identity transform.

  3. _param_to_matrix(theta, T): receives a parameter vector theta, and a matrix T of shape (dim + 1, dim + 1) and writes in T the matrix representation of the transform with parameters theta

This base class defines the (slow, convenient) python wrappers for each of the above functions, which also do parameter checking and raise a ValueError in case the provided parameters are invalid.

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

get_dim()#
get_identity_parameters()#

Parameter values corresponding to the identity transform

Returns:
thetaarray, shape (n,)

the n parameter values corresponding to the identity transform

get_number_of_parameters()#
jacobian(theta, x)#

Jacobian function of this transform

Parameters:
thetaarray, shape (n,)

vector containing the n parameters of this transform

xarray, shape (dim,)

vector containing the point where the Jacobian must be evaluated

Returns:
Jarray, shape (dim, n)

Jacobian matrix of the transform with parameters theta at point x

param_to_matrix(theta)#

Matrix representation of this transform with the given parameters

Parameters:
thetaarray, shape (n,)

the parameter values of the transform

Returns:
Tarray, shape (dim + 1, dim + 1)

the matrix representation of this transform with parameters theta
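
The contract above can be exercised through any concrete subclass; a sketch with RigidTransform2D (the 3-parameter layout assumed here, a rotation angle plus two translations, follows the usual 2D rigid model):

import numpy as np
from dipy.align.transforms import RigidTransform2D

rt = RigidTransform2D()
print(rt.get_dim())                      # 2
n = rt.get_number_of_parameters()        # 3 for a 2D rigid transform
theta = rt.get_identity_parameters()
print(rt.param_to_matrix(theta))         # 3x3 identity matrix
print(rt.jacobian(theta, np.zeros(2)))   # (2, n) Jacobian at the origin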

TranslationTransform2D#

class dipy.align.transforms.TranslationTransform2D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

TranslationTransform3D#

class dipy.align.transforms.TranslationTransform3D#

Bases: Transform

Methods

get_identity_parameters()

Parameter values corresponding to the identity transform

jacobian(theta, x)

Jacobian function of this transform

param_to_matrix(theta)

Matrix representation of this transform with the given parameters

get_dim

get_number_of_parameters

compose_vector_fields_2d#

dipy.align.vector_fields.compose_vector_fields_2d(d1, d2, premult_index, premult_disp, time_scaling, comp)#

Computes the composition of two 2D displacement fields

Computes the composition of the two 2-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using bilinear interpolation. The actual composition is computed as:

comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]

where t = time_scaling, A = premult_index and B = premult_disp, and i denotes the voxel coordinates of a voxel in d1’s grid. Using these parameters it is possible to compose vector fields with arbitrary discretizations: let R and S be the voxel-to-space transformations associated with d1 and d2, respectively; then the composition at a voxel with coordinates i in d1’s grid is given by:

comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i

(the R*i terms cancel each other), where Sinv = S^{-1}. We can then define A = Sinv * R and B = Sinv to compute the composition using this function.

Parameters:
d1array, shape (R, C, 2)

first displacement field to be applied. R, C are the number of rows and columns of the displacement field, respectively.

d2array, shape (R’, C’, 2)

second displacement field to be applied. R’, C’ are the number of rows and columns of the displacement field, respectively.

premult_indexarray, shape (3, 3)

the matrix A in the explanation above

premult_disparray, shape (3, 3)

the matrix B in the explanation above

time_scalingfloat

this corresponds to the time scaling ‘t’ in the above explanation

comparray, shape (R, C, 2)

the buffer to write the composition to. If None, the buffer is created internally

Returns:
comparray, shape (R, C, 2), same dimension as d1

on output, this array will contain the composition of the two fields

statsarray, shape (3,)

on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation)
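
When both fields live on the same grid with identity voxel-to-space transforms, A and B reduce to the identity; a sketch under that assumption (the random field is illustrative):

import numpy as np
from dipy.align.vector_fields import compose_vector_fields_2d

rng = np.random.default_rng(0)
d = rng.uniform(-0.5, 0.5, (32, 32, 2))   # small random 2D field
comp, stats = compose_vector_fields_2d(d, d, np.eye(3), np.eye(3), 1.0, None)
print(comp.shape)   # (32, 32, 2)
print(stats)        # [max, mean, std] of the composed vector norms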

compose_vector_fields_3d#

dipy.align.vector_fields.compose_vector_fields_3d(d1, d2, premult_index, premult_disp, time_scaling, comp)#

Computes the composition of two 3D displacement fields

Computes the composition of the two 3-D displacements d1 and d2. The evaluation of d2 at non-lattice points is computed using tri-linear interpolation. The actual composition is computed as:

comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]

where t = time_scaling, A = premult_index and B = premult_disp, and i denotes the voxel coordinates of a voxel in d1’s grid. Using these parameters it is possible to compose vector fields with arbitrary discretizations: let R and S be the voxel-to-space transformations associated with d1 and d2, respectively; then the composition at a voxel with coordinates i in d1’s grid is given by:

comp[i] = d1[i] + R*i + d2[Sinv*(R*i + d1[i])] - R*i

(the R*i terms cancel each other), where Sinv = S^{-1}. We can then define A = Sinv * R and B = Sinv to compute the composition using this function.

Parameters:
d1array, shape (S, R, C, 3)

first displacement field to be applied. S, R, C are the number of slices, rows and columns of the displacement field, respectively.

d2array, shape (S’, R’, C’, 3)

second displacement field to be applied. S’, R’, C’ are the number of slices, rows and columns of the displacement field, respectively.

premult_indexarray, shape (4, 4)

the matrix A in the explanation above

premult_disparray, shape (4, 4)

the matrix B in the explanation above

time_scalingfloat

this corresponds to the time scaling ‘t’ in the above explanation

comparray, shape (S, R, C, 3), same dimension as d1

the buffer to write the composition to. If None, the buffer will be created internally

Returns:
comparray, shape (S, R, C, 3), same dimension as d1

on output, this array will contain the composition of the two fields

statsarray, shape (3,)

on output, this array will contain three statistics of the vector norms of the composition (maximum, mean, standard_deviation)

Notes

If d1[s,r,c] lies outside the domain of d2, then comp[s,r,c] will contain a zero vector.

create_circle#

dipy.align.vector_fields.create_circle(nrows, ncols, radius)#

Create a binary 2D image where pixel values are 1 iff their distance to the center of the image is less than or equal to radius.

Parameters:
nrowsint

number of rows of the resulting image

ncolsint

number of columns of the resulting image

radiusint

the radius of the circle

Returns:
carray, shape (nrows, ncols)

the binary image of the circle with the requested dimensions
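
A tiny illustrative sketch (the area check is approximate by construction):

import numpy as np
from dipy.align.vector_fields import create_circle

c = np.asarray(create_circle(64, 64, 20))
print(c.shape)                    # (64, 64)
print(c.sum(), np.pi * 20 ** 2)   # pixel count close to the disk area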

create_harmonic_fields_2d#

dipy.align.vector_fields.create_harmonic_fields_2d(nrows, ncols, b, m)#

Creates an invertible 2D displacement field

Creates the invertible displacement fields used in Chen et al. eqs. 9 and 10 [1]

Parameters:
nrowsint

number of rows in the resulting harmonic field

ncolsint

number of columns in the resulting harmonic field

b, mfloat

parameters of the harmonic field (as in [1]). To understand the effect of these parameters, please consider plotting a deformed image (a circle or a grid) under the deformation field, or see examples in [1]

Returns:
darray, shape (nrows, ncols, 2)

the harmonic displacement field

invarray, shape (nrows, ncols, 2)

the analytical inverse of the harmonic displacement field

[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107

create_harmonic_fields_3d#

dipy.align.vector_fields.create_harmonic_fields_3d(nslices, nrows, ncols, b, m)#

Creates an invertible 3D displacement field

Creates the invertible displacement fields used in Chen et al. eqs. 9 and 10 [1], computing the angle theta along z-slices.

Parameters:
nslicesint

number of slices in the resulting harmonic field

nrowsint

number of rows in the resulting harmonic field

ncolsint

number of columns in the resulting harmonic field

b, mfloat

parameters of the harmonic field (as in [1]). To understand the effect of these parameters, please consider plotting a deformed image (e.g. a circle or a grid) under the deformation field, or see examples in [1]

Returns:
darray, shape (nslices, nrows, ncols, 3)

the harmonic displacement field

invarray, shape (nslices, nrows, ncols, 3)

the analytical inverse of the harmonic displacement field

[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107

create_random_displacement_2d#

dipy.align.vector_fields.create_random_displacement_2d(from_shape, from_grid2world, to_shape, to_grid2world, rng=None)#

Creates a random 2D displacement ‘exactly’ mapping points of two grids

Creates a random 2D displacement field mapping points of an input discrete domain (with dimensions given by from_shape) to points of an output discrete domain (with shape given by to_shape). The affine matrices bringing discrete coordinates to physical space are given by from_grid2world (for the displacement field discretization) and to_grid2world (for the target discretization). Since this function is intended to be used for testing, voxels in the input domain will never be assigned to boundary voxels on the output domain.

Parameters:
from_shapearray, shape (2,)

the grid shape on which the displacement field will be defined.

from_grid2worldarray, shape (3,3)

the grid-to-space transformation of the displacement field

to_shapearray, shape (2,)

the grid shape to which the deformation field will map the input grid.

to_grid2worldarray, shape (3,3)

the grid-to-space transformation of the mapped grid

rngnumpy.random.Generator class, optional

Numpy’s random generator for setting seed values when needed. Default is None.

Returns:
outputarray, shape = from_shape

the random displacement field in the physical domain

int_fieldarray, shape = from_shape

the assignment of each point in the input grid to the target grid

create_random_displacement_3d#

dipy.align.vector_fields.create_random_displacement_3d(from_shape, from_grid2world, to_shape, to_grid2world, rng=None)#

Creates a random 3D displacement ‘exactly’ mapping points of two grids

Creates a random 3D displacement field mapping points of an input discrete domain (with dimensions given by from_shape) to points of an output discrete domain (with shape given by to_shape). The affine matrices bringing discrete coordinates to physical space are given by from_grid2world (for the displacement field discretization) and to_grid2world (for the target discretization). Since this function is intended to be used for testing, voxels in the input domain will never be assigned to boundary voxels on the output domain.

Parameters:
from_shapearray, shape (3,)

the grid shape on which the displacement field will be defined.

from_grid2worldarray, shape (4,4)

the grid-to-space transformation of the displacement field

to_shapearray, shape (3,)

the grid shape to which the deformation field will map the input grid.

to_grid2worldarray, shape (4,4)

the grid-to-space transformation of the mapped grid

rngnumpy.random.Generator class, optional

Numpy’s random generator for setting seed values when needed. Default is None.

Returns:
outputarray, shape = from_shape

the random displacement field in the physical domain

int_fieldarray, shape = from_shape

the assignment of each point in the input grid to the target grid

create_sphere#

dipy.align.vector_fields.create_sphere(nslices, nrows, ncols, radius)#

Create a binary 3D image where voxel values are 1 iff their distance to the center of the image is less than or equal to radius.

Parameters:
nslicesint

number of slices of the resulting image

nrowsint

number of rows of the resulting image

ncolsint

number of columns of the resulting image

radiusint

the radius of the sphere

Returns:
carray, shape (nslices, nrows, ncols)

the binary image of the sphere with the requested dimensions

downsample_displacement_field_2d#

dipy.align.vector_fields.downsample_displacement_field_2d(field)#

Down-samples the 2D input vector field by a factor of 2. The value at each pixel of the resulting field is the average of its surrounding pixels in the original field.

Parameters:
fieldarray, shape (R, C)

the vector field to be down-sampled

Returns:
downarray, shape (R’, C’)

the down-sampled displacement field, where R’ = ceil(R/2), C’ = ceil(C/2)

downsample_displacement_field_3d#

dipy.align.vector_fields.downsample_displacement_field_3d(field)#

Down-samples the input 3D vector field by a factor of 2

Down-samples the input vector field by a factor of 2. This operation is equivalent to dividing the input image into 2x2x2 cubes and averaging the 8 vectors. The resulting field consists of these average vectors.

Parameters:
field : array, shape (S, R, C, 3)

the vector field to be down-sampled

Returns:
down : array, shape (S’, R’, C’, 3)

the down-sampled displacement field, where S’ = ceil(S/2), R’ = ceil(R/2), C’ = ceil(C/2)
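A quick check of the shape contract (the float32 input is an assumption; the compiled function should accept single- or double-precision fields):

    import numpy as np
    from dipy.align.vector_fields import downsample_displacement_field_3d

    field = np.zeros((9, 9, 9, 3), dtype=np.float32)
    field[..., 0] = 1.0  # constant unit displacement along the slice axis

    down = np.asarray(downsample_displacement_field_3d(field))
    print(down.shape)  # (5, 5, 5, 3): each spatial extent is ceil(9/2)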

downsample_scalar_field_2d#

dipy.align.vector_fields.downsample_scalar_field_2d(field)#

Down-samples the input 2D image by a factor of 2

Down-samples the input image by a factor of 2. The value at each pixel of the resulting image is the average of its surrounding pixels in the original image.

Parameters:
field : array, shape (R, C)

the image to be down-sampled

Returns:
down : array, shape (R’, C’)

the down-sampled image, where R’ = ceil(R/2), C’ = ceil(C/2)

downsample_scalar_field_3d#

dipy.align.vector_fields.downsample_scalar_field_3d(field)#

Down-samples the input volume by a factor of 2

Down-samples the input volume by a factor of 2. The value at each voxel of the resulting volume is the average of its surrounding voxels in the original volume.

Parameters:
field : array, shape (S, R, C)

the volume to be down-sampled

Returns:
down : array, shape (S’, R’, C’)

the down-sampled volume, where S’ = ceil(S/2), R’ = ceil(R/2), C’ = ceil(C/2)

gradient#

dipy.align.vector_fields.gradient(img, img_world2grid, img_spacing, out_shape, out_grid2world)#

Gradient of an image in physical space

Parameters:
img : 2D or 3D array, shape (R, C) or (S, R, C)

the input image whose gradient will be computed

img_world2grid : array, shape (dim+1, dim+1)

the space-to-grid transform matrix associated with img

img_spacing : array, shape (dim,)

the spacing between voxels (voxel size along each axis) of the input image

out_shape : array, shape (dim,)

the number of (slices), rows and columns of the sampling grid

out_grid2world : array, shape (dim+1, dim+1)

the grid-to-space transform associated with the sampling grid

Returns:
out : array, shape (R’, C’, 2) or (S’, R’, C’, 3)

the image gradient sampled on the requested grid, where (S’), R’, C’ are given by out_shape
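As an illustration, the gradient of a linear ramp should be constant; the identity transforms, unit spacing, and int32/float32 dtypes below are assumptions:

    import numpy as np
    from dipy.align.vector_fields import gradient

    # 3D image increasing linearly along the column axis
    img = np.tile(np.arange(16, dtype=np.float32), (16, 16, 1))

    world2grid = np.eye(4)   # identity space-to-grid transform
    spacing = np.ones(3)     # isotropic unit voxels
    out_shape = np.array([16, 16, 16], dtype=np.int32)
    grid2world = np.eye(4)

    g = np.asarray(gradient(img, world2grid, spacing, out_shape, grid2world))
    print(g.shape)           # (16, 16, 16, 3)
    print(g[8, 8, 8])        # approximately (0, 0, 1)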

invert_vector_field_fixed_point_2d#

dipy.align.vector_fields.invert_vector_field_fixed_point_2d(d, d_world2grid, spacing, max_iter, tolerance, start=None)#

Computes the inverse of a 2D displacement field

Computes the inverse of the given 2-D displacement field d using the fixed-point algorithm [1].

[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107

Parameters:
d : array, shape (R, C, 2)

the 2-D displacement field to be inverted

d_world2grid : array, shape (3, 3)

the space-to-grid transformation associated with the displacement field d (transforming physical space coordinates to voxel coordinates of the displacement field grid)

spacing : array, shape (2,)

the spacing between voxels (voxel size along each axis)

max_iter : int

maximum number of iterations to be performed

tolerance : float

maximum tolerated inversion error

start : array, shape (R, C, 2), optional

an approximation to the inverse displacement field (if no approximation is available, None can be provided and the start displacement field will be zero)

Returns:
p : array, shape (R, C, 2)

the inverse displacement field

Notes

We assume that the displacement field is an endomorphism so that the shape and voxel-to-space transformation of the inverse field’s discretization is the same as those of the input displacement field. The ‘inversion error’ at iteration t is defined as the mean norm of the displacement vectors of the input displacement field composed with the inverse at iteration t.
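As a sketch, the inverse of a constant displacement field is, away from the grid boundary, the opposite constant field; the identity world-to-grid transform, unit spacing, and float32 dtype are assumptions:

    import numpy as np
    from dipy.align.vector_fields import invert_vector_field_fixed_point_2d

    d = np.full((32, 32, 2), 2.0, dtype=np.float32)  # constant shift of (2, 2)
    world2grid = np.eye(3)
    spacing = np.ones(2)

    p = np.asarray(invert_vector_field_fixed_point_2d(d, world2grid, spacing,
                                                      40, 1e-3, None))
    print(p[16, 16])  # interior vectors should be close to (-2, -2)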

invert_vector_field_fixed_point_3d#

dipy.align.vector_fields.invert_vector_field_fixed_point_3d(d, d_world2grid, spacing, max_iter, tol, start=None)#

Computes the inverse of a 3D displacement field

Computes the inverse of the given 3-D displacement field d using the fixed-point algorithm [1].

[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008). A simple fixed-point approach to invert a deformation field. Medical Physics, 35(1), 81. doi:10.1118/1.2816107

Parameters:
d : array, shape (S, R, C, 3)

the 3-D displacement field to be inverted

d_world2grid : array, shape (4, 4)

the space-to-grid transformation associated with the displacement field d (transforming physical space coordinates to voxel coordinates of the displacement field grid)

spacing : array, shape (3,)

the spacing between voxels (voxel size along each axis)

max_iter : int

maximum number of iterations to be performed

tol : float

maximum tolerated inversion error

start : array, shape (S, R, C, 3), optional

an approximation to the inverse displacement field (if no approximation is available, None can be provided and the start displacement field will be zero)

Returns:
p : array, shape (S, R, C, 3)

the inverse displacement field

Notes

We assume that the displacement field is an endomorphism so that the shape and voxel-to-space transformation of the inverse field’s discretization is the same as those of the input displacement field. The ‘inversion error’ at iteration t is defined as the mean norm of the displacement vectors of the input displacement field composed with the inverse at iteration t.

is_valid_affine#

dipy.align.vector_fields.is_valid_affine(M, dim)#

reorient_vector_field_2d#

dipy.align.vector_fields.reorient_vector_field_2d(d, affine)#

Linearly transforms all vectors of a 2D displacement field

Modifies the input displacement field by multiplying each displacement vector by the given matrix. Note that the elements of the displacement field are vectors, not points, so their last homogeneous coordinate is zero, not one, and therefore the translation component of the affine transform will not have any effect on them.

Parameters:
d : array, shape (R, C, 2)

the displacement field to be re-oriented

affine : array, shape (3, 3)

the matrix to be applied
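A small demonstration of the in-place behavior and of the translation component being ignored (the float32/float64 dtypes are assumptions):

    import numpy as np
    from dipy.align.vector_fields import reorient_vector_field_2d

    d = np.zeros((8, 8, 2), dtype=np.float32)
    d[..., 0] = 1.0  # unit vectors along the row axis

    # 90-degree rotation plus a translation; the translation must have no effect
    affine = np.array([[0.0, -1.0, 5.0],
                       [1.0,  0.0, 7.0],
                       [0.0,  0.0, 1.0]])
    reorient_vector_field_2d(d, affine)  # modifies d in place
    print(d[0, 0])  # approximately (0, 1): rotated, not translated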

reorient_vector_field_3d#

dipy.align.vector_fields.reorient_vector_field_3d(d, affine)#

Linearly transforms all vectors of a 3D displacement field

Modifies the input displacement field by multiplying each displacement vector by the given matrix. Note that the elements of the displacement field are vectors, not points, so their last homogeneous coordinate is zero, not one, and therefore the translation component of the affine transform will not have any effect on them.

Parameters:
d : array, shape (S, R, C, 3)

the displacement field to be re-oriented

affine : array, shape (4, 4)

the matrix to be applied

resample_displacement_field_2d#

dipy.align.vector_fields.resample_displacement_field_2d(field, factors, out_shape)#

Resamples a 2D vector field to a custom target shape

Resamples the given 2D displacement field on a grid of the requested shape, using the given scale factors. More precisely, the resulting displacement field at each grid cell i is given by

D[i] = field[Diag(factors) * i]

Parameters:
field : array, shape (R, C, 2)

the displacement field to be resampled

factors : array, shape (2,)

the scaling factors mapping (integer) grid coordinates in the resampled grid to (floating point) grid coordinates in the original grid

out_shape : array, shape (2,)

the desired shape of the resulting grid

Returns:
expanded : array, shape = out_shape + (2,)

the resampled displacement field
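For example, factors of 0.5 per axis double the resolution of the field (the float32 field and int32 shape array are dtype assumptions):

    import numpy as np
    from dipy.align.vector_fields import resample_displacement_field_2d

    rng = np.random.default_rng(0)
    field = rng.standard_normal((10, 10, 2)).astype(np.float32)
    factors = np.array([0.5, 0.5])                  # output cell i samples field[0.5 * i]
    out_shape = np.array([20, 20], dtype=np.int32)

    expanded = np.asarray(resample_displacement_field_2d(field, factors, out_shape))
    print(expanded.shape)  # (20, 20, 2)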

resample_displacement_field_3d#

dipy.align.vector_fields.resample_displacement_field_3d(field, factors, out_shape)#

Resamples a 3D vector field to a custom target shape

Resamples the given 3D displacement field on a grid of the requested shape, using the given scale factors. More precisely, the resulting displacement field at each grid cell i is given by

D[i] = field[Diag(factors) * i]

Parameters:
field : array, shape (S, R, C, 3)

the displacement field to be resampled

factors : array, shape (3,)

the scaling factors mapping (integer) grid coordinates in the resampled grid to (floating point) grid coordinates in the original grid

out_shape : array, shape (3,)

the desired shape of the resulting grid

Returns:
expanded : array, shape = out_shape + (3,)

the resampled displacement field

simplify_warp_function_2d#

dipy.align.vector_fields.simplify_warp_function_2d(d, affine_idx_in, affine_idx_out, affine_disp, out_shape)#

Simplifies a nonlinear warping function combined with an affine transform

Modifies the given deformation field by incorporating into it an affine transformation and the voxel-to-space transforms associated with the discretization of its domain and codomain. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretizations. More precisely, the resulting transform is of the form:

  (1) T[i] = W * d[U * i] + V * i

where U = affine_idx_in, V = affine_idx_out, W = affine_disp.

Parameters:
d : array, shape (R’, C’, 2)

the non-linear part of the transformation (displacement field)

affine_idx_in : array, shape (3, 3)

the matrix U in eq. (1) above

affine_idx_out : array, shape (3, 3)

the matrix V in eq. (1) above

affine_disp : array, shape (3, 3)

the matrix W in eq. (1) above

out_shape : array, shape (2,)

the number of rows and columns of the sampling grid

Returns:
out : array, shape = out_shape

the deformation field out associated with T in eq. (1) such that: T[i] = i + out[i]

Notes

Both the direct and inverse transforms of a DiffeomorphicMap can be written in this form:

Direct: Let D be the voxel-to-space transform of the domain’s discretization, P be the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Cinv be the space-to-voxel transform of the codomain’s discretization. Then, for each i in the domain’s grid, the direct transform is given by

  T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i

and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv.

Inverse: Let C be the voxel-to-space transform of the codomain’s discretization, Pinv be the inverse of the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Dinv be the space-to-voxel transform of the domain’s discretization. Then, for each j in the codomain’s grid, the inverse transform is given by

  Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j

and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv.
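A sketch of the identity case: with U = V = W = I, eq. (1) reduces to T[i] = d[i] + i, so the returned field should reproduce d (interpolation at integer coordinates is exact up to floating-point error; the dtypes below are assumptions):

    import numpy as np
    from dipy.align.vector_fields import simplify_warp_function_2d

    d = np.random.default_rng(1).standard_normal((8, 8, 2)).astype(np.float32)
    eye = np.eye(3)
    out_shape = np.array([8, 8], dtype=np.int32)

    out = np.asarray(simplify_warp_function_2d(d, eye, eye, eye, out_shape))
    print(np.allclose(out, d, atol=1e-5))  # True: out[i] = T[i] - i = d[i]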

simplify_warp_function_3d#

dipy.align.vector_fields.simplify_warp_function_3d(d, affine_idx_in, affine_idx_out, affine_disp, out_shape)#

Simplifies a nonlinear warping function combined with an affine transform

Modifies the given deformation field by incorporating into it an affine transformation and voxel-to-space transforms associated with the discretization of its domain and codomain.

The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretization. More precisely, the resulting transform is of the form:

  (1) T[i] = W * d[U * i] + V * i

where U = affine_idx_in, V = affine_idx_out, W = affine_disp.

Parameters:
d : array, shape (S’, R’, C’, 3)

the non-linear part of the transformation (displacement field)

affine_idx_in : array, shape (4, 4)

the matrix U in eq. (1) above

affine_idx_out : array, shape (4, 4)

the matrix V in eq. (1) above

affine_disp : array, shape (4, 4)

the matrix W in eq. (1) above

out_shape : array, shape (3,)

the number of slices, rows and columns of the sampling grid

Returns:
out : array, shape = out_shape

the deformation field out associated with T in eq. (1) such that: T[i] = i + out[i]

Notes

Both the direct and inverse transforms of a DiffeomorphicMap can be written in this form:

Direct: Let D be the voxel-to-space transform of the domain’s discretization, P be the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Cinv be the space-to-voxel transform of the codomain’s discretization. Then, for each i in the domain’s grid, the direct transform is given by

  T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i

and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv.

Inverse: Let C be the voxel-to-space transform of the codomain’s discretization, Pinv be the inverse of the pre-align matrix, Rinv the space-to-voxel transform of the reference grid (the grid the displacement field is defined on) and Dinv be the space-to-voxel transform of the domain’s discretization. Then, for each j in the codomain’s grid, the inverse transform is given by

  Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j

and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv.

sparse_gradient#

dipy.align.vector_fields.sparse_gradient(img, img_world2grid, img_spacing, sample_points)#

Gradient of an image in physical space

Parameters:
img : 2D or 3D array, shape (R, C) or (S, R, C)

the input image whose gradient will be computed

img_world2grid : array, shape (dim+1, dim+1)

the space-to-grid transform matrix associated with img

img_spacing : array, shape (dim,)

the spacing between voxels (voxel size along each axis) of the input image

sample_points : array, shape (n, dim)

list of points where the derivative will be evaluated (one point per row)

Returns:
out : array, shape (n, dim)

the gradient at each point stored at its corresponding row
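Mirroring the gradient example above, but evaluated only at a few physical points (identity transform and unit spacing assumed, so physical and voxel coordinates coincide):

    import numpy as np
    from dipy.align.vector_fields import sparse_gradient

    img = np.tile(np.arange(16, dtype=np.float32), (16, 16, 1))  # ramp along columns
    world2grid = np.eye(4)
    spacing = np.ones(3)
    points = np.array([[8.0, 8.0, 8.0],
                       [4.0, 4.0, 4.0]])

    g = np.asarray(sparse_gradient(img, world2grid, spacing, points))
    print(g)  # each row approximately (0, 0, 1)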

transform_2d_affine#

dipy.align.vector_fields.transform_2d_affine(image, ref_shape, affine=None)#

Transforms a 2D image by an affine transform with bilinear interp.

Deforms the input image under the given affine transformation using bilinear interpolation. The shape of the resulting image is given by ref_shape. If the affine matrix is None, it is taken as the identity.

Parameters:
image : array, shape (R, C)

the input image to be transformed

ref_shape : array, shape (2,)

the shape of the resulting image

affine : array, shape (3, 3)

the affine transform to be applied

Returns:
out : array, shape (R’, C’)

the transformed image

Notes

The intended shape of the resulting image must be provided because the affine transformation is defined on all of R^{2}, yet we must sample a finite lattice. Also, the resulting shape is not necessarily equal to the input shape, unless we are interested only in endomorphisms rather than general diffeomorphisms.
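A sketch of a pure translation (this assumes the affine maps output grid coordinates to input grid coordinates, i.e., a backward/pull warp, and that float32 images and int32 shape arrays are accepted):

    import numpy as np
    from dipy.align.vector_fields import transform_2d_affine

    image = np.zeros((16, 16), dtype=np.float32)
    image[4:12, 4:12] = 1.0

    affine = np.array([[1.0, 0.0, 2.0],   # shift sampling by 2 rows
                       [0.0, 1.0, 3.0],   # and 3 columns
                       [0.0, 0.0, 1.0]])
    out = np.asarray(transform_2d_affine(image,
                                         np.array([16, 16], dtype=np.int32),
                                         affine))
    print(out.shape)  # (16, 16)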

transform_2d_affine_nn#

dipy.align.vector_fields.transform_2d_affine_nn(image, ref_shape, affine=None)#

Transforms a 2D image by an affine transform with NN interpolation

Deforms the input image under the given affine transformation using nearest neighbor interpolation. The shape of the resulting image is given by ref_shape. If the affine matrix is None, it is taken as the identity.

Parameters:
image : array, shape (R, C)

the input image to be transformed

ref_shape : array, shape (2,)

the shape of the resulting image

affine : array, shape (3, 3)

the affine transform to be applied

Returns:
out : array, shape (R’, C’)

the transformed image

Notes

The intended shape of the resulting image must be provided because the affine transformation is defined on all of R^{2}, yet we must sample a finite lattice. Also, the resulting shape is not necessarily equal to the input shape, unless we are interested only in endomorphisms rather than general diffeomorphisms.

transform_3d_affine#

dipy.align.vector_fields.transform_3d_affine(volume, ref_shape, affine)#

Transforms a 3D volume by an affine transform with trilinear interp.

Deforms the input volume under the given affine transformation using tri-linear interpolation. The shape of the resulting volume is given by ref_shape. If the affine matrix is None, it is taken as the identity.

Parameters:
volume : array, shape (S, R, C)

the input volume to be transformed

ref_shape : array, shape (3,)

the shape of the resulting volume

affine : array, shape (4, 4)

the affine transform to be applied

Returns:
out : array, shape (S’, R’, C’)

the transformed volume

Notes

The intended shape of the resulting volume must be provided because the affine transformation is defined on all of R^{3}, yet we must sample a finite lattice. Also, the resulting shape is not necessarily equal to the input shape, unless we are interested only in endomorphisms rather than general diffeomorphisms.

transform_3d_affine_nn#

dipy.align.vector_fields.transform_3d_affine_nn(volume, ref_shape, affine=None)#

Transforms a 3D volume by an affine transform with NN interpolation

Deforms the input volume under the given affine transformation using nearest neighbor interpolation. The shape of the resulting volume is given by ref_shape. If the affine matrix is None, it is taken as the identity.

Parameters:
volume : array, shape (S, R, C)

the input volume to be transformed

ref_shape : array, shape (3,)

the shape of the resulting volume

affine : array, shape (4, 4)

the affine transform to be applied

Returns:
out : array, shape (S’, R’, C’)

the transformed volume

Notes

The intended shape of the resulting volume must be provided because the affine transformation is defined on all of R^{3}, yet we must sample a finite lattice. Also, the resulting shape is not necessarily equal to the input shape, unless we are interested only in endomorphisms rather than general diffeomorphisms.

warp_2d#

dipy.align.vector_fields.warp_2d(image, d1, affine_idx_in=None, affine_idx_out=None, affine_disp=None, out_shape=None)#

Warps a 2D image using bilinear interpolation

Deforms the input image under the given transformation. The warped image is computed using bi-linear interpolation and is given by:

  (1) warped[i] = image[ C * d1[A*i] + B*i ]

where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape.

Parameters:
image : array, shape (R, C)

the input image to be transformed

d1 : array, shape (R’, C’, 2)

the displacement field driving the transformation

affine_idx_in : array, shape (3, 3)

the matrix A in eq. (1) above

affine_idx_out : array, shape (3, 3)

the matrix B in eq. (1) above

affine_disp : array, shape (3, 3)

the matrix C in eq. (1) above

out_shape : array, shape (2,)

the number of rows and columns of the sampling grid

Returns:
warped : array, shape = out_shape

the transformed image

Notes

To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, an image with grid-to-space transformation T and let’s say we want to sample the warped image on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped image is given by:

  warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ]

where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input image.
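With all affine arguments None (identity), eq. (1) reduces to warped[i] = image[d1[i] + i]; the sketch below moves a bright pixel by a constant displacement (the float32 input and int32 shape array are dtype assumptions):

    import numpy as np
    from dipy.align.vector_fields import warp_2d

    image = np.zeros((16, 16), dtype=np.float32)
    image[8, 8] = 1.0

    d1 = np.zeros((16, 16, 2), dtype=np.float32)
    d1[..., 0] = 2.0  # sample 2 rows below each output location

    warped = np.asarray(warp_2d(image, d1, None, None, None,
                                np.array([16, 16], dtype=np.int32)))
    print(np.unravel_index(warped.argmax(), warped.shape))  # (6, 8)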

warp_2d_nn#

dipy.align.vector_fields.warp_2d_nn(image, d1, affine_idx_in=None, affine_idx_out=None, affine_disp=None, out_shape=None)#

Warps a 2D image using nearest neighbor interpolation

Deforms the input image under the given transformation. The warped image is computed using nearest-neighbor interpolation and is given by:

  (1) warped[i] = image[ C * d1[A*i] + B*i ]

where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape.

Parameters:
image : array, shape (R, C)

the input image to be transformed

d1 : array, shape (R’, C’, 2)

the displacement field driving the transformation

affine_idx_in : array, shape (3, 3)

the matrix A in eq. (1) above

affine_idx_out : array, shape (3, 3)

the matrix B in eq. (1) above

affine_disp : array, shape (3, 3)

the matrix C in eq. (1) above

out_shape : array, shape (2,)

the number of rows and columns of the sampling grid

Returns:
warped : array, shape = out_shape

the transformed image

Notes

To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, an image with grid-to-space transformation T and let’s say we want to sample the warped image on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped image is given by:

  warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ]

where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input image.

warp_3d#

dipy.align.vector_fields.warp_3d(volume, d1, affine_idx_in=None, affine_idx_out=None, affine_disp=None, out_shape=None)#

Warps a 3D volume using trilinear interpolation

Deforms the input volume under the given transformation. The warped volume is computed using tri-linear interpolation and is given by:

  (1) warped[i] = volume[ C * d1[A*i] + B*i ]

where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape.

Parameters:
volume : array, shape (S, R, C)

the input volume to be transformed

d1 : array, shape (S’, R’, C’, 3)

the displacement field driving the transformation

affine_idx_in : array, shape (4, 4)

the matrix A in eq. (1) above

affine_idx_out : array, shape (4, 4)

the matrix B in eq. (1) above

affine_disp : array, shape (4, 4)

the matrix C in eq. (1) above

out_shape : array, shape (3,)

the number of slices, rows and columns of the sampling grid

Returns:
warped : array, shape = out_shape

the transformed volume

Notes

To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, a volume with grid-to-space transformation T and let’s say we want to sample the warped volume on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped volume is given by:

  warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ]

where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input volume.

warp_3d_nn#

dipy.align.vector_fields.warp_3d_nn(volume, d1, affine_idx_in=None, affine_idx_out=None, affine_disp=None, out_shape=None)#

Warps a 3D volume using nearest-neighbor interpolation

Deforms the input volume under the given transformation. The warped volume is computed using nearest-neighbor interpolation and is given by:

  (1) warped[i] = volume[ C * d1[A*i] + B*i ]

where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes the discrete coordinates of a voxel in the sampling grid of shape = out_shape.

Parameters:
volume : array, shape (S, R, C)

the input volume to be transformed

d1 : array, shape (S’, R’, C’, 3)

the displacement field driving the transformation

affine_idx_in : array, shape (4, 4)

the matrix A in eq. (1) above

affine_idx_out : array, shape (4, 4)

the matrix B in eq. (1) above

affine_disp : array, shape (4, 4)

the matrix C in eq. (1) above

out_shape : array, shape (3,)

the number of slices, rows and columns of the sampling grid

Returns:
warped : array, shape = out_shape

the transformed volume

Notes

To illustrate the use of this function, consider a displacement field d1 with grid-to-space transformation R, a volume with grid-to-space transformation T and let’s say we want to sample the warped volume on a grid with grid-to-space transformation S (sampling grid). For each voxel in the sampling grid with discrete coordinates i, the warped volume is given by:

  warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ]

where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S, B = Tinv * S, C = Tinv we can use this function to efficiently warp the input volume.

warp_coordinates_2d#

dipy.align.vector_fields.warp_coordinates_2d(points, d1, in2world, world2out, field_world2grid)#
Parameters:
points : array, shape (n, 2)
d1 : array, shape (R, C, 2)
in2world : array, shape (3, 3)
world2out : array, shape (3, 3)
field_world2grid : array, shape (3, 3)

warp_coordinates_3d#

dipy.align.vector_fields.warp_coordinates_3d(points, d1, in2world, world2out, field_world2grid)#
Parameters:
points : array, shape (n, 3)
d1 : array, shape (S, R, C, 3)
in2world : array, shape (4, 4)
world2out : array, shape (4, 4)
field_world2grid : array, shape (4, 4)