align#

Bunch(**kwds)

Module: align._public#

Registration API: simplified API for registration of MRI data and of streamlines.

syn_registration(moving, static, *[, ...])

Register a 2D/3D source image (moving) to a 2D/3D target image (static).

register_dwi_to_template(dwi, gtab, *[, ...])

Register DWI data to a template through the B0 volumes.

write_mapping(mapping, fname)

Write out a syn registration mapping to a nifti file.

read_mapping(disp, domain_img, codomain_img, *)

Read a syn registration mapping from a nifti file.

resample(moving, static, *[, moving_affine, ...])

Resample an image (moving) from one space to another (static).

affine_registration(moving, static, *[, ...])

Find the affine transformation between two 3D images.

center_of_mass(moving, static, *[, ...])

Implements a center of mass transform.

translation(moving, static, *[, ...])

Implements a translation transform.

rigid(moving, static, *[, moving_affine, ...])

Implements a rigid transform.

rigid_isoscaling(moving, static, *[, ...])

Implements a rigid isoscaling transform.

rigid_scaling(moving, static, *[, ...])

Implements a rigid scaling transform.

affine(moving, static, *[, moving_affine, ...])

Implements an affine transform.

register_series(series, ref, *[, pipeline, ...])

Register a series to a reference image.

register_dwi_series(data, gtab, *[, affine, ...])

Register a DWI series to the mean of the B0 images in that series.

motion_correction(data, gtab, *[, affine, ...])

Apply a motion correction to a DWI dataset (Between-Volumes Motion correction)

streamline_registration(moving, static, *[, ...])

Register two collections of streamlines ('bundles') to each other.

Module: align.cpd#

Note#

This file is copied (possibly with major modifications) from the sources of the pycpd project - siavashk/pycpd. It remains licensed as the rest of PyCPD (MIT license as of October 2010).

See the COPYING file distributed along with the PyCPD package for the copyright and license terms.

DeformableRegistration(X, Y, *args[, ...])

Deformable point cloud registration.

gaussian_kernel(X, beta[, Y])

low_rank_eigen(G, num_eig)

Calculate num_eig eigenvectors and eigenvalues of gaussian matrix G.

initialize_sigma2(X, Y)

Initialize the variance (sigma2).

lowrankQS(G, beta, num_eig, *[, eig_fgt])

Calculate eigenvectors and eigenvalues of gaussian matrix G.

Module: align.imaffine#

Affine image registration module consisting of the following classes:

AffineMap: encapsulates the necessary information to perform affine transforms between two domains, defined by a static and a moving image. The domain of the transform is the set of points in the static image’s grid, and the codomain is the set of points in the moving image. When we call the transform method, AffineMap maps each point x of the domain (static grid) to the codomain (moving grid) and interpolates the moving image at that point to obtain the intensity value to be placed at x in the resulting grid. The transform_inverse method performs the opposite operation, mapping points in the codomain to points in the domain.

ParzenJointHistogram: computes the marginal and joint distributions of intensities of a pair of images, using Parzen windows [1] with a cubic spline kernel, as proposed by Mattes et al. [2]. It also computes the gradient of the joint histogram w.r.t. the parameters of a given transform.

MutualInformationMetric: computes the value and gradient of the mutual information metric the way Optimizer needs them. That is, given a set of transform parameters, it will use ParzenJointHistogram to compute the value and gradient of the joint intensity histogram evaluated at the given parameters, and evaluate the value and gradient of the histogram’s mutual information.

AffineRegistration: runs the multi-resolution registration, putting all the pieces together. It needs to create the scale space of the images and run the multi-resolution registration by using the Metric and the Optimizer at each level of the Gaussian pyramid. At each level, it will set up the metric to compute the value and gradient of the metric for the input images at different levels of smoothing.

References#

AffineInversionError

AffineInvalidValuesError

AffineMap(affine, *[, domain_grid_shape, ...])

Methods

MutualInformationMetric(*[, nbins, ...])

Methods

AffineRegistration(*[, metric, level_iters, ...])

Methods

transform_centers_of_mass(static, ...)

Transformation to align the center of mass of the input images.

transform_geometric_centers(static, ...)

Transformation to align the geometric center of the input images.

transform_origins(static, static_grid2world, ...)

Transformation to align the origins of the input images.

Module: align.imwarp#

Classes and functions for Symmetric Diffeomorphic Registration

DiffeomorphicMap(dim, disp_shape, *[, ...])

Methods

DiffeomorphicRegistration(*[, metric])

Methods

SymmetricDiffeomorphicRegistration(metric, *)

Methods

mult_aff(A, B)

Returns the matrix product A.dot(B) considering None as the identity

get_direction_and_spacings(affine, dim)

Extracts the rotational and spacing components from a matrix

Module: align.metrics#

Metrics for Symmetric Diffeomorphic Registration

SimilarityMetric(dim)

Methods

CCMetric(dim, *[, sigma_diff, radius])

Methods

EMMetric(dim, *[, smooth, inner_iter, ...])

Methods

SSDMetric(dim, *[, smooth, inner_iter, ...])

Methods

v_cycle_2d(n, k, delta_field, ...[, depth])

Multi-resolution Gauss-Seidel solver using V-type cycles

v_cycle_3d(n, k, delta_field, ...[, depth])

Multi-resolution Gauss-Seidel solver using V-type cycles

Module: align.reslice#

reslice(data, affine, zooms, new_zooms, *[, ...])

Reslice data with new voxel resolution defined by new_zooms.

Module: align.scalespace#

ScaleSpace(image, num_levels, *[, ...])

Methods

IsotropicScaleSpace(image, factors, sigmas, *)

Methods

Module: align.streamlinear#

StreamlineDistanceMetric(*[, num_threads])

Methods

BundleMinDistanceMetric(*[, num_threads])

Bundle-based Minimum Distance aka BMD.

BundleMinDistanceMatrixMetric(*[, num_threads])

Bundle-based Minimum Distance aka BMD

BundleMinDistanceAsymmetricMetric(*[, ...])

Asymmetric Bundle-based Minimum distance.

BundleSumDistanceMatrixMetric(*[, num_threads])

Bundle-based Sum Distance aka BMD

JointBundleMinDistanceMetric(*[, num_threads])

Bundle-based Minimum Distance for joint optimization.

StreamlineLinearRegistration(*[, metric, ...])

Methods

StreamlineRegistrationMap(matopt, xopt, ...)

Methods

JointStreamlineRegistrationMap(xopt, fopt, ...)

Methods

bundle_sum_distance(t, static, moving, *[, ...])

MDF distance optimization function (SUM).

bundle_min_distance(t, static, moving)

MDF-based pairwise distance optimization function (MIN).

bundle_min_distance_fast(t, static, moving, ...)

MDF-based pairwise distance optimization function (MIN).

bundle_min_distance_asymmetric_fast(t, ...)

MDF-based pairwise distance optimization function (MIN).

remove_clusters_by_size(clusters[, min_size])

progressive_slr(static, moving, metric, x0, ...)

Progressive SLR.

slr_with_qbx(static, moving, *[, x0, ...])

Utility function for registering large tractograms.

groupwise_slr(bundles, *[, x0, tol, ...])

Function to perform unbiased groupwise bundle registration

get_unique_pairs(n_bundle, *[, pairs])

Make unique pairs from n_bundle bundles.

compose_matrix44(t, *[, dtype])

Compose a 4x4 transformation matrix.

decompose_matrix44(mat, *[, size])

Given a 4x4 homogeneous matrix return the parameter vector.

Module: align.streamwarp#

average_bundle_length(bundle)

Find average Euclidean length of the bundle in mm.

find_missing(lst, cb)

Find unmatched streamline indices in moving bundle.

bundlewarp(static, moving, *[, dist, alpha, ...])

Register two bundles using nonlinear method.

bundlewarp_vector_filed(moving_aligned, ...)

Calculate vector fields.

bundlewarp_shape_analysis(moving_aligned, ...)

Calculate bundle shape difference profile.

Bunch#

class dipy.align.Bunch(**kwds)[source]#

Bases: object

syn_registration#

dipy.align._public.syn_registration(moving, static, *, moving_affine=None, static_affine=None, step_length=0.25, metric='CC', dim=3, level_iters=None, prealign=None, **metric_kwargs)[source]#

Register a 2D/3D source image (moving) to a 2D/3D target image (static).

Parameters:
moving, static : array or nib.Nifti1Image or str

Either as a 2D/3D array or as a nifti image object, or as a string containing the full path to a nifti file.

moving_affine, static_affine : 4x4 array, optional

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

metric : string, optional

The metric to be optimized. One of ‘CC’, ‘EM’ or ‘SSD’. Default: ‘CC’ (CCMetric).

dim : int (either 2 or 3), optional

The dimensions of the image domain. Default: 3

level_iters : list of int, optional

The number of iterations at each level of the Gaussian pyramid (the length of the list defines the number of pyramid levels to be used). Default: [10, 10, 5].

metric_kwargs : dict, optional

Parameters for initialization of the metric object. If not provided, uses the default settings of each metric.

Returns:
warped_moving : ndarray

The data in moving, warped towards the static data.

forward : ndarray (…, 3)

The vector field describing the forward warping from the source to the target.

backward : ndarray (…, 3)

The vector field describing the backward warping from the target to the source.
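The returned forward and backward fields are dense per-voxel displacements. A minimal numpy sketch of how such a field can be applied (an illustration with nearest-neighbor sampling, not DIPY's implementation, which interpolates via DiffeomorphicMap):

```python
import numpy as np

# Illustrative sketch (not DIPY code): applying a dense displacement
# field of shape (X, Y, Z, 3), like the forward field returned by
# syn_registration, using nearest-neighbor sampling.
def apply_displacement(moving, field):
    warped = np.zeros_like(moving)
    for idx in np.ndindex(moving.shape):
        # each output voxel samples the moving image at its displaced position
        src = np.round(np.array(idx) + field[idx]).astype(int)
        if all(0 <= s < n for s, n in zip(src, moving.shape)):
            warped[idx] = moving[tuple(src)]
    return warped

moving = np.arange(27.0).reshape(3, 3, 3)
zero_field = np.zeros((3, 3, 3, 3))
# a zero displacement field leaves the image unchanged
assert np.array_equal(apply_displacement(moving, zero_field), moving)
```

A uniform displacement of +1 along the first axis shifts the whole volume by one voxel, with out-of-bounds voxels left at zero.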

register_dwi_to_template#

dipy.align._public.register_dwi_to_template(dwi, gtab, *, dwi_affine=None, template=None, template_affine=None, reg_method='syn', **reg_kwargs)[source]#

Register DWI data to a template through the B0 volumes.

Parameters:
dwi : 4D array, nifti image or str

Containing the DWI data, or full path to a nifti file with DWI.

gtab : GradientTable or sequence of strings

The gradients associated with the DWI data, or a sequence with (fbval, fbvec), full paths to bvals and bvecs files.

dwi_affine : 4x4 array, optional

An affine transformation associated with the DWI. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

template : 3D array, nifti image or str

Containing the data for the template, or full path to a nifti file with the template data.

template_affine : 4x4 array, optional

An affine transformation associated with the template. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

reg_method : str, optional

One of “syn” or “aff”, designating which registration method is used: “syn” uses the syn_registration() function, while “aff” uses the affine_registration() function. Default: “syn”.

reg_kwargs : dict, optional

Key-word arguments for syn_registration() or affine_registration().

Returns:
warped_b0 : ndarray

b0 volume warped to the template.

mapping : DiffeomorphicMap or ndarray

If reg_method is “syn”, a DiffeomorphicMap class instance that can be used to transform between the two spaces. Otherwise, if reg_method is “aff”, a 4x4 matrix encoding the affine transform.

Notes

This function assumes that the DWI data is already internally registered. See register_dwi_series().

write_mapping#

dipy.align._public.write_mapping(mapping, fname)[source]#

Write out a syn registration mapping to a nifti file.

Parameters:
mapping : DiffeomorphicMap

Registration mapping derived from syn_registration()

fname : str

Full path to the nifti file storing the mapping

Notes

The data in the file is organized with shape (X, Y, Z, 3, 2), such that the forward mapping in each voxel is in data[i, j, k, :, 0] and the backward mapping in each voxel is in data[i, j, k, :, 1].
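The documented layout can be illustrated with plain numpy (a sketch of the (X, Y, Z, 3, 2) data organization, not DIPY's writer):

```python
import numpy as np

# Sketch of the documented (X, Y, Z, 3, 2) layout: the last axis
# separates the forward (index 0) and backward (index 1) fields.
forward = np.random.default_rng(0).normal(size=(4, 5, 6, 3))
backward = -forward
packed = np.stack([forward, backward], axis=-1)

assert packed.shape == (4, 5, 6, 3, 2)
# forward mapping at voxel (i, j, k) is packed[i, j, k, :, 0]
assert np.array_equal(packed[1, 2, 3, :, 0], forward[1, 2, 3])
assert np.array_equal(packed[1, 2, 3, :, 1], backward[1, 2, 3])
```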

read_mapping#

dipy.align._public.read_mapping(disp, domain_img, codomain_img, *, prealign=None)[source]#

Read a syn registration mapping from a nifti file.

Parameters:
disp : str or Nifti1Image

A file or image containing the mapping displacement field in each voxel, with shape (x, y, z, 3, 2).

domain_img : str or Nifti1Image
codomain_img : str or Nifti1Image
Returns:
A DiffeomorphicMap object.

Notes

See write_mapping() for the data format expected.

resample#

dipy.align._public.resample(moving, static, *, moving_affine=None, static_affine=None, between_affine=None)[source]#

Resample an image (moving) from one space to another (static).

Parameters:
moving : array, nifti image or str

Containing the data for the moving object, or full path to a nifti file with the moving data.

static : array, nifti image or str

Containing the data for the static object, or full path to a nifti file with the static data.

moving_affine : 4x4 array, optional

An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

static_affine : 4x4 array, optional

An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

between_affine : 4x4 array, optional

If an additional affine is needed between the two spaces. Default: identity (no additional registration).

Returns:
A Nifti1Image class instance with the data from the moving object
resampled into the space of the static object.

affine_registration#

dipy.align._public.affine_registration(moving, static, *, moving_affine=None, static_affine=None, pipeline=None, starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)[source]#

Find the affine transformation between two 3D images. Alternatively, find the combination of several linear transformations.

Parameters:
moving : array, nifti image or str

Containing the data for the moving object, or full path to a nifti file with the moving data.

static : array, nifti image or str

Containing the data for the static object, or full path to a nifti file with the static data.

moving_affine : 4x4 array, optional

An affine transformation associated with the moving object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

static_affine : 4x4 array, optional

An affine transformation associated with the static object. Required if data is provided as an array. If provided together with nifti/path, will over-ride the affine that is in the nifti.

pipeline : list of str, optional

Sequence of transforms to use in the gradual fitting. Default: gradual fit of the full affine (executed from left to right): ["center_of_mass", "translation", "rigid", "affine"]. Alternatively, any other combination of the following registration methods might be used: center_of_mass, translation, rigid, rigid_isoscaling, rigid_scaling and affine.

starting_affine : 4x4 array, optional

Initial guess for the transformation between the spaces. Default: identity.

metric : str, optional

Currently only supports ‘MI’ for MutualInformationMetric.

level_iters : sequence, optional

AffineRegistration key-word argument: the number of iterations at each scale of the scale space. level_iters[0] corresponds to the coarsest scale and level_iters[-1] to the finest. By default, a 3-level scale space with an iterations sequence equal to [10000, 1000, 100] will be used.

sigmas : sequence of floats, optional

AffineRegistration key-word argument: custom smoothing parameter to build the scale space (one parameter for each scale). By default, the sequence of sigmas will be [3, 1, 0].

factors : sequence of floats, optional

AffineRegistration key-word argument: custom scale factors to build the scale space (one factor for each scale). By default, the sequence of factors will be [4, 2, 1].

ret_metric : boolean, optional

Set it to True to return the value of the optimized coefficients and the optimization quality metric.

moving_mask : array, shape (S', R', C') or (R', C'), optional

Moving image mask that defines which pixels in the moving image are used to calculate the mutual information.

static_mask : array, shape (S, R, C) or (R, C), optional

Static image mask that defines which pixels in the static image are used to calculate the mutual information.

nbins : int, optional

MutualInformationMetric key-word argument: the number of bins to be used for computing the intensity histograms. The default is 32.

sampling_proportion : None or float in interval (0, 1], optional

MutualInformationMetric key-word argument: there are two types of sampling, dense and sparse. Dense sampling uses all voxels for estimating the (joint and marginal) intensity histograms, while sparse sampling uses a subset of them. If sampling_proportion is None, then dense sampling is used. If sampling_proportion is a floating point value in (0, 1], then sparse sampling is used, where sampling_proportion specifies the proportion of voxels to be used. The default is None (dense sampling).

Returns:
resampled : array

Moving data resampled to the static space after computing the affine transformation.

final_affine : 4x4 array

The affine associated with the transformation.

xopt : array

The value of the optimized coefficients.

fopt : float

The value of the optimization quality metric.

Notes

Performs a gradual registration between the two inputs, using a pipeline that gradually approximates the final registration. If the final default step (affine) is omitted, the resulting affine may not have all 12 degrees of freedom adjusted.
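The note about degrees of freedom can be illustrated numerically (an illustrative check, not DIPY code): a rigid transform has an orthonormal rotation block, while the final affine step can also adjust scales and shears.

```python
import numpy as np

# A rigid 4x4 transform has an orthonormal 3x3 rotation block
# (6 degrees of freedom: rotation + translation), while the final
# "affine" pipeline step can also adjust scales and shears (12 DOF).
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
rigid = np.eye(4)
rigid[:3, :3] = R
rigid[:3, 3] = [1.0, 2.0, 3.0]        # translation part
assert np.allclose(rigid[:3, :3] @ rigid[:3, :3].T, np.eye(3))

full_affine = rigid.copy()
full_affine[0, 0] *= 1.2              # anisotropic scaling: no longer rigid
assert not np.allclose(full_affine[:3, :3] @ full_affine[:3, :3].T, np.eye(3))
```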

center_of_mass#

dipy.align._public.center_of_mass(moving, static, *, moving_affine=None, static_affine=None, pipeline=['center_of_mass'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a center of mass transform. Based on affine_registration().

translation#

dipy.align._public.translation(moving, static, *, moving_affine=None, static_affine=None, pipeline=['translation'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a translation transform. Based on affine_registration().

rigid#

dipy.align._public.rigid(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid transform. Based on affine_registration().

rigid_isoscaling#

dipy.align._public.rigid_isoscaling(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid_isoscaling'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid isoscaling transform. Based on affine_registration().

rigid_scaling#

dipy.align._public.rigid_scaling(moving, static, *, moving_affine=None, static_affine=None, pipeline=['rigid_scaling'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements a rigid scaling transform. Based on affine_registration().

affine#

dipy.align._public.affine(moving, static, *, moving_affine=None, static_affine=None, pipeline=['affine'], starting_affine=None, metric='MI', level_iters=None, sigmas=None, factors=None, ret_metric=False, moving_mask=None, static_mask=None, **metric_kwargs)#

Implements an affine transform. Based on affine_registration().

register_series#

dipy.align._public.register_series(series, ref, *, pipeline=None, series_affine=None, ref_affine=None, static_mask=None)[source]#

Register a series to a reference image.

Parameters:
series : 4D array or nib.Nifti1Image class instance or str

The data is 4D, with the last dimension separating different 3D volumes.

ref : int or 3D array or nib.Nifti1Image class instance or str

If this is an int, it is the index of the reference image within the series. Otherwise, it is an array of data to register with (in which case ref_affine is required), or a nifti image, or the full path to a file containing one.

pipeline : sequence, optional

Sequence of transforms to do for each volume in the series. Default (executed from left to right): [center_of_mass, translation, rigid, affine]

series_affine, ref_affine : 4x4 arrays, optional

The affine. If provided, this input will over-ride the affine provided together with the nifti img or file.

static_mask : array, shape (S, R, C) or (R, C), optional

Static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xformed, affines : 4D array, (4, 4, n) array

A 4D array with the transformed data, and a (4, 4, n) array with the 4x4 matrices associated with each of the volumes of the input moving data, used to transform it into register with the static data.

register_dwi_series#

dipy.align._public.register_dwi_series(data, gtab, *, affine=None, b0_ref=0, pipeline=None, static_mask=None)[source]#

Register a DWI series to the mean of the B0 images in that series, all first registered to the first B0 volume.

Parameters:
data : 4D array or nibabel Nifti1Image class instance or str

Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.

gtab : a GradientTable class instance or tuple of strings

If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).

affine : 4x4 array, optional

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

b0_ref : int, optional

Which b0 volume to use as reference. Default: 0

pipeline : list of callables, optional

The transformations to perform in sequence (from left to right). Default: [center_of_mass, translation, rigid, affine]

static_mask : array, shape (S, R, C) or (R, C), optional

Static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xform_img, affine_array : Nifti1Image, list

A Nifti1Image containing the registered data, using the affine of the original data, and a list containing the affine transforms associated with each of the volumes.

motion_correction#

dipy.align._public.motion_correction(data, gtab, *, affine=None, b0_ref=0, pipeline=['center_of_mass', 'translation', 'rigid', 'affine'], static_mask=None)#

Apply a motion correction to a DWI dataset (Between-Volumes Motion correction)

Parameters:
data : 4D array or nibabel Nifti1Image class instance or str

Diffusion data. Either as a 4D array or as a nifti image object, or as a string containing the full path to a nifti file.

gtab : a GradientTable class instance or tuple of strings

If provided as a tuple of strings, these are assumed to be full paths to the bvals and bvecs files (in that order).

affine : 4x4 array, optional

Must be provided for data provided as an array. If provided together with Nifti1Image or str data, this input will over-ride the affine that is stored in the data input. Default: use the affine stored in data.

b0_ref : int, optional

Which b0 volume to use as reference. Default: 0

pipeline : list of callables, optional

The transformations to perform in sequence (from left to right). Default: [center_of_mass, translation, rigid, affine]

static_mask : array, shape (S, R, C) or (R, C), optional

Static image mask that defines which pixels in the static image are used to calculate the mutual information.

Returns:
xform_img, affine_array : Nifti1Image, list

A Nifti1Image containing the registered data, using the affine of the original data, and a list containing the affine transforms associated with each of the volumes.

streamline_registration#

dipy.align._public.streamline_registration(moving, static, *, n_points=100, native_resampled=False)[source]#

Register two collections of streamlines (‘bundles’) to each other.

Parameters:
moving, static : lists of 3 by n arrays, or str

The two bundles to be registered. Given either as lists of arrays with 3D coordinates, or strings containing full paths to these files.

n_points : int, optional

How many points to resample to. Default: 100.

native_resampled : bool, optional

Whether to return the moving bundle in the original space, but resampled in the static space to n_points.

Returns:
aligned : list

Streamlines from the moving group, moved to be closely matched to the static group.

matrix : array (4, 4)

The affine transformation that takes us from ‘moving’ to ‘static’.
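Applying the returned (4, 4) matrix to streamline coordinates amounts to homogeneous-coordinate multiplication; a self-contained numpy sketch (illustrative, not DIPY's transform code):

```python
import numpy as np

# Illustrative sketch: applying a (4, 4) affine, like the matrix
# returned by streamline_registration, to an (n, 3) streamline.
def apply_affine(mat, points):
    # append a homogeneous coordinate of 1 to each point
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ mat.T)[:, :3]

streamline = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
translate = np.eye(4)
translate[:3, 3] = [5.0, 0.0, 0.0]    # pure translation along x
moved = apply_affine(translate, streamline)
assert np.allclose(moved, [[5.0, 0.0, 0.0], [6.0, 0.0, 0.0]])
```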

DeformableRegistration#

class dipy.align.cpd.DeformableRegistration(X, Y, *args, sigma2=None, alpha=None, beta=None, low_rank=False, num_eig=100, max_iterations=None, tolerance=None, w=None, **kwargs)[source]#

Bases: object

Deformable point cloud registration.

Attributes:
X: numpy array

NxD array of target points.

Y: numpy array

MxD array of source points.

TY: numpy array

MxD array of transformed source points.

sigma2: float (positive)

Initial variance of the Gaussian mixture model.

N: int

Number of target points.

M: int

Number of source points.

D: int

Dimensionality of source and target points.

iteration: int

The current iteration throughout registration.

max_iterations: int

Registration will terminate once the algorithm has taken this many iterations.

tolerance: float (positive)

Registration will terminate once the difference between consecutive objective function values falls within this tolerance.

w: float (between 0 and 1)

Contribution of the uniform distribution to account for outliers. Valid values span 0 (inclusive) and 1 (exclusive).

q: float

The objective function value that represents the misalignment between source and target point clouds.

diff: float (positive)

The absolute difference between the current and previous objective function values.

P: numpy array

MxN array of probabilities. P[m, n] represents the probability that the m-th source point corresponds to the n-th target point.

Pt1: numpy array

Nx1 column array. Multiplication result between the transpose of P and a column vector of all 1s.

P1: numpy array

Mx1 column array. Multiplication result between P and a column vector of all 1s.

Np: float (positive)

The sum of all elements in P.

alpha: float (positive)

Represents the trade-off between the goodness of maximum likelihood fit and regularization.

beta: float (positive)

Width of the Gaussian kernel.

low_rank: bool

Whether to use low rank approximation.

num_eig: int

Number of eigenvectors to use in lowrank calculation.

Methods

expectation()

Compute the expectation step of the EM algorithm.

get_registration_parameters()

Return the current estimate of the deformable transformation parameters.

iterate()

Perform one iteration of the EM algorithm.

maximization()

Compute the maximization step of the EM algorithm.

register(*[, callback])

Perform the EM registration.

transform_point_cloud(*[, Y])

Update a point cloud using the new estimate of the deformable transformation.

update_transform()

Calculate a new estimate of the deformable transformation.

update_variance()

Update the variance of the mixture model.

expectation()[source]#

Compute the expectation step of the EM algorithm.

get_registration_parameters()[source]#

Return the current estimate of the deformable transformation parameters.

Returns:
self.G: numpy array

Gaussian kernel matrix.

self.W: numpy array

Deformable transformation matrix.

iterate()[source]#

Perform one iteration of the EM algorithm.

maximization()[source]#

Compute the maximization step of the EM algorithm.

register(*, callback=<function DeformableRegistration.<lambda>>)[source]#

Perform the EM registration.

Parameters:
callback: function

A function that will be called after each iteration. Can be used to visualize the registration process.

Returns:
self.TY: numpy array

MxD array of transformed source points.

registration_parameters:

The returned parameters depend on the registration method used.

transform_point_cloud(*, Y=None)[source]#

Update a point cloud using the new estimate of the deformable transformation.

Parameters:
Y: numpy array, optional

Array of points to transform; useful for predicting on a new set of points not used to run the initial registration. If None, self.Y is used.

Returns:
If Y is None, returns None.
Otherwise, returns the transformed Y.
update_transform()[source]#

Calculate a new estimate of the deformable transformation. See Eq. 22 of https://arxiv.org/pdf/0905.2635.pdf.

update_variance()[source]#

Update the variance of the mixture model.

This uses the new estimate of the deformable transformation. See the update rule for sigma2 in Eq. 23 of https://arxiv.org/pdf/0905.2635.pdf.
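The expectation step that drives these updates can be sketched in a few lines of numpy (a simplified illustration of CPD, assuming no outlier term, i.e. w = 0; not the pycpd implementation itself):

```python
import numpy as np

# Simplified CPD expectation step: P[m, n] is the posterior
# probability that source point m generated target point n, under
# an isotropic Gaussian mixture with variance sigma2.
def expectation_sketch(X, TY, sigma2):
    diff = X[None, :, :] - TY[:, None, :]            # shape (M, N, D)
    P = np.exp(-np.sum(diff**2, axis=2) / (2.0 * sigma2))
    return P / P.sum(axis=0, keepdims=True)          # columns sum to 1

X = np.array([[0.0, 0.0], [3.0, 0.0]])               # N = 2 target points
TY = np.array([[0.1, 0.0], [2.9, 0.0]])              # M = 2 transformed sources
P = expectation_sketch(X, TY, sigma2=0.5)
assert np.allclose(P.sum(axis=0), 1.0)
# each target is claimed mostly by its nearest source point
assert P[0, 0] > P[1, 0] and P[1, 1] > P[0, 1]
```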

gaussian_kernel#

dipy.align.cpd.gaussian_kernel(X, beta, Y=None)[source]#
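The kernel is presumably G[i, j] = exp(-||X_i - Y_j||^2 / (2 * beta^2)), with Y defaulting to X (an assumption based on the pycpd sources this module is copied from); a self-contained numpy sketch:

```python
import numpy as np

# Illustrative re-implementation (assumption, mirrors pycpd):
# G[i, j] = exp(-||X_i - Y_j||^2 / (2 * beta^2)).
def gaussian_kernel_sketch(X, beta, Y=None):
    if Y is None:
        Y = X
    diff = X[:, None, :] - Y[None, :, :]             # pairwise differences
    return np.exp(-np.sum(diff**2, axis=2) / (2.0 * beta**2))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
G = gaussian_kernel_sketch(X, beta=1.0)
# identical points give a kernel value of 1 on the diagonal
assert np.allclose(np.diag(G), 1.0)
assert np.isclose(G[0, 1], np.exp(-0.5))
```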

low_rank_eigen#

dipy.align.cpd.low_rank_eigen(G, num_eig)[source]#

Calculate num_eig eigenvectors and eigenvalues of gaussian matrix G.

Enables lower dimensional solving.
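The idea can be sketched with numpy's symmetric eigensolver (an illustrative sketch of keeping the num_eig largest eigenpairs, not the DIPY code):

```python
import numpy as np

# Keep the num_eig largest eigenpairs of a symmetric matrix G,
# enabling a low-rank approximation G ~ Q @ diag(S) @ Q.T.
def low_rank_eigen_sketch(G, num_eig):
    S, Q = np.linalg.eigh(G)                  # eigenvalues ascending
    order = np.argsort(S)[::-1][:num_eig]     # take the largest num_eig
    return Q[:, order], S[order]

G = np.array([[2.0, 1.0], [1.0, 2.0]])
Q, S = low_rank_eigen_sketch(G, 2)
# with all eigenpairs kept, the decomposition reconstructs G exactly
assert np.allclose(Q @ np.diag(S) @ Q.T, G)
```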

initialize_sigma2#

dipy.align.cpd.initialize_sigma2(X, Y)[source]#

Initialize the variance (sigma2).

Parameters:
X: numpy array

NxD array of points for target.

Y: numpy array

MxD array of points for source.

Returns:
sigma2: float

Initial variance.
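The initial variance is presumably the mean squared pairwise distance between the point sets, sum ||X_n - Y_m||^2 / (D * M * N) (an assumption based on the pycpd sources); a numpy sketch:

```python
import numpy as np

# Illustrative re-implementation (assumption, mirrors pycpd):
# sigma2 = sum of squared pairwise distances / (D * M * N).
def initialize_sigma2_sketch(X, Y):
    (N, D), (M, _) = X.shape, Y.shape
    diff = X[None, :, :] - Y[:, None, :]      # all M*N pairwise differences
    return np.sum(diff**2) / (D * M * N)

X = np.zeros((2, 2))                          # two target points at origin
Y = np.ones((1, 2))                           # one source point at (1, 1)
# every pairwise squared distance is 2; 2 pairs, D=2, M=1, N=2
assert np.isclose(initialize_sigma2_sketch(X, Y), 1.0)
```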

lowrankQS#

dipy.align.cpd.lowrankQS(G, beta, num_eig, *, eig_fgt=False)[source]#

Calculate eigenvectors and eigenvalues of gaussian matrix G.

Note: this function is a placeholder for implementing the fast Gauss transform; it is not yet implemented.

Parameters:
G: numpy array

Gaussian kernel matrix.

beta: float

Width of the Gaussian kernel.

num_eig: int

Number of eigenvectors to use in lowrank calculation of G

eig_fgt: bool

If True, use fast gauss transform method to speed up.

AffineInversionError#

class dipy.align.imaffine.AffineInversionError[source]#

Bases: Exception

Attributes:
args

Methods

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

AffineInvalidValuesError#

class dipy.align.imaffine.AffineInvalidValuesError[source]#

Bases: Exception

Attributes:
args

Methods

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

AffineMap#

class dipy.align.imaffine.AffineMap(affine, *, domain_grid_shape=None, domain_grid2world=None, codomain_grid_shape=None, codomain_grid2world=None)[source]#

Bases: object

Methods

get_affine()

Return the value of the transformation, not a reference.

set_affine(affine)

Set the affine transform (operating in physical space).

transform(image, *[, interpolation, ...])

Transform the input image from co-domain to domain space.

transform_inverse(image, *[, interpolation, ...])

Transform the input image from domain to co-domain space.

get_affine()[source]#

Return the value of the transformation, not a reference.

Returns:
affine : ndarray

Copy of the transform, not a reference.

set_affine(affine)[source]#

Set the affine transform (operating in physical space).

Also sets self.affine_inv - the inverse of affine, or None if there is no inverse.

Parameters:
affine : array, shape (dim + 1, dim + 1)

the matrix representing the affine transform operating in physical space. The domain and co-domain information remains unchanged. If None, then self represents the identity transformation.

transform(image, *, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)[source]#

Transform the input image from co-domain to domain space.

By default, the transformed image is sampled at a grid defined by self.domain_shape and self.domain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image : 2D or 3D array

the image to be transformed

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shape : sequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_only : Boolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform.

Returns:
transformed : array

the transformed image, sampled at the requested grid, with shape sampling_grid_shape or self.codomain_shape.

transform_inverse(image, *, interpolation='linear', image_grid2world=None, sampling_grid_shape=None, sampling_grid2world=None, resample_only=False)[source]#

Transform the input image from domain to co-domain space.

By default, the transformed image is sampled at a grid defined by self.codomain_shape and self.codomain_grid2world. If such information was not provided then sampling_grid_shape is mandatory.

Parameters:
image : 2D or 3D array

the image to be transformed

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with image. If None (the default), then the grid-to-world transform is assumed to be the identity.

sampling_grid_shape : sequence, shape (dim,), optional

the shape of the grid where the transformed image must be sampled. If None (the default), then self.codomain_shape is used instead (which must have been set at initialization, otherwise an exception will be raised).

sampling_grid2world : array, shape (dim + 1, dim + 1), optional

the grid-to-world transform associated with the sampling grid (specified by sampling_grid_shape, or by default self.codomain_shape). If None (the default), then the grid-to-world transform is assumed to be the identity.

resample_only : Boolean, optional

If False (the default) the affine transform is applied normally. If True, then the affine transform is not applied, and the input image is just re-sampled on the domain grid of this transform.

Returns:
transformed : array

the transformed image, sampled at the requested grid, with shape sampling_grid_shape or self.codomain_shape.
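As an illustration of the sampling logic shared by transform and transform_inverse, here is a pure-NumPy sketch (not dipy's implementation, which interpolates in compiled code): for every voxel v of the output (domain) grid, the input image is read at voxel inv(image_grid2world) @ affine @ domain_grid2world @ v. The function name apply_affine_map is hypothetical.

```python
import numpy as np

def apply_affine_map(image, affine, domain_shape,
                     image_grid2world, domain_grid2world):
    """Resample `image` onto the domain grid (nearest-neighbor sketch).

    For every voxel v of the output (domain) grid, read the input image
    at voxel  inv(image_grid2world) @ affine @ domain_grid2world @ v.
    """
    composed = np.linalg.inv(image_grid2world) @ affine @ domain_grid2world
    out = np.zeros(domain_shape, dtype=image.dtype)
    for idx in np.ndindex(*domain_shape):
        v = np.array([*idx, 1.0])          # homogeneous voxel coordinate
        u = np.round((composed @ v)[:-1]).astype(int)  # nearest neighbor
        if np.all(u >= 0) and np.all(u < image.shape):
            out[idx] = image[tuple(u)]
    return out

# With identity matrices everywhere, the output equals the input.
img = np.arange(12.0).reshape(3, 4)
eye = np.eye(3)
warped = apply_affine_map(img, eye, img.shape, eye, eye)
```

A non-identity affine (e.g. a one-voxel translation) shifts where the input image is read, which is exactly what resample_only=False does; resample_only=True corresponds to dropping the affine factor from the composed matrix.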

MutualInformationMetric#

class dipy.align.imaffine.MutualInformationMetric(*, nbins=32, sampling_proportion=None)[source]#

Bases: object

Methods

distance(params)

Numeric value of the negative Mutual Information.

distance_and_gradient(params)

Numeric value of the metric and its gradient at given parameters.

gradient(params)

Numeric value of the metric's gradient at the given parameters.

setup(transform, static, moving, *[, ...])

Prepare the metric to compute intensity densities and gradients.

distance(params)[source]#

Numeric value of the negative Mutual Information.

We need to change the sign so we can use standard minimization algorithms.

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mi : float

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

distance_and_gradient(params)[source]#

Numeric value of the metric and its gradient at given parameters.

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
neg_mi : float

the negative mutual information of the input images after transforming the moving image by the currently set transform with params parameters

neg_mi_grad : array, shape (n,)

the gradient of the negative Mutual Information

gradient(params)[source]#

Numeric value of the metric’s gradient at the given parameters.

Parameters:
params : array, shape (n,)

the parameter vector of the transform currently used by the metric (the transform name is provided when self.setup is called), n is the number of parameters of the transform

Returns:
grad : array, shape (n,)

the gradient of the negative Mutual Information

setup(transform, static, moving, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, static_mask=None, moving_mask=None)[source]#

Prepare the metric to compute intensity densities and gradients.

The histograms will be set up to compute probability densities of intensities within the minimum and maximum values of static and moving.

Parameters:
transform : instance of Transform

the transformation with respect to whose parameters the gradient must be computed

static : array, shape (S, R, C) or (R, C)

static image

moving : array, shape (S’, R’, C’) or (R’, C’)

moving image. The dimensions of the static (S, R, C) and moving (S’, R’, C’) images do not need to be the same.

static_grid2world : array (dim+1, dim+1), optional

the grid-to-space transform of the static image. The default is None, implying the transform is the identity.

moving_grid2world : array (dim+1, dim+1), optional

the grid-to-space transform of the moving image. The default is None, implying the spacing along all axes is 1.

starting_affine : array, shape (dim+1, dim+1), optional

the pre-aligning matrix (an affine transform) that roughly aligns the moving image towards the static image. If None, no pre-alignment is performed. If a pre-alignment matrix is available, it is recommended to provide this matrix as starting_affine instead of manually transforming the moving image to reduce interpolation artifacts. The default is None, implying no pre-alignment is performed.

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

moving_mask : array, shape (S’, R’, C’) or (R’, C’), optional

moving image mask that defines which pixels in the moving image are used to calculate the mutual information.
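setup prepares joint intensity histograms from which the metric is evaluated. The quantity being optimized can be sketched in plain NumPy from its definition (this only illustrates the quantity itself; dipy's MutualInformationMetric uses smoothed Parzen densities and returns the negative MI so standard minimizers can be used):

```python
import numpy as np

def mutual_information(static, moving, nbins=32):
    """Plug-in estimate of mutual information from a joint histogram."""
    hist, _, _ = np.histogram2d(static.ravel(), moving.ravel(), bins=nbins)
    pxy = hist / hist.sum()                     # joint density
    px = pxy.sum(axis=1, keepdims=True)         # marginal of static
    py = pxy.sum(axis=0, keepdims=True)         # marginal of moving
    nz = pxy > 0                                # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
a = rng.random((32, 32))
mi_self = mutual_information(a, a)                       # perfectly aligned
mi_noise = mutual_information(a, rng.random((32, 32)))   # unrelated images
```

Aligned images share intensity structure, so their mutual information is higher than that of unrelated images, which is why maximizing MI (minimizing its negative) drives the registration.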

AffineRegistration#

class dipy.align.imaffine.AffineRegistration(*, metric=None, level_iters=None, sigmas=None, factors=None, method='L-BFGS-B', ss_sigma_factor=None, options=None, verbosity=1)[source]#

Bases: object

Methods

optimize(static, moving, transform, params0, *)

Start the optimization process.

optimize(static, moving, transform, params0, *, static_grid2world=None, moving_grid2world=None, starting_affine=None, ret_metric=False, static_mask=None, moving_mask=None)[source]#

Start the optimization process.

Parameters:
static : 2D or 3D array

the image to be used as reference during optimization.

moving : 2D or 3D array

the image to be used as “moving” during optimization. It is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘starting_affine’ matrix

transform : instance of Transform

the transformation with respect to whose parameters the gradient must be computed

params0 : array, shape (n,)

parameters from which to start the optimization. If None, the optimization will start at the identity transform. n is the number of parameters of the specified transformation.

static_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the static image. The default is None, implying the transform is the identity.

moving_grid2world : array, shape (dim+1, dim+1), optional

the voxel-to-space transformation associated with the moving image. The default is None, implying the transform is the identity.

starting_affine : string, matrix, or None, optional
If string:

‘mass’: align centers of gravity
‘voxel-origin’: align physical coordinates of voxel (0,0,0)
‘centers’: align physical coordinates of central voxels

If matrix:

array, shape (dim+1, dim+1).

If None:

Start from identity.

ret_metric : boolean, optional

if True, also return the optimal parameters of the similarity metric (xopt) and the value of the metric at those parameters (fopt). Default: False.

static_mask : array, shape (S, R, C) or (R, C), optional

static image mask that defines which pixels in the static image are used to calculate the mutual information.

moving_mask : array, shape (S’, R’, C’) or (R’, C’), optional

moving image mask that defines which pixels in the moving image are used to calculate the mutual information.

Returns:
affine_map : instance of AffineMap

the resulting affine transformation

xopt : optimal parameters

the optimal parameters (translation, rotation, shear, etc.)

fopt : Similarity metric

the value of the function at the optimal parameters.

transform_centers_of_mass#

dipy.align.imaffine.transform_centers_of_mass(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the center of mass of the input images.

Parameters:
static : array, shape (S, R, C)

static image

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

moving : array, shape (S, R, C)

moving image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_map : instance of AffineMap

the affine transformation (translation only, in this case) aligning the center of mass of the moving image towards the one of the static image
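The computation can be sketched in plain NumPy (an illustration, not dipy's implementation; the helper names com_translation and com_world are hypothetical): take the intensity-weighted center of mass of each image in world coordinates, and build a translation-only affine from their difference.

```python
import numpy as np

def com_translation(static, static_g2w, moving, moving_g2w):
    """Translation-only affine aligning intensity centers of mass."""
    def com_world(img, g2w):
        # intensity-weighted center of mass, in voxel coordinates
        idx = np.indices(img.shape).reshape(img.ndim, -1)
        w = img.ravel() / img.sum()
        com_vox = (idx * w).sum(axis=1)
        return g2w[:-1, :-1] @ com_vox + g2w[:-1, -1]  # to world coords

    affine = np.eye(static.ndim + 1)
    # translation component: difference of the two world-space centers
    affine[:-1, -1] = com_world(moving, moving_g2w) - com_world(static, static_g2w)
    return affine

# Single bright voxel in each image: COMs are known exactly.
static = np.zeros((5, 5)); static[2, 2] = 1.0
moving = np.zeros((5, 5)); moving[3, 4] = 1.0
A = com_translation(static, np.eye(3), moving, np.eye(3))
```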

transform_geometric_centers#

dipy.align.imaffine.transform_geometric_centers(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the geometric center of the input images.

With “geometric center” of a volume we mean the physical coordinates of its central voxel

Parameters:
static : array, shape (S, R, C)

static image

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

moving : array, shape (S, R, C)

moving image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_map : instance of AffineMap

the affine transformation (translation only, in this case) aligning the geometric center of the moving image towards the one of the static image

transform_origins#

dipy.align.imaffine.transform_origins(static, static_grid2world, moving, moving_grid2world)[source]#

Transformation to align the origins of the input images.

With “origin” of a volume we mean the physical coordinates of voxel (0,0,0)

Parameters:
static : array, shape (S, R, C)

static image

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the static image

moving : array, shape (S, R, C)

moving image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation of the moving image

Returns:
affine_map : instance of AffineMap

the affine transformation (translation only, in this case) aligning the origin of the moving image towards the one of the static image

DiffeomorphicMap#

class dipy.align.imwarp.DiffeomorphicMap(dim, disp_shape, *, disp_grid2world=None, domain_shape=None, domain_grid2world=None, codomain_shape=None, codomain_grid2world=None, prealign=None)[source]#

Bases: object

Methods

allocate()

Creates a zero displacement field

compute_inversion_error()

Inversion error of the displacement fields

expand_fields(expand_factors, new_shape)

Expands the displacement fields from current shape to new_shape

get_backward_field()

Deformation field to transform an image in the backward direction

get_forward_field()

Deformation field to transform an image in the forward direction

get_simplified_transform()

Constructs a simplified version of this Diffeomorphic Map

interpret_matrix(obj)

Try to interpret obj as a matrix

inverse()

Inverse of this DiffeomorphicMap instance

shallow_copy()

Shallow copy of this DiffeomorphicMap instance

transform(image, *[, interpolation, ...])

Warps an image in the forward direction

transform_inverse(image, *[, interpolation, ...])

Warps an image in the backward direction

transform_points(points, *[, coord2world, ...])

Warp the list of points in the forward direction.

transform_points_inverse(points, *[, ...])

Warp the list of points in the backward direction.

warp_endomorphism(phi)

Composition of this DiffeomorphicMap with a given endomorphism

allocate()[source]#

Creates a zero displacement field

Creates a zero displacement field (the identity transformation).

compute_inversion_error()[source]#

Inversion error of the displacement fields

Estimates the inversion error of the displacement fields by computing statistics of the residual vectors obtained after composing the forward and backward displacement fields.

Returns:
residual : array, shape (R, C) or (S, R, C)

the displacement field resulting from composing the forward and backward displacement fields of this transformation (the residual should be zero for a perfect diffeomorphism)

stats : array, shape (3,)

statistics from the norms of the vectors of the residual displacement field: maximum, mean and standard deviation

Notes

Since the forward and backward displacement fields have the same discretization, the final composition is given by

comp[i] = forward[i + Dinv * backward[i]]

where Dinv is the space-to-grid transformation of the displacement fields
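The composition in the Notes can be sketched with NumPy and SciPy, assuming unit spacing so that Dinv is the identity (the function name inversion_residual is illustrative, not dipy API). The residual displacement at grid point i is backward[i] plus the forward field interpolated at i + backward[i]; for a perfect inverse pair it vanishes.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inversion_residual(forward, backward):
    """Residual of composing 2D displacement fields (unit spacing):
    residual[i] = backward[i] + forward[i + backward[i]]."""
    grid = np.indices(forward.shape[:2]).astype(float)   # (2, R, C)
    coords = grid + np.moveaxis(backward, -1, 0)         # i + backward[i]
    residual = backward.copy()
    for d in range(2):  # interpolate each component of the forward field
        residual[..., d] += map_coordinates(forward[..., d], coords,
                                            order=1, mode='nearest')
    norms = np.linalg.norm(residual, axis=-1)
    return residual, np.array([norms.max(), norms.mean(), norms.std()])

# A constant shift and its exact inverse compose to zero residual.
fwd = np.full((8, 8, 2), 1.5)
bwd = -fwd
res, stats = inversion_residual(fwd, bwd)
```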

expand_fields(expand_factors, new_shape)[source]#

Expands the displacement fields from current shape to new_shape

Up-samples the discretization of the displacement fields to be of new_shape shape.

Parameters:
expand_factors : array, shape (dim,)

the factors scaling current spacings (voxel sizes) to spacings in the expanded discretization.

new_shape : array, shape (dim,)

the shape of the arrays holding the up-sampled discretization

get_backward_field()[source]#

Deformation field to transform an image in the backward direction

Returns the deformation field that must be used to warp an image under this transformation in the backward direction (note the ‘is_inverse’ flag).

get_forward_field()[source]#

Deformation field to transform an image in the forward direction

Returns the deformation field that must be used to warp an image under this transformation in the forward direction (note the ‘is_inverse’ flag).

get_simplified_transform()[source]#

Constructs a simplified version of this Diffeomorphic Map

The simplified version incorporates the pre-align transform, as well as the domain and codomain affine transforms, into the displacement field. The resulting transformation may be regarded as operating on the image spaces given by the domain and codomain discretizations. As a result, self.prealign, self.disp_grid2world, self.domain_grid2world and self.codomain_grid2world will be None (denoting identity) in the resulting diffeomorphic map.

interpret_matrix(obj)[source]#

Try to interpret obj as a matrix

Some operations are performed faster if we know in advance if a matrix is the identity (so we can skip the actual matrix-vector multiplication). This function returns None if the given object is None or the ‘identity’ string. It returns the same object if it is a numpy array. It raises an exception otherwise.

Parameters:
obj : object

any object

Returns:
obj : object

None if obj is None or the ‘identity’ string; the same object if obj is a numpy array.

inverse()[source]#

Inverse of this DiffeomorphicMap instance

Returns a diffeomorphic map object representing the inverse of this transformation. The internal arrays are not copied but just referenced.

Returns:
inv : DiffeomorphicMap object

the inverse of this diffeomorphic map.

shallow_copy()[source]#

Shallow copy of this DiffeomorphicMap instance

Creates a shallow copy of this diffeomorphic map (the arrays are not copied but just referenced)

Returns:
new_map : DiffeomorphicMap object

the shallow copy of this diffeomorphic map

transform(image, *, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)[source]#

Warps an image in the forward direction

Transforms the input image under this transformation in the forward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform(…) warps the image forwards, else it warps the image backwards).

Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the forward direction

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2grid : array, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shape : array, shape (dim,)

the number of slices, rows and columns of the desired warped image

out_grid2world : array, shape (dim+1, dim+1)

the transformation bringing voxel coordinates of the warped image to physical space

Returns:
warped : array, shape = out_shape or self.codomain_shape if None

the warped image under this transformation in the forward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

transform_inverse(image, *, interpolation='linear', image_world2grid=None, out_shape=None, out_grid2world=None)[source]#

Warps an image in the backward direction

Transforms the input image under this transformation in the backward direction. It uses the “is_inverse” flag to switch between “forward” and “backward” (if is_inverse is False, then transform_inverse(…) warps the image backwards, else it warps the image forwards)

Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2

the image to be warped under this transformation in the forward direction

interpolation : string, either ‘linear’ or ‘nearest’

the type of interpolation to be used for warping, either ‘linear’ (for k-linear interpolation) or ‘nearest’ for nearest neighbor

image_world2grid : array, shape (dim+1, dim+1)

the transformation bringing world (space) coordinates to voxel coordinates of the image given as input

out_shape : array, shape (dim,)

the number of slices, rows, and columns of the desired warped image

out_grid2world : array, shape (dim+1, dim+1)

the transformation bringing voxel coordinates of the warped image to physical space

Returns:
warped : array, shape = out_shape or self.codomain_shape if None

warped image under this transformation in the backward direction

Notes

See _warp_forward and _warp_backward documentation for further information.

transform_points(points, *, coord2world=None, world2coord=None)[source]#

Warp the list of points in the forward direction.

Applies this diffeomorphic map to the list of points (or streamlines) given by points. We assume that the points’ coordinates are mapped to world coordinates by applying the coord2world affine transform. The warped coordinates are given in world coordinates unless world2coord affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system.

Parameters:
points : array, shape (N, dim) or Streamlines object
coord2world : array, shape (dim+1, dim+1), optional

affine matrix mapping points to world coordinates

world2coord : array, shape (dim+1, dim+1), optional

affine matrix mapping world coordinates to points

transform_points_inverse(points, *, coord2world=None, world2coord=None)[source]#

Warp the list of points in the backward direction.

Applies this diffeomorphic map to the list of points (or streamlines) given by points. We assume that the points’ coordinates are mapped to world coordinates by applying the coord2world affine transform. The warped coordinates are given in world coordinates unless world2coord affine transform is specified, in which case the warped points will be transformed to the corresponding coordinate system.

Parameters:
points : array, shape (N, dim) or Streamlines object
coord2world : array, shape (dim+1, dim+1), optional

affine matrix mapping points to world coordinates

world2coord : array, shape (dim+1, dim+1), optional

affine matrix mapping world coordinates to points

warp_endomorphism(phi)[source]#

Composition of this DiffeomorphicMap with a given endomorphism

Creates a new DiffeomorphicMap C with the same properties as self and composes its displacement fields with phi’s corresponding fields. The resulting diffeomorphism is of the form C(x) = phi(self(x)) with inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an endomorphism with the same discretization and domain affine as self to ensure that the composition inherits self’s properties (we also assume that the pre-aligning matrix of phi is None or identity).

Parameters:
phi : DiffeomorphicMap object

the endomorphism to be warped by this diffeomorphic map

Returns:
composition : DiffeomorphicMap object

the composition of this diffeomorphic map with the endomorphism given as input

Notes

The problem with our current representation of a DiffeomorphicMap is that the set of Diffeomorphism that can be represented this way (a pre-aligning matrix followed by a non-linear endomorphism given as a displacement field) is not closed under the composition operation.

Supporting a general DiffeomorphicMap class, closed under composition, may be extremely costly computationally, and the kind of transformations we actually need for Avants’ mid-point algorithm (SyN) are much simpler.

DiffeomorphicRegistration#

class dipy.align.imwarp.DiffeomorphicRegistration(*, metric=None)[source]#

Bases: object

Methods

get_map()

Returns the resulting diffeomorphic map after optimization

optimize()

Starts the metric optimization

set_level_iters(level_iters)

Sets the number of iterations at each pyramid level

abstract get_map()[source]#

Returns the resulting diffeomorphic map after optimization

abstract optimize()[source]#

Starts the metric optimization

This is the main function each specialized class derived from this must implement. Upon completion, the deformation field must be available from the forward transformation model.

set_level_iters(level_iters)[source]#

Sets the number of iterations at each pyramid level

Establishes the maximum number of iterations to be performed at each level of the Gaussian pyramid, similar to ANTS.

Parameters:
level_iters : list

the number of iterations at each level of the Gaussian pyramid. level_iters[0] corresponds to the finest level, level_iters[n-1] the coarsest, where n is the length of the list

SymmetricDiffeomorphicRegistration#

class dipy.align.imwarp.SymmetricDiffeomorphicRegistration(metric, *, level_iters=None, step_length=0.25, ss_sigma_factor=0.2, opt_tol=1e-05, inv_iter=20, inv_tol=0.001, callback=None)[source]#

Bases: DiffeomorphicRegistration

Methods

get_map()

Return the resulting diffeomorphic map.

optimize(static, moving, *[, ...])

Starts the optimization

set_level_iters(level_iters)

Sets the number of iterations at each pyramid level

update(current_displacement, ...)

Composition of the current displacement field with the given field

get_map()[source]#

Return the resulting diffeomorphic map.

Returns the DiffeomorphicMap registering the moving image towards the static image.

optimize(static, moving, *, static_grid2world=None, moving_grid2world=None, prealign=None)[source]#

Starts the optimization

Parameters:
static : array, shape (S, R, C) or (R, C)

the image to be used as reference during optimization. The displacement fields will have the same discretization as the static image.

moving : array, shape (S, R, C) or (R, C)

the image to be used as “moving” during optimization. Since the deformation fields’ discretization is the same as the static image, it is necessary to pre-align the moving image to ensure its domain lies inside the domain of the deformation fields. This is assumed to be accomplished by “pre-aligning” the moving image towards the static using an affine transformation given by the ‘prealign’ matrix

static_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the static image

moving_grid2world : array, shape (dim+1, dim+1)

the voxel-to-space transformation associated to the moving image

prealign : array, shape (dim+1, dim+1)

the affine transformation (operating on the physical space) pre-aligning the moving image towards the static

Returns:
static_to_ref : DiffeomorphicMap object

the diffeomorphic map that brings the moving image towards the static one in the forward direction (i.e. by calling static_to_ref.transform) and the static image towards the moving one in the backward direction (i.e. by calling static_to_ref.transform_inverse).

update(current_displacement, new_displacement, disp_world2grid, time_scaling)[source]#

Composition of the current displacement field with the given field

Interpolates new displacement at the locations defined by current_displacement. Equivalently, computes the composition C of the given displacement fields as C(x) = B(A(x)), where A is current_displacement and B is new_displacement. This function is intended to be used with deformation fields of the same sampling (e.g. to be called by a registration algorithm).

Parameters:
current_displacement : array, shape (R’, C’, 2) or (S’, R’, C’, 3)

the displacement field defining where to interpolate new_displacement

new_displacement : array, shape (R, C, 2) or (S, R, C, 3)

the displacement field to be warped by current_displacement

disp_world2grid : array, shape (dim+1, dim+1)

the space-to-grid transform associated with the displacements’ grid (we assume that both displacements are discretized over the same grid)

time_scaling : float

scaling factor applied to new_displacement. The effect may be interpreted as moving current_displacement along a factor (time_scaling) of new_displacement.

Returns:
updated : array, shape (the same as new_displacement)

the warped displacement field

mean_norm : the mean norm of all vectors in current_displacement

mult_aff#

dipy.align.imwarp.mult_aff(A, B)[source]#

Returns the matrix product A.dot(B) considering None as the identity

Parameters:
A : array, shape (n, k)
B : array, shape (k, m)
Returns:
The matrix product A.dot(B). If either input matrix is None, it is treated as the identity matrix. If both are None, None is returned.
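The documented behavior is small enough to restate as a self-contained sketch, which mirrors the None-as-identity convention used throughout this module:

```python
import numpy as np

def mult_aff(A, B):
    """Matrix product A @ B where None stands for the identity
    (a sketch of the documented dipy.align.imwarp.mult_aff behavior)."""
    if A is None:
        return B          # may itself be None, still meaning identity
    if B is None:
        return A
    return A.dot(B)

R = np.diag([1.0, 2.0, 1.0])
```

Treating None as identity lets callers skip matrix-vector products entirely when a grid-to-world transform is trivial.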

get_direction_and_spacings#

dipy.align.imwarp.get_direction_and_spacings(affine, dim)[source]#

Extracts the rotational and spacing components from a matrix

Extracts the rotational and spacing (voxel dimensions) components from a matrix. An image gradient represents the local variation of the image’s gray values per voxel. Since we are iterating on the physical space, we need to compute the gradients as variation per millimeter, so we need to divide each gradient’s component by the voxel size along the corresponding axis, that’s what the spacings are used for. Since the image’s gradients are oriented along the grid axes, we also need to re-orient the gradients to be given in physical space coordinates.

Parameters:
affine : array, shape (k, k), k = 3, 4

the matrix transforming grid coordinates to physical space.

Returns:
direction : array, shape (k-1, k-1)

the rotational component of the input matrix

spacings : array, shape (k-1,)

the scaling component (voxel size) of the matrix
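The decomposition can be sketched in NumPy (an illustration of the documented behavior, not dipy's exact implementation): moving one voxel along grid axis i changes the world position by column i of the linear part, so the spacing along axis i is that column's norm, and normalizing the columns leaves the rotational component.

```python
import numpy as np

def direction_and_spacings(affine, dim):
    """Split a grid-to-world matrix into a rotation-like direction matrix
    and per-axis voxel spacings."""
    A = affine[:dim, :dim]
    spacings = np.sqrt((A ** 2).sum(axis=0))   # norm of each column
    direction = A / spacings                   # unit-norm columns
    return direction, spacings

# 4x4 affine with 1 x 2 x 3 mm voxels and no rotation.
aff = np.diag([1.0, 2.0, 3.0, 1.0])
direction, spacings = direction_and_spacings(aff, 3)
```

Dividing an image gradient component-wise by these spacings converts variation per voxel into variation per millimeter, as the description above requires.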

SimilarityMetric#

class dipy.align.metrics.SimilarityMetric(dim)[source]#

Bases: object

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_forward()

Computes one step bringing the moving image towards the static.

free_iteration()

Releases the resources no longer needed by the metric

get_energy()

Numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

abstract compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the backward update field to register the static image towards the moving image in a gradient-based optimization algorithm

abstract compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm

abstract free_iteration()[source]#

Releases the resources no longer needed by the metric

This method is called by the RegistrationOptimizer after the required iterations have been computed (forward and / or backward) so that the SimilarityMetric can safely delete any data it computed as part of the initialization

abstract get_energy()[source]#

Numerical value assigned by this metric to the current image pair

Must return the numeric value of the similarity between the given static and moving images

abstract initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

This method will be called before any compute_forward or compute_backward call; this allows the Metric to pre-compute any useful information for speeding up the update computations. This initialization was needed in ANTS because the updates are called once per voxel. In Python this is impractical, though.

set_levels_above(levels)[source]#

Informs the metric how many pyramid levels are above the current one

Informs this metric of the number of pyramid levels above the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly

Parameters:
levels : int

the number of levels above the current Gaussian Pyramid level

set_levels_below(levels)[source]#

Informs the metric how many pyramid levels are below the current one

Informs this metric of the number of pyramid levels below the current one. The metric may change its behavior (e.g. number of inner iterations) accordingly

Parameters:
levels : int

the number of levels below the current Gaussian Pyramid level

set_moving_image(moving_image, moving_affine, moving_spacing, moving_direction)[source]#

Sets the moving image being compared against the static one.

Sets the moving image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations

Parameters:
moving_image : array, shape (R, C) or (S, R, C)

the moving image

set_static_image(static_image, static_affine, static_spacing, static_direction)[source]#

Sets the static image being compared against the moving one.

Sets the static image. The default behavior (of this abstract class) is simply to assign the reference to an attribute, but generalizations of the metric may need to perform other operations

Parameters:
static_imagearray, shape (R, C) or (S, R, C)

the static image

use_moving_image_dynamics(original_moving_image, transformation)[source]#

This is called by the optimizer just after setting the moving image

This method allows the metric to compute any useful information from knowing how the current moving image was generated (as the transformation of an original moving image). This method is called by the optimizer just after it sets the moving image. transformation will be an instance of DiffeomorphicMap, or None if original_moving_image equals self.moving_image.

Parameters:
original_moving_imagearray, shape (R, C) or (S, R, C)

original image from which the current moving image was generated

transformationDiffeomorphicMap object

the transformation that was applied to the original image to generate the current moving image

use_static_image_dynamics(original_static_image, transformation)[source]#

This is called by the optimizer just after setting the static image.

This method allows the metric to compute any useful information from knowing how the current static image was generated (as the transformation of an original static image). This method is called by the optimizer just after it sets the static image. transformation will be an instance of DiffeomorphicMap, or None if original_static_image equals self.static_image.

Parameters:
original_static_imagearray, shape (R, C) or (S, R, C)

original image from which the current static image was generated

transformationDiffeomorphicMap object

the transformation that was applied to original image to generate the current static image

CCMetric#

class dipy.align.metrics.CCMetric(dim, *, sigma_diff=2.0, radius=4)[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_forward()

Computes one step bringing the moving image towards the static.

free_iteration()

Frees the resources allocated during initialization

get_energy()

Numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

free_iteration()[source]#

Frees the resources allocated during initialization

get_energy()[source]#

Numerical value assigned by this metric to the current image pair

Returns the Cross Correlation (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the cross-correlation factors for efficient computation of the gradient of the Cross Correlation w.r.t. the displacement field. It also pre-computes the image gradients in the physical space by re-orienting the gradients in the voxel space using the corresponding affine transformations.

EMMetric#

class dipy.align.metrics.EMMetric(dim, *, smooth=1.0, inner_iter=5, q_levels=256, double_gradient=True, step_type='gauss_newton')[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_demons_step(*[, forward_step])

Demons step for EM metric

compute_forward()

Computes one step bringing the moving image towards the static.

compute_gauss_newton_step(*[, forward_step])

Computes the Gauss-Newton energy minimization step

free_iteration()

Frees the resources allocated during initialization

get_energy()

The numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image.

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the update displacement field to be used for registration of the static image towards the moving image

compute_demons_step(*, forward_step=True)[source]#

Demons step for EM metric

Parameters:
forward_stepboolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Demons step

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the forward update field to register the moving image towards the static image in a gradient-based optimization algorithm

compute_gauss_newton_step(*, forward_step=True)[source]#

Computes the Gauss-Newton energy minimization step

Computes the Newton step to minimize this energy, i.e., minimizes the linearized energy function with respect to the regularized displacement field (this step does not require post-smoothing, as opposed to the demons step, which does not include regularization). To accelerate convergence we use the multi-grid Gauss-Seidel algorithm proposed by Bruhn and Weickert[3].

Parameters:
forward_stepboolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Newton step

References

free_iteration()[source]#

Frees the resources allocated during initialization

get_energy()[source]#

The numerical value assigned by this metric to the current image pair

Returns the EM (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the transfer functions (hidden random variables) and variances of the estimators. Also pre-computes the gradient of both input images. Note that once the images are transformed to the opposite modality, the gradient of the transformed images can be used with the gradient of the corresponding modality in the same fashion as diff-demons does for mono-modality images. If the flag self.use_double_gradient is True these gradients are averaged.

use_moving_image_dynamics(original_moving_image, transformation)[source]#

This is called by the optimizer just after setting the moving image.

EMMetric takes advantage of the image dynamics by computing the current moving image mask from the original_moving_image mask (warped by nearest neighbor interpolation)

Parameters:
original_moving_imagearray, shape (R, C) or (S, R, C)

the original moving image from which the current moving image was generated. The current moving image is the one that was provided via ‘set_moving_image(…)’, which may not be the same as the original moving image but a warped version of it.

transformationDiffeomorphicMap object

the transformation that was applied to the original_moving_image to generate the current moving image

use_static_image_dynamics(original_static_image, transformation)[source]#

This is called by the optimizer just after setting the static image.

EMMetric takes advantage of the image dynamics by computing the current static image mask from the original_static_image mask (warped by nearest neighbor interpolation)

Parameters:
original_static_imagearray, shape (R, C) or (S, R, C)

the original static image from which the current static image was generated. The current static image is the one that was provided via ‘set_static_image(…)’, which may not be the same as the original static image but a warped version of it (even the static image changes during Symmetric Normalization, not only the moving one).

transformationDiffeomorphicMap object

the transformation that was applied to the original_static_image to generate the current static image

SSDMetric#

class dipy.align.metrics.SSDMetric(dim, *, smooth=4, inner_iter=10, step_type='demons')[source]#

Bases: SimilarityMetric

Methods

compute_backward()

Computes one step bringing the static image towards the moving.

compute_demons_step(*[, forward_step])

Demons step for SSD metric

compute_forward()

Computes one step bringing the moving image towards the static.

compute_gauss_newton_step(*[, forward_step])

Computes the Gauss-Newton energy minimization step

free_iteration()

Nothing to free for the SSD metric

get_energy()

The numerical value assigned by this metric to the current image pair

initialize_iteration()

Prepares the metric to compute one displacement field iteration.

set_levels_above(levels)

Informs the metric how many pyramid levels are above the current one

set_levels_below(levels)

Informs the metric how many pyramid levels are below the current one

set_moving_image(moving_image, ...)

Sets the moving image being compared against the static one.

set_static_image(static_image, ...)

Sets the static image being compared against the moving one.

use_moving_image_dynamics(...)

This is called by the optimizer just after setting the moving image

use_static_image_dynamics(...)

This is called by the optimizer just after setting the static image.

compute_backward()[source]#

Computes one step bringing the static image towards the moving.

Computes the updated displacement field to be used for registration of the static image towards the moving image

compute_demons_step(*, forward_step=True)[source]#

Demons step for SSD metric

Computes the demons step proposed by Vercauteren et al.[4] for the SSD metric.

Parameters:
forward_stepboolean

if True, computes the Demons step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape (R, C, 2) or (S, R, C, 3)

the Demons step

References

compute_forward()[source]#

Computes one step bringing the moving image towards the static.

Computes the update displacement field to be used for registration of the moving image towards the static image

compute_gauss_newton_step(*, forward_step=True)[source]#

Computes the Gauss-Newton energy minimization step

Minimizes the linearized energy function (Newton step) defined by the sum of squared differences of corresponding pixels of the input images with respect to the displacement field.

Parameters:
forward_stepboolean

if True, computes the Newton step in the forward direction (warping the moving towards the static image). If False, computes the backward step (warping the static image to the moving image)

Returns:
displacementarray, shape = static_image.shape + (3,)

if forward_step==True, the forward SSD Gauss-Newton step, else, the backward step

free_iteration()[source]#

Nothing to free for the SSD metric

get_energy()[source]#

The numerical value assigned by this metric to the current image pair

Returns the Sum of Squared Differences (data term) energy computed at the largest iteration

initialize_iteration()[source]#

Prepares the metric to compute one displacement field iteration.

Pre-computes the gradient of the input images to be used in the computation of the forward and backward steps.

v_cycle_2d#

dipy.align.metrics.v_cycle_2d(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0)[source]#

Multi-resolution Gauss-Seidel solver using V-type cycles

Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert[3].

Parameters:
nint

number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)

kint

the number of iterations at each multi-resolution level

delta_fieldarray, shape (R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigma_sq_fieldarray, shape (R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (R, C, 2)

the gradient of the moving image

targetarray, shape (R, C, 2)

right-hand side of the linear system to be solved in Bruhn and Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter, the larger its value the smoother the displacement field

displacementarray, shape (R, C, 2)

the displacement field to start the optimization from

Returns:
energyfloat

the energy of the EM (or SSD, if sigma_sq_field[…] == 1) metric at this iteration

References

v_cycle_3d#

dipy.align.metrics.v_cycle_3d(n, k, delta_field, sigma_sq_field, gradient_field, target, lambda_param, displacement, *, depth=0)[source]#

Multi-resolution Gauss-Seidel solver using V-type cycles

Multi-resolution Gauss-Seidel solver: solves the linear system by first filtering (GS-iterate) the current level, then solves for the residual at a coarser resolution and finally refines the solution at the current resolution. This scheme corresponds to the V-cycle proposed by Bruhn and Weickert[3].

Parameters:
nint

number of levels of the multi-resolution algorithm (it will be called recursively until level n == 0)

kint

the number of iterations at each multi-resolution level

delta_fieldarray, shape (S, R, C)

the difference between the static and moving image (the ‘derivative w.r.t. time’ in the optical flow model)

sigma_sq_fieldarray, shape (S, R, C)

the variance of the gray level value at each voxel, according to the EM model (for SSD, it is 1 for all voxels). Inf and 0 values are processed specially to support infinite and zero variance.

gradient_fieldarray, shape (S, R, C, 3)

the gradient of the moving image

targetarray, shape (S, R, C, 3)

right-hand side of the linear system to be solved in Bruhn and Weickert’s multi-resolution algorithm

lambda_paramfloat

smoothness parameter, the larger its value the smoother the displacement field

displacementarray, shape (S, R, C, 3)

the displacement field to start the optimization from

Returns:
energyfloat

the energy of the EM (or SSD, if sigma_sq_field[…] == 1) metric at this iteration

References

reslice#

dipy.align.reslice.reslice(data, affine, zooms, new_zooms, *, order=1, mode='constant', cval=0, num_processes=1)[source]#

Reslice data with new voxel resolution defined by new_zooms.

Parameters:
dataarray, shape (I,J,K) or (I,J,K,N)

3d volume or 4d volume with datasets

affinearray, shape (4,4)

mapping from voxel coordinates to world coordinates

zoomstuple, shape (3,)

voxel size for (i,j,k) dimensions

new_zoomstuple, shape (3,)

new voxel size for (i,j,k) after resampling

orderint, from 0 to 5

order of interpolation for resampling/reslicing: 0 for nearest-neighbor interpolation, 1 for trilinear, etc. If you do not want any smoothing, 0 is the option you need.

modestring (‘constant’, ‘nearest’, ‘reflect’ or ‘wrap’)

Points outside the boundaries of the input are filled according to the given mode.

cvalfloat

Value used for points outside the boundaries of the input if mode=’constant’.

num_processesint, optional

Split the calculation to a pool of children processes. This only applies to 4D data arrays. Default is 1. If < 0 the maximal number of cores minus num_processes + 1 is used (enter -1 to use as many cores as possible). 0 raises an error.

Returns:
data2array, shape (I,J,K) or (I,J,K,N)

datasets resampled into isotropic voxel size

affine2array, shape (4,4)

new affine for the resampled image

Examples

>>> from dipy.io.image import load_nifti
>>> from dipy.align.reslice import reslice
>>> from dipy.data import get_fnames
>>> f_name = get_fnames(name="aniso_vox")
>>> data, affine, zooms = load_nifti(f_name, return_voxsize=True)
>>> data.shape == (58, 58, 24)
True
>>> zooms
(4.0, 4.0, 5.0)
>>> new_zooms = (3.,3.,3.)
>>> new_zooms
(3.0, 3.0, 3.0)
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)
>>> data2.shape == (77, 77, 40)
True

ScaleSpace#

class dipy.align.scalespace.ScaleSpace(image, num_levels, *, image_grid2world=None, input_spacing=None, sigma_factor=0.2, mask0=False)[source]#

Bases: object

Methods

get_affine(level)

Voxel-to-space transformation at a given level.

get_affine_inv(level)

Space-to-voxel transformation at a given level.

get_domain_shape(level)

Shape the sub-sampled image must have at a particular level.

get_expand_factors(from_level, to_level)

Ratio of voxel size from pyramid level from_level to to_level.

get_image(level)

Smoothed image at a given level.

get_scaling(level)

Adjustment factor for input-spacing to reflect voxel sizes at level.

get_sigmas(level)

Smoothing parameters used at a given level.

get_spacing(level)

Spacings the sub-sampled image must have at a particular level.

print_level(level)

Prints properties of a pyramid level.

get_affine(level)[source]#

Voxel-to-space transformation at a given level.

Returns the voxel-to-space transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get affine transform from

Returns:
the affine (voxel-to-space) transform at the requested resolution
or None if an invalid level was requested
get_affine_inv(level)[source]#

Space-to-voxel transformation at a given level.

Returns the space-to-voxel transformation associated with the sub-sampled image at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the inverse transform from

Returns:
the inverse (space-to-voxel) transform at the requested resolution or
None if an invalid level was requested
get_domain_shape(level)[source]#

Shape the sub-sampled image must have at a particular level.

Returns the shape the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the sub-sampled shape from

Returns:
the sub-sampled shape at the requested resolution or None if an
invalid level was requested
get_expand_factors(from_level, to_level)[source]#

Ratio of voxel size from pyramid level from_level to to_level.

Given two scale space resolutions a = from_level, b = to_level, returns the ratio of voxel size at level b to voxel size at level a (the factor that must be used to multiply voxels at level a to ‘expand’ them to level b).

Parameters:
from_levelint, 0 <= from_level < L, (L = number of resolutions)

the resolution to expand voxels from

to_levelint, 0 <= to_level < from_level

the resolution to expand voxels to

Returns:
factorsarray, shape (k,), k = 2, 3

the expand factors (a scalar for each voxel dimension)

get_image(level)[source]#

Smoothed image at a given level.

Returns the smoothed image at the requested level in the Scale Space.

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the smooth image from

Returns:
the smooth image at the requested resolution or None if an invalid
level was requested
get_scaling(level)[source]#

Adjustment factor for input-spacing to reflect voxel sizes at level.

Returns the scaling factor that needs to be applied to the input spacing (the voxel sizes of the image at level 0 of the scale space) to transform them to voxel sizes at the requested level.

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the scalings from

Returns:
the scaling factors from the original spacing to the spacings at the
requested level
get_sigmas(level)[source]#

Smoothing parameters used at a given level.

Returns the smoothing parameters (a scalar for each axis) used at the requested level of the scale space

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the smoothing parameters from

Returns:
the smoothing parameters at the requested level
get_spacing(level)[source]#

Spacings the sub-sampled image must have at a particular level.

Returns the spacings (voxel sizes) the sub-sampled image must have at a particular resolution of the scale space (note that this object does not explicitly subsample the smoothed images, but only provides the properties the sub-sampled images must have).

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to get the sub-sampled shape from

Returns:
the spacings (voxel sizes) at the requested resolution or None if an
invalid level was requested
print_level(level)[source]#

Prints properties of a pyramid level.

Prints the properties of a level of this scale space to standard output

Parameters:
levelint, 0 <= from_level < L, (L = number of resolutions)

the scale space level to be printed

IsotropicScaleSpace#

class dipy.align.scalespace.IsotropicScaleSpace(image, factors, sigmas, *, image_grid2world=None, input_spacing=None, mask0=False)[source]#

Bases: ScaleSpace

Methods

get_affine(level)

Voxel-to-space transformation at a given level.

get_affine_inv(level)

Space-to-voxel transformation at a given level.

get_domain_shape(level)

Shape the sub-sampled image must have at a particular level.

get_expand_factors(from_level, to_level)

Ratio of voxel size from pyramid level from_level to to_level.

get_image(level)

Smoothed image at a given level.

get_scaling(level)

Adjustment factor for input-spacing to reflect voxel sizes at level.

get_sigmas(level)

Smoothing parameters used at a given level.

get_spacing(level)

Spacings the sub-sampled image must have at a particular level.

print_level(level)

Prints properties of a pyramid level.

StreamlineDistanceMetric#

class dipy.align.streamlinear.StreamlineDistanceMetric(*, num_threads=None)[source]#

Bases: object

Methods

distance(xopt)

Calculate the distance for the current set of parameters.

setup

abstract distance(xopt)[source]#

Calculate the distance for the current set of parameters.

abstract setup(static, moving)[source]#

BundleMinDistanceMetric#

class dipy.align.streamlinear.BundleMinDistanceMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance aka BMD.

This is the cost function used by the StreamlineLinearRegistration.

See [5] for further details about the metric.

Methods

setup(static, moving)

distance(xopt)

References

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector.

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

Notes

Call this after the object is instantiated and before distance.

BundleMinDistanceMatrixMetric#

class dipy.align.streamlinear.BundleMinDistanceMatrixMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance aka BMD

This is the cost function used by the StreamlineLinearRegistration

Methods

setup(static, moving)

distance(xopt)

Notes

The difference with BundleMinDistanceMetric is that this creates the entire distance matrix and therefore requires more memory.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

Notes

Call this after the object is instantiated and before distance.

num_threads is not used in this class. Use BundleMinDistanceMetric for a faster, threaded and less memory-hungry metric

BundleMinDistanceAsymmetricMetric#

class dipy.align.streamlinear.BundleMinDistanceAsymmetricMetric(*, num_threads=None)[source]#

Bases: BundleMinDistanceMetric

Asymmetric Bundle-based Minimum distance.

This is a cost function that can be used by the StreamlineLinearRegistration class.

Methods

distance(xopt)

Distance calculated from this Metric.

setup(static, moving)

Setup static and moving sets of streamlines.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector

BundleSumDistanceMatrixMetric#

class dipy.align.streamlinear.BundleSumDistanceMatrixMetric(*, num_threads=None)[source]#

Bases: BundleMinDistanceMatrixMetric

Bundle-based Sum Distance aka BMD

This is a cost function that can be used by the StreamlineLinearRegistration class.

Methods

setup(static, moving)

distance(xopt)

Notes

The difference with BundleMinDistanceMatrixMetric is that this uses the sum of the distance matrix and not the sum of mins.

distance(xopt)[source]#

Distance calculated from this Metric

Parameters:
xoptsequence

List of affine parameters as a 1D vector

JointBundleMinDistanceMetric#

class dipy.align.streamlinear.JointBundleMinDistanceMetric(*, num_threads=None)[source]#

Bases: StreamlineDistanceMetric

Bundle-based Minimum Distance for joint optimization.

This cost function is used by the StreamlineLinearRegistration class when running halfway streamline linear registration for unbiased groupwise bundle registration and atlasing.

It computes the BMD distance after moving both static and moving bundles to a halfway space in between both.

Methods

setup(static, moving)

distance(xopt)

Notes

In this metric both static and moving bundles are treated equally (i.e., there is no static reference bundle as both are intended to move). The naming convention is kept for consistency.

distance(xopt)[source]#

Distance calculated from this Metric.

Parameters:
xoptsequence

List of affine parameters as a 1D vector. These affine parameters are used to derive the corresponding halfway transformation parameters for each bundle.

setup(static, moving)[source]#

Setup static and moving sets of streamlines.

Parameters:
staticstreamlines

Set of streamlines

movingstreamlines

Set of streamlines

Notes

Call this after the object is instantiated and before distance. num_threads is not used in this class.

StreamlineLinearRegistration#

class dipy.align.streamlinear.StreamlineLinearRegistration(*, metric=None, x0='rigid', method='L-BFGS-B', bounds=None, verbose=False, options=None, evolution=False, num_threads=None)[source]#

Bases: object

Methods

optimize(static, moving, *[, mat])

Find the minimum of the provided metric.

optimize(static, moving, *, mat=None)[source]#

Find the minimum of the provided metric.

Parameters:
staticstreamlines

Reference or fixed set of streamlines.

movingstreamlines

Moving set of streamlines.

matarray

Transformation (4, 4) matrix to start the registration from. mat is applied to moving. The default value is None, which means that the initial transformation will be generated by shifting the centers of the moving and static sets of streamlines to the origin.

Returns:
mapStreamlineRegistrationMap

StreamlineRegistrationMap#

class dipy.align.streamlinear.StreamlineRegistrationMap(matopt, xopt, fopt, matopt_history, funcs, iterations)[source]#

Bases: object

Methods

transform(moving)

Transform moving streamlines to the static.

transform(moving)[source]#

Transform moving streamlines to the static.

Parameters:
movingstreamlines
Returns:
movedstreamlines

Notes

All this does is apply self.matrix to the input streamlines.

JointStreamlineRegistrationMap#

class dipy.align.streamlinear.JointStreamlineRegistrationMap(xopt, fopt, matopt_history, funcs, iterations)[source]#

Bases: object

Methods

transform(static, moving)

Transform both static and moving bundles to the halfway space.

transform(static, moving)[source]#

Transform both static and moving bundles to the halfway space.

All this does is apply self.matrix1 and self.matrix2 to the static and moving bundles, respectively.

Parameters:
staticstreamlines
movingstreamlines
Returns:
staticstreamlines
movingstreamlines

bundle_sum_distance#

dipy.align.streamlinear.bundle_sum_distance(t, static, moving, *, num_threads=None)[source]#

MDF distance optimization function (SUM).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tndarray

t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticlist

Static streamlines

movinglist

Moving streamlines. These will be transformed to align with the static streamlines

num_threadsint, optional

Number of threads. If -1 then all available threads will be used.

Returns:
cost: float

bundle_min_distance#

dipy.align.streamlinear.bundle_min_distance(t, static, moving)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tndarray

t is a vector of affine transformation parameters with size at least 6. If size is 6, t is interpreted as translation + rotation. If size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticlist

Static streamlines

movinglist

Moving streamlines.

Returns:
cost: float

bundle_min_distance_fast#

dipy.align.streamlinear.bundle_min_distance_fast(t, static, moving, block_size, *, num_threads=None)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tarray

1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticarray

Array of shape (N*M, 3) containing all the points of the static streamlines, with streamline order intact, where N is the number of streamlines and M is the number of points per streamline.

movingarray

Array of shape (K*M, 3) containing all the points of the moving streamlines, with streamline order intact, where K is the number of streamlines and M is the number of points per streamline.

block_sizeint

Number of points per streamline. All streamlines in static and moving should have the same number of points M.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error.

Returns:
cost: float

Notes

This is a faster implementation of bundle_min_distance, which requires that all the points of each streamline be allocated into an ndarray (of shape N*M by 3, where N is the number of streamlines and M is the number of points per streamline). This can be done by calling dipy.tracking.streamline.unlist_streamlines.
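A minimal sketch of that flattening step, assuming all streamlines share the same number of points (illustrative of what unlist_streamlines provides, not the DIPY implementation):

```python
import numpy as np

def unlist(streamlines):
    # Stack a list of (M, 3) streamline arrays into a single (N*M, 3) array,
    # preserving streamline order, as the fast distance functions expect.
    return np.concatenate(streamlines, axis=0)
```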

bundle_min_distance_asymmetric_fast#

dipy.align.streamlinear.bundle_min_distance_asymmetric_fast(t, static, moving, block_size)[source]#

MDF-based pairwise distance optimization function (MIN).

We minimize the distance between moving streamlines as they align with the static streamlines.

Parameters:
tarray

1D array. t is a vector of affine transformation parameters with size at least 6. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

staticarray

Array of shape (N*M, 3) containing all the points of the static streamlines, with streamline order intact, where N is the number of streamlines and M is the number of points per streamline.

movingarray

Array of shape (K*M, 3) containing all the points of the moving streamlines, with streamline order intact, where K is the number of streamlines and M is the number of points per streamline.

block_sizeint

Number of points per streamline. All streamlines in static and moving should have the same number of points M.

Returns:
cost: float

remove_clusters_by_size#

dipy.align.streamlinear.remove_clusters_by_size(clusters, min_size=0)[source]#

Remove clusters that contain fewer than min_size elements.

progressive_slr#

dipy.align.streamlinear.progressive_slr(static, moving, metric, x0, bounds, *, method='L-BFGS-B', verbose=False, num_threads=None)[source]#

Progressive SLR.

This utility function performs the registration progressively. For example, an affine registration using Streamline-based Linear Registration (SLR) [6] starts with translation, then rigid, then similarity, then scaling, and finally affine.

Similarly, if you want to perform a rigid registration, you start with translation first. This progressive strategy can help find the optimal parameters of the final transformation.
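The progressive ordering can be sketched as a simple stage list; the stage names mirror the x0 options described here, and the helper is illustrative, not DIPY code:

```python
STAGES = ["translation", "rigid", "similarity", "scaling", "affine"]

def progressive_stages(x0):
    # Run every simpler transform before the requested one (sketch).
    return STAGES[: STAGES.index(x0) + 1]
```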

Parameters:
staticStreamlines

Static streamlines.

movingStreamlines

Moving streamlines.

metricStreamlineDistanceMetric

Distance metric for registration optimization.

x0string

Could be any of ‘translation’, ‘rigid’, ‘similarity’, ‘scaling’, ‘affine’

boundsarray

Boundaries of registration parameters. See variable DEFAULT_BOUNDS for example.

methodstring

‘L-BFGS-B’ or ‘Powell’ optimizers can be used. Default is ‘L-BFGS-B’.

verbosebool, optional.

If True, log messages.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.

References

slr_with_qbx#

dipy.align.streamlinear.slr_with_qbx(static, moving, *, x0='affine', rm_small_clusters=50, maxiter=100, select_random=None, verbose=False, greater_than=50, less_than=250, qbx_thr=(40, 30, 20, 15), nb_pts=20, progressive=True, rng=None, num_threads=None)[source]#

Utility function for registering large tractograms.

For efficiency, we apply the registration on cluster centroids and remove small clusters.

See [5], [6] and [7] for details about the methods involved.

Parameters:
staticStreamlines

Fixed or reference set of streamlines.

movingstreamlines

Moving streamlines.

x0str, optional.

rigid, similarity or affine transformation model

rm_small_clustersint, optional

Remove clusters that have fewer than rm_small_clusters streamlines.

maxiterint, optional

Maximum number of iterations to perform.

select_randomint, optional.

If not None, randomly select this many streamlines from the tractogram before clustering.

verbosebool, optional

If True, logs information about optimization.

greater_thanint, optional

Keep streamlines that have length greater than this value.

less_thanint, optional

Keep streamlines that have length less than this value.

qbx_thrvariable int

Thresholds for QuickBundlesX.

nb_ptsint, optional

Number of points for discretizing each streamline.

progressiveboolean, optional

True to enable progressive registration.

rngnp.random.Generator

If None, creates a random generator inside the function.

num_threadsint, optional

Number of threads to be used for OpenMP parallelization. If None (default) the value of OMP_NUM_THREADS environment variable is used if it is set, otherwise all available threads are used. If < 0 the maximal number of threads minus \(|num_threads + 1|\) is used (enter -1 to use as many threads as possible). 0 raises an error. Only metrics using OpenMP will use this variable.

Notes

The order of operations is the following: first, short or long streamlines are removed; second, the tractogram (or a random selection of it) is clustered with QuickBundlesX; then SLR [6] is applied to the cluster centroids.
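The first preprocessing step, filtering by streamline length, can be sketched as follows, using the greater_than/less_than semantics documented above (hypothetical helper, not the DIPY implementation):

```python
import numpy as np

def filter_by_length(streamlines, greater_than=50, less_than=250):
    # Keep streamlines whose Euclidean length falls strictly between the
    # two bounds, mirroring the first preprocessing step (sketch).
    def length(s):
        return np.sum(np.linalg.norm(np.diff(s, axis=0), axis=1))
    return [s for s in streamlines if greater_than < length(s) < less_than]
```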

References

groupwise_slr#

dipy.align.streamlinear.groupwise_slr(bundles, *, x0='affine', tol=0, max_iter=20, qbx_thr=(4,), nb_pts=20, select_random=10000, verbose=False, rng=None)[source]#

Function to perform unbiased groupwise bundle registration.

All bundles are moved to the same space by iteratively applying halfway streamline linear registration in pairs. With each iteration, bundles get closer to each other until the procedure converges and there is no more improvement.

See [5], [6] and [7].

Parameters:
bundleslist

List with streamlines of the bundles to be registered.

x0str, optional

rigid, similarity or affine transformation model.

tolfloat, optional

Tolerance value to be used to assume convergence.

max_iterint, optional

Maximum number of iterations. Depending on the number of bundles to be registered this may need to be larger.

qbx_thrvariable int, optional

Thresholds for QuickBundlesX, used to cluster streamlines and reduce computational time. If None, no clustering is performed. Higher values cluster streamlines into a smaller number of centroids.

nb_ptsint, optional

Number of points for discretizing each streamline.

select_randomint, optional

Maximum number of streamlines for each bundle. If None, all the streamlines are used.

verbosebool, optional

If True, logs information.

rngnp.random.Generator

If None, creates a random generator inside the function.

References

get_unique_pairs#

dipy.align.streamlinear.get_unique_pairs(n_bundle, *, pairs=None)[source]#

Make unique pairs from n_bundle bundles.

The function accepts a previous pair assignment as input so that the new pairs are different.

Parameters:
n_bundleint

Number of bundles to be matched in pairs.

pairsarray, optional

Array containing the indices of previous pairs.
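A minimal sketch of random pairing for an even number of bundles; the actual function additionally handles odd counts and avoids repeating a previous assignment:

```python
import numpy as np

def random_pairs(n_bundle, rng=None):
    # Shuffle bundle indices and group them two by two (sketch; assumes
    # n_bundle is even, unlike the DIPY function).
    rng = rng if rng is not None else np.random.default_rng()
    return rng.permutation(n_bundle).reshape(-1, 2)
```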

compose_matrix44#

dipy.align.streamlinear.compose_matrix44(t, *, dtype=<class 'numpy.float64'>)[source]#

Compose a 4x4 transformation matrix.

Parameters:
tndarray

This is a 1D vector of affine transformation parameters with size at least 3. If the size is 3, t is interpreted as translation. If the size is 6, t is interpreted as translation + rotation. If the size is 7, t is interpreted as translation + rotation + isotropic scaling. If the size is 9, t is interpreted as translation + rotation + anisotropic scaling. If size is 12, t is interpreted as translation + rotation + scaling + shearing.

Returns:
Tndarray

Homogeneous transformation matrix of size 4x4.
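The size-6 case (translation + rotation) can be sketched in plain NumPy. The rotation angles are assumed to be in degrees and composed in Z·Y·X order; this is illustrative only, not DIPY's implementation, which handles sizes 3 through 12:

```python
import numpy as np

def compose44(t):
    # Build a 4x4 homogeneous matrix from a size-6 parameter vector:
    # 3 translations followed by 3 rotation angles in degrees (sketch;
    # the rotation convention here is an assumption).
    tx, ty, tz, ax, ay, az = t
    ax, ay, az = np.radians([ax, ay, az])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax), np.cos(ax)]])
    Ry = np.array([[np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az), np.cos(az), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T
```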

decompose_matrix44#

dipy.align.streamlinear.decompose_matrix44(mat, *, size=12)[source]#

Given a 4x4 homogeneous matrix return the parameter vector.

Parameters:
matarray

Homogeneous 4x4 transformation matrix

sizeint

Size of the output vector. 3, for translation, 6 for rigid, 7 for similarity, 9 for scaling and 12 for affine. Default is 12.

Returns:
tndarray

One dimensional ndarray of 3, 6, 7, 9 or 12 affine parameters.

average_bundle_length#

dipy.align.streamwarp.average_bundle_length(bundle)[source]#

Find average Euclidean length of the bundle in mm.

Parameters:
bundleStreamlines

Bundle whose average length is to be calculated.

Returns:
float

Average Euclidean length of bundle in mm.

find_missing#

dipy.align.streamwarp.find_missing(lst, cb)[source]#

Find unmatched streamline indices in moving bundle.

Parameters:
lstList

List of integers containing all the streamline indices in the moving bundle.

cbList

List of integers containing streamline indices of the moving bundle that were matched to a streamline in the static bundle.

Returns:
list

List containing indices of unmatched streamlines from the moving bundle.
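The lookup boils down to a set difference between the full index list and the comparison list; a sketch (hypothetical helper, not the DIPY implementation):

```python
def find_missing_sketch(lst, cb):
    # Indices present in lst but absent from cb, in ascending order (sketch).
    return sorted(set(lst) - set(cb))
```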

bundlewarp#

dipy.align.streamwarp.bundlewarp(static, moving, *, dist=None, alpha=0.5, beta=20, max_iter=15, affine=True)[source]#

Register two bundles using a nonlinear method.

See [8] for further details about the method.

Parameters:
staticStreamlines

Reference/fixed bundle.

movingStreamlines

Target bundle that will be moved/registered to match the static bundle.

distfloat, optional

Precomputed distance matrix.

alphafloat, optional

Represents the trade-off between regularizing the deformation and having points match very closely. Lower values of alpha allow larger deformations.

betaint, optional

Represents the strength of the interaction between points (Gaussian kernel size).

max_iterint, optional

Maximum number of iterations for the deformation process in the ml-CPD method.

affineboolean, optional

If False, use rigid registration as starting point.

Returns:
deformed_bundleStreamlines

Nonlinearly moved bundle (warped bundle)

moving_alignedStreamlines

Linearly moved bundle (affinely moved)

distnp.ndarray

Float array containing distance between moving and static bundle

matched_pairsnp.ndarray

Int array containing streamline correspondences between two bundles

warpnp.ndarray

Nonlinear warp map generated by BundleWarp

References

bundlewarp_vector_filed#

dipy.align.streamwarp.bundlewarp_vector_filed(moving_aligned, deformed_bundle)[source]#

Calculate vector fields.

The vector field is computed as the difference between each streamline point in the deformed and linearly aligned bundles.

Parameters:
moving_alignedStreamlines

Linearly (affinely) moved bundle

deformed_bundleStreamlines

Nonlinearly (warped) bundle

Returns:
offsetsList

Vector field modules

directionsList

Unitary vector directions

colorsList

Colors for bundle warping field vectors. Colors follow the convention used in DTI-derived maps (e.g. color FA) [9].
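The offsets and directions above amount to norms and unit vectors of the per-point displacements; a NumPy sketch (not the DIPY implementation):

```python
import numpy as np

def warp_vectors(aligned_points, deformed_points):
    # Displacement from the linearly aligned points to the warped points:
    # magnitudes ("modules") and unit direction vectors (sketch).
    diffs = deformed_points - aligned_points
    offsets = np.linalg.norm(diffs, axis=1)
    directions = diffs / np.maximum(offsets, 1e-12)[:, None]
    return offsets, directions
```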

References

bundlewarp_shape_analysis#

dipy.align.streamwarp.bundlewarp_shape_analysis(moving_aligned, deformed_bundle, *, no_disks=10, plotting=False)[source]#

Calculate bundle shape difference profile.

Bundle shape difference analysis using magnitude from BundleWarp displacements and BUAN.

Depending on the number of points of a streamline and the number of segments requested, multiple points may be considered for the computation of a given segment; a segment may contain information from a single point; or some segments may not contain information from any points. In the latter case, the segment will contain an np.nan value. The point-to-segment mapping is defined by assignment_map(): for each segment index, the point information at the matching index positions, as returned by assignment_map(), is considered for the computation.
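That per-segment aggregation can be sketched with plain NumPy, assuming a precomputed point-to-segment assignment array such as the one assignment_map() returns (illustrative, not the DIPY implementation):

```python
import numpy as np

def segment_profile(magnitudes, assignments, no_disks=10):
    # Mean displacement magnitude per segment; segments that receive no
    # points are left as np.nan, as described above (sketch).
    profile = np.full(no_disks, np.nan)
    for k in range(no_disks):
        vals = magnitudes[assignments == k]
        if vals.size:
            profile[k] = vals.mean()
    return profile
```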

Parameters:
moving_alignedStreamlines

Linearly (affinely) moved bundle

deformed_bundleStreamlines

Nonlinearly (warped) moved bundle

no_disksint, optional

Number of segments to be created along the length of the bundle

plottingBoolean, optional

Plot bundle shape profile

Returns:
shape_profilenp.ndarray

Float array containing bundlewarp displacement magnitudes along the length of the bundle

stdvnp.ndarray

Float array containing standard deviations