This file is copied (possibly with major modifications) from the
sources of the pycpd project (siavashk/pycpd). It remains licensed
as the rest of PyCPD (MIT license as of October 2010). See the
COPYING file distributed with the PyCPD package for the copyright
and license terms.
Affine image registration module consisting of the following classes:
AffineMap: encapsulates the necessary information to perform affine
transforms between two domains, defined by a static and a moving
image. The domain of the transform is the set of points in the
static image’s grid, and the codomain is the set of points in
the moving image. When we call the transform method, AffineMap
maps each point x of the domain (static grid) to the codomain
(moving grid) and interpolates the moving image at that point
to obtain the intensity value to be placed at x in the resulting
grid. The transform_inverse method performs the opposite operation
mapping points in the codomain to points in the domain.
ParzenJointHistogram: computes the marginal and joint distributions of
intensities of a pair of images, using Parzen windows
[1] with a cubic spline kernel, as proposed by
Mattes et al. [2]. It also computes the gradient of the joint
histogram w.r.t. the parameters of a given transform.
MutualInformationMetric: computes the value and gradient of the mutual
information metric the way Optimizer needs them. That is, given
a set of transform parameters, it will use ParzenJointHistogram
to compute the value and gradient of the joint intensity histogram
evaluated at the given parameters, and evaluate the value and
gradient of the histogram’s mutual information.
AffineRegistration: runs the multi-resolution registration, putting
all the pieces together. It creates the scale space of the images
and runs the multi-resolution registration by using the Metric and
the Optimizer at each level of the Gaussian pyramid. At each level,
it sets up the metric to compute the value and gradient with the
input images at that level of smoothing.
Register a 2D/3D source image (moving) to a 2D/3D target image (static).
Parameters:
moving, static : array or nib.Nifti1Image or str
Either as a 2D/3D array or as a nifti image object, or as
a string containing the full path to a nifti file.
moving_affine, static_affine : 4x4 array, optional
Must be provided for data provided as an array. If provided together
with Nifti1Image or str data, this input will over-ride the affine
that is stored in the data input. Default: use the affine stored
in data.
metric : string, optional
The metric to be optimized. One of "CC", "EM", or "SSD".
Default: 'CC' => CCMetric.
dim: int (either 2 or 3), optional
The dimensions of the image domain. Default: 3
level_iters : list of int, optional
the number of iterations at each level of the Gaussian Pyramid (the
length of the list defines the number of pyramid levels to be
used). Default: [10, 10, 5].
metric_kwargs : dict, optional
Parameters for initialization of the metric object. If not provided,
uses the default settings of each metric.
Returns:
warped_moving : ndarray
The data in moving, warped towards the static data.
forward : ndarray (…, 3)
The vector field describing the forward warping from the source to the
target.
backward : ndarray (…, 3)
The vector field describing the backward warping from the target to the
source.
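Examples
A minimal sketch of a call, assuming the signature and the three
return values documented above; the file names are hypothetical:
>>> import nibabel as nib
>>> from dipy.align import syn_registration
>>> moving_img = nib.load('subject_b0.nii.gz')   # hypothetical input
>>> static_img = nib.load('template.nii.gz')     # hypothetical target
>>> warped_moving, forward, backward = syn_registration(
...     moving_img.get_fdata(), static_img.get_fdata(),
...     moving_affine=moving_img.affine, static_affine=static_img.affine,
...     metric='CC', level_iters=[10, 10, 5])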
Register DWI data to a template through the B0 volumes.
Parameters:
dwi : 4D array, nifti image or str
Containing the DWI data, or full path to a nifti file with DWI.
gtab : GradientTable or sequence of strings
The gradients associated with the DWI data, or a sequence with
(fbval, fbvec), full paths to bvals and bvecs files.
dwi_affine : 4x4 array, optional
An affine transformation associated with the DWI. Required if data
is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
template : 3D array, nifti image or str
Containing the data for the template, or full path to a nifti file
with the template data.
template_affine : 4x4 array, optional
An affine transformation associated with the template. Required if data
is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
reg_method : str, optional
One of "syn" or "aff", which designates which registration method is
used: "syn" uses the syn_registration() function and "aff" uses the
affine_registration() function. Default: "syn".
Returns:
If reg_method is "syn", a DiffeomorphicMap class instance that can be
used to transform between the two spaces. Otherwise, if reg_method is
"aff", a 4x4 matrix encoding the affine transform.
Notes
This function assumes that the DWI data is already internally registered.
See register_dwi_series().
The data in the file is organized with shape (X, Y, Z, 3, 2), such
that the forward mapping in each voxel is in data[i, j, k, :, 0] and
the backward mapping in each voxel is in data[i, j, k, :, 1].
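For illustration, a sketch of reading a mapping saved with this layout
(the file name is hypothetical):
>>> import nibabel as nib
>>> mapping_data = nib.load('mapping_displacement.nii.gz').get_fdata()
>>> # mapping_data has shape (X, Y, Z, 3, 2)
>>> forward = mapping_data[..., 0]   # forward warp vectors at each voxel
>>> backward = mapping_data[..., 1]  # backward warp vectors at each voxel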
Resample an image (moving) from one space to another (static).
Parameters:
moving : array, nifti image or str
Containing the data for the moving object, or full path to a nifti file
with the moving data.
static : array, nifti image or str
Containing the data for the static object, or full path to a nifti file
with the static data.
moving_affine : 4x4 array, optional
An affine transformation associated with the moving object. Required if
data is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
static_affine : 4x4 array, optional
An affine transformation associated with the static object. Required if
data is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
between_affine: 4x4 array, optional
If an additional affine is needed between the two spaces.
Default: identity (no additional registration).
Returns:
resampled : Nifti1Image
A Nifti1Image class instance with the data from the moving object,
resampled into the static space.
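Examples
A minimal sketch, assuming the parameters documented above
(the input arrays are hypothetical):
>>> from dipy.align import resample
>>> resampled_img = resample(moving_data, static_data,
...                          moving_affine=moving_affine,
...                          static_affine=static_affine)
>>> resampled = resampled_img.get_fdata()  # moving data on the static grid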
Find the affine transformation between two 3D images. Alternatively, find
the combination of several linear transformations.
Parameters:
moving : array, nifti image or str
Containing the data for the moving object, or full path to a nifti file
with the moving data.
static : array, nifti image or str
Containing the data for the static object, or full path to a nifti file
with the static data.
moving_affine : 4x4 array, optional
An affine transformation associated with the moving object. Required if
data is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
static_affine : 4x4 array, optional
An affine transformation associated with the static object. Required if
data is provided as an array. If provided together with nifti/path,
will over-ride the affine that is in the nifti.
pipeline : list of str, optional
Sequence of transforms to use in the gradual fitting. Default: gradual
fit of the full affine (executed from left to right):
["center_of_mass", "translation", "rigid", "affine"]
Alternatively, any other combination of the following registration
methods might be used: center_of_mass, translation, rigid,
rigid_isoscaling, rigid_scaling and affine.
starting_affine: 4x4 array, optional
Initial guess for the transformation between the spaces.
Default: identity.
metric : str, optional
Currently only supports ‘MI’ for MutualInformationMetric.
level_iters : sequence, optional
AffineRegistration key-word argument: the number of iterations at each
scale of the scale space. level_iters[0] corresponds to the coarsest
scale, level_iters[-1] to the finest. By default, a 3-level scale
space with an iterations sequence equal to [10000, 1000, 100] will be
used.
sigmas : sequence of floats, optional
AffineRegistration key-word argument: custom smoothing parameter to
build the scale space (one parameter for each scale). By default,
the sequence of sigmas will be [3, 1, 0].
factors : sequence of floats, optional
AffineRegistration key-word argument: custom scale factors to build the
scale space (one factor for each scale). By default, the sequence of
factors will be [4, 2, 1].
ret_metric : boolean, optional
Set it to True to return the value of the optimized coefficients and
the optimization quality metric.
moving_mask : array, shape (S', R', C') or (R', C'), optional
moving image mask that defines which pixels in the moving image
are used to calculate the mutual information.
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
nbins : int, optional
MutualInformationMetric key-word argument: the number of bins to be
used for computing the intensity histograms. The default is 32.
sampling_proportion : None or float in interval (0, 1], optional
MutualInformationMetric key-word argument: There are two types of
sampling: dense and sparse. Dense sampling uses all voxels for
estimating the (joint and marginal) intensity histograms, while
sparse sampling uses a subset of them. If sampling_proportion is
None, then dense sampling is used. If sampling_proportion is a
floating point value in (0,1] then sparse sampling is used,
where sampling_proportion specifies the proportion of voxels to
be used. The default is None (dense sampling).
Returns:
resampled : array
The moving data resampled into the static space after computing the
affine transformation.
final_affine : 4x4 array
The affine associated with the transformation.
xopt : the value of the optimized coefficients.
fopt : the value of the optimization quality metric.
Notes
Performs a gradual registration between the two inputs, using a pipeline
that gradually approximates the final registration. If the final default
step (affine) is omitted, the resulting affine may not have all 12
degrees of freedom adjusted.
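Examples
A minimal sketch of the gradual pipeline described in the Notes,
assuming the signature documented above (inputs are hypothetical):
>>> from dipy.align import affine_registration
>>> pipeline = ["center_of_mass", "translation", "rigid", "affine"]
>>> resampled, final_affine = affine_registration(
...     moving_data, static_data,
...     moving_affine=moving_affine, static_affine=static_affine,
...     pipeline=pipeline, level_iters=[10000, 1000, 100],
...     sigmas=[3.0, 1.0, 0.0], factors=[4, 2, 1])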
Register a series to a reference image.
Parameters:
series : 4D array or nib.Nifti1Image class instance or str
The data is 4D with the last dimension separating different 3D volumes
ref : int or 3D array or nib.Nifti1Image class instance or str
If this is an int, this is the index of the reference image within the
series. Otherwise it is an array of data to register with (with an
associated ref_affine required) or a nifti img or full path to a file
containing one.
pipeline : sequence, optional
Sequence of transforms to do for each volume in the series.
Default: (executed from left to right):
[center_of_mass, translation, rigid, affine]
series_affine, ref_affine : 4x4 arrays, optional
The affine. If provided, this input will over-ride the affine provided
together with the nifti img or file.
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
Returns:
xformed, affines : 4D array and (4, 4, n) array
The transformed data and the 4x4 matrices associated with each of the
volumes of the input moving data, used to transform them into register
with the static data.
Register a DWI series to the mean of the B0 images in that series,
all first registered to the first B0 volume.
Parameters:
data : 4D array or nibabel Nifti1Image class instance or str
Diffusion data. Either as a 4D array or as a nifti image object, or as
a string containing the full path to a nifti file.
gtab : a GradientTable class instance or tuple of strings
If provided as a tuple of strings, these are assumed to be full paths
to the bvals and bvecs files (in that order).
affine : 4x4 array, optional
Must be provided for data provided as an array. If provided together
with Nifti1Image or str data, this input will over-ride the affine
that is stored in the data input. Default: use the affine stored
in data.
b0_ref : int, optional
Which b0 volume to use as reference. Default: 0
pipeline : list of callables, optional
The transformations to perform in sequence (from left to right).
Default: [center_of_mass, translation, rigid, affine]
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
Returns:
xform_img, affine_array : a Nifti1Image containing the registered data,
using the affine of the original data, and a list containing the affine
transforms (one per volume).
Apply motion correction to a DWI dataset (between-volumes motion correction).
Parameters:
data : 4D array or nibabel Nifti1Image class instance or str
Diffusion data. Either as a 4D array or as a nifti image object, or as
a string containing the full path to a nifti file.
gtab : a GradientTable class instance or tuple of strings
If provided as a tuple of strings, these are assumed to be full paths
to the bvals and bvecs files (in that order).
affine : 4x4 array, optional
Must be provided for data provided as an array. If provided together
with Nifti1Image or str data, this input will over-ride the affine
that is stored in the data input. Default: use the affine stored
in data.
b0_ref : int, optional
Which b0 volume to use as reference. Default: 0
pipeline : list of callables, optional
The transformations to perform in sequence (from left to right).
Default: [center_of_mass, translation, rigid, affine]
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
Returns:
xform_img, affine_array : a Nifti1Image containing the registered data,
using the affine of the original data, and a list containing the affine
transforms (one per volume).
Determine the effective number of threads to be used for OpenMP calls
For num_threads=None,
- if the OMP_NUM_THREADS environment variable is set, return that
value
- otherwise, return the maximum number of cores retrieved by
openmp.omp_get_num_procs().
For num_threads>0, return this value.
For num_threads<0, return the maximal number of threads minus
|num_threads+1|. In particular num_threads=-1 will use as
many threads as there are available cores on the machine.
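A pure-Python sketch of this rule (not the library implementation;
max_cores stands in for the openmp.omp_get_num_procs() result):
>>> import os
>>> def effective_num_threads(num_threads, max_cores):
...     if num_threads is None:
...         env = os.environ.get("OMP_NUM_THREADS")
...         return int(env) if env else max_cores
...     if num_threads > 0:
...         return num_threads
...     if num_threads < 0:
...         return max(1, max_cores - abs(num_threads + 1))
...     raise ValueError("num_threads must be non-zero")
>>> effective_num_threads(-1, 8)  # -1 uses all 8 available cores
8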
Update a point cloud using the new estimate of the deformable
transformation.
Parameters:
Y: numpy array, optional
Array of points to transform. Use this to predict on a new set of
points not used to run the initial registration. If None, self.Y is
used.
Gradient of the CC Metric w.r.t. the backward transformation.
Computes the gradient of the Cross Correlation metric for symmetric
registration (SyN) [3] w.r.t. the displacement
associated to the static image ('backward' step) as in
Avants et al. [4].
Parameters:
grad_moving : array, shape (R, C, 2)
the gradient of the moving image
factors : array, shape (R, C, 5)
the precomputed cross correlation terms obtained via
precompute_cc_factors_2d
Returns:
out : array, shape (R, C, 2)
the gradient of the cross correlation metric with respect to the
displacement associated to the static image
energy : the cross correlation energy (data term) at this iteration
Gradient of the CC Metric w.r.t. the backward transformation.
Computes the gradient of the Cross Correlation metric for symmetric
registration (SyN) [3] w.r.t. the displacement
associated to the static volume (‘backward’ step) as in
Avants et al. [4].
Parameters:
grad_moving : array, shape (S, R, C, 3)
the gradient of the moving volume
factors : array, shape (S, R, C, 5)
the precomputed cross correlation terms obtained via
precompute_cc_factors_3d
radius : int
the radius of the neighborhood used for the CC metric when
computing the factors. The returned vector field will be
zero along a boundary of width radius voxels.
Returns:
out : array, shape (S, R, C, 3)
the gradient of the cross correlation metric with respect to the
displacement associated to the static volume
energy : the cross correlation energy (data term) at this iteration
Gradient of the CC Metric w.r.t. the forward transformation.
Computes the gradient of the Cross Correlation metric for symmetric
registration (SyN) [3] w.r.t. the displacement
associated to the moving image ('forward' step) as in
Avants et al. [4].
Parameters:
grad_static : array, shape (R, C, 2)
the gradient of the static image
factors : array, shape (R, C, 5)
the precomputed cross correlation terms obtained via
precompute_cc_factors_2d
Returns:
out : array, shape (R, C, 2)
the gradient of the cross correlation metric with respect to the
displacement associated to the moving image
energy : the cross correlation energy (data term) at this iteration
Notes
Currently, the gradient of the static image is not being used, but some
authors suggest that symmetrizing the gradient by including both the
moving and static gradients may improve the registration quality. We are
leaving this parameter as a placeholder for future investigation.
Gradient of the CC Metric w.r.t. the forward transformation.
Computes the gradient of the Cross Correlation metric for symmetric
registration (SyN) [3] w.r.t. the displacement
associated to the moving volume (‘forward’ step) as in
Avants et al. [4].
Parameters:
grad_static : array, shape (S, R, C, 3)
the gradient of the static volume
factors : array, shape (S, R, C, 5)
the precomputed cross correlation terms obtained via
precompute_cc_factors_3d
radius : int
the radius of the neighborhood used for the CC metric when
computing the factors. The returned vector field will be
zero along a boundary of width radius voxels.
Returns:
out : array, shape (S, R, C, 3)
the gradient of the cross correlation metric with respect to the
displacement associated to the moving volume
energy : the cross correlation energy (data term) at this iteration
Precomputations to quickly compute the gradient of the CC Metric.
Pre-computes the separate terms of the cross correlation metric
[3] and image norms at each voxel considering a
neighborhood of the given radius to efficiently compute the gradient of the
metric with respect to the deformation field [4].
Parameters:
static : array, shape (R, C)
the static image, which also defines the reference registration domain
moving : array, shape (R, C)
the moving image (notice that both images must already be in a common
reference domain, i.e. the same R, C)
radius : the radius of the neighborhood (square of (2 * radius + 1)^2 pixels)
Precomputations to quickly compute the gradient of the CC Metric.
Pre-computes the separate terms of the cross correlation metric and image
norms at each voxel considering a neighborhood of the given radius to
efficiently compute the gradient of the metric with respect to the
deformation field [5], [3],
[4].
Parameters:
static : array, shape (S, R, C)
the static volume, which also defines the reference registration domain
moving : array, shape (S, R, C)
the moving volume (notice that both images must already be in a common
reference domain, i.e. the same S, R, C)
radius : the radius of the neighborhood (cube of (2 * radius + 1)^3 voxels)
Precomputations to quickly compute the gradient of the CC Metric.
This version of precompute_cc_factors_3d is for testing purposes; it
directly computes the local cross-correlation factors without any
optimization, so it is less error-prone than the accelerated version.
Computes the demons step [6] for SSD-driven
registration (eq. 4 in [6]) using the EM
algorithm [7] to handle multi-modality images.
In this case, \(\sigma_i\) in eq. 4 of [6] is
estimated using the EM algorithm, while in the original version of
diffeomorphic demons it is estimated by the difference between the image
values at each pixel.
Parameters:
delta_field : array, shape (R, C)
contains, at each pixel, the difference between the moving image
(warped under the current deformation s(., .)) J and the static image I:
delta_field[i,j] = J(s(i,j)) - I(i,j). The order is important, changing
to delta_field[i,j] = I(i,j) - J(s(i,j)) yields the backward demons step
warping the static image towards the moving, which may not be the
intended behavior unless the ‘gradient_moving’ passed corresponds to
the gradient of the static image
sigma_sq_field : array, shape (R, C)
contains, at each pixel (i, j), the estimated variance (not std) of the
hidden variable associated to the intensity at static[i,j] (which must
have been previously quantized)
gradient_moving : array, shape (R, C, 2)
the gradient of the moving image
sigma_sq_x : float
parameter controlling the amount of regularization. It corresponds to
\(\sigma_x^2\) in algorithm 1 of [6]
out : array, shape (R, C, 2)
the resulting demons step will be written to this array
Returns:
demons_step : array, shape (R, C, 2)
the demons step to be applied for updating the current displacement
field
energy : float
the current em energy (before applying the returned demons_step)
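For reference, combining eq. 4 of [6] with the EM-estimated variance as
described above, the step at each pixel may be written as follows (a
reconstruction from this description, not a verbatim formula from the
source):
\[
u(i, j) = \frac{-\,\sigma_x^2\,\delta(i, j)}
{\sigma_x^2\,\lVert \nabla J(i, j) \rVert^2 + \sigma_i^2(i, j)}
\,\nabla J(i, j)
\]
where \(\delta\) is delta_field, \(\sigma_i^2\) is sigma_sq_field and
\(\nabla J\) is gradient_moving.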
Computes the demons step [6] for SSD-driven
registration (eq. 4 in [6]) using the EM
algorithm [7] to handle multi-modality images.
In this case, \(\sigma_i\) in eq. 4 of [6] is
estimated using the EM algorithm, while in the original version of
diffeomorphic demons it is estimated by the difference between the image
values at each pixel.
Parameters:
delta_field : array, shape (S, R, C)
contains, at each pixel, the difference between the moving image (warped
under the current deformation s ) J and the static image I:
delta_field[k,i,j] = J(s(k,i,j)) - I(k,i,j). The order is important,
changing to delta_field[k,i,j] = I(k,i,j) - J(s(k,i,j)) yields the
backward demons step warping the static image towards the moving, which
may not be the intended behavior unless the ‘gradient_moving’ passed
corresponds to the gradient of the static image
sigma_sq_field : array, shape (S, R, C)
contains, at each pixel (k, i, j), the estimated variance (not std) of
the hidden variable associated to the intensity at static[k,i,j] (which
must have been previously quantized)
gradient_moving : array, shape (S, R, C, 3)
the gradient of the moving image
sigma_sq_x : float
parameter controlling the amount of regularization. It corresponds to
\(\sigma_x^2\) in algorithm 1 of [6]
out : array, shape (S, R, C, 3)
the resulting demons step will be written to this array
Returns:
demons_step : array, shape (S, R, C, 3)
the demons step to be applied for updating the current displacement
field
energy : float
the current em energy (before applying the returned demons_step)
Computes the mean and std. for each quantization level.
Computes the mean and standard deviation of the intensities in ‘v’ for
each corresponding label in ‘labels’. In other words, for each label
L, it computes the mean and standard deviation of the intensities in ‘v’
at pixels whose label in ‘labels’ is L. This is used by the EM metric
to compute statistics for each hidden variable represented by the labels.
Parameters:
mask : array, shape (R, C)
the mask of pixels that will be taken into account for computing the
statistics. All zero pixels in mask will be ignored
v : array, shape (R, C)
the image from which the statistics will be computed
num_labels : int
the number of different labels in 'labels' (equal to the
number of hidden variables in the EM metric)
labels : array, shape (R, C)
the label assigned to each pixel
Returns:
means : array, shape (num_labels,)
means[i], 0<=i<num_labels will be the mean intensity in v of all
voxels labeled i, or 0 if no voxels are labeled i
variances : array, shape (num_labels,)
variances[i], 0<=i<num_labels will be the variance of the
intensities in v of all voxels labeled i, or infinite if fewer than 2
voxels are labeled i.
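A NumPy sketch of this per-label computation (a hypothetical helper,
not the compiled implementation):
>>> import numpy as np
>>> def masked_class_stats(mask, v, num_labels, labels):
...     means = np.zeros(num_labels)
...     variances = np.full(num_labels, np.inf)
...     for i in range(num_labels):
...         vals = v[(labels == i) & (mask != 0)]
...         if vals.size > 0:
...             means[i] = vals.mean()      # 0 stays if no voxels labeled i
...         if vals.size >= 2:
...             variances[i] = vals.var()   # inf stays if fewer than 2
...     return means, variances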
Computes the mean and std. for each quantization level.
Computes the mean and standard deviation of the intensities in ‘v’ for
each corresponding label in ‘labels’. In other words, for each label
L, it computes the mean and standard deviation of the intensities in ‘v’
at voxels whose label in ‘labels’ is L. This is used by the EM metric
to compute statistics for each hidden variable represented by the labels.
Parameters:
mask : array, shape (S, R, C)
the mask of voxels that will be taken into account for computing the
statistics. All zero voxels in mask will be ignored
v : array, shape (S, R, C)
the volume from which the statistics will be computed
num_labels : int
the number of different labels in 'labels' (equal to the
number of hidden variables in the EM metric)
labels : array, shape (S, R, C)
the label assigned to each voxel
Returns:
means : array, shape (num_labels,)
means[i], 0<=i<num_labels will be the mean intensity in v of all
voxels labeled i, or 0 if no voxels are labeled i
variances : array, shape (num_labels,)
variances[i], 0<=i<num_labels will be the variance of the
intensities in v of all voxels labeled i, or infinite if fewer than 2
voxels are labeled i.
Quantize a 2D image to num_levels quantization levels.
Quantizes the input image at num_levels intensity levels considering <=0
as a special value. Those input pixels <=0, and only those, will be
assigned a quantization level of 0. The positive values are divided into
the remaining num_levels-1 uniform quantization levels.
Quantizing at fewer than two levels is undefined and raises a
ValueError: level 0 is reserved for non-positive input values, so at
least one additional level is needed for the positive values.
Returns:
out : array, shape (R, C)
the quantized image
levels : array, shape (num_levels,)
the quantization values: levels[0]=0, and levels[i] is the mid-point
of the interval of intensities that are assigned to quantization
level i, i=1, …, num_levels-1.
hist : array, shape (num_levels,)
histogram: the number of pixels that were assigned to each quantization
level
Quantize a 3D volume to num_levels quantization levels.
Quantize the input volume at num_levels intensity levels considering <=0
as a special value. Those input voxels <=0, and only those, will be
assigned a quantization level of 0. The positive values are divided into
the remaining num_levels-1 uniform quantization levels.
Quantizing at fewer than two levels is undefined and raises a
ValueError: level 0 is reserved for non-positive input values, so at
least one additional level is needed for the positive values.
Returns:
out : array, shape (S, R, C)
the quantized volume
levels : array, shape (num_levels,)
the quantization values: levels[0]=0, and levels[i] is the mid-point
of the interval of intensities that are assigned to quantization
level i, i=1, …, num_levels-1.
hist : array, shape (num_levels,)
histogram: the number of voxels that were assigned to each quantization
level
Set the affine transform (operating in physical space).
Also sets self.affine_inv - the inverse of affine, or None if
there is no inverse.
Parameters:
affine : array, shape (dim + 1, dim + 1)
the matrix representing the affine transform operating in
physical space. The domain and co-domain information
remains unchanged. If None, then self represents the identity
transformation.
Transform the input image from co-domain to domain space.
By default, the transformed image is sampled at a grid defined by
self.domain_shape and self.domain_grid2world. If such
information was not provided then sampling_grid_shape is mandatory.
Parameters:
image : 2D or 3D array
the image to be transformed
interpolation : string, either 'linear' or 'nearest'
the type of interpolation to be used, either 'linear'
(for k-linear interpolation) or 'nearest' for nearest neighbor
image_grid2world : array, shape (dim + 1, dim + 1), optional
the grid-to-world transform associated with image.
If None (the default), then the grid-to-world transform is assumed
to be the identity.
sampling_grid_shape : sequence, shape (dim,), optional
the shape of the grid where the transformed image must be sampled.
If None (the default), then self.codomain_shape is used instead
(which must have been set at initialization, otherwise an exception
will be raised).
sampling_grid2world : array, shape (dim + 1, dim + 1), optional
the grid-to-world transform associated with the sampling grid
(specified by sampling_grid_shape, or by default
self.codomain_shape). If None (the default), then the
grid-to-world transform is assumed to be the identity.
resample_only : Boolean, optional
If False (the default) the affine transform is applied normally.
If True, then the affine transform is not applied, and the input
image is just re-sampled on the domain grid of this transform.
Returns:
transformed : array
the transformed image, sampled at the requested grid, with shape
sampling_grid_shape or self.codomain_shape.
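Examples
A sketch of pure resampling between two grids with an identity affine,
following the DIPY tutorials (the argument order and the input arrays
are assumptions here):
>>> import numpy as np
>>> from dipy.align.imaffine import AffineMap
>>> affine_map = AffineMap(np.eye(4),
...                        static.shape, static_grid2world,
...                        moving.shape, moving_grid2world)
>>> resampled = affine_map.transform(moving)  # moving sampled on static grid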
Transform the input image from domain to co-domain space.
By default, the transformed image is sampled at a grid defined by
self.codomain_shape and self.codomain_grid2world. If such
information was not provided then sampling_grid_shape is mandatory.
Parameters:
image : 2D or 3D array
the image to be transformed
interpolation : string, either 'linear' or 'nearest'
the type of interpolation to be used, either 'linear'
(for k-linear interpolation) or 'nearest' for nearest neighbor
image_grid2world : array, shape (dim + 1, dim + 1), optional
the grid-to-world transform associated with image.
If None (the default), then the grid-to-world transform is assumed
to be the identity.
sampling_grid_shape : sequence, shape (dim,), optional
the shape of the grid where the transformed image must be sampled.
If None (the default), then self.codomain_shape is used instead
(which must have been set at initialization, otherwise an exception
will be raised).
sampling_grid2world : array, shape (dim + 1, dim + 1), optional
the grid-to-world transform associated with the sampling grid
(specified by sampling_grid_shape, or by default
self.codomain_shape). If None (the default), then the
grid-to-world transform is assumed to be the identity.
resample_only : Boolean, optional
If False (the default) the affine transform is applied normally.
If True, then the affine transform is not applied, and the input
image is just re-sampled on the domain grid of this transform.
Returns:
transformed : array
the transformed image, sampled at the requested grid, with shape
sampling_grid_shape or self.codomain_shape.
Numeric value of the negative Mutual Information.
We need to change the sign so we can use standard minimization
algorithms.
Parameters:
params : array, shape (n,)
the parameter vector of the transform currently used by the metric
(the transform name is provided when self.setup is called), n is
the number of parameters of the transform
Returns:
neg_mi : float
the negative mutual information of the input images after
transforming the moving image by the currently set transform
with params parameters
Numeric value of the metric and its gradient at given parameters.
Parameters:
params : array, shape (n,)
the parameter vector of the transform currently used by the metric
(the transform name is provided when self.setup is called), n is
the number of parameters of the transform
Returns:
neg_mi : float
the negative mutual information of the input images after
transforming the moving image by the currently set transform
with params parameters
Numeric value of the metric’s gradient at the given parameters.
Parameters:
params : array, shape (n,)
the parameter vector of the transform currently used by the metric
(the transform name is provided when self.setup is called), n is
the number of parameters of the transform
starting_affine : array, shape (dim+1, dim+1), optional
the pre-aligning matrix (an affine transform) that roughly aligns
the moving image towards the static image. If None, no
pre-alignment is performed. If a pre-alignment matrix is available,
it is recommended to provide this matrix as starting_affine
instead of manually transforming the moving image to reduce
interpolation artifacts. The default is None, implying no
pre-alignment is performed.
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
moving_mask : array, shape (S', R', C') or (R', C'), optional
moving image mask that defines which pixels in the moving image
are used to calculate the mutual information.
static : 2D or 3D array
the image to be used as reference during optimization.
moving : 2D or 3D array
the image to be used as “moving” during optimization. It is
necessary to pre-align the moving image to ensure its domain
lies inside the domain of the deformation fields. This is assumed
to be accomplished by “pre-aligning” the moving image towards the
static using an affine transformation given by the
‘starting_affine’ matrix
transform : instance of Transform
the transformation with respect to whose parameters the gradient
must be computed
params0 : array, shape (n,)
parameters from which to start the optimization. If None, the
optimization will start at the identity transform. n is the
number of parameters of the specified transformation.
moving_grid2world : array, shape (dim+1, dim+1), optional
the voxel-to-space transformation associated with the moving
image. The default is None, implying the transform is the
identity.
starting_affine : string, or matrix, or None, optional
If string:
‘mass’: align centers of gravity
‘voxel-origin’: align physical coordinates of voxel (0,0,0)
‘centers’: align physical coordinates of central voxels
If matrix:
array, shape (dim+1, dim+1).
If None:
Start from identity.
ret_metric : boolean, optional
if True, it also returns the optimized parameters for measuring the
similarity between the images, i.e. the metric's optimal parameters
and the distance between the images (default: False).
static_mask : array, shape (S, R, C) or (R, C), optional
static image mask that defines which pixels in the static image
are used to calculate the mutual information.
moving_mask : array, shape (S', R', C') or (R', C'), optional
moving image mask that defines which pixels in the moving image
are used to calculate the mutual information.
Returns:
affine_map : instance of AffineMap
the resulting affine transformation
xopt : optimal parameters
the optimal parameters (translation, rotation, shear, etc.)
fopt : similarity metric
the value of the function at the optimal parameters.
Estimates the inversion error of the displacement fields by computing
statistics of the residual vectors obtained after composing the forward
and backward displacement fields.
Returns:
residual : array, shape (R, C) or (S, R, C)
the displacement field resulting from composing the forward and
backward displacement fields of this transformation (the residual
should be zero for a perfect diffeomorphism)
stats : array, shape (3,)
statistics from the norms of the vectors of the residual
displacement field: maximum, mean and standard deviation
Notes
Since the forward and backward displacement fields have the same
discretization, the final composition is given by
comp[i] = forward[i + Dinv * backward[i]]
where Dinv is the space-to-grid transformation of the displacement
fields.
Constructs a simplified version of this DiffeomorphicMap
The simplified version incorporates the pre-align transform, as well as
the domain and codomain affine transforms into the displacement field.
The resulting transformation may be regarded as operating on the
image spaces given by the domain and codomain discretization. As a
result, self.prealign, self.disp_grid2world, self.domain_grid2world and
self.codomain_grid2world will be None (denoting identity) in the
resulting diffeomorphic map.
Some operations are performed faster if we know in advance if a matrix
is the identity (so we can skip the actual matrix-vector
multiplication). This function returns None if the given object
is None or the ‘identity’ string. It returns the same object if it is
a numpy array. It raises an exception otherwise.
Parameters:
obj : object
any object
Returns:
obj : object
the same object given as argument if obj is None or a numpy
array. None if obj is the ‘identity’ string.
Transforms the input image under this transformation in the forward
direction. It uses the “is_inverse” flag to switch between “forward”
and “backward” (if is_inverse is False, then transform(…) warps the
image forwards, else it warps the image backwards).
Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2
the image to be warped under this transformation in the forward
direction
interpolation : string, either 'linear' or 'nearest'
the type of interpolation to be used for warping, either 'linear'
(for k-linear interpolation) or 'nearest' for nearest neighbor
image_world2grid : array, shape (dim+1, dim+1)
the transformation bringing world (space) coordinates to voxel
coordinates of the image given as input
out_shape : array, shape (dim,)
the number of slices, rows and columns of the desired warped image
out_grid2world : the transformation bringing voxel coordinates of the
warped image to physical space
Returns:
warped : array, shape = out_shape or self.codomain_shape if None
the warped image under this transformation in the forward direction
Notes
See _warp_forward and _warp_backward documentation for further
information.
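For illustration, a sketch of warping with a DiffeomorphicMap obtained
from a SyN registration (the mapping, moving and static names are
hypothetical):
>>> warped_moving = mapping.transform(moving)          # moving -> static grid
>>> warped_static = mapping.transform_inverse(static)  # static -> moving grid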
Transforms the input image under this transformation in the backward
direction. It uses the “is_inverse” flag to switch between “forward”
and “backward” (if is_inverse is False, then transform_inverse(…)
warps the image backwards, else it warps the image forwards)
Parameters:
image : array, shape (s, r, c) if dim = 3 or (r, c) if dim = 2
the image to be warped under this transformation in the backward
direction
interpolation : string, either 'linear' or 'nearest'
the type of interpolation to be used for warping, either 'linear'
(for k-linear interpolation) or 'nearest' for nearest neighbor
image_world2grid : array, shape (dim+1, dim+1)
the transformation bringing world (space) coordinates to voxel
coordinates of the image given as input
out_shape : array, shape (dim,)
the number of slices, rows, and columns of the desired warped image
out_grid2world : the transformation bringing voxel coordinates of the
warped image to physical space
Returns:
warped : array, shape = out_shape or self.codomain_shape if None
warped image under this transformation in the backward direction
Notes
See _warp_forward and _warp_backward documentation for further
information.
Warp the list of points in the forward direction.
Applies this diffeomorphic map to the list of points (or streamlines)
given by points. We assume that the points’ coordinates are mapped
to world coordinates by applying the coord2world affine transform.
The warped coordinates are given in world coordinates unless
world2coord affine transform is specified, in which case the warped
points will be transformed to the corresponding coordinate system.
Warp the list of points in the backward direction.
Applies this diffeomorphic map to the list of points (or streamlines)
given by points. We assume that the points’ coordinates are mapped
to world coordinates by applying the coord2world affine transform.
The warped coordinates are given in world coordinates unless
world2coord affine transform is specified, in which case the warped
points will be transformed to the corresponding coordinate system.
Composition of this DiffeomorphicMap with a given endomorphism
Creates a new DiffeomorphicMap C with the same properties as self and
composes its displacement fields with phi’s corresponding fields.
The resulting diffeomorphism is of the form C(x) = phi(self(x)) with
inverse C^{-1}(y) = self^{-1}(phi^{-1}(y)). We assume that phi is an
endomorphism with the same discretization and domain affine as self
to ensure that the composition inherits self’s properties (we also
assume that the pre-aligning matrix of phi is None or identity).
Parameters:
phiDiffeomorphicMap object
the endomorphism to be warped by this diffeomorphic map
Returns:
composition : the composition of this diffeomorphic map with the
endomorphism given as input
Notes
The problem with our current representation of a DiffeomorphicMap is
that the set of diffeomorphisms that can be represented this way (a
pre-aligning matrix followed by a non-linear endomorphism given as a
displacement field) is not closed under the composition operation.
Supporting a general DiffeomorphicMap class, closed under composition,
may be extremely costly computationally, and the kind of
transformations we actually need for Avants’ mid-point algorithm (SyN)
are much simpler.
This is the main function each specialized class derived from this must
implement. Upon completion, the deformation field must be available
from the forward transformation model.
Sets the number of iterations at each pyramid level
Establishes the maximum number of iterations to be performed at each
level of the Gaussian pyramid, similar to ANTS.
Parameters:
level_iters : list
the number of iterations at each level of the Gaussian pyramid.
level_iters[0] corresponds to the finest level, level_iters[n-1]
the coarsest, where n is the length of the list
static : array, shape (S, R, C) or (R, C)
the image to be used as reference during optimization. The
displacement fields will have the same discretization as the static
image.
moving : array, shape (S, R, C) or (R, C)
the image to be used as “moving” during optimization. Since the
deformation fields’ discretization is the same as the static image,
it is necessary to pre-align the moving image to ensure its domain
lies inside the domain of the deformation fields. This is assumed
to be accomplished by “pre-aligning” the moving image towards the
static using an affine transformation given by the ‘prealign’
matrix
static_grid2world : array, shape (dim+1, dim+1)
the voxel-to-space transformation associated to the static image
moving_grid2world : array, shape (dim+1, dim+1)
the voxel-to-space transformation associated to the moving image
prealign : array, shape (dim+1, dim+1)
the affine transformation (operating on the physical space)
pre-aligning the moving image towards the static
Returns:
static_to_ref : DiffeomorphicMap object
the diffeomorphic map that brings the moving image towards the
static one in the forward direction (i.e. by calling
static_to_ref.transform) and the static image towards the
moving one in the backward direction (i.e. by calling
static_to_ref.transform_inverse).
Composition of the current displacement field with the given field
Interpolates new displacement at the locations defined by
current_displacement. Equivalently, computes the composition C of the
given displacement fields as C(x) = B(A(x)), where A is
current_displacement and B is new_displacement. This function is
intended to be used with deformation fields of the same sampling
(e.g. to be called by a registration algorithm).
Parameters:
current_displacement : array, shape (R', C', 2) or (S', R', C', 3)
the displacement field defining where to interpolate
new_displacement
new_displacement : array, shape (R, C, 2) or (S, R, C, 3)
the displacement field to be warped by current_displacement
disp_world2grid : array, shape (dim+1, dim+1)
the space-to-grid transform associated with the displacements'
grid (we assume that both displacements are discretized over the
same grid)
time_scaling : float
scaling factor applied to new_displacement. The effect may be
interpreted as moving the current_displacement vectors along a
fraction (time_scaling) of new_displacement.
Returns:
updated : array, shape (the same as new_displacement)
the warped displacement field
mean_norm : the mean norm of all vectors in current_displacement
Extracts the rotational and spacing components from a matrix
Extracts the rotational and spacing (voxel dimensions) components from a
matrix. An image gradient represents the local variation of the image’s
gray values per voxel. Since we are iterating on the physical space, we
need to compute the gradients as variation per millimeter, so we need to
divide each gradient’s component by the voxel size along the corresponding
axis; that is what the spacings are used for. Since the image's gradients
are oriented along the grid axes, we also need to re-orient the gradients
to be given in physical space coordinates.
Parameters:
affine : array, shape (k, k), k = 3, 4
the matrix transforming grid coordinates to physical space.
Releases the resources no longer needed by the metric.
This method is called by the RegistrationOptimizer after the required
iterations have been computed (forward and/or backward) so that the
SimilarityMetric can safely delete any data it computed as part of the
initialization.
Prepares the metric to compute one displacement field iteration.
This method will be called before any compute_forward or
compute_backward call, this allows the Metric to pre-compute any useful
information for speeding up the update computations. This
initialization was needed in ANTS because the updates are called once
per voxel. In Python this is impractical, though.
Informs the metric how many pyramid levels are above the current one
Informs this metric of the number of pyramid levels above the current one.
The metric may change its behavior (e.g. number of inner iterations)
accordingly
Parameters:
levels : int
the number of levels above the current Gaussian Pyramid level
Informs the metric how many pyramid levels are below the current one
Informs this metric of the number of pyramid levels below the current one.
The metric may change its behavior (e.g. number of inner iterations)
accordingly
Parameters:
levels : int
the number of levels below the current Gaussian Pyramid level
Sets the moving image being compared against the static one.
Sets the moving image. The default behavior (of this abstract class) is
simply to assign the reference to an attribute, but
generalizations of the metric may need to perform other operations
Sets the static image being compared against the moving one.
Sets the static image. The default behavior (of this abstract class) is
simply to assign the reference to an attribute, but
generalizations of the metric may need to perform other operations
This is called by the optimizer just after setting the moving image.
This method allows the metric to compute any useful
information from knowing how the current moving image was generated
(as the transformation of an original moving image). This method is
called by the optimizer just after it sets the moving image.
Transformation will be an instance of DiffeomorphicMap or None if
the original_moving_image equals self.moving_image.
Parameters:
original_moving_image : array, shape (R, C) or (S, R, C)
original image from which the current moving image was generated
transformation : DiffeomorphicMap object
the transformation that was applied to the original image to generate
the current moving image
This is called by the optimizer just after setting the static image.
This method allows the metric to compute any useful
information from knowing how the current static image was generated
(as the transformation of an original static image). This method is
called by the optimizer just after it sets the static image.
Transformation will be an instance of DiffeomorphicMap or None
if the original_static_image equals self.static_image.
Parameters:
original_static_image : array, shape (R, C) or (S, R, C)
original image from which the current static image was generated
transformation : DiffeomorphicMap object
the transformation that was applied to the original image to generate
the current static image
Prepares the metric to compute one displacement field iteration.
Pre-computes the cross-correlation factors for efficient computation
of the gradient of the Cross Correlation w.r.t. the displacement field.
It also pre-computes the image gradients in the physical space by
re-orienting the gradients in the voxel space using the corresponding
affine transformations.
Parameters:
forward_step : boolean
if True, computes the Demons step in the forward direction
(warping the moving towards the static image). If False,
computes the backward step (warping the static image to the
moving image)
Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)
Computes the Gauss-Newton energy minimization step
Computes the Newton step to minimize this energy, i.e., minimizes the
linearized energy function with respect to the
regularized displacement field (this step does not require
post-smoothing, as opposed to the demons step, which does not include
regularization). To accelerate convergence we use the multi-grid
Gauss-Seidel algorithm proposed by Bruhn and Weickert [8].
Parameters:
forward_step : boolean
if True, computes the Newton step in the forward direction
(warping the moving towards the static image). If False,
computes the backward step (warping the static image to the
moving image)
Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)
Prepares the metric to compute one displacement field iteration.
Pre-computes the transfer functions (hidden random variables) and
variances of the estimators. Also pre-computes the gradient of both
input images. Note that once the images are transformed to the opposite
modality, the gradient of the transformed images can be used with the
gradient of the corresponding modality in the same fashion as
diff-demons does for mono-modality images. If the flag
self.use_double_gradient is True these gradients are averaged.
This is called by the optimizer just after setting the moving image.
EMMetric takes advantage of the image dynamics by computing the
current moving image mask from the original_moving_image mask (warped
by nearest neighbor interpolation)
Parameters:
original_moving_image : array, shape (R, C) or (S, R, C)
the original moving image from which the current moving image was
generated; the current moving image is the one that was provided
via 'set_moving_image(…)', which may not be the same as the
original moving image but a warped version of it.
transformation : DiffeomorphicMap object
the transformation that was applied to the original_moving_image
to generate the current moving image
This is called by the optimizer just after setting the static image.
EMMetric takes advantage of the image dynamics by computing the
current static image mask from the original_static_image mask (warped
by nearest neighbor interpolation)
Parameters:
original_static_image : array, shape (R, C) or (S, R, C)
the original static image from which the current static image was
generated; the current static image is the one that was provided
via 'set_static_image(…)', which may not be the same as the
original static image but a warped version of it (even the static
image changes during Symmetric Normalization, not only the moving
one).
transformation : DiffeomorphicMap object
the transformation that was applied to the original_static_image
to generate the current static image
Computes the demons step proposed by Vercauteren et al. [6] for
the SSD metric.
Parameters:
forward_step : boolean
if True, computes the Demons step in the forward direction
(warping the moving towards the static image). If False,
computes the backward step (warping the static image to the
moving image)
Returns:
displacement : array, shape (R, C, 2) or (S, R, C, 3)
Computes the Gauss-Newton energy minimization step
Minimizes the linearized energy function (Newton step) defined by the
sum of squared differences of corresponding pixels of the input images
with respect to the displacement field.
Parameters:
forward_step : boolean
if True, computes the Newton step in the forward direction
(warping the moving towards the static image). If False,
computes the backward step (warping the static image to the
moving image)
Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the Gauss-Newton linear system
by first filtering (GS-iterate) the current level, then solves for the
residual at a coarser resolution and finally refines the solution at the
current resolution. This scheme corresponds to the V-cycle proposed by
Bruhn and Weickert [8].
Parameters:
n : int
number of levels of the multi-resolution algorithm (it will be called
recursively until level n == 0)
k : int
the number of iterations at each multi-resolution level
delta_field : array, shape (R, C)
the difference between the static and moving image (the 'derivative
w.r.t. time' in the optical flow model)
sigma_sq_field : array, shape (R, C)
the variance of the gray level value at each voxel, according to the
EM model (for SSD, it is 1 for all voxels). Inf and 0 values
are processed specially to support infinite and zero variance.
gradient_field : array, shape (R, C, 2)
the gradient of the moving image
target : array, shape (R, C, 2)
right-hand side of the linear system to be solved in Weickert's
multi-resolution algorithm
lambda_param : float
smoothness parameter; the larger its value the smoother the
displacement field
displacement : array, shape (R, C, 2)
the displacement field to start the optimization from
Returns:
energy : the energy of the EM (or SSD if sigma_sq_field[…]==1) metric at
this iteration
Multi-resolution Gauss-Seidel solver using V-type cycles
Multi-resolution Gauss-Seidel solver: solves the linear system by first
filtering (GS-iterate) the current level, then solves for the residual
at a coarser resolution and finally refines the solution at the current
resolution. This scheme corresponds to the V-cycle proposed by
Bruhn and Weickert [8].
Parameters:
n : int
number of levels of the multi-resolution algorithm (it will be called
recursively until level n == 0)
k : int
the number of iterations at each multi-resolution level
delta_field : array, shape (S, R, C)
the difference between the static and moving image (the 'derivative
w.r.t. time' in the optical flow model)
sigma_sq_field : array, shape (S, R, C)
the variance of the gray level value at each voxel, according to the
EM model (for SSD, it is 1 for all voxels). Inf and 0 values
are processed specially to support infinite and zero variance.
gradient_field : array, shape (S, R, C, 3)
the gradient of the moving image
target : array, shape (S, R, C, 3)
right-hand side of the linear system to be solved in Weickert's
multi-resolution algorithm
lambda_param : float
smoothness parameter; the larger its value the smoother the
displacement field
displacement : array, shape (S, R, C, 3)
the displacement field to start the optimization from
Returns:
energy : the energy of the EM (or SSD if sigma_sq_field[…]==1) metric at
this iteration
Compute histogram settings to store the PDF of input images
Parameters:
static : array
static image
moving : array
moving image
smask : array
mask of the static object being registered (a binary array with 1's
inside the object of interest and 0's along the background).
If None, the behaviour is equivalent to smask=ones_like(static)
mmask : array
mask of the moving object being registered (a binary array with 1's
inside the object of interest and 0's along the background).
If None, the behaviour is equivalent to mmask=ones_like(moving)
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram
w.r.t. each transformation parameter.
The gradient is stored in self.joint_grad.
Parameters:
theta : array, shape (n,)
parameters of the transformation to compute the gradient from
transform : instance of Transform
the transformation with respect to whose parameters the gradient
must be computed
static : array, shape (S, R, C)
static image
moving : array, shape (S, R, C)
moving image
grid2world : array, shape (4, 4)
we assume that both images have already been sampled at a common
grid. This transform must map voxel coordinates of this common grid
to physical coordinates of its corresponding voxel in the moving
image. For example, if the moving image was sampled on the static
image’s grid (this is the typical setting) using an aligning
matrix A, then
grid2world = A.dot(static_affine)
where static_affine is the transformation mapping static image’s
grid coordinates to physical space.
mgradient : array, shape (S, R, C, 3)
the gradient of the moving image
smask : array, shape (S, R, C), optional
mask of static object being registered (a binary array with 1's
inside the object of interest and 0's along the background).
The default is None, indicating all voxels are considered.
mmask : array, shape (S, R, C), optional
mask of moving object being registered (a binary array with 1’s
inside the object of interest and 0’s along the background).
The default is None, indicating all voxels are considered.
Computes the Gradient of the joint PDF w.r.t. transform parameters
Computes the vector of partial derivatives of the joint histogram
w.r.t. each transformation parameter.
The lists of intensities sval and mval are assumed to be sampled
from the static and moving images, respectively, at the same
physical points. Of course, the images may not be perfectly aligned
at the moment the sampling was performed. The resulting gradient
corresponds to the paired intensities according to the alignment at the
moment the images were sampled.
The gradient is stored in self.joint_grad.
Parameters:
theta : array, shape (n,)
parameters to compute the gradient at
transform : instance of Transform
the transformation with respect to whose parameters the gradient
must be computed
sval : array, shape (m,)
sampled intensities from the static image at sampled_points
mval : array, shape (m,)
sampled intensities from the moving image at sampled_points
sample_points : array, shape (m, 3)
coordinates (in physical space) of the points the images were
sampled at
mgradient : array, shape (m, 3)
the gradient of the moving image at the sample points
Computes the Probability Density Functions of two images
The joint PDF is stored in self.joint. The marginal distributions
corresponding to the static and moving images are computed and
stored in self.smarginal and self.mmarginal, respectively.
Parameters:
static : array, shape (S, R, C)
static image
moving : array, shape (S, R, C)
moving image
smask : array, shape (S, R, C)
mask of static object being registered (a binary array with 1's
inside the object of interest and 0's along the background).
If None, ones_like(static) is used as mask.
mmask : array, shape (S, R, C)
mask of moving object being registered (a binary array with 1’s
inside the object of interest and 0’s along the background).
If None, ones_like(moving) is used as mask.
Computes the Probability Density Functions from a set of samples
The lists of intensities sval and mval are assumed to be sampled
from the static and moving images, respectively, at the same
physical points. Of course, the images may not be perfectly aligned
at the moment the sampling was performed. The resulting distributions
correspond to the paired intensities according to the alignment at the
moment the images were sampled.
The joint PDF is stored in self.joint. The marginal distributions
corresponding to the static and moving images are computed and
stored in self.smarginal and self.mmarginal, respectively.
Parameters:
sval : array, shape (n,)
    sampled intensities from the static image at sampled_points
mval : array, shape (n,)
    sampled intensities from the moving image at sampled_points
Take floor(total_voxels/k) samples from a (2D or 3D) grid
The sampling is made by taking all pixels whose index (in lexicographical
order) is a multiple of k. Each selected point is slightly perturbed by
adding a realization of a normally distributed random variable and then
mapped to physical space by the given grid-to-space transform.
The lexicographical order of the pixels in a grid of shape (a, b, c) is
defined by assigning to each voxel position (i, j, k) the integer index
F((i, j, k)) = i * (b * c) + j * (c) + k
and sorting increasingly by this index.
Parameters:
k : int
    the sampling rate, as described before
shape : array, shape (dim,)
    the shape of the grid to be sampled
grid2world : array, shape (dim+1, dim+1)
    the grid-to-space transform
sigma : float
    the standard deviation of the Normal random distortion to be applied
    to the sampled points
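Examples
A pure-NumPy sketch of the sampling scheme described above (the helper
below is illustrative, not the compiled implementation):
>>> import numpy as np
>>> def sample_regular(k, shape, grid2world, sigma=0.0, rng=None):
...     if rng is None:
...         rng = np.random.default_rng()
...     n = int(np.prod(shape))
...     flat = np.arange(0, n, k)  # lexicographic indices, multiples of k
...     ijk = np.stack(np.unravel_index(flat, shape), axis=1).astype(float)
...     ijk += rng.normal(scale=sigma, size=ijk.shape)  # random jitter
...     ones = np.ones((ijk.shape[0], 1))
...     # map homogeneous grid coordinates to physical space
...     return (np.hstack([ijk, ones]) @ grid2world.T)[:, :-1]
>>> pts = sample_regular(10, (20, 20, 20), np.eye(4), sigma=0.25)
>>> pts.shape
(800, 3)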
Reslice data with new voxel resolution defined by new_zooms.
Parameters:
data : array, shape (I, J, K) or (I, J, K, N)
    3D volume or 4D volume with datasets
affine : array, shape (4, 4)
    mapping from voxel coordinates to world coordinates
zooms : tuple, shape (3,)
    voxel size for (i, j, k) dimensions
new_zooms : tuple, shape (3,)
    new voxel size for (i, j, k) after resampling
order : int, from 0 to 5
    order of interpolation for resampling/reslicing: 0 is nearest-neighbor
    interpolation, 1 is trilinear, etc. If you do not want any smoothing,
    0 is the option you need.
mode : string ('constant', 'nearest', 'reflect' or 'wrap')
    Points outside the boundaries of the input are filled according
    to the given mode.
cval : float
    Value used for points outside the boundaries of the input if
    mode='constant'.
num_processes : int, optional
    Split the calculation to a pool of children processes. This only
    applies to 4D data arrays. Default is 1. If < 0 the maximal number
    of cores minus num_processes + 1 is used (enter -1 to use as many
    cores as possible). 0 raises an error.
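Examples
A minimal sketch of reslicing a volume to isotropic 2 mm voxels (the
input data, affine and zooms are illustrative):
>>> import numpy as np
>>> from dipy.align.reslice import reslice
>>> data = np.random.random((64, 64, 30))    # toy anisotropic volume
>>> affine = np.diag([1., 1., 2.5, 1.])      # 1 x 1 x 2.5 mm voxels
>>> zooms = (1., 1., 2.5)
>>> new_zooms = (2., 2., 2.)                 # target isotropic spacing
>>> data2, affine2 = reslice(data, affine, zooms, new_zooms)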
Returns the voxel-to-space transformation associated with the
sub-sampled image at a particular resolution of the scale space (note
that this object does not explicitly subsample the smoothed images, but
only provides the properties the sub-sampled images must have).
Parameters:
level : int, 0 <= level < L, (L = number of resolutions)
    the scale space level to get the affine transform from
Returns:
the affine (voxel-to-space) transform at the requested resolution
Returns the space-to-voxel transformation associated with the
sub-sampled image at a particular resolution of the scale space (note
that this object does not explicitly subsample the smoothed images, but
only provides the properties the sub-sampled images must have).
Parameters:
level : int, 0 <= level < L, (L = number of resolutions)
    the scale space level to get the inverse transform from
Returns:
the inverse (space-to-voxel) transform at the requested resolution or
None if an error occurred
Shape the sub-sampled image must have at a particular level.
Returns the shape the sub-sampled image must have at a particular
resolution of the scale space (note that this object does not
explicitly subsample the smoothed images, but only provides the
properties the sub-sampled images must have).
Parameters:
level : int, 0 <= level < L, (L = number of resolutions)
    the scale space level to get the sub-sampled shape from
Returns:
the sub-sampled shape at the requested resolution or None if an error
occurred
Ratio of voxel size from pyramid level from_level to to_level.
Given two scale space resolutions a = from_level, b = to_level,
returns the ratio of voxel size at level b to voxel size at level a
(the factor that must be used to multiply voxels at level a to
'expand' them to level b).
Parameters:
from_level : int, 0 <= from_level < L, (L = number of resolutions)
    the resolution to expand voxels from
to_level : int, 0 <= to_level < from_level
    the resolution to expand voxels to
Returns:
factors : array, shape (k,), k = 2, 3
    the expand factors (a scalar for each voxel dimension)
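Examples
The ratio described above can be checked directly from the voxel
spacings at the two levels (a NumPy sketch with illustrative spacings):
>>> import numpy as np
>>> spacing_a = np.array([1., 1., 1.])   # voxel size at from_level
>>> spacing_b = np.array([2., 2., 2.])   # voxel size at to_level
>>> factors = spacing_b / spacing_a      # expands level-a voxels to level b
>>> factors
array([2., 2., 2.])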
Adjustment factor for input-spacing to reflect voxel sizes at level.
Returns the scaling factor that needs to be applied to the input
spacing (the voxel sizes of the image at level 0 of the scale space) to
transform them to voxel sizes at the requested level.
Parameters:
level : int, 0 <= level < L, (L = number of resolutions)
    the scale space level to get the scalings from
Returns:
the scaling factors from the original spacing to the spacings at the
requested level
Spacings the sub-sampled image must have at a particular level.
Returns the spacings (voxel sizes) the sub-sampled image must have at a
particular resolution of the scale space (note that this object does
not explicitly subsample the smoothed images, but only provides the
properties the sub-sampled images must have).
Parameters:
level : int, 0 <= level < L, (L = number of resolutions)
    the scale space level to get the spacings from
Returns:
the spacings (voxel sizes) at the requested resolution or None if an
error occurred
Bundle-based Minimum Distance for joint optimization.
This cost function is used by the StreamlineLinearRegistration class when
running halfway streamline linear registration for unbiased groupwise
bundle registration and atlasing.
It computes the BMD distance after moving both static and moving bundles to
a halfway space in between both.
Methods
setup(static, moving)
distance(xopt)
Notes
In this metric both static and moving bundles are treated equally (i.e.,
there is no static reference bundle as both are intended to move). The
naming convention is kept for consistency.
List of affine parameters as a 1D vector. These affine parameters
are used to derive the corresponding halfway transformation
parameters for each bundle.
Transformation (4, 4) matrix to start the registration. mat
is applied to moving. Default is None, which means that the initial
transformation is generated by shifting the centers of the moving
and static sets of streamlines to the origin.
We minimize the distance between moving streamlines as they align
with the static streamlines.
Parameters:
t : ndarray
    t is a vector of affine transformation parameters with
    size at least 6.
    If the size is 6, t is interpreted as translation + rotation.
    If the size is 7, t is interpreted as translation + rotation +
    isotropic scaling.
    If the size is 12, t is interpreted as translation + rotation +
    scaling + shearing.
static : list
    Static streamlines
moving : list
    Moving streamlines. These will be transformed to align with
    the static streamlines
num_threads : int, optional
    Number of threads. If -1 then all available threads will be used.
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align
with the static streamlines.
Parameters:
t : ndarray
    t is a vector of affine transformation parameters with
    size at least 6.
    If the size is 6, t is interpreted as translation + rotation.
    If the size is 7, t is interpreted as translation + rotation +
    isotropic scaling.
    If the size is 12, t is interpreted as translation + rotation +
    scaling + shearing.
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align
with the static streamlines.
Parameters:
t : array
    1D array. t is a vector of affine transformation parameters with
    size at least 6.
    If the size is 6, t is interpreted as translation + rotation.
    If the size is 7, t is interpreted as translation + rotation +
    isotropic scaling.
    If the size is 12, t is interpreted as translation + rotation +
    scaling + shearing.
static : array
    N*M x 3 array. All the points of the static streamlines, with the
    order of streamlines intact, where N is the number of streamlines
    and M is the number of points per streamline.
moving : array
    K*M x 3 array. All the points of the moving streamlines, with the
    order of streamlines intact, where K is the number of streamlines
    and M is the number of points per streamline.
block_size : int
    Number of points per streamline. All streamlines in static and moving
    should have the same number of points M.
num_threads : int, optional
    Number of threads to be used for OpenMP parallelization. If None
    (default) the value of the OMP_NUM_THREADS environment variable is
    used if it is set, otherwise all available threads are used. If < 0,
    the maximal number of threads minus |num_threads + 1| is used (enter
    -1 to use as many threads as possible). 0 raises an error.
Returns:
cost : float
Notes
This is a faster implementation of bundle_min_distance, which requires
that all the points of each streamline are allocated into an ndarray
(of shape N*M by 3, with N the number of streamlines and M the number
of points per streamline). This can be done by calling
dipy.tracking.streamline.unlist_streamlines.
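Examples
A sketch of preparing bundles for this function with unlist_streamlines
(toy streamlines; all must share the same number of points M):
>>> import numpy as np
>>> from dipy.tracking.streamline import unlist_streamlines
>>> bundle = [np.random.random((20, 3)) for _ in range(50)]  # 50 streamlines
>>> points, offsets = unlist_streamlines(bundle)  # stack into (50*20, 3)
>>> points.shape
(1000, 3)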
MDF-based pairwise distance optimization function (MIN).
We minimize the distance between moving streamlines as they align
with the static streamlines.
Parameters:
t : array
    1D array. t is a vector of affine transformation parameters with
    size at least 6.
    If the size is 6, t is interpreted as translation + rotation.
    If the size is 7, t is interpreted as translation + rotation +
    isotropic scaling.
    If the size is 12, t is interpreted as translation + rotation +
    scaling + shearing.
static : array
    N*M x 3 array. All the points of the static streamlines, with the
    order of streamlines intact, where N is the number of streamlines
    and M is the number of points per streamline.
moving : array
    K*M x 3 array. All the points of the moving streamlines, with the
    order of streamlines intact, where K is the number of streamlines
    and M is the number of points per streamline.
block_size : int
    Number of points per streamline. All streamlines in static and moving
    should have the same number of points M.
This is a utility function that allows, for example, affine
registration using Streamline-based Linear Registration (SLR)
[10] by starting with translation first,
then rigid, then similarity, then scaling and finally affine.
Similarly, if, for example, you want to perform a rigid registration,
you start with translation first. This progressive strategy can help
with finding the optimal parameters of the final transformation (see
the usage sketch after the parameter list below).
Parameters:
static : Streamlines
    Static streamlines.
moving : Streamlines
    Moving streamlines.
metric : StreamlineDistanceMetric
    Distance metric for registration optimization.
x0 : string
    Could be any of 'translation', 'rigid', 'similarity', 'scaling',
    'affine'
bounds : array
    Boundaries of registration parameters. See the variable
    DEFAULT_BOUNDS for an example.
method : string
    'L_BFGS_B' or 'Powell' optimizers can be used. Default is 'L_BFGS_B'.
verbose : bool, optional
    If True, log messages.
num_threads : int, optional
    Number of threads to be used for OpenMP parallelization. If None
    (default) the value of the OMP_NUM_THREADS environment variable is
    used if it is set, otherwise all available threads are used. If < 0,
    the maximal number of threads minus |num_threads + 1| is used (enter
    -1 to use as many threads as possible). 0 raises an error. Only
    metrics using OpenMP will use this variable.
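Examples
A sketch of the progressive idea using DIPY's StreamlineLinearRegistration
(toy bundles; the two-stage rigid-then-affine sequence and all data are
illustrative assumptions):
>>> import numpy as np
>>> from dipy.align.streamlinear import StreamlineLinearRegistration
>>> from dipy.tracking.streamline import set_number_of_points
>>> static = [np.cumsum(np.random.random((30, 3)), axis=0) for _ in range(20)]
>>> moving = [s + np.array([2., 0., 0.]) for s in static]  # shifted copy
>>> static = set_number_of_points(static, 20)
>>> moving = set_number_of_points(moving, 20)
>>> srr = StreamlineLinearRegistration(x0='rigid')   # coarse model first
>>> srm = srr.optimize(static=static, moving=moving)
>>> moved = srm.transform(moving)                    # refine with x0='affine' next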
Utility function for registering large tractograms.
For efficiency, we apply the registration on cluster centroids and remove
small clusters.
See [9], [10] and
[11] for details about the methods involved.
Parameters:
static : Streamlines
    Fixed or reference set of streamlines.
moving : Streamlines
    Moving streamlines.
x0 : str, optional
    rigid, similarity or affine transformation model
rm_small_clusters : int, optional
    Remove clusters that have fewer than rm_small_clusters streamlines.
maxiter : int, optional
    Maximum number of iterations to perform.
select_random : int, optional
    If not None, select a random number of streamlines to apply
    clustering to.
verbose : bool, optional
    If True, logs information about the optimization.
greater_than : int, optional
    Keep streamlines that have length greater than this value.
less_than : int, optional
    Keep streamlines that have length less than this value.
qbx_thr : variable int
    Thresholds for QuickBundlesX.
nb_pts : int, optional
    Number of points for discretizing each streamline.
progressive : boolean, optional
    True to enable progressive registration.
rng : np.random.Generator, optional
    If None, a random generator is created inside the function.
num_threads : int, optional
    Number of threads to be used for OpenMP parallelization. If None
    (default) the value of the OMP_NUM_THREADS environment variable is
    used if it is set, otherwise all available threads are used. If < 0,
    the maximal number of threads minus |num_threads + 1| is used (enter
    -1 to use as many threads as possible). 0 raises an error. Only
    metrics using OpenMP will use this variable.
Notes
The order of operations is the following: first, short or long
streamlines are removed; second, the tractogram (or a random selection
of it) is clustered with QuickBundles; then SLR [10] is applied (see the
sketch below).
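Examples
A hedged sketch of calling this utility on two whole-brain tractograms
(the four return values follow the behavior described above; the exact
keyword defaults are assumptions):
>>> from dipy.align.streamlinear import slr_with_qbx
>>> moved, transform, centroids_static, centroids_moving = slr_with_qbx(
...     static, moving, x0='affine', rm_small_clusters=50,
...     qbx_thr=[40, 30, 20, 15])   # doctest: +SKIP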
Function to perform unbiased groupwise bundle registration
All bundles are moved to the same space by iteratively applying halfway
streamline linear registration in pairs. With each iteration, bundles get
closer to each other until the procedure converges and there is no more
improvement.
bundles : list
    List with streamlines of the bundles to be registered.
x0 : str, optional
    rigid, similarity or affine transformation model.
tol : float, optional
    Tolerance value to be used to assume convergence.
max_iter : int, optional
    Maximum number of iterations. Depending on the number of bundles to
    be registered this may need to be larger.
qbx_thr : variable int, optional
    Thresholds for QuickBundles used for clustering streamlines and
    reducing computational time. If None, no clustering is performed.
    Higher values cluster streamlines into a smaller number of centroids.
nb_pts : int, optional
    Number of points for discretizing each streamline.
select_random : int, optional
    Maximum number of streamlines for each bundle. If None, all the
    streamlines are used.
This is a 1D vector of affine transformation parameters with
size at least 3.
If the size is 3, t is interpreted as translation.
If the size is 6, t is interpreted as translation + rotation.
If the size is 7, t is interpreted as translation + rotation +
isotropic scaling.
If the size is 9, t is interpreted as translation + rotation +
anisotropic scaling.
If size is 12, t is interpreted as translation + rotation +
scaling + shearing.
Bundle shape difference analysis using magnitude from BundleWarp
displacements and BUAN.
Depending on the number of points of a streamline, and the number of
segments requested, multiple points may be considered for the computation
of a given segment; a segment may contain information from a single point;
or some segments may not contain information from any points. In the latter
case, the segment will contain an np.nan value. The point-to-segment
mapping is defined by assignment_map(): for each segment index, the
point information at the matching index positions, as returned by
assignment_map(), is considered for the computation.
Parameters:
moving_aligned : Streamlines
    Linearly (affinely) moved bundle
deformed_bundle : Streamlines
    Nonlinearly moved (warped) bundle
no_disks : int, optional
    Number of segments to be created along the length of the bundle
plotting : bool, optional
    Plot the bundle shape profile
Returns:
shape_profile : np.ndarray
    Float array containing BundleWarp displacement magnitudes along the
    length of the bundle
Computes the Sum of Squared Differences between the static and moving image.
Those differences are given by delta_field
Parameters:
delta_field : array, shape (R, C)
    the difference between the static and moving image (the 'derivative
    w.r.t. time' in the optical flow model)
Returns:
energy : float
    the SSD energy at this iteration
Notes
The numeric value of the energy is used only to detect convergence.
This function returns only the energy corresponding to the data term
(excluding the energy corresponding to the regularization term) because
the Greedy-SyN algorithm is an unconstrained gradient descent algorithm
in the space of diffeomorphisms: in each iteration it makes a step
along the negative smoothed gradient (of the data term) and then makes
sure the resulting diffeomorphisms are invertible using an explicit
inversion algorithm. Since it is not clear how to reflect the energy
corresponding to this re-projection to the space of diffeomorphisms,
a more precise energy computation including the regularization term
is useless. Instead, convergence is checked considering the data-term
energy only and detecting oscillations in the energy profile.
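Examples
The data-term energy described above is just the sum of squared residuals
(a NumPy sketch with a toy difference image):
>>> import numpy as np
>>> delta_field = np.random.random((64, 64)) - 0.5  # static - moving (toy)
>>> energy = float(np.sum(delta_field ** 2))        # SSD data-term energy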
Computes the Sum of Squared Differences between the static and moving volume
Those differences are given by delta_field
Parameters:
delta_field : array, shape (S, R, C)
    the difference between the static and moving volume (the 'derivative
    w.r.t. time' in the optical flow model)
Returns:
energy : float
    the SSD energy at this iteration
Notes
The numeric value of the energy is used only to detect convergence.
This function returns only the energy corresponding to the data term
(excluding the energy corresponding to the regularization term) because
the Greedy-SyN algorithm is an unconstrained gradient descent algorithm
in the space of diffeomorphisms: in each iteration it makes a step
along the negative smoothed gradient (of the data term) and then makes
sure the resulting diffeomorphisms are invertible using an explicit
inversion algorithm. Since it is not clear how to reflect the energy
corresponding to this re-projection to the space of diffeomorphisms,
a more precise energy computation including the regularization term
is useless. Instead, convergence is checked considering the data-term
energy only and detecting oscillations in the energy profile.
The residual displacement field to be fit on the next iteration
Computes the residual displacement field corresponding to the current
displacement field in the Multi-resolution Gauss-Seidel solver proposed by
Bruhn and Weickert[8].
Parameters:
delta_field : array, shape (R, C)
    the difference between the static and moving image (the 'derivative
    w.r.t. time' in the optical flow model)
sigmasq_field : array, shape (R, C)
    the variance of the gray level value at each voxel, according to the
    EM model (for SSD, it is 1 for all voxels). Inf and 0 values
    are processed specially to support infinite and zero variance.
gradient_field : array, shape (R, C, 2)
    the gradient of the moving image
target : array, shape (R, C, 2)
    right-hand side of the linear system to be solved in Weickert's
    multi-resolution algorithm
lambda_param : float
    smoothness parameter in the objective function
d : array, shape (R, C, 2)
    the current displacement field to compute the residual from
residual : array, shape (R, C, 2)
    the displacement field to put the residual into
Returns:
residual : array, shape (R, C, 2)
    the residual displacement field. If residual was None as input, then
    a new field is returned; otherwise the same array is returned
The residual displacement field to be fit on the next iteration
Computes the residual displacement field corresponding to the current
displacement field (given by ‘disp’) in the Multi-resolution
Gauss-Seidel solver proposed by Bruhn and Weickert[8].
Parameters:
delta_field : array, shape (S, R, C)
    the difference between the static and moving image (the 'derivative
    w.r.t. time' in the optical flow model)
sigmasq_field : array, shape (S, R, C)
    the variance of the gray level value at each voxel, according to the
    EM model (for SSD, it is 1 for all voxels). Inf and 0 values
    are processed specially to support infinite and zero variance.
gradient_field : array, shape (S, R, C, 3)
    the gradient of the moving image
target : array, shape (S, R, C, 3)
    right-hand side of the linear system to be solved in Weickert's
    multi-resolution algorithm
lambda_param : float
    smoothness parameter in the objective function
disp : array, shape (S, R, C, 3)
    the current displacement field to compute the residual from
residual : array, shape (S, R, C, 3)
    the displacement field to put the residual into
Returns:
residual : array, shape (S, R, C, 3)
    the residual displacement field. If residual was None as input, then
    a new field is returned; otherwise the same array is returned
dipy.align.sumsqdiff.iterate_residual_displacement_field_ssd_2d(delta_field, sigmasq_field, grad, target, lambda_param, displacement_field)
One iteration of a large linear system solver for 2D SSD registration
Performs one iteration at one level of the Multi-resolution Gauss-Seidel
solver proposed by Bruhn and Weickert[8].
Parameters:
delta_field : array, shape (R, C)
    the difference between the static and moving image (the 'derivative
    w.r.t. time' in the optical flow model)
sigmasq_field : array, shape (R, C)
    the variance of the gray level value at each voxel, according to the
    EM model (for SSD, it is 1 for all voxels). Inf and 0 values
    are processed specially to support infinite and zero variance.
grad : array, shape (R, C, 2)
    the gradient of the moving image
target : array, shape (R, C, 2)
    right-hand side of the linear system to be solved in Weickert's
    multi-resolution algorithm
lambda_param : float
    smoothness parameter of the objective function
displacement_field : array, shape (R, C, 2)
    current displacement field to start the iteration from
Returns:
max_displacement : float
    the norm of the maximum change in the displacement field after the
    iteration
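Examples
A sketch of how such a solver is typically driven: perform Gauss-Seidel
sweeps until the maximum change in the displacement field falls below a
tolerance (the toy arrays and lambda value are illustrative):
>>> import numpy as np
>>> from dipy.align.sumsqdiff import iterate_residual_displacement_field_ssd_2d
>>> R, C = 32, 32
>>> delta_field = np.random.random((R, C)) - 0.5
>>> sigmasq_field = np.ones((R, C))       # SSD: unit variance everywhere
>>> grad = np.random.random((R, C, 2)) - 0.5
>>> target = np.zeros((R, C, 2))
>>> disp = np.zeros((R, C, 2))
>>> for _ in range(100):                  # doctest: +SKIP
...     max_disp = iterate_residual_displacement_field_ssd_2d(
...         delta_field, sigmasq_field, grad, target, 0.3, disp)
...     if max_disp < 1e-4:
...         break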
dipy.align.sumsqdiff.iterate_residual_displacement_field_ssd_3d(delta_field, sigmasq_field, grad, target, lambda_param, disp)
One iteration of a large linear system solver for 3D SSD registration
Performs one iteration at one level of the Multi-resolution Gauss-Seidel
solver proposed by Bruhn and Weickert[8].
Parameters:
delta_field : array, shape (S, R, C)
    the difference between the static and moving image (the 'derivative
    w.r.t. time' in the optical flow model)
sigmasq_field : array, shape (S, R, C)
    the variance of the gray level value at each voxel, according to the
    EM model (for SSD, it is 1 for all voxels). Inf and 0 values
    are processed specially to support infinite and zero variance.
grad : array, shape (S, R, C, 3)
    the gradient of the moving image
target : array, shape (S, R, C, 3)
    right-hand side of the linear system to be solved in Weickert's
    multi-resolution algorithm
lambda_param : float
    smoothness parameter of the objective function
disp : array, shape (S, R, C, 3)
    the displacement field to start the optimization from
Returns:
max_displacement : float
    the norm of the maximum change in the displacement field after the
    iteration
Base class (contract) for all transforms for affine image registration
Each transform must define the following (fast, nogil) methods:
_jacobian(theta, x, J): receives a parameter vector theta, a point
x, and a matrix J with shape (dim, len(theta)). It must write in J the
Jacobian of the transform with parameters theta evaluated at x.
_get_identity_parameters(theta): receives a vector theta whose length is
the number of parameters of the transform and sets in theta the values
that define the identity transform.
_param_to_matrix(theta, T): receives a parameter vector theta, and a
matrix T of shape (dim + 1, dim + 1) and writes in T the matrix
representation of the transform with parameters theta
This base class defines the (slow, convenient) python wrappers for each
of the above functions, which also do parameter checking and raise
a ValueError in case the provided parameters are invalid.
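Examples
A pure-Python sketch of this contract for a 2D translation transform (the
real subclasses implement the fast, nogil versions in Cython; the class
below is illustrative only):
>>> import numpy as np
>>> class TranslationTransform2D:
...     dim = 2
...     n_params = 2
...     def jacobian(self, theta, x):
...         J = np.zeros((self.dim, len(theta)))
...         J[0, 0] = 1.0  # d(x + t)/dt is the identity,
...         J[1, 1] = 1.0  # independent of the evaluation point x
...         return J
...     def get_identity_parameters(self):
...         return np.zeros(self.n_params)   # zero translation
...     def param_to_matrix(self, theta):
...         T = np.eye(self.dim + 1)
...         T[:self.dim, self.dim] = theta   # homogeneous translation matrix
...         return T
>>> TranslationTransform2D().param_to_matrix(np.array([1.0, 2.0]))
array([[1., 0., 1.],
       [0., 1., 2.],
       [0., 0., 1.]])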
Computes the composition of two 2D displacement fields
Computes the composition of the two 2-D displacements d1 and d2. The
evaluation of d2 at non-lattice points is computed using bi-linear
interpolation. The actual composition is computed as:
comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]
where t = time_scaling, A = premult_index and B = premult_disp, and i
denotes the voxel coordinates of a voxel in d1's grid. Using these
parameters it is possible to compose vector fields with arbitrary
discretizations: let R and S be the voxel-to-space transformations
associated with d1 and d2, respectively; then the composition at a voxel
with coordinates i in d1's grid is given by
comp[i] = d1[i] + d2[Sinv * (R * i + d1[i])]
which corresponds to choosing A = Sinv * R, B = Sinv and t = 1, where
Sinv = S^{-1}.
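Examples
A NumPy/SciPy sketch of the 2D composition formula above, with
map_coordinates playing the role of the bilinear interpolation of d2 (the
helper is illustrative, not the compiled implementation):
>>> import numpy as np
>>> from scipy.ndimage import map_coordinates
>>> def compose_2d(d1, d2, t, A, B):
...     R, C = d1.shape[:2]
...     i = np.stack(np.meshgrid(np.arange(R), np.arange(C),
...                              indexing='ij'), axis=-1).astype(float)
...     # A*i + B*d1[i]: only the linear part of B acts on the vectors d1[i]
...     pos = i @ A[:2, :2].T + A[:2, 2] + d1 @ B[:2, :2].T
...     d2_at = np.stack([map_coordinates(d2[..., k],
...                       [pos[..., 0], pos[..., 1]], order=1)
...                       for k in range(2)], axis=-1)
...     return d1 + t * d2_at
>>> comp = compose_2d(np.zeros((8, 8, 2)), np.zeros((8, 8, 2)),
...                   1.0, np.eye(3), np.eye(3))
>>> comp.shape
(8, 8, 2)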
Computes the composition of two 3D displacement fields
Computes the composition of the two 3-D displacements d1 and d2. The
evaluation of d2 at non-lattice points is computed using tri-linear
interpolation. The actual composition is computed as:
comp[i] = d1[i] + t * d2[ A * i + B * d1[i] ]
where t = time_scaling, A = premult_index and B = premult_disp, and i
denotes the voxel coordinates of a voxel in d1's grid. Using these
parameters it is possible to compose vector fields with arbitrary
discretizations: let R and S be the voxel-to-space transformations
associated with d1 and d2, respectively; then the composition at a voxel
with coordinates i in d1's grid is given by
comp[i] = d1[i] + d2[Sinv * (R * i + d1[i])]
which corresponds to choosing A = Sinv * R, B = Sinv and t = 1, where
Sinv = S^{-1}.
Creates the invertible displacement fields used in Chen et al. eqs.
9 and 10 [1]
Parameters:
nrows : int
    number of rows in the resulting harmonic field
ncols : int
    number of columns in the resulting harmonic field
b, m : float
    parameters of the harmonic field (as in [1]). To understand the effect
    of these parameters, please consider plotting a deformed image
    (a circle or a grid) under the deformation field, or see the examples
    in [1]
Returns:
d : array, shape (nrows, ncols, 2)
    the harmonic displacement field
inv : array, shape (nrows, ncols, 2)
    the analytical inverse of the harmonic displacement field
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008).
A simple fixed-point approach to invert a deformation field.
Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Creates the invertible displacement fields used in Chen et al. eqs.
9 and 10 [1] computing the angle theta along z-slides.
Parameters:
nslices : int
    number of slices in the resulting harmonic field
nrows : int
    number of rows in the resulting harmonic field
ncols : int
    number of columns in the resulting harmonic field
b, f : float
    parameters of the harmonic field (as in [1]). To understand the effect
    of these parameters, please consider plotting a deformed image
    (e.g. a circle or a grid) under the deformation field, or see the
    examples in [1]
Returns:
d : array, shape (nslices, nrows, ncols, 3)
    the harmonic displacement field
inv : array, shape (nslices, nrows, ncols, 3)
    the analytical inverse of the harmonic displacement field
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008).
A simple fixed-point approach to invert a deformation field.
Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Creates a random 2D displacement ‘exactly’ mapping points of two grids
Creates a random 2D displacement field mapping points of an input discrete
domain (with dimensions given by from_shape) to points of an output
discrete domain (with shape given by to_shape). The affine matrices
bringing discrete coordinates to physical space are given by
from_grid2world (for the displacement field discretization) and
to_grid2world (for the target discretization). Since this function is
intended to be used for testing, voxels in the input domain will never be
assigned to boundary voxels on the output domain.
Parameters:
from_shape : array, shape (2,)
    the grid shape where the displacement field will be defined on.
from_grid2world : array, shape (3, 3)
    the grid-to-space transformation of the displacement field
to_shape : array, shape (2,)
    the grid shape where the deformation field will map the input grid to.
to_grid2world : array, shape (3, 3)
    the grid-to-space transformation of the mapped grid
rng : numpy.random.Generator, optional
    NumPy's random generator for setting seed values when needed.
    Default is None.
Returns:
output : array, shape = from_shape
    the random displacement field in the physical domain
int_field : array, shape = from_shape
    the assignment of each point in the input grid to the target grid
Creates a random 3D displacement ‘exactly’ mapping points of two grids
Creates a random 3D displacement field mapping points of an input discrete
domain (with dimensions given by from_shape) to points of an output
discrete domain (with shape given by to_shape). The affine matrices
bringing discrete coordinates to physical space are given by
from_grid2world (for the displacement field discretization) and
to_grid2world (for the target discretization). Since this function is
intended to be used for testing, voxels in the input domain will never be
assigned to boundary voxels on the output domain.
Parameters:
from_shape : array, shape (3,)
    the grid shape where the displacement field will be defined on.
from_grid2world : array, shape (4, 4)
    the grid-to-space transformation of the displacement field
to_shape : array, shape (3,)
    the grid shape where the deformation field will map the input grid to.
to_grid2world : array, shape (4, 4)
    the grid-to-space transformation of the mapped grid
rng : numpy.random.Generator, optional
    NumPy's random generator for setting seed values when needed.
    Default is None.
Returns:
output : array, shape = from_shape
    the random displacement field in the physical domain
int_field : array, shape = from_shape
    the assignment of each point in the input grid to the target grid
Down-samples the 2D input vector field by a factor of 2
Down-samples the input vector field by a factor of 2. The value at each
pixel of the resulting field is the average of its surrounding pixels in
the original field.
Parameters:
field : array, shape (R, C)
    the vector field to be down-sampled
Returns:
down : array, shape (R', C')
    the down-sampled displacement field, where R' = ceil(R/2),
    C' = ceil(C/2)
Down-samples the input 3D vector field by a factor of 2
Down-samples the input vector field by a factor of 2. This operation
is equivalent to dividing the input image into 2x2x2 cubes and averaging
the 8 vectors. The resulting field consists of these average vectors.
Parameters:
field : array, shape (S, R, C)
    the vector field to be down-sampled
Returns:
down : array, shape (S', R', C')
    the down-sampled displacement field, where S' = ceil(S/2),
    R' = ceil(R/2), C' = ceil(C/2)
Down-samples the input image by a factor of 2. The value at each pixel
of the resulting image is the average of its surrounding pixels in the
original image.
Parameters:
field : array, shape (R, C)
    the image to be down-sampled
Returns:
down : array, shape (R', C')
    the down-sampled image, where R' = ceil(R/2), C' = ceil(C/2)
Down-samples the input volume by a factor of 2. The value at each voxel
of the resulting volume is the average of its surrounding voxels in the
original volume.
Parameters:
field : array, shape (S, R, C)
    the volume to be down-sampled
Returns:
down : array, shape (S', R', C')
    the down-sampled volume, where S' = ceil(S/2), R' = ceil(R/2),
    C' = ceil(C/2)
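Examples
A NumPy sketch of the 2-by-2 block averaging described above for the 2D
case (odd shapes are handled here by edge padding, which is equivalent to
averaging only the neighbors available at the borders):
>>> import numpy as np
>>> def downsample_2d(field):
...     R, C = field.shape[:2]
...     pad = ((0, R % 2), (0, C % 2)) + ((0, 0),) * (field.ndim - 2)
...     f = np.pad(field, pad, mode='edge')  # pad to even size
...     return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] +
...                    f[0::2, 1::2] + f[1::2, 1::2])
>>> downsample_2d(np.ones((5, 5))).shape
(3, 3)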
Computes the inverse of the given 2-D displacement field d using the
fixed-point algorithm [1].
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008).
A simple fixed-point approach to invert a deformation field.
Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Parameters:
d : array, shape (R, C, 2)
    the 2-D displacement field to be inverted
d_world2grid : array, shape (3, 3)
    the space-to-grid transformation associated with the displacement
    field d (transforming physical space coordinates to voxel coordinates
    of the displacement field grid)
spacing : array, shape (2,)
    the spacing between voxels (voxel size along each axis)
max_iter : int
    maximum number of iterations to be performed
tolerance : float
    maximum tolerated inversion error
start : array, shape (R, C)
    an approximation to the inverse displacement field (if no
    approximation is available, None can be provided and the start
    displacement field will be zero)
Returns:
p : array, shape (R, C, 2)
    the inverse displacement field
Notes
We assume that the displacement field is an endomorphism so that the shape
and voxel-to-space transformation of the inverse field’s discretization is
the same as those of the input displacement field. The ‘inversion error’ at
iteration t is defined as the mean norm of the displacement vectors of the
input displacement field composed with the inverse at iteration t.
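Examples
A NumPy/SciPy sketch of the fixed-point iteration of [1] for the 2D case,
assuming an identity grid-to-world transform so that physical and grid
coordinates coincide (illustrative, not the compiled implementation):
>>> import numpy as np
>>> from scipy.ndimage import map_coordinates
>>> def invert_2d(d, max_iter=50, tol=1e-3):
...     R, C = d.shape[:2]
...     grid = np.stack(np.meshgrid(np.arange(R), np.arange(C),
...                                 indexing='ij'), axis=-1).astype(float)
...     p = np.zeros_like(d)           # start from the zero field
...     for _ in range(max_iter):
...         pos = grid + p             # x + p(x)
...         d_at = np.stack([map_coordinates(d[..., k],
...                          [pos[..., 0], pos[..., 1]], order=1)
...                          for k in range(2)], axis=-1)
...         p_new = -d_at              # fixed point: p <- -d(x + p(x))
...         err = np.abs(p_new - p).max()
...         p = p_new
...         if err < tol:
...             break
...     return p
>>> p = invert_2d(np.zeros((8, 8, 2)))  # inverse of the zero field is zero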
Computes the inverse of the given 3-D displacement field d using the
fixed-point algorithm [1].
[1] Chen, M., Lu, W., Chen, Q., Ruchala, K. J., & Olivera, G. H. (2008).
A simple fixed-point approach to invert a deformation field.
Medical Physics, 35(1), 81. doi:10.1118/1.2816107
Parameters:
d : array, shape (S, R, C, 3)
    the 3-D displacement field to be inverted
d_world2grid : array, shape (4, 4)
    the space-to-grid transformation associated with the displacement
    field d (transforming physical space coordinates to voxel coordinates
    of the displacement field grid)
spacing : array, shape (3,)
    the spacing between voxels (voxel size along each axis)
max_iter : int
    maximum number of iterations to be performed
tol : float
    maximum tolerated inversion error
start : array, shape (S, R, C)
    an approximation to the inverse displacement field (if no
    approximation is available, None can be provided and the start
    displacement field will be zero)
Returns:
p : array, shape (S, R, C, 3)
    the inverse displacement field
Notes
We assume that the displacement field is an endomorphism so that the shape
and voxel-to-space transformation of the inverse field’s discretization is
the same as those of the input displacement field. The ‘inversion error’ at
iteration t is defined as the mean norm of the displacement vectors of the
input displacement field composed with the inverse at iteration t.
Linearly transforms all vectors of a 2D displacement field
Modifies the input displacement field by multiplying each displacement
vector by the given matrix. Note that the elements of the displacement
field are vectors, not points, so their last homogeneous coordinate is
zero, not one, and therefore the translation component of the affine
transform will not have any effect on them.
Linearly transforms all vectors of a 3D displacement field
Modifies the input displacement field by multiplying each displacement
vector by the given matrix. Note that the elements of the displacement
field are vectors, not points, so their last homogeneous coordinate is
zero, not one, and therefore the translation component of the affine
transform will not have any effect on them.
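Examples
Because only the linear part of the affine acts on displacement vectors,
the operation reduces to a matrix product (a 2D NumPy sketch; the affine
is illustrative):
>>> import numpy as np
>>> field = np.random.random((16, 16, 2))
>>> affine = np.array([[0., -1., 5.],
...                    [1.,  0., 7.],
...                    [0.,  0., 1.]])     # rotation plus translation
>>> reoriented = field @ affine[:2, :2].T  # translation column is ignored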
Resamples a 2D vector field to a custom target shape
Resamples the given 2D displacement field on a grid of the requested shape,
using the given scale factors. More precisely, the resulting displacement
field at each grid cell i is given by
D[i] = field[Diag(factors) * i]
Parameters:
factors : array, shape (2,)
    the scaling factors mapping (integer) grid coordinates in the
    resampled grid to (floating point) grid coordinates in the original
    grid
Resamples a 3D vector field to a custom target shape
Resamples the given 3D displacement field on a grid of the requested shape,
using the given scale factors. More precisely, the resulting displacement
field at each grid cell i is given by
D[i] = field[Diag(factors) * i]
Parameters:
factors : array, shape (3,)
    the scaling factors mapping (integer) grid coordinates in the
    resampled grid to (floating point) grid coordinates in the original
    grid
Simplifies a nonlinear warping function combined with an affine transform
Modifies the given deformation field by incorporating into it
an affine transformation and the voxel-to-space transforms associated
with the discretization of its domain and codomain.
The resulting transformation may be regarded as operating on the
image spaces given by the domain and codomain discretization.
More precisely, the resulting transform is of the form:
T[i] = W * d[U * i] + V * i    (1)
Where U = affine_idx_in, V = affine_idx_out, W = affine_disp.
Parameters:
d : array, shape (R', C', 2)
    the non-linear part of the transformation (displacement field)
affine_idx_in : array, shape (3, 3)
    the matrix U in eq. (1) above
affine_idx_out : array, shape (3, 3)
    the matrix V in eq. (1) above
affine_disp : array, shape (3, 3)
    the matrix W in eq. (1) above
out_shape : array, shape (2,)
    the number of rows and columns of the sampling grid
Returns:
out : array, shape = out_shape
    the deformation field out associated with T in eq. (1) such that:
    T[i] = i + out[i]
Notes
Both the direct and inverse transforms of a DiffeomorphicMap can be written
in this form:
Direct: Let D be the voxel-to-space transform of the domain’s
discretization, P be the pre-align matrix, Rinv the space-to-voxel
transform of the reference grid (the grid the displacement field
is defined on) and Cinv be the space-to-voxel transform of the
codomain’s discretization. Then, for each i in the domain’s grid,
the direct transform is given by
T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i
and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv
Inverse: Let C be the voxel-to-space transform of the codomain’s
discretization, Pinv be the inverse of the pre-align matrix, Rinv
the space-to-voxel transform of the reference grid (the grid the
displacement field is defined on) and Dinv be the space-to-voxel
transform of the domain’s discretization. Then, for each j in the
codomain’s grid, the inverse transform is given by
Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j
and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv
Simplifies a nonlinear warping function combined with an affine transform
Modifies the given deformation field by incorporating into it
an affine transformation and voxel-to-space transforms associated with
the discretization of its domain and codomain.
The resulting transformation may be regarded as operating on the
image spaces given by the domain and codomain discretization.
More precisely, the resulting transform is of the form:
T[i] = W * d[U * i] + V * i    (1)
Where U = affine_idx_in, V = affine_idx_out, W = affine_disp.
Parameters:
d : array, shape (S', R', C', 3)
    the non-linear part of the transformation (displacement field)
affine_idx_in : array, shape (4, 4)
    the matrix U in eq. (1) above
affine_idx_out : array, shape (4, 4)
    the matrix V in eq. (1) above
affine_disp : array, shape (4, 4)
    the matrix W in eq. (1) above
out_shape : array, shape (3,)
    the number of slices, rows and columns of the sampling grid
Returns:
out : array, shape = out_shape
    the deformation field out associated with T in eq. (1) such that:
    T[i] = i + out[i]
Notes
Both the direct and inverse transforms of a DiffeomorphicMap can be written
in this form:
Direct: Let D be the voxel-to-space transform of the domain’s
discretization, P be the pre-align matrix, Rinv the space-to-voxel
transform of the reference grid (the grid the displacement field
is defined on) and Cinv be the space-to-voxel transform of the
codomain’s discretization. Then, for each i in the domain’s grid,
the direct transform is given by
T[i] = Cinv * d[Rinv * P * D * i] + Cinv * P * D * i
and we identify U = Rinv * P * D, V = Cinv * P * D, W = Cinv
Inverse: Let C be the voxel-to-space transform of the codomain’s
discretization, Pinv be the inverse of the pre-align matrix, Rinv
the space-to-voxel transform of the reference grid (the grid the
displacement field is defined on) and Dinv be the space-to-voxel
transform of the domain’s discretization. Then, for each j in the
codomain’s grid, the inverse transform is given by
Tinv[j] = Dinv * Pinv * d[Rinv * C * j] + Dinv * Pinv * C * j
and we identify U = Rinv * C, V = Dinv * Pinv * C, W = Dinv * Pinv
Transforms a 2D image by an affine transform with bilinear interp.
Deforms the input image under the given affine transformation using
bi-linear interpolation. The shape of the resulting image
is given by ref_shape. If the affine matrix is None, it is taken as the
identity.
Parameters:
image : array, shape (R, C)
    the input image to be transformed
ref_shape : array, shape (2,)
    the shape of the resulting image
affine : array, shape (3, 3)
    the affine transform to be applied
Returns:
out : array, shape (R', C')
    the transformed image
Notes
The reason it is necessary to provide the intended shape of the resulting
image is because the affine transformation is defined on all R^{2}
but we must sample a finite lattice. Also the resulting shape may not be
necessarily equal to the input shape, unless we are interested in
endomorphisms only and not general diffeomorphisms.
Transforms a 2D image by an affine transform with NN interpolation
Deforms the input image under the given affine transformation using
nearest neighbor interpolation. The shape of the resulting image
is given by ref_shape. If the affine matrix is None, it is taken as the
identity.
Parameters:
image : array, shape (R, C)
    the input image to be transformed
ref_shape : array, shape (2,)
    the shape of the resulting image
affine : array, shape (3, 3)
    the affine transform to be applied
Returns:
out : array, shape (R', C')
    the transformed image
Notes
The reason it is necessary to provide the intended shape of the resulting
image is because the affine transformation is defined on all R^{2}
but we must sample a finite lattice. Also the resulting shape may not be
necessarily equal to the input shape, unless we are interested in
endomorphisms only and not general diffeomorphisms.
Transforms a 3D volume by an affine transform with trilinear interp.
Deforms the input volume under the given affine transformation using
tri-linear interpolation. The shape of the resulting volume
is given by ref_shape. If the affine matrix is None, it is taken as the
identity.
Parameters:
volume : array, shape (S, R, C)
    the input volume to be transformed
ref_shape : array, shape (3,)
    the shape of the resulting volume
affine : array, shape (4, 4)
    the affine transform to be applied
Returns:
out : array, shape (S', R', C')
    the transformed volume
Notes
The reason it is necessary to provide the intended shape of the resulting
volume is because the affine transformation is defined on all R^{3}
but we must sample a finite lattice. Also the resulting shape may not be
necessarily equal to the input shape, unless we are interested in
endomorphisms only and not general diffeomorphisms.
Transforms a 3D volume by an affine transform with NN interpolation
Deforms the input volume under the given affine transformation using
nearest neighbor interpolation. The shape of the resulting volume
is given by ref_shape. If the affine matrix is None, it is taken as the
identity.
Parameters:
volume : array, shape (S, R, C)
    the input volume to be transformed
ref_shape : array, shape (3,)
    the shape of the resulting volume
affine : array, shape (4, 4)
    the affine transform to be applied
Returns:
out : array, shape (S', R', C')
    the transformed volume
Notes
The reason it is necessary to provide the intended shape of the resulting
volume is because the affine transformation is defined on all R^{3}
but we must sample a finite lattice. Also the resulting shape may not be
necessarily equal to the input shape, unless we are interested in
endomorphisms only and not general diffeomorphisms.
Warps a 2D image using bi-linear interpolation
Deforms the input image under the given transformation. The warped image
is computed using bi-linear interpolation and is given by:
warped[i] = image[ C * d1[A*i] + B*i ]    (1)
where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes
the discrete coordinates of a voxel in the sampling grid of
shape = out_shape.
Parameters:
image : array, shape (R, C)
    the input image to be transformed
d1 : array, shape (R', C', 2)
    the displacement field driving the transformation
affine_idx_in : array, shape (3, 3)
    the matrix A in eq. (1) above
affine_idx_out : array, shape (3, 3)
    the matrix B in eq. (1) above
affine_disp : array, shape (3, 3)
    the matrix C in eq. (1) above
out_shape : array, shape (2,)
    the number of rows and columns of the sampling grid
Returns:
warped : array, shape = out_shape
    the transformed image
Notes
To illustrate the use of this function, consider a displacement field d1
with grid-to-space transformation R, an image with grid-to-space
transformation T and let’s say we want to sample the warped image on a
grid with grid-to-space transformation S (sampling grid). For each voxel
in the sampling grid with discrete coordinates i, the warped image is
given by:
warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ]
where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S,
B = Tinv * S, C = Tinv we can use this function to efficiently warp the
input image.
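Examples
The identification at the end of the note is a direct computation (a
NumPy sketch with illustrative grid-to-space matrices):
>>> import numpy as np
>>> R = np.diag([2., 2., 1.])    # displacement field grid-to-space
>>> T = np.diag([1., 1., 1.])    # image grid-to-space
>>> S = np.diag([.5, .5, 1.])    # sampling grid grid-to-space
>>> Rinv, Tinv = np.linalg.inv(R), np.linalg.inv(T)
>>> A = Rinv @ S                 # affine_idx_in
>>> B = Tinv @ S                 # affine_idx_out
>>> C = Tinv                     # affine_disp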
Warps a 2D image using nearest neighbor interpolation
Deforms the input image under the given transformation. The warped image
is computed using nearest-neighbor interpolation and is given by:
warped[i] = image[ C * d1[A*i] + B*i ]    (1)
where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes
the discrete coordinates of a voxel in the sampling grid of
shape = out_shape.
Parameters:
image : array, shape (R, C)
    the input image to be transformed
d1 : array, shape (R', C', 2)
    the displacement field driving the transformation
affine_idx_in : array, shape (3, 3)
    the matrix A in eq. (1) above
affine_idx_out : array, shape (3, 3)
    the matrix B in eq. (1) above
affine_disp : array, shape (3, 3)
    the matrix C in eq. (1) above
out_shape : array, shape (2,)
    the number of rows and columns of the sampling grid
Returns:
warped : array, shape = out_shape
    the transformed image
Notes
To illustrate the use of this function, consider a displacement field d1
with grid-to-space transformation R, an image with grid-to-space
transformation T and let’s say we want to sample the warped image on a
grid with grid-to-space transformation S (sampling grid). For each voxel
in the sampling grid with discrete coordinates i, the warped image is
given by:
warped[i] = image[Tinv * ( d1[Rinv * S * i] + S * i ) ]
where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S,
B = Tinv * S, C = Tinv we can use this function to efficiently warp the
input image.
Warps a 3D volume using tri-linear interpolation
Deforms the input volume under the given transformation. The warped volume
is computed using tri-linear interpolation and is given by:
warped[i] = volume[ C * d1[A*i] + B*i ]    (1)
where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes
the discrete coordinates of a voxel in the sampling grid of
shape = out_shape.
Parameters:
volume : array, shape (S, R, C)
    the input volume to be transformed
d1 : array, shape (S', R', C', 3)
    the displacement field driving the transformation
affine_idx_in : array, shape (4, 4)
    the matrix A in eq. (1) above
affine_idx_out : array, shape (4, 4)
    the matrix B in eq. (1) above
affine_disp : array, shape (4, 4)
    the matrix C in eq. (1) above
out_shape : array, shape (3,)
    the number of slices, rows and columns of the sampling grid
Returns:
warped : array, shape = out_shape
    the transformed volume
Notes
To illustrate the use of this function, consider a displacement field d1
with grid-to-space transformation R, a volume with grid-to-space
transformation T and let’s say we want to sample the warped volume on a
grid with grid-to-space transformation S (sampling grid). For each voxel
in the sampling grid with discrete coordinates i, the warped volume is
given by:
warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ]
where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S,
B = Tinv * S, C = Tinv we can use this function to efficiently warp the
input volume.
Warps a 3D volume using nearest-neighbor interpolation
Deforms the input volume under the given transformation. The warped volume
is computed using nearest-neighbor interpolation and is given by:
warped[i] = volume[ C * d1[A*i] + B*i ]    (1)
where A = affine_idx_in, B = affine_idx_out, C = affine_disp and i denotes
the discrete coordinates of a voxel in the sampling grid of
shape = out_shape.
Parameters:
volume : array, shape (S, R, C)
    the input volume to be transformed
d1 : array, shape (S', R', C', 3)
    the displacement field driving the transformation
affine_idx_in : array, shape (4, 4)
    the matrix A in eq. (1) above
affine_idx_out : array, shape (4, 4)
    the matrix B in eq. (1) above
affine_disp : array, shape (4, 4)
    the matrix C in eq. (1) above
out_shape : array, shape (3,)
    the number of slices, rows and columns of the sampling grid
Returns:
warped : array, shape = out_shape
    the transformed volume
Notes
To illustrate the use of this function, consider a displacement field d1
with grid-to-space transformation R, a volume with grid-to-space
transformation T and let’s say we want to sample the warped volume on a
grid with grid-to-space transformation S (sampling grid). For each voxel
in the sampling grid with discrete coordinates i, the warped volume is
given by:
warped[i] = volume[Tinv * ( d1[Rinv * S * i] + S * i ) ]
where Tinv = T^{-1} and Rinv = R^{-1}. By identifying A = Rinv * S,
B = Tinv * S, C = Tinv we can use this function to efficiently warp the
input volume.