core#

Core objects

Module: core.geometry#

Utility functions for algebra, etc.

sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z

sph2latlon(theta, phi)

Convert spherical coordinates to latitude and longitude.

normalized_vector(vec, *[, axis])

Return vector divided by its Euclidean (L2) norm

vector_norm(vec, *[, axis, keepdims])

Return vector Euclidean (L2) norm

rodrigues_axis_rotation(r, theta)

Rodrigues formula

nearest_pos_semi_def(B)

Least squares positive semi-definite tensor estimation.

sphere_distance(pts1, pts2, *[, radius, ...])

Distance across sphere surface between pts1 and pts2

cart_distance(pts1, pts2)

Cartesian distance between pts1 and pts2

vector_cosine(vecs1, vecs2)

Cosine of angle between two (sets of) vectors

lambert_equal_area_projection_polar(theta, phi)

Lambert Equal Area Projection from polar sphere to plane

lambert_equal_area_projection_cart(x, y, z)

Lambert Equal Area Projection from cartesian vector to plane

euler_matrix(ai, aj, ak, *[, axes])

Return homogeneous rotation matrix from Euler angles and axis sequence.

compose_matrix(*[, scale, shear, angles, ...])

Return 4x4 transformation matrix from sequence of transformations.

decompose_matrix(matrix)

Return sequence of transformations from transformation matrix.

circumradius(a, b, c)

a, b and c are 3-dimensional vectors which are the vertices of a triangle.

vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

compose_transformations(*mats)

Compose multiple 4x4 affine transformations in one 4x4 matrix

perpendicular_directions(v, *[, num, half])

Computes n evenly spaced perpendicular directions relative to a given vector v

dist_to_corner(affine)

Calculate the maximal distance from the center to a corner of a voxel, given an affine

is_hemispherical(vecs)

Test whether all points on a unit sphere lie in the same hemisphere.

Module: core.gradients#

GradientTable(gradients, *[, big_delta, ...])

Diffusion gradient information

b0_threshold_empty_gradient_message(bvals, ...)

Message about the b0_threshold value resulting in no gradient selection.

b0_threshold_update_slicing_message(slice_start)

Message for b0 threshold value update for slicing.

mask_non_weighted_bvals(bvals, b0_threshold)

Create a diffusion gradient-weighting mask for the b-values according to the provided b0 threshold value.

gradient_table_from_bvals_bvecs(bvals, bvecs, *)

Creates a GradientTable from a bvals array and a bvecs array

gradient_table_from_qvals_bvecs(qvals, ...)

A general function for creating diffusion MR gradients.

gradient_table_from_gradient_strength_bvecs(...)

A general function for creating diffusion MR gradients.

gradient_table(bvals, *[, bvecs, big_delta, ...])

A general function for creating diffusion MR gradients.

reorient_bvecs(gtab, affines, *[, atol])

Reorient the directions in a GradientTable.

generate_bvecs(N, *[, iters, rng])

Generates N bvectors.

round_bvals(bvals, *[, bmag])

"This function rounds the b-values

unique_bvals_tolerance(bvals, *[, tol])

Gives the unique b-values of the data, within a tolerance gap

get_bval_indices(bvals, bval, *[, tol])

Get indices where the b-value is bval

unique_bvals_magnitude(bvals, *[, bmag, rbvals])

This function gives the unique rounded b-values of the data

check_multi_b(gtab, n_bvals, *[, non_zero, bmag])

Check if you have enough different b-values in your gradient table

btens_to_params(btens, *[, ztol])

Compute trace, anisotropy and asymmetry parameters from b-tensors.

params_to_btens(bval, bdelta, b_eta)

Compute b-tensor from trace, anisotropy and asymmetry parameters.

ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

reorient_vectors(bvecs, current_ornt, ...[, ...])

Change the orientation of gradients or other vectors.

reorient_on_axis(bvecs, current_ornt, ...[, ...])

orientation_from_string(string_ornt)

Return an array representation of an ornt string.

orientation_to_string(ornt)

Return a string representation of a 3d ornt.

Module: core.graph#

A simple graph class

Graph()

A simple graph class

Module: core.histeq#

histeq(arr, *[, num_bins])

Performs a histogram equalization on arr.

Module: core.interpolation#

Interpolator(data, voxel_size)

Class to be subclassed by different interpolator types

NearestNeighborInterpolator(data, voxel_size)

Interpolates data using nearest neighbor interpolation

OutsideImage

RBFInterpolator(y, d[, neighbors, ...])

Radial basis function (RBF) interpolation in N dimensions.

Rbf(*args, **kwargs)

A class for radial basis function interpolation of functions from N-D scattered data to an M-D domain.

TriLinearInterpolator(data, voxel_size)

Interpolates data using trilinear interpolation

deprecate_with_version(message, *[, since, ...])

Return decorator function for deprecation warning / error.

interp_rbf(data, sphere_origin, sphere_target)

Interpolate data on the sphere, using radial basis functions.

interpolate_scalar_2d(image, locations)

Bilinear interpolation of a 2D scalar image

interpolate_scalar_3d(image, locations)

Trilinear interpolation of a 3D scalar image

interpolate_scalar_nn_2d(image, locations)

Nearest neighbor interpolation of a 2D scalar image

interpolate_scalar_nn_3d(image, locations)

Nearest neighbor interpolation of a 3D scalar image

interpolate_vector_2d(field, locations)

Bilinear interpolation of a 2D vector field

interpolate_vector_3d(field, locations)

Trilinear interpolation of a 3D vector field

map_coordinates_trilinear_iso(data, points, ...)

Trilinear interpolation (isotropic voxel size)

nearestneighbor_interpolate(data, point)

rbf_interpolation(data, sphere_origin, ...)

Interpolate data on the sphere, using radial basis functions, where data can be scalar- (1D), vector- (2D), or tensor-valued (3D and beyond).

trilinear_interp(data, index, voxel_size)

Interpolates vector from 4D data at 3D point given by index

trilinear_interpolate4d(data, point[, out])

Tri-linear interpolation along the last dimension of a 4d array

Module: core.ndindex#

ndindex(shape)

An N-dimensional iterator object to index arrays.

Module: core.onetime#

Descriptor support for NIPY.

Copyright (c) 2006-2011, NIPY Developers All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright

    notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above

    copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the NIPY Developers nor the names of any

    contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Utilities to support special Python descriptors [1], [2] in particular the use of a useful pattern for properties we call ‘one time properties’. These are object attributes which are declared as properties, but become regular attributes once they’ve been read the first time. They can thus be evaluated later in the object’s life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints.

A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their ‘untriggered’ state.

References#

[1]

How-To Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm

ResetMixin()

A Mixin class to add a .reset() method to users of OneTimeProperty.

OneTimeProperty(func)

A descriptor to make special properties that become normal attributes.

auto_attr(func)

Decorator to create OneTimeProperty attributes.
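The one-time-property pattern described above can be sketched in a few lines. This is an illustrative minimal version of the idea, not the NIPY implementation: because the descriptor defines no `__set__`, the instance attribute written on first access shadows it, so later reads are plain attribute lookups with no function-call overhead.

```python
class OneTimeProperty:
    """Compute on first access, then become a plain instance attribute."""

    def __init__(self, func):
        self.getter = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        val = self.getter(obj)
        # shadow the descriptor: later accesses hit the instance dict directly
        setattr(obj, self.name, val)
        return val


class Demo:
    @OneTimeProperty
    def expensive(self):
        return sum(range(10))


d = Demo()
first = d.expensive   # triggers the computation
second = d.expensive  # plain attribute access from here on
```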

Module: core.optimize#

A unified interface for performing and debugging optimization problems.

Optimizer(fun, x0[, args, method, jac, ...])

SKLearnLinearSolver(*args, **kwargs)

Provide a sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\)

NonNegativeLeastSquares(*args, **kwargs)

A sklearn-like interface to scipy.optimize.nnls

PositiveDefiniteLeastSquares(m, *[, A, L])

spdot(A, B)

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

sparse_nnls(y, X, *[, momentum, step_size, ...])

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

Module: core.profile#

Class for profiling cython code

Profiler(*args[, call])

Profile python/cython files or functions

Module: core.rng#

Random number generation utilities.

WichmannHill2006(*[, ix, iy, iz, it])

Wichmann Hill random number generator.

WichmannHill1982(*[, ix, iy, iz])

Algorithm AS 183 Appl.

LEcuyer(*[, s1, s2])

Return a LEcuyer random number generator.

Module: core.sphere#

Sphere(*[, x, y, z, theta, phi, xyz, faces, ...])

Points on the unit sphere.

HemiSphere(*[, x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

faces_from_sphere_vertices(vertices)

Triangulate a set of vertices on the sphere.

unique_edges(faces, *[, return_mapping])

Extract all unique edges from given triangular faces.

unique_sets(sets, *[, return_inverse])

Remove duplicate sets.

disperse_charges(hemi, iters, *[, const])

Models electrostatic repulsion on the unit sphere

fibonacci_sphere(n_points, *[, hemisphere, ...])

Generate points on the surface of a sphere using Fibonacci Spiral.

disperse_charges_alt(init_pointset, iters, *)

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

euler_characteristic_check(sphere, *[, chi])

Checks the euler characteristic of a sphere

Module: core.sphere_stats#

Statistics on spheres

random_uniform_on_sphere(*[, n, coords])

Random unit vectors from a uniform distribution on the sphere.

eigenstats(points, *[, alpha])

Principal direction and confidence ellipse

compare_orientation_sets(S, T)

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

angular_similarity(S, T)

Computes the cosine distance of the best match between points of two sets of vectors S and T

Module: core.subdivide_octahedron#

Create a unit sphere by subdividing all triangles of an octahedron recursively.

The unit sphere has a radius of 1, which also means that all points in this sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus) of 1. Another feature of the unit sphere is that the unit normals of this sphere are exactly the same as the vertices.

This recursive method will avoid the common problem of the polar singularity, produced by 2d (lon-lat) parameterization methods.
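One subdivision step can be sketched as follows: each triangle edge's midpoint is pushed back onto the unit sphere, so every new vertex again has modulus 1. This is an illustrative sketch of the midpoint scheme, not DIPY's implementation.

```python
import numpy as np

# the six unit-length octahedron vertices
octahedron = np.array([[1, 0, 0], [-1, 0, 0],
                       [0, 1, 0], [0, -1, 0],
                       [0, 0, 1], [0, 0, -1]], dtype=float)

def midpoint_on_sphere(a, b):
    # edge midpoint, re-projected onto the unit sphere
    m = (a + b) / 2.0
    return m / np.linalg.norm(m)

p = midpoint_on_sphere(octahedron[0], octahedron[4])
```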

create_unit_sphere(*[, recursion_level])

Creates a unit sphere by subdividing a unit octahedron.

create_unit_hemisphere(*[, recursion_level])

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.

Module: core.wavelet#

cshift3D(x, m, d)

3D Circular Shift

permutationinverse(perm)

Function generating inverse of the permutation

afb3D_A(x, af, d)

3D Analysis Filter Bank

sfb3D_A(lo, hi, sf, d)

3D Synthesis Filter Bank

sfb3D(lo, hi, sf1, *[, sf2, sf3])

3D Synthesis Filter Bank

afb3D(x, af1, *[, af2, af3])

3D Analysis Filter Bank

dwt3D(x, J, af)

3-D Discrete Wavelet Transform

idwt3D(w, J, sf)

Inverse 3-D Discrete Wavelet Transform

sphere2cart#

dipy.core.geometry.sphere2cart(r, theta, phi)[source]#

Spherical to Cartesian coordinates

This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.

Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z.

Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane, and the X axis.

Geographical nomenclature designates theta as ‘co-latitude’, and phi as ‘longitude’

Parameters:
rarray_like

radius

thetaarray_like

inclination or polar angle

phiarray_like

azimuth angle

Returns:
xarray

x coordinate(s) in Cartesian space

yarray

y coordinate(s) in Cartesian space

zarray

z coordinate

Notes

See the Wikipedia page on the spherical coordinate system for an excellent discussion of the many different conventions possible. Here we use the physics convention, as used on that page.

Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi) and so on.

We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.
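The derivation above can be checked numerically. This is an illustrative reimplementation of the physics-convention formulas, not DIPY's own code:

```python
import numpy as np

def sphere2cart(r, theta, phi):
    # physics convention: theta is inclination from +z, phi is azimuth from +x
    sin_theta = np.sin(theta)
    x = r * sin_theta * np.cos(phi)
    y = r * sin_theta * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# a point on the equator along +x: theta = pi/2, phi = 0
x, y, z = sphere2cart(1.0, np.pi / 2, 0.0)
```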

cart2sphere#

dipy.core.geometry.cart2sphere(x, y, z)[source]#

Return angles for Cartesian 3D coordinates x, y, and z

See doc for sphere2cart for angle conventions and derivation of the formulae.

\(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)

Parameters:
xarray_like

x coordinate in Cartesian space

yarray_like

y coordinate in Cartesian space

zarray_like

z coordinate

Returns:
rarray

radius

thetaarray

inclination (polar) angle

phiarray

azimuth angle
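The two conversions are inverses of each other, which a round trip makes concrete. This sketch reimplements both directions for illustration (assuming a nonzero input vector), rather than calling DIPY's versions:

```python
import numpy as np

def cart2sphere(x, y, z):
    # assumes at least one coordinate is nonzero
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)   # inclination in [0, pi]
    phi = np.arctan2(y, x)     # azimuth in (-pi, pi]
    return r, theta, phi

def sphere2cart(r, theta, phi):
    return (r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta))

# round trip through spherical coordinates
r, theta, phi = cart2sphere(1.0, 1.0, 1.0)
x, y, z = sphere2cart(r, theta, phi)
```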

sph2latlon#

dipy.core.geometry.sph2latlon(theta, phi)[source]#

Convert spherical coordinates to latitude and longitude.

Returns:
lat, lonndarray

Latitude and longitude.

normalized_vector#

dipy.core.geometry.normalized_vector(vec, *, axis=-1)[source]#

Return vector divided by its Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters:
vecarray_like shape (3,)
Returns:
nvecarray shape (3,)

vector divided by L2 norm

Examples

>>> vec = [1, 2, 3]
>>> l2n = np.sqrt(np.dot(vec, vec))
>>> nvec = normalized_vector(vec)
>>> np.allclose(np.array(vec) / l2n, nvec)
True
>>> vec = np.array([[1, 2, 3]])
>>> vec.shape == (1, 3)
True
>>> normalized_vector(vec).shape == (1, 3)
True

vector_norm#

dipy.core.geometry.vector_norm(vec, *, axis=-1, keepdims=False)[source]#

Return vector Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters:
vecarray_like

Vectors to norm.

axisint

Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.

keepdimsbool

If True, the output will have the same number of dimensions as vec, with shape 1 on axis.

Returns:
normarray

Euclidean norms of vectors.

Examples

>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17.,  85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
       [ 85.]])
>>> vector_norm(vec, axis=0)
array([  8.,  39.,  77.])

rodrigues_axis_rotation#

dipy.core.geometry.rodrigues_axis_rotation(r, theta)[source]#

Rodrigues formula

Rotation matrix for rotation around axis r for angle theta.

The rotation matrix is given by the Rodrigues formula:

\[\mathbf{R} = \mathbf{I} + \sin(\theta)*\mathbf{S}_n + (1-\cos(\theta))*\mathbf{S}_n^2\]

with:

\[\begin{split}\mathbf{S}_n = \begin{pmatrix} 0 & -n_{z} & n_{y} \\ n_{z} & 0 & -n_{x} \\ -n_{y} & n_{x} & 0 \end{pmatrix}\end{split}\]

where \(n = r / ||r||\)

In case the angle \(||r||\) is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around \(\theta=0\):

\[\mathbf{R} = \mathbf{I} + \sin(\theta)/\theta\,\mathbf{S}_r + (1-\cos(\theta))/\theta^2\,\mathbf{S}_r^2\]

leading to:

\[\mathbf{R} = \mathbf{I} + (1-\theta^2/6)*\mathbf{S}_r + (\frac{1}{2}-\theta^2/24)*\mathbf{S}_r^2\]
Parameters:
rarray_like shape (3,)

Axis.

thetafloat

Angle in degrees.

Returns:
Rarray, shape (3,3)

Rotation matrix.

Examples

>>> import numpy as np
>>> from dipy.core.geometry import rodrigues_axis_rotation
>>> v=np.array([0,0,1])
>>> u=np.array([1,0,0])
>>> R=rodrigues_axis_rotation(v,40)
>>> ur=np.dot(R,u)
>>> np.round(np.rad2deg(np.arccos(np.dot(ur,u))))
40.0
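The formula above translates directly to NumPy. This is an illustrative sketch (angle in degrees, matching the signature above), not DIPY's implementation with its Taylor-expansion branch:

```python
import numpy as np

def rodrigues_axis_rotation(r, theta):
    # normalize the axis and build the skew-symmetric cross-product matrix S_n
    n = np.asarray(r, dtype=float)
    n = n / np.linalg.norm(n)
    Sn = np.array([[0, -n[2], n[1]],
                   [n[2], 0, -n[0]],
                   [-n[1], n[0], 0]])
    t = np.deg2rad(theta)
    # Rodrigues formula: R = I + sin(t) S_n + (1 - cos(t)) S_n^2
    return np.eye(3) + np.sin(t) * Sn + (1 - np.cos(t)) * Sn @ Sn

# rotating the x-axis by 90 degrees about z should give the y-axis
R = rodrigues_axis_rotation([0, 0, 1], 90)
```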

nearest_pos_semi_def#

dipy.core.geometry.nearest_pos_semi_def(B)[source]#

Least squares positive semi-definite tensor estimation.

See [1] for further details about the method.

Parameters:
B(3,3) array_like

B matrix - symmetric. We do not check the symmetry.

Returns:
npds(3,3) array

Estimated nearest positive semi-definite array to matrix B.

References

Examples

>>> B = np.diag([1, 1, -1])
>>> nearest_pos_semi_def(B)
array([[ 0.75,  0.  ,  0.  ],
       [ 0.  ,  0.75,  0.  ],
       [ 0.  ,  0.  ,  0.  ]])

sphere_distance#

dipy.core.geometry.sphere_distance(pts1, pts2, *, radius=None, check_radius=True)[source]#

Distance across sphere surface between pts1 and pts2

Parameters:
pts1(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

radiusNone or float, optional

Radius of sphere. Default is to work out radius from mean of the length of each point vector

check_radiusbool, optional

If True, check if the points are on the sphere surface - i.e check if the vector lengths in pts1 and pts2 are close to radius. Default is True.

Returns:
d(N,) or (0,) array

Distances between corresponding points in pts1 and pts2 across the spherical surface, i.e. the great circle distance

See also

cart_distance

cartesian distance between points

vector_cosine

cosine of angle between vectors

Examples

>>> print('%.4f' % sphere_distance([0,1],[1,0]))
1.5708
>>> print('%.4f' % sphere_distance([0,3],[3,0]))
4.7124
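For points at a common radius, the great-circle distance is that radius times the angle between the position vectors. This sketch infers the radius from the inputs for illustration; DIPY's implementation additionally supports an explicit radius and the check_radius option:

```python
import numpy as np

def sphere_distance(pts1, pts2):
    # great-circle distance: radius times the angle between the vectors
    p1 = np.asarray(pts1, dtype=float)
    p2 = np.asarray(pts2, dtype=float)
    r1 = np.linalg.norm(p1)
    r2 = np.linalg.norm(p2)
    radius = (r1 + r2) / 2.0
    cos_angle = np.dot(p1, p2) / (r1 * r2)
    # clip to guard against rounding just outside [-1, 1]
    return radius * np.arccos(np.clip(cos_angle, -1, 1))

d = sphere_distance([0, 1], [1, 0])  # a quarter of the unit circle
```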

cart_distance#

dipy.core.geometry.cart_distance(pts1, pts2)[source]#

Cartesian distance between pts1 and pts2

If either of pts1 or pts2 is 2D, then we take the first dimension to index points, and the second indexes coordinate. More generally, we take the last dimension to be the coordinate dimension.

Parameters:
pts1(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2(N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

Returns:
d(N,) or (0,) array

Cartesian distances between corresponding points in pts1 and pts2

See also

sphere_distance

distance between points on sphere surface

Examples

>>> cart_distance([0,0,0], [0,0,3])
3.0

vector_cosine#

dipy.core.geometry.vector_cosine(vecs1, vecs2)[source]#

Cosine of angle between two (sets of) vectors

The cosine of the angle between two vectors v1 and v2 is given by the inner product of v1 and v2 divided by the product of the vector lengths:

v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) *
                            np.sqrt(np.sum(v2**2)))
Parameters:
vecs1(N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements.

vecs2(N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements. It should be possible to broadcast vecs1 against vecs2

Returns:
vcos(N,) or (0,) array

Vector cosines. To get the angles you will need np.arccos

Notes

The vector cosine will be the same as the correlation only if all the input vectors have zero mean.
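The inner-product formula above, restated as a runnable single-vector sketch (the DIPY function also handles stacks of vectors):

```python
import numpy as np

def vector_cosine(v1, v2):
    # cosine of the angle between two single vectors
    v1 = np.asarray(v1, dtype=float)
    v2 = np.asarray(v2, dtype=float)
    return np.inner(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

# 45 degrees between the x-axis and the diagonal of the xy-plane
c = vector_cosine([1, 0, 0], [1, 1, 0])
```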

lambert_equal_area_projection_polar#

dipy.core.geometry.lambert_equal_area_projection_polar(theta, phi)[source]#

Lambert Equal Area Projection from polar sphere to plane

Return positions in (y1,y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

See doc for sphere2cart for angle conventions

  • \(0 \le \theta \le \pi\) and \(0 \le \phi \le 2 \pi\)

  • \(|(y_1,y_2)| \le 2\)

The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.

Parameters:
thetaarray_like

theta spherical coordinates

phiarray_like

phi spherical coordinates

Returns:
y(N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

lambert_equal_area_projection_cart#

dipy.core.geometry.lambert_equal_area_projection_cart(x, y, z)[source]#

Lambert Equal Area Projection from cartesian vector to plane

Return positions in \((y_1,y_2)\) plane corresponding to the directions of the vectors with cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.

See doc for sphere2cart for angle conventions

Parameters:
xarray_like

x coordinate in Cartesian space

yarray_like

y coordinate in Cartesian space

zarray_like

z coordinate

Returns:
y(N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

euler_matrix#

dipy.core.geometry.euler_matrix(ai, aj, ak, *, axes='sxyz')[source]#

Return homogeneous rotation matrix from Euler angles and axis sequence.

Code modified from the work of Christoph Gohlke, link provided here cgohlke/transformations

Parameters:
ai, aj, akEuler’s roll, pitch and yaw angles
axesOne of 24 axis sequences as string or encoded tuple
Returns:
matrixndarray (4, 4)

Examples

>>> import math
>>> import numpy
>>> R = euler_matrix(1, 2, 3, axes='syxz')
>>> numpy.allclose(numpy.sum(R[0]), -1.34786452)
True
>>> R = euler_matrix(1, 2, 3, axes=(0, 1, 0, 1))
>>> numpy.allclose(numpy.sum(R[0]), -0.383436184)
True
>>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
>>> for axes in _AXES2TUPLE.keys():
...    _ = euler_matrix(ai, aj, ak, axes=axes)
>>> for axes in _TUPLE2AXES.keys():
...    _ = euler_matrix(ai, aj, ak, axes=axes)

compose_matrix#

dipy.core.geometry.compose_matrix(*, scale=None, shear=None, angles=None, translate=None, perspective=None)[source]#

Return 4x4 transformation matrix from sequence of transformations.

Code modified from the work of Christoph Gohlke, link provided here cgohlke/transformations

This is the inverse of the decompose_matrix function.

Parameters:
scale(3,) array_like

Scaling factors.

sheararray_like

Shear factors for x-y, x-z, y-z axes.

anglesarray_like

Euler angles about static x, y, z axes.

translatearray_like

Translation vector along x, y, z axes.

perspectivearray_like

Perspective partition of matrix.

Returns:
matrix4x4 array

Examples

>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(
...     scale=scale, shear=shear, angles=angles, translate=trans, perspective=persp
... )

decompose_matrix#

dipy.core.geometry.decompose_matrix(matrix)[source]#

Return sequence of transformations from transformation matrix.

Code modified from the excellent work of Christoph Gohlke, link provided here: cgohlke/transformations

Parameters:
matrixarray_like

Non-degenerate homogeneous transformation matrix

Returns:
scale(3,) ndarray

Three scaling factors.

shear(3,) ndarray

Shear factors for x-y, x-z, y-z axes.

angles(3,) ndarray

Euler angles about static x, y, z axes.

translate(3,) ndarray

Translation vector along x, y, z axes.

perspectivendarray

Perspective partition of matrix.

Raises:
ValueError

If matrix is of wrong type or degenerate.

Examples

>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)

circumradius#

dipy.core.geometry.circumradius(a, b, c)[source]#

a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e. the radius of the smallest circle that can contain the triangle. In the degenerate case, when the 3 points are collinear, it returns half the distance between the two points that are furthest apart.

Parameters:
a, b, c(3,) array_like

the three vertices of the triangle

Returns:
circumradiusfloat

the desired circumradius
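One standard way to compute this is \(R = abc / (4K)\), where a, b, c are the side lengths and K is the triangle's area. This is an illustrative sketch with the collinear fallback described above, not necessarily DIPY's exact method:

```python
import numpy as np

def circumradius(a, b, c):
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    # side lengths opposite each vertex
    la = np.linalg.norm(b - c)
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    # twice the triangle area, via the cross product
    area2 = np.linalg.norm(np.cross(b - a, c - a))
    if area2 < 1e-12:
        # degenerate (collinear): half the largest pairwise distance
        return max(la, lb, lc) / 2.0
    return la * lb * lc / (2.0 * area2)

# 3-4-5 right triangle: circumradius is half the hypotenuse
R = circumradius([0, 0, 0], [3, 0, 0], [0, 4, 0])
```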

vec2vec_rotmat#

dipy.core.geometry.vec2vec_rotmat(u, v)[source]#

rotation matrix from 2 unit vectors

u, v being unit 3d vectors return a 3x3 rotation matrix R that aligns u to v.

In general there are many rotations that will map u to v. If S is any rotation using v as an axis then S.R will also map u to v since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.

The transpose of R will align v to u.

Parameters:
uarray, shape(3,)
varray, shape(3,)
Returns:
Rarray, shape(3,3)

Examples

>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0.,  1.,  0.])
>>> np.dot(R.T,v)
array([ 1.,  0.,  0.])

compose_transformations#

dipy.core.geometry.compose_transformations(*mats)[source]#

Compose multiple 4x4 affine transformations in one 4x4 matrix

Parameters:
mat1array, (4, 4)
mat2array, (4, 4)
matNarray, (4, 4)
Returns:
matN x … x mat2 x mat1array, (4, 4)
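The stated composition order (mat1 applied to a point first, matN last) can be sketched as a fold over matrix products. This is an illustrative version, not DIPY's code:

```python
from functools import reduce

import numpy as np

def compose_transformations(*mats):
    # result = matN @ ... @ mat2 @ mat1, so mat1 acts on a point first
    return reduce(lambda acc, m: np.dot(m, acc), mats, np.eye(4))

translate = np.eye(4)
translate[:3, 3] = [1, 0, 0]
scale = np.diag([2.0, 2.0, 2.0, 1.0])
# translate first, then scale: the translation gets scaled too
M = compose_transformations(translate, scale)
```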

perpendicular_directions#

dipy.core.geometry.perpendicular_directions(v, *, num=30, half=False)[source]#

Computes n evenly spaced perpendicular directions relative to a given vector v

Parameters:
varray (3,)

Array containing the three cartesian coordinates of vector v

numint, optional

Number of perpendicular directions to generate

halfbool, optional

If half is True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise perpendicular directions are sampled on the full circumference. Default of half is False

Returns:
psamplesarray (n, 3)

array of vectors perpendicular to v

Notes

Perpendicular directions are estimated using the following two step procedure:

  1. the perpendicular directions are first sampled in a unit circumference parallel to the plane normal to the x-axis.

  2. Samples are then rotated and aligned to the plane normal to vector v. The rotation matrix for this rotation is constructed as a reference frame basis whose axes are the following:

    • The first axis is vector v

    • The second axis is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis

    • The third axis is defined as the cross product between the previously computed vector and vector v.

Following these two steps, coordinates of the final perpendicular directions are given as:

\[\left [ -\sin(a_{i}) \sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{x}v_{y}\sin(a_{i})-v_{z}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \; \frac{v_{x}v_{z}\sin(a_{i})-v_{y}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \right ]\]

This procedure has a singularity when vector v is aligned to the x-axis. To solve this singularity, perpendicular directions in procedure’s step 1 are defined in the plane normal to y-axis and the second axis of the rotated frame of reference is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis. Following this, the coordinates of the perpendicular directions are given as:

\[\left [ -\frac{\left (v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i}) \right )} {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \; \sin(a_{i}) \sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})} {\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \right ]\]

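The two-step procedure above can be sketched with an explicit orthonormal basis. This is an illustrative reimplementation (DIPY uses the closed-form expressions given above); the 0.9 threshold for switching helper axes is an arbitrary choice for the sketch:

```python
import numpy as np

def perpendicular_directions(v, num=30, half=False):
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)
    # helper axis: x unless v is (nearly) aligned with x, then y (singularity fix)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(v[0]) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(v, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(v, e1)  # completes a right-handed basis with v and e1
    span = np.pi if half else 2 * np.pi
    angles = np.linspace(0, span, num, endpoint=False)
    # evenly spaced unit vectors in the plane perpendicular to v
    return np.array([np.cos(a) * e1 + np.sin(a) * e2 for a in angles])

dirs = perpendicular_directions([1.0, 2.0, 3.0], num=8)
```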

dist_to_corner#

dipy.core.geometry.dist_to_corner(affine)[source]#

Calculate the maximal distance from the center to a corner of a voxel, given an affine

Parameters:
affine4 by 4 array.

The spatial transformation from the measurement to the scanner space.

Returns:
dist: float

The maximal distance to the corner of a voxel, given voxel size encoded in the affine.
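A sketch of the idea: map each half-voxel corner offset through the affine's linear part and take the largest norm. This is illustrative, not DIPY's code:

```python
import itertools

import numpy as np

def dist_to_corner(affine):
    # the 3x3 linear part encodes voxel size and orientation
    A = np.asarray(affine)[:3, :3]
    # the 8 corner offsets of a unit voxel centered at the origin
    corners = np.array(list(itertools.product([-0.5, 0.5], repeat=3)))
    return max(np.linalg.norm(A @ c) for c in corners)

# isotropic 2 mm voxels: center-to-corner distance is sqrt(3)
d = dist_to_corner(np.diag([2.0, 2.0, 2.0, 1.0]))
```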

is_hemispherical#

dipy.core.geometry.is_hemispherical(vecs)[source]#

Test whether all points on a unit sphere lie in the same hemisphere.

Parameters:
vecsnumpy.ndarray

2D numpy array with shape (N, 3) where N is the number of points. All points must lie on the unit sphere.

Returns:
is_hemibool

If True, one can find a hemisphere that contains all the points. If False, then the points do not lie in any hemisphere

polenumpy.ndarray

If is_hemi == True, then pole is the “central” pole of the input vectors. Otherwise, pole is the zero vector.

References

https://rstudio-pubs-static.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html

GradientTable#

class dipy.core.gradients.GradientTable(gradients, *, big_delta=None, small_delta=None, b0_threshold=50, btens=None)[source]#

Bases: object

Diffusion gradient information

Parameters:
gradientsarray_like (N, 3)

Diffusion gradients. The direction of each of these vectors corresponds to the b-vector, and the length corresponds to the b-value.

b0_thresholdfloat

Gradients with b-value less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

Attributes:
gradients(N,3) ndarray

diffusion gradients

bvals(N,) ndarray

The b-value, or magnitude, of each gradient direction.

qvals: (N,) ndarray

The q-value for each gradient direction. Needs big and small delta.

bvecs(N,3) ndarray

The direction, represented as a unit vector, of each gradient.

b0s_mask(N,) ndarray

Boolean array indicating which gradients have no diffusion weighting, i.e. b-value is close to 0.

b0_thresholdfloat

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

btens(N,3,3) ndarray

The b-tensor of each gradient direction.

Methods

b0s_mask

bvals

bvecs

gradient_strength

qvals

tau

See also

gradient_table

Notes

The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table

b0s_mask()[source]#
bvals()[source]#
bvecs()[source]#
gradient_strength()[source]#
property info#
qvals()[source]#
tau()[source]#

b0_threshold_empty_gradient_message#

dipy.core.gradients.b0_threshold_empty_gradient_message(bvals, idx, b0_threshold)[source]#

Message about the b0_threshold value resulting in no gradient selection.

Parameters:
bvals : (N,) ndarray

The b-value, or magnitude, of each gradient direction.

idx : ndarray

Indices of the gradients to be selected.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

Returns:
str

Message.

b0_threshold_update_slicing_message#

dipy.core.gradients.b0_threshold_update_slicing_message(slice_start)[source]#

Message for b0 threshold value update for slicing.

Parameters:
slice_start : int

Starting index for slicing.

Returns:
str

Message.

mask_non_weighted_bvals#

dipy.core.gradients.mask_non_weighted_bvals(bvals, b0_threshold)[source]#

Create a diffusion gradient-weighting mask for the b-values according to the provided b0 threshold value.

Parameters:
bvals : (N,) ndarray

The b-value, or magnitude, of each gradient direction.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

Returns:
ndarray

Gradient-weighting mask: True for all b-value indices whose value is smaller or equal to b0_threshold; False otherwise.
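The stated rule reduces to a single comparison; a minimal numpy sketch (not the dipy source):

```python
import numpy as np

# Mask is True exactly where the b-value is at or below the threshold,
# i.e. for the non-diffusion-weighted (b0) volumes.
bvals = np.array([0, 5, 50, 1000, 2000])
b0_threshold = 50
mask = bvals <= b0_threshold
```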

gradient_table_from_bvals_bvecs#

dipy.core.gradients.gradient_table_from_bvals_bvecs(bvals, bvecs, *, b0_threshold=50, atol=0.01, btens=None, **kwargs)[source]#

Creates a GradientTable from a bvals array and a bvecs array

Parameters:
bvals : array_like (N,)

The b-value, or magnitude, of each gradient direction.

bvecs : array_like (N, 3)

The direction, represented as a unit vector, of each gradient.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting. If its value is equal to or larger than all values in bvals, then it is assumed that no thresholding is requested.

atol : float

Each vector in bvecs must be a unit vector, up to a tolerance of atol.

btens : can be any of three options
  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying the encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in the array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Other Parameters:
**kwargs : dict

Other keyword inputs are passed to GradientTable.

gradient_table_from_qvals_bvecs#

dipy.core.gradients.gradient_table_from_qvals_bvecs(qvals, bvecs, big_delta, small_delta, *, b0_threshold=50, atol=0.01)[source]#

A general function for creating diffusion MR gradients.

It reads, loads, and prepares scanner parameters such as the b-values and b-vectors so that they can be used during the reconstruction process.

Parameters:
qvals : array of shape (N,)

q-value given in 1/mm

bvecs : can be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path to a file which contains an array like the above.

big_delta : float or array of shape (N,)

acquisition pulse separation time in seconds

small_delta : float

acquisition pulse duration time in seconds

b0_threshold : float

All b-values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values corresponding to images without diffusion weighting) have a value of 0; however, in some cases the scanner cannot provide b0s at exactly 0 and instead gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table_from_qvals_bvecs
>>> qvals = 30. * np.ones(7)
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> qvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs,
...                                      big_delta, small_delta)

gradient_table_from_gradient_strength_bvecs#

dipy.core.gradients.gradient_table_from_gradient_strength_bvecs(gradient_strength, bvecs, big_delta, small_delta, *, b0_threshold=50, atol=0.01)[source]#

A general function for creating diffusion MR gradients.

It reads, loads, and prepares scanner parameters such as the b-values and b-vectors so that they can be used during the reconstruction process.

Parameters:
gradient_strength : array of shape (N,)

gradient strength given in T/mm

bvecs : can be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path to a file which contains an array like the above.

big_delta : float or array of shape (N,)

acquisition pulse separation time in seconds

small_delta : float

acquisition pulse duration time in seconds

b0_threshold : float

All b-values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values corresponding to images without diffusion weighting) have a value of 0; however, in some cases the scanner cannot provide b0s at exactly 0 and instead gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import (
...    gradient_table_from_gradient_strength_bvecs)
>>> gradient_strength = .03e-3 * np.ones(7)  # clinical strength at 30 mT/m
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> gradient_strength[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_gradient_strength_bvecs(
...     gradient_strength, bvecs, big_delta, small_delta)

gradient_table#

dipy.core.gradients.gradient_table(bvals, *, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)[source]#

A general function for creating diffusion MR gradients.

It reads, loads, and prepares scanner parameters such as the b-values and b-vectors so that they can be used during the reconstruction process.

Parameters:
bvals : can be any of four options
  1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.

  2. a path to a file which contains an array like the above (1).

  3. an array of shape (N, 4) or (4, N). In this case the parameter is considered to be a b-table which contains both bvals and bvecs, and the next parameter is skipped.

  4. a path to a file which contains an array like the one at (3).

bvecs : can be any of two options
  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path to a file which contains an array like the above.

big_delta : float

acquisition pulse separation time in seconds (default None)

small_delta : float

acquisition pulse duration time in seconds (default None)

b0_threshold : float

All b-values less than or equal to b0_threshold are considered as b0s, i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

btens : can be any of three options
  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying the encoding tensor shape for each volume separately. N corresponds to the number of volumes in data. Options for elements in the array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N, 3, 3) specifying the b-tensor of each volume exactly. N corresponds to the number of volumes in data. No rotation or scaling is performed.

Returns:
gradients : GradientTable

A GradientTable with all the gradient information.

Notes

  1. Often b0s (b-values corresponding to images without diffusion weighting) have a value of 0; however, in some cases the scanner cannot provide b0s at exactly 0 and instead gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs=bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs=bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False

reorient_bvecs#

dipy.core.gradients.reorient_bvecs(gtab, affines, *, atol=0.01)[source]#

Reorient the directions in a GradientTable.

When correcting for motion, rotation of the diffusion-weighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect [2].

Parameters:
gtab : GradientTable

The nominal gradient table with which the data were acquired.

affines : list or ndarray of shape (4, 4, n) or (3, 3, n)

Each entry in this list or array contains either an affine transformation (4, 4) or a rotation matrix (3, 3). In both cases, the transformation encodes the rotation that was applied to the image corresponding to one of the non-zero gradient directions (ordered according to their order in gtab.bvecs[~gtab.b0s_mask]).

atol : float

See gradient_table().

Returns:
gtab : a GradientTable class instance with the reoriented directions

References

generate_bvecs#

dipy.core.gradients.generate_bvecs(N, *, iters=5000, rng=None)[source]#

Generates N bvectors.

Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere.

Parameters:
N : int

The number of b-vectors to generate. This should be equal to the number of bvals used.

iters : int

Number of iterations to run.

rng : numpy.random.Generator, optional

Numpy’s random number generator. If None, the generator is created. Default is None.

Returns:
bvecs : (N, 3) ndarray

The generated directions, represented as unit vectors, of each gradient.
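A simplified stand-in that produces the same output shape, skipping the electrostatic-repulsion dispersion step that dipy performs with disperse_charges:

```python
import numpy as np

# Gaussian samples normalized to unit length are uniform on the sphere;
# dipy additionally spreads the points out via simulated charge repulsion.
rng = np.random.default_rng(42)
N = 32
v = rng.standard_normal((N, 3))
bvecs = v / np.linalg.norm(v, axis=1, keepdims=True)
```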

round_bvals#

dipy.core.gradients.round_bvals(bvals, *, bmag=None)[source]#

This function rounds the b-values.

Parameters:
bvals : ndarray

Array containing the b-values.

bmag : int

The order of magnitude to which to round the b-values. If not given, b-values will be rounded relative to the order of magnitude \(bmag = bmagmax - 1\), where bmagmax is the order of magnitude of the largest b-value.

Returns:
rbvals : ndarray

Array containing the rounded b-values.
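The described rounding can be sketched in two lines: scale the b-values by 10**bmag, round, and scale back (illustrative, not dipy's code):

```python
import numpy as np

# With bmag one below the magnitude of the largest b-value,
# near-identical shells such as 995/1000/1005 collapse together.
bvals = np.array([0.0, 995.0, 1000.0, 1005.0, 2010.0])
bmag = int(np.log10(bvals.max())) - 1   # here: 2
rbvals = np.round(bvals / 10**bmag) * 10**bmag
```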

unique_bvals_tolerance#

dipy.core.gradients.unique_bvals_tolerance(bvals, *, tol=20)[source]#

Gives the unique b-values of the data, within a tolerance gap

The b-values are grouped in clusters that are separated from each other by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept.

Parameters:
bvals : ndarray

Array containing the b-values.

tol : int

The tolerated gap between the b-values to extract and the actual b-values.

Returns:
ubvals : ndarray

Array containing the unique b-values, using the median value for each cluster.
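One way to realize the clustering described above (a sketch, not dipy's implementation): sort the b-values, split wherever the gap between neighbors exceeds tol, and take each cluster's median.

```python
import numpy as np

bvals = np.array([0, 5, 995, 1000, 1005, 2000])
tol = 20
s = np.sort(bvals)
# Start a new cluster wherever consecutive values differ by more than tol.
splits = np.where(np.diff(s) > tol)[0] + 1
ubvals = np.array([np.median(c) for c in np.split(s, splits)])
```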

get_bval_indices#

dipy.core.gradients.get_bval_indices(bvals, bval, *, tol=20)[source]#

Get indices where the b-value is bval

Parameters:
bvals: ndarray

Array containing the b-values

bval: float or int

b-value to extract indices

tol: int

The tolerated gap between the b-values to extract and the actual b-values.

Returns:
Array of indices where the b-value is bval
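The selection is a tolerance window around bval; a one-line numpy equivalent of the described behavior:

```python
import numpy as np

bvals = np.array([0, 1000, 995, 2000, 1010])
bval, tol = 1000, 20
# Indices whose b-value falls within +/- tol of the requested bval.
idx = np.where(np.abs(bvals - bval) <= tol)[0]
```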

unique_bvals_magnitude#

dipy.core.gradients.unique_bvals_magnitude(bvals, *, bmag=None, rbvals=False)[source]#

This function gives the unique rounded b-values of the data

Parameters:
bvals : ndarray

Array containing the b-values.

bmag : int

The order of magnitude by which the b-values have to differ to be considered a unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

rbvals : bool, optional

If True, the function also returns all individual rounded b-values.

Returns:
ubvals : ndarray

Array containing the rounded unique b-values.

check_multi_b#

dipy.core.gradients.check_multi_b(gtab, n_bvals, *, non_zero=True, bmag=None)[source]#

Check if you have enough different b-values in your gradient table

Parameters:
gtab : GradientTable class instance

Gradient table.

n_bvals : int

The number of different b-values you are checking for.

non_zero : bool

Whether to check only non-zero b-values. In this case, we will require at least n_bvals non-zero b-values (where non-zero is defined depending on the gtab object’s b0_threshold attribute).

bmag : int

The order of magnitude of the b-values used. The function will normalize the b-values relative to \(10^{bmag}\). Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

Returns:
bool : Whether there are at least n_bvals different b-values in the gradient table used.
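The check reduces to counting distinct rounded non-zero shells. A sketch over a bare bvals array (dipy reads the values from the gtab instance; `has_n_shells` is a hypothetical helper):

```python
import numpy as np

def has_n_shells(bvals, n_bvals, b0_threshold=50):
    # Round to 10**bmag so near-identical b-values form one shell,
    # then count the distinct shells above the b0 threshold.
    bmag = int(np.log10(bvals.max())) - 1
    rb = np.round(bvals / 10**bmag) * 10**bmag
    return len(np.unique(rb[bvals > b0_threshold])) >= n_bvals

bvals = np.array([0.0, 1000.0, 1005.0, 2000.0, 2010.0])
```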

btens_to_params#

dipy.core.gradients.btens_to_params(btens, *, ztol=1e-10)[source]#

Compute trace, anisotropy and asymmetry parameters from b-tensors.

Parameters:
btens : (3, 3) OR (N, 3, 3) numpy.ndarray

Input b-tensor, or b-tensors, where N = number of b-tensors.

ztol : float

Any parameters smaller than this value are considered to be 0.

Returns:
bval: numpy.ndarray

b-value(s) (trace(s))

bdelta: numpy.ndarray

normalized tensor anisotropy(s)

b_eta: numpy.ndarray

tensor asymmetry(s)

Notes

This function can be used to get b-tensor parameters directly from the GradientTable btens attribute.

Examples

>>> import numpy as np
>>> from dipy.core.gradients import btens_to_params
>>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> bval, bdelta, b_eta = btens_to_params(lte)
>>> print("bval={}; bdelta={}; b_eta={}".format(bval, bdelta, b_eta))
bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]

params_to_btens#

dipy.core.gradients.params_to_btens(bval, bdelta, b_eta)[source]#

Compute b-tensor from trace, anisotropy and asymmetry parameters.

Parameters:
bval: int or float

b-value (>= 0)

bdelta: int or float

normalized tensor anisotropy (>= -0.5 and <= 1)

b_eta: int or float

tensor asymmetry (>= 0 and <= 1)

Returns:
(3, 3) numpy.ndarray

output b-tensor

Notes

Implements eq. 7.11, p. 231 in [3].

References

ornt_mapping#

dipy.core.gradients.ornt_mapping(ornt1, ornt2)[source]#

Calculate the mapping needed to get from ornt1 to ornt2.

reorient_vectors#

dipy.core.gradients.reorient_vectors(bvecs, current_ornt, new_ornt, *, axis=0)[source]#

Change the orientation of gradients or other vectors.

Moves vectors, stored along axis, from current_ornt to new_ornt. For example, the vector [x, y, z] in “RAS” will be [-x, -y, z] in “LPS”.

R: Right, A: Anterior, S: Superior, L: Left, P: Posterior, I: Inferior
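The remapping amounts to a permutation plus sign flips; a sketch of the rule for three-letter orientation codes (`remap` is a hypothetical helper, not the dipy function):

```python
import numpy as np

OPPOSITE = {"R": "L", "L": "R", "A": "P", "P": "A", "S": "I", "I": "S"}

def remap(vec, cur="RAS", new="LPS"):
    out = np.empty(3)
    for j, label in enumerate(new):
        if label in cur:
            out[j] = vec[cur.index(label)]             # same direction
        else:
            out[j] = -vec[cur.index(OPPOSITE[label])]  # flipped axis
    return out
```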

reorient_on_axis#

dipy.core.gradients.reorient_on_axis(bvecs, current_ornt, new_ornt, *, axis=0)[source]#

orientation_from_string#

dipy.core.gradients.orientation_from_string(string_ornt)[source]#

Return an array representation of an ornt string.

orientation_to_string#

dipy.core.gradients.orientation_to_string(ornt)[source]#

Return a string representation of a 3d ornt.

Graph#

class dipy.core.graph.Graph[source]#

Bases: object

A simple graph class

Methods

add_edge

add_node

all_paths

children

del_node

del_node_and_edges

down

down_short

parents

shortest_path

up

up_short

add_edge(n, m, *, ws=True, wp=True)[source]#
add_node(n, *, attr=None)[source]#
all_paths(graph, start, *, end=None, path=None)[source]#
children(n)[source]#
del_node(n)[source]#
del_node_and_edges(n)[source]#
down(n)[source]#
down_short(n)[source]#
parents(n)[source]#
shortest_path(graph, start, *, end=None, path=None)[source]#
up(n)[source]#
up_short(n)[source]#

histeq#

dipy.core.histeq.histeq(arr, *, num_bins=256)[source]#

Performs a histogram equalization on arr. This was taken from: http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html

Parameters:
arr : ndarray

Image on which to perform histogram equalization.

num_bins : int

Number of bins used to construct the histogram.

Returns:
result : ndarray

Histogram equalized image.
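The linked recipe can be sketched directly with numpy: histogram the intensities, normalize the cumulative sum into a CDF, and map each pixel through it (an illustrative sketch of the recipe, not necessarily identical to dipy's implementation):

```python
import numpy as np

def histeq_sketch(arr, num_bins=256):
    hist, bin_edges = np.histogram(arr.ravel(), bins=num_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                 # normalized cumulative distribution
    # Map each intensity through the CDF to flatten the histogram.
    out = np.interp(arr.ravel(), bin_edges[:-1], cdf)
    return out.reshape(arr.shape)

img = np.array([[0.0, 0.0, 1.0], [2.0, 3.0, 3.0]])
eq = histeq_sketch(img, num_bins=4)
```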

Interpolator#

class dipy.core.interpolation.Interpolator(data, voxel_size)#

Bases: object

Class to be subclassed by different interpolator types

NearestNeighborInterpolator#

class dipy.core.interpolation.NearestNeighborInterpolator(data, voxel_size)#

Bases: Interpolator

Interpolates data using nearest neighbor interpolation

OutsideImage#

class dipy.core.interpolation.OutsideImage#

Bases: Exception

Attributes:
args

Methods

add_note

Exception.add_note(note) -- add a note to the exception

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

RBFInterpolator#

class dipy.core.interpolation.RBFInterpolator(y, d, neighbors=None, smoothing=0.0, kernel='thin_plate_spline', epsilon=None, degree=None)[source]#

Bases: object

Radial basis function (RBF) interpolation in N dimensions.

Parameters:
y : (npoints, ndims) array_like

2-D array of data point coordinates.

d : (npoints, …) array_like

N-D array of data values at y. The length of d along the first axis must be equal to the length of y. Unlike some interpolators, the interpolation axis cannot be changed.

neighbors : int, optional

If specified, the value of the interpolant at each evaluation point will be computed using only this many nearest data points. All the data points are used by default.

smoothing : float or (npoints,) array_like, optional

Smoothing parameter. The interpolant perfectly fits the data when this is set to 0. For large values, the interpolant approaches a least squares fit of a polynomial with the specified degree. Default is 0.

kernel : str, optional

Type of RBF. This should be one of

  • ‘linear’ : -r

  • ‘thin_plate_spline’ : r**2 * log(r)

  • ‘cubic’ : r**3

  • ‘quintic’ : -r**5

  • ‘multiquadric’ : -sqrt(1 + r**2)

  • ‘inverse_multiquadric’ : 1/sqrt(1 + r**2)

  • ‘inverse_quadratic’ : 1/(1 + r**2)

  • ‘gaussian’ : exp(-r**2)

Default is ‘thin_plate_spline’.

epsilon : float, optional

Shape parameter that scales the input to the RBF. If kernel is ‘linear’, ‘thin_plate_spline’, ‘cubic’, or ‘quintic’, this defaults to 1 and can be ignored because it has the same effect as scaling the smoothing parameter. Otherwise, this must be specified.

degree : int, optional

Degree of the added polynomial. For some RBFs the interpolant may not be well-posed if the polynomial degree is too small. Those RBFs and their corresponding minimum degrees are

  • ‘multiquadric’ : 0

  • ‘linear’ : 0

  • ‘thin_plate_spline’ : 1

  • ‘cubic’ : 1

  • ‘quintic’ : 2

The default value is the minimum degree for kernel or 0 if there is no minimum degree. Set this to -1 for no added polynomial.

Methods

__call__(x)

Evaluate the interpolant at x.

See also

NearestNDInterpolator
LinearNDInterpolator
CloughTocher2DInterpolator

Notes

An RBF is a scalar valued function in N-dimensional space whose value at \(x\) can be expressed in terms of \(r=||x - c||\), where \(c\) is the center of the RBF.

An RBF interpolant for the vector of data values \(d\), which are from locations \(y\), is a linear combination of RBFs centered at \(y\) plus a polynomial with a specified degree. The RBF interpolant is written as

\[f(x) = K(x, y) a + P(x) b,\]

where \(K(x, y)\) is a matrix of RBFs with centers at \(y\) evaluated at the points \(x\), and \(P(x)\) is a matrix of monomials, which span polynomials with the specified degree, evaluated at \(x\). The coefficients \(a\) and \(b\) are the solution to the linear equations

\[(K(y, y) + \lambda I) a + P(y) b = d\]

and

\[P(y)^T a = 0,\]

where \(\lambda\) is a non-negative smoothing parameter that controls how well we want to fit the data. The data are fit exactly when the smoothing parameter is 0.

The above system is uniquely solvable if the following requirements are met:

  • \(P(y)\) must have full column rank. \(P(y)\) always has full column rank when degree is -1 or 0. When degree is 1, \(P(y)\) has full column rank if the data point locations are not all collinear (N=2), coplanar (N=3), etc.

  • If kernel is ‘multiquadric’, ‘linear’, ‘thin_plate_spline’, ‘cubic’, or ‘quintic’, then degree must not be lower than the minimum value listed above.

  • If smoothing is 0, then each data point location must be distinct.

When using an RBF that is not scale invariant (‘multiquadric’, ‘inverse_multiquadric’, ‘inverse_quadratic’, or ‘gaussian’), an appropriate shape parameter must be chosen (e.g., through cross validation). Smaller values for the shape parameter correspond to wider RBFs. The problem can become ill-conditioned or singular when the shape parameter is too small.

The memory required to solve for the RBF interpolation coefficients increases quadratically with the number of data points, which can become impractical when interpolating more than about a thousand data points. To overcome memory limitations for large interpolation problems, the neighbors argument can be specified to compute an RBF interpolant for each evaluation point using only the nearest data points.

Added in version 1.7.0.

References

[1]

Fasshauer, G., 2007. Meshfree Approximation Methods with Matlab. World Scientific Publishing Co.

[3]

Wahba, G., 1990. Spline Models for Observational Data. SIAM.

Examples

Demonstrate interpolating scattered data to a grid in 2-D.

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy.interpolate import RBFInterpolator
>>> from scipy.stats.qmc import Halton
>>> rng = np.random.default_rng()
>>> xobs = 2*Halton(2, seed=rng).random(100) - 1
>>> yobs = np.sum(xobs, axis=1)*np.exp(-6*np.sum(xobs**2, axis=1))
>>> xgrid = np.mgrid[-1:1:50j, -1:1:50j]
>>> xflat = xgrid.reshape(2, -1).T
>>> yflat = RBFInterpolator(xobs, yobs)(xflat)
>>> ygrid = yflat.reshape(50, 50)
>>> fig, ax = plt.subplots()
>>> ax.pcolormesh(*xgrid, ygrid, vmin=-0.25, vmax=0.25, shading='gouraud')
>>> p = ax.scatter(*xobs.T, c=yobs, s=50, ec='k', vmin=-0.25, vmax=0.25)
>>> fig.colorbar(p)
>>> plt.show()

Rbf#

class dipy.core.interpolation.Rbf(*args, **kwargs)[source]#

Bases: object

A class for radial basis function interpolation of functions from N-D scattered data to an M-D domain.

Parameters:
*args : arrays

x, y, z, …, d, where x, y, z, … are the coordinates of the nodes and d is the array of values at the nodes

function : str or callable, optional

The radial basis function, based on the radius, r, given by the norm (default is Euclidean distance); the default is ‘multiquadric’:

'multiquadric': sqrt((r/self.epsilon)**2 + 1)
'inverse': 1.0/sqrt((r/self.epsilon)**2 + 1)
'gaussian': exp(-(r/self.epsilon)**2)
'linear': r
'cubic': r**3
'quintic': r**5
'thin_plate': r**2 * log(r)

If callable, then it must take 2 arguments (self, r). The epsilon parameter will be available as self.epsilon. Other keyword arguments passed in will be available as well.

epsilon : float, optional

Adjustable constant for gaussian or multiquadrics functions - defaults to approximate average distance between nodes (which is a good start).

smooth : float, optional

Values greater than zero increase the smoothness of the approximation. 0 is for interpolation (default); the function will always go through the nodal points in this case.

norm : str, callable, optional

A function that returns the ‘distance’ between two points, with inputs as arrays of positions (x, y, z, …), and an output as an array of distances. E.g., the default ‘euclidean’, such that the result is a matrix of the distances from each point in x1 to each point in x2. For more options, see the documentation of scipy.spatial.distance.cdist.

mode : str, optional

Mode of the interpolation, can be ‘1-D’ (default) or ‘N-D’. When it is ‘1-D’ the data d will be considered as 1-D and flattened internally. When it is ‘N-D’ the data d is assumed to be an array of shape (n_samples, m), where m is the dimension of the target domain.

Attributes:
N : int

The number of data points (as determined by the input arrays).

di : ndarray

The 1-D array of data values at each of the data coordinates xi.

xi : ndarray

The 2-D array of data coordinates.

function : str or callable

The radial basis function. See description under Parameters.

epsilon : float

Parameter used by gaussian or multiquadrics functions. See Parameters.

smooth : float

Smoothing parameter. See description under Parameters.

norm : str or callable

The distance function. See description under Parameters.

mode : str

Mode of the interpolation. See description under Parameters.

nodes : ndarray

A 1-D array of node values for the interpolation.

A : internal property, do not use

Methods

__call__(*args)

Call self as a function.

See also

RBFInterpolator

Examples

>>> import numpy as np
>>> from scipy.interpolate import Rbf
>>> rng = np.random.default_rng()
>>> x, y, z, d = rng.random((4, 50))
>>> rbfi = Rbf(x, y, z, d)  # radial basis function interpolator instance
>>> xi = yi = zi = np.linspace(0, 1, 20)
>>> di = rbfi(xi, yi, zi)   # interpolated values
>>> di.shape
(20,)
property A#

TriLinearInterpolator#

class dipy.core.interpolation.TriLinearInterpolator(data, voxel_size)#

Bases: Interpolator

Interpolates data using trilinear interpolation

Interpolates a 4D diffusion volume using 3 indices, i.e. data[x, y, z].

deprecate_with_version#

dipy.core.interpolation.deprecate_with_version(message, *, since='', until='', version_comparator=<function cmp_pkg_version>, warn_class=<class 'DeprecationWarning'>, error_class=<class 'dipy.utils.deprecator.ExpiredDeprecationError'>)[source]#

Return a decorator function for deprecation warning / error.

The decorated function / method will:

  • Raise the given warning_class warning when the function / method gets called, up to (and including) version until (if specified);

  • Raise the given error_class error when the function / method gets called, when the package version is greater than version until (if specified).

Parameters:
message : str

Message explaining deprecation, giving possible alternatives.

since : str, optional

Released version at which object was first deprecated.

until : str, optional

Last released version at which this function will still raise a deprecation warning. Versions higher than this will raise an error.

version_comparator : callable

Callable accepting a string as argument, returning 1 if the string represents a higher version than encoded in the version_comparator, 0 if the version is equal, and -1 if the version is lower. For example, the version_comparator may compare the input version string to the current package version string.

warn_class : class, optional

Class of warning to generate for deprecation.

error_class : class, optional

Class of error to generate when version_comparator returns 1 for a given argument of until.

Returns:
deprecator : func

Function returning a decorator.

interp_rbf#

dipy.core.interpolation.interp_rbf(data, sphere_origin, sphere_target, function='multiquadric', epsilon=None, smooth=0.1, norm='angle')#

Interpolate data on the sphere, using radial basis functions.

dipy.core.interpolation.interp_rbf is deprecated. Please use dipy.core.interpolation.rbf_interpolation instead.

  • deprecated from version: 1.10.0

  • Will raise <class ‘dipy.utils.deprecator.ExpiredDeprecationError’> as of version: 1.12.0

Parameters:
data : (N,) ndarray

Function values on the unit sphere.

sphere_origin : Sphere

Positions of data values.

sphere_target : Sphere

M target positions for which to interpolate.

function : {‘multiquadric’, ‘inverse’, ‘gaussian’}

Radial basis function.

epsilon : float

Radial basis function spread parameter. Defaults to approximate average distance between nodes (a good start).

smooth : float

Values greater than zero increase the smoothness of the approximation, with 0 as pure interpolation. Default: 0.1

norm : str

A string indicating the function that returns the “distance” between two points. ‘angle’ - the angle between two vectors. ‘euclidean_norm’ - the Euclidean distance.

Returns:
v : (M,) ndarray

Interpolated values.

See also

scipy.interpolate.RBFInterpolator

interpolate_scalar_2d#

dipy.core.interpolation.interpolate_scalar_2d(image, locations)#

Bilinear interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_2d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with bilinear interpolation.

Parameters:
field : array, shape (S, R)

the 2D image to be interpolated

locations : array, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns:
out : array, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

inside : array, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0
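Since the docstring states equivalence with scipy.ndimage.map_coordinates at bilinear order, the interpolation behavior can be reproduced directly (note that scipy expects one coordinate array per axis, hence the transpose):

```python
import numpy as np
from scipy.ndimage import map_coordinates

image = np.arange(12, dtype=float).reshape(3, 4)
locations = np.array([[0.5, 0.5],    # between four pixels
                      [1.0, 2.0],    # exactly on a pixel
                      [-5.0, 0.0]])  # outside the image -> 0
out = map_coordinates(image, locations.T, order=1, cval=0.0)
```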

interpolate_scalar_3d#

dipy.core.interpolation.interpolate_scalar_3d(image, locations)#

Trilinear interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_3d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with trilinear interpolation.

Parameters:
field : array, shape (S, R, C)

the 3D image to be interpolated

locations : array, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns:
out : array, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

inside : array, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_nn_2d#

dipy.core.interpolation.interpolate_scalar_nn_2d(image, locations)#

Nearest neighbor interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_2d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation.

Parameters:
imagearray, shape (S, R)

the 2D image to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns:
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_nn_3d#

dipy.core.interpolation.interpolate_scalar_nn_3d(image, locations)#

Nearest neighbor interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_3d for testing purposes; it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation.

Parameters:
imagearray, shape (S, R, C)

the 3D image to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns:
outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_vector_2d#

dipy.core.interpolation.interpolate_vector_2d(field, locations)#

Bilinear interpolation of a 2D vector field

Interpolates the 2D vector field at the given locations. This function is a wrapper for _interpolate_vector_2d for testing purposes; it is equivalent to using scipy.ndimage.map_coordinates with bilinear interpolation at each vector component.

Parameters:
fieldarray, shape (S, R, 2)

the 2D vector field to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the vector field at

Returns:
outarray, shape (n, 2)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0) if locations[i,:] is outside the field

insidearray, (n,)

if (locations[i,0], locations[i,1]) is inside the vector field then inside[i]=1, else inside[i]=0

interpolate_vector_3d#

dipy.core.interpolation.interpolate_vector_3d(field, locations)#

Trilinear interpolation of a 3D vector field

Interpolates the 3D vector field at the given locations. This function is a wrapper for _interpolate_vector_3d for testing purposes; it is equivalent to using scipy.ndimage.map_coordinates with trilinear interpolation at each vector component.

Parameters:
fieldarray, shape (S, R, C, 3)

the 3D vector field to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the vector field at

Returns:
outarray, shape (n, 3)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0,0) if locations[i,:] is outside the field

insidearray, (n,)

if locations[i,:] is inside the vector field then inside[i]=1, else inside[i]=0

map_coordinates_trilinear_iso#

dipy.core.interpolation.map_coordinates_trilinear_iso(data, points, data_strides, len_points, result)#

Trilinear interpolation (isotropic voxel size)

Has similar behavior to map_coordinates from scipy.ndimage

Parameters:
dataarray, float64 shape (X, Y, Z)
pointsarray, float64 shape(N, 3)
data_stridesarray npy_intp shape (3,)

Strides sequence for data array

len_pointscnp.npy_intp

Number of points to interpolate

resultarray, float64 shape(N)

The output array. This array should be initialized before you call this function. On exit it will contain the interpolated values from data at points given by points.

Returns:
None

Notes

The output array result is filled in-place.

nearestneighbor_interpolate#

dipy.core.interpolation.nearestneighbor_interpolate(data, point)#

rbf_interpolation#

dipy.core.interpolation.rbf_interpolation(data, sphere_origin, sphere_target, *, function='multiquadric', epsilon=None, smoothing=0.1)#

Interpolate data on the sphere, using radial basis functions, where data can be scalar- (1D), vector- (2D), or tensor-valued (3D and beyond).

Parameters:
data(…, N) ndarray

Values of the spherical function evaluated at the N positions specified by sphere_origin.

sphere_originSphere

N positions on the unit sphere where the spherical function is evaluated.

sphere_targetSphere

M positions on the unit sphere where the spherical function is interpolated.

function: str, optional

Radial basis function. Possible values: {‘linear’, ‘thin_plate_spline’, ‘cubic’, ‘quintic’, ‘multiquadric’, ‘inverse_multiquadric’, ‘inverse_quadratic’, ‘gaussian’}.

epsilonfloat, optional

Radial basis function spread parameter. Defaults to 1 when function is ‘linear’, ‘thin_plate_spline’, ‘cubic’, or ‘quintic’. Otherwise, epsilon must be specified.

smoothingfloat, optional

Smoothing parameter. When smoothing is 0, the interpolation is exact. As smoothing increases, the interpolation approaches a least-squares fit of data using the supplied radial basis function. Default: 0.

Returns:
v(…, M) ndarray

Interpolated values of the spherical function at M positions specified by sphere_target.

See also

scipy.interpolate.RBFInterpolator

trilinear_interp#

dipy.core.interpolation.trilinear_interp(data, index, voxel_size)#

Interpolates vector from 4D data at 3D point given by index

Interpolates a vector of length T from a 4D volume of shape (I, J, K, T), given point (x, y, z) where (x, y, z) are the coordinates of the point in real units (not yet adjusted for voxel size).

trilinear_interpolate4d#

dipy.core.interpolation.trilinear_interpolate4d(data, point, out=None)#

Tri-linear interpolation along the last dimension of a 4d array

Parameters:
data4d array

Data to be interpolated.

point1d array (3,)

3 doubles representing a 3d point in space. If point has integer values [i, j, k], the result will be the same as data[i, j, k].

out1d array, optional

The output array for the result of the interpolation.

Returns:
out1d array

The result of interpolation.
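The integer-point property described above can be checked against a minimal NumPy sketch of trilinear interpolation along the last axis (trilinear_4d is an illustrative helper, not the DIPY function):

```python
import numpy as np

def trilinear_4d(data, point):
    # Sketch: weight the 8 neighboring voxels by trilinear coefficients,
    # interpolating every channel of the last axis at once.
    idx = np.floor(point).astype(int)
    frac = point - idx
    out = np.zeros(data.shape[-1])
    for corner in np.ndindex(2, 2, 2):
        w = 1.0
        pos = []
        for d in range(3):
            if corner[d]:
                w *= frac[d]
                pos.append(min(idx[d] + 1, data.shape[d] - 1))
            else:
                w *= 1.0 - frac[d]
                pos.append(idx[d])
        out += w * data[tuple(pos)]
    return out

data = np.random.default_rng(0).random((4, 4, 4, 2))
val = trilinear_4d(data, np.array([1.0, 2.0, 3.0]))
# At integer coordinates the result equals data[i, j, k]
```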

ndindex#

dipy.core.ndindex.ndindex(shape)[source]#

An N-dimensional iterator object to index arrays.

Given the shape of an array, an ndindex instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first.

Parameters:
shapetuple of ints

The dimensions of the array.

Examples

>>> from dipy.core.ndindex import ndindex
>>> shape = (3, 2, 1)
>>> for index in ndindex(shape):
...     print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)

ResetMixin#

class dipy.core.onetime.ResetMixin[source]#

Bases: object

A Mixin class to add a .reset() method to users of OneTimeProperty.

By default, auto attributes, once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid.

This class offers a .reset() method that users can call explicitly when they know the state of their objects may have changed and they want to ensure that all their special attributes are invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again.

Warning

If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0; sampling_interval and time are then auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would then be no way to break the circular dependency chains.

If this becomes a problem in practice (for our analyzer objects it isn’t, as they don’t have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor.

Methods

reset()

Reset all OneTimeProperty attributes that may have fired already.

Examples

>>> class A(ResetMixin):
...     def __init__(self,x=1.0):
...         self.x = x
...
...     @auto_attr
...     def y(self):
...         print('*** y computation executed ***')
...         return self.x / 2.0
...
>>> a = A(10)

About to access y twice, the second time no computation is done:

>>> a.y
*** y computation executed ***
5.0
>>> a.y
5.0

Changing x:

>>> a.x = 20

a.y doesn't change to 10, since it is a static attribute:

>>> a.y
5.0

We now reset a, and this will then force all auto attributes to recompute the next time we access them:

>>> a.reset()

About to access y twice again after reset():

>>> a.y
*** y computation executed ***
10.0
>>> a.y
10.0

reset()[source]#

Reset all OneTimeProperty attributes that may have fired already.

OneTimeProperty#

class dipy.core.onetime.OneTimeProperty(func)[source]#

Bases: object

A descriptor to make special properties that become normal attributes.

This is meant to be used mostly by the auto_attr decorator in this module.

auto_attr#

dipy.core.onetime.auto_attr(func)[source]#

Decorator to create OneTimeProperty attributes.

Parameters:
funcmethod

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

Examples

>>> class MagicProp:
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True

Optimizer#

class dipy.core.optimize.Optimizer(fun, x0, args=(), *, method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)[source]#

Bases: object

Attributes:
evolution
fopt
message
nfev
nit
xopt

Methods

print_summary

property evolution#
property fopt#
property message#
property nfev#
property nit#
print_summary()[source]#
property xopt#

SKLearnLinearSolver#

class dipy.core.optimize.SKLearnLinearSolver(*args, **kwargs)[source]#

Bases: object

Provide a sklearn-like uniform interface to algorithms that solve problems of the form: \(y = Ax\) for \(x\)

Sub-classes of SKLearnLinearSolver should provide a 'fit' method that has the following signature: SKLearnLinearSolver.fit(X, y), which would set an attribute SKLearnLinearSolver.coef_, with the shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)

Methods

fit(X, y)

Implement for all derived classes

predict(X)

Predict using the result of the model

abstract fit(X, y)[source]#

Implement for all derived classes

predict(X)[source]#

Predict using the result of the model

Parameters:
Xarray-like (n_samples, n_features)

Samples.

Returns:
Carray, shape = (n_samples,)

Predicted values.

NonNegativeLeastSquares#

class dipy.core.optimize.NonNegativeLeastSquares(*args, **kwargs)[source]#

Bases: SKLearnLinearSolver

A sklearn-like interface to scipy.optimize.nnls

Methods

fit(X, y)

Fit the NonNegativeLeastSquares linear model to data

predict(X)

Predict using the result of the model

fit(X, y)[source]#

Fit the NonNegativeLeastSquares linear model to data

Parameters:
Xarray-like (n_samples, n_features)

Samples.

yarray-like (n_samples,)

Target values.

PositiveDefiniteLeastSquares#

class dipy.core.optimize.PositiveDefiniteLeastSquares(m, *, A=None, L=None)[source]#

Bases: object

Methods

solve(design_matrix, measurements, *[, check])

Solve CVXPY problem

solve(design_matrix, measurements, *, check=False, **kwargs)[source]#

Solve CVXPY problem

Solve a CVXPY problem instance for a given design matrix and a given set of observations, and return the optimum.

Parameters:
design_matrixarray (n, m)

Design matrix.

measurementsarray (n)

Measurements.

checkboolean, optional

If True check whether the unconstrained optimization solution already satisfies the constraints, before running the constrained optimization. This adds overhead, but can avoid unnecessary constrained optimization calls.

kwargskeyword arguments

Arguments passed to the CVXPY solve method.

Returns:
harray (m)

Estimated optimum for problem variables \(h\).

spdot#

dipy.core.optimize.spdot(A, B)[source]#

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

Parameters:
A, Barrays of shape (m, n), (n, k)
Returns:
The matrix product AB. If both A and B are sparse, the result will be a
sparse matrix. Otherwise, a dense result is returned
See discussion here:
http://mail.scipy.org/pipermail/scipy-user/2010-November/027700.html

sparse_nnls#

dipy.core.optimize.sparse_nnls(y, X, *, momentum=1, step_size=0.01, non_neg=True, check_error_iter=10, max_error_checks=10, converge_on_sse=0.99)[source]#

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

Parameters:
y1-d array of shape (N)

The data. Needs to be dense.

Xndarray. May be either sparse or dense. Shape (N, M)

The regressors

momentumfloat, optional

The persistence of the gradient.

step_sizefloat, optional

The increment of parameter update in each iteration

non_negBoolean, optional

Whether to enforce non-negativity of the solution.

check_error_iterint, optional

How many rounds to run between error evaluation for convergence-checking.

max_error_checksint, optional

Don’t check errors more than this number of times if no improvement in r-squared is seen.

converge_on_ssefloat, optional

a percentage improvement in SSE that is required each time to say that things are still going well.

Returns:
h_bestThe best estimate of the parameters.
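The projected-gradient idea behind this solver can be illustrated with a dense NumPy sketch (no momentum or convergence checks; grad_descent_nnls is a hypothetical name, not the DIPY function):

```python
import numpy as np

def grad_descent_nnls(y, X, step_size=0.1, n_iter=2000):
    # Sketch: gradient descent on 0.5 * ||y - X h||^2, projecting the
    # iterate onto the non-negative orthant after every step.
    h = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ h - y)
        h = h - step_size * grad
        h[h < 0] = 0.0
    return h

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, 3.0])
h = grad_descent_nnls(y, X)
# h converges to the non-negative solution [2, 3]
```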

Profiler#

class dipy.core.profile.Profiler(*args, call=None)[source]#

Bases: object

Profile python/cython files or functions

If you are profiling cython code you need to add # cython: profile=True at the top of your .pyx file

and for the functions that you do not want to profile you can use this decorator in your cython files

@cython.profile(False)

Parameters:
callerfile or function call
argsfunction arguments
Attributes:
statsfunction, stats.print_stats(10) will print the 10 slowest functions

Methods

print_stats(*[, N])

Print stats for profiling

References

https://docs.cython.org/src/tutorial/profiling_tutorial.html https://docs.python.org/library/profile.html rkern/line_profiler

Examples

>>> from dipy.core.profile import Profiler
>>> import numpy as np
>>> p = Profiler(np.sum, np.random.rand(1000000, 3))
>>> fname = 'test.py'
>>> p = Profiler(fname)
>>> p.print_stats(10)
>>> p.print_stats('det')

print_stats(*, N=10)[source]#

Print stats for profiling

You can use it in all the different ways developed in pstats. For example, print_stats(10) will give you the 10 slowest calls, or print_stats('function_name') will give you the stats for all the calls with name 'function_name'.

Parameters:
Nstats.print_stats argument

WichmannHill2006#

dipy.core.rng.WichmannHill2006(*, ix=100001, iy=200002, iz=300003, it=400004)[source]#

Wichmann Hill random number generator.

See [4] for advice on generating many sequences for use together, and on alternative algorithms and codes

Parameters:
ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

it: int

Fourth seed value. Should not be null. (default 400004)

Returns:
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

References

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill2006() for i in range(N)]

WichmannHill1982#

dipy.core.rng.WichmannHill1982(*, ix=100001, iy=200002, iz=300003)[source]#

Algorithm AS 183 Appl. Statist. (1982) vol.31, no.2.

Returns a pseudo-random number rectangularly distributed between 0 and 1. The cycle length is 6.95E+12 (See page 123 of Applied Statistics (1984) vol.33), not as claimed in the original article.

ix, iy and iz should be set to integer values between 1 and 30000 before the first entry.

Integer arithmetic up to 5212632 is required.

Parameters:
ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

Returns:
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill1982() for i in range(N)]
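The 1982 generator is the classic AS 183 scheme; its well-known update can be sketched directly (a standalone re-implementation for illustration, not the DIPY code path):

```python
def wichmann_hill_step(ix, iy, iz):
    # Classic AS 183 update: three small linear congruential generators,
    # combined modulo 1 into a single uniform variate.
    ix = (171 * ix) % 30269
    iy = (172 * iy) % 30307
    iz = (170 * iz) % 30323
    r = (ix / 30269.0 + iy / 30307.0 + iz / 30323.0) % 1.0
    return ix, iy, iz, r

ix, iy, iz, r = wichmann_hill_step(100, 200, 300)
```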

LEcuyer#

dipy.core.rng.LEcuyer(*, s1=100001, s2=200002)[source]#

Return a LEcuyer random number generator.

Generate uniformly distributed random numbers using the 32-bit generator from figure 3 of:

L’Ecuyer, P. Efficient and portable combined random number generators, C.A.C.M., vol. 31, 742-749 & 774-?, June 1988.

The cycle length is claimed to be 2.30584E+18

Parameters:
s1: int

First seed value. Should not be null. (default 100001)

s2: int

Second seed value. Should not be null. (default 200002)

Returns:
r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.LEcuyer() for i in range(N)]

Sphere#

class dipy.core.sphere.Sphere(*, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)[source]#

Bases: object

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)
Parameters:
x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

Attributes:
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector

subdivide(*[, n])

Subdivides each face of the sphere into four new faces.

edges

faces

vertices

edges()[source]#
faces()[source]#
find_closest(xyz)[source]#

Find the index of the vertex in the Sphere closest to the input vector

Parameters:
xyzarray-like, 3 elements

A unit vector

Returns:
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

subdivide(*, n=1)[source]#

Subdivides each face of the sphere into four new faces.

New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].

      y
      /\
     /  \
   a/____\b
   /\    /\
  /  \  /  \
 /____\/____\
x      c     z
Parameters:
nint, optional

The number of subdivisions to perform.

Returns:
new_sphereSphere

The subdivided sphere.

vertices()[source]#
property x#
property y#
property z#

HemiSphere#

class dipy.core.sphere.HemiSphere(*, x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)[source]#

Bases: Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)
Parameters:
x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tolfloat

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

Attributes:
x
y
z

Methods

find_closest(xyz)

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

from_sphere(sphere, *[, tol])

Create instance from a Sphere

mirror()

Create a full Sphere from a HemiSphere

subdivide(*[, n])

Create a more subdivided HemiSphere

edges

faces

vertices

See also

Sphere
faces()[source]#
find_closest(xyz)[source]#

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters:
xyzarray-like, 3 elements

A unit vector

Returns:
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, *, tol=1e-05)[source]#

Create instance from a Sphere

mirror()[source]#

Create a full Sphere from a HemiSphere

subdivide(*, n=1)[source]#

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.

faces_from_sphere_vertices#

dipy.core.sphere.faces_from_sphere_vertices(vertices)[source]#

Triangulate a set of vertices on the sphere.

Parameters:
vertices(M, 3) ndarray

XYZ coordinates of vertices on the sphere.

Returns:
faces(N, 3) ndarray

Indices into vertices; forms triangular faces.

unique_edges#

dipy.core.sphere.unique_edges(faces, *, return_mapping=False)[source]#

Extract all unique edges from given triangular faces.

Parameters:
faces(N, 3) ndarray

Vertex indices forming triangular faces.

return_mappingbool

If true, a mapping to the edges of each face is returned.

Returns:
edges(N, 2) ndarray

Unique edges.

mapping(N, 3)

For each face, [x, y, z], a mapping to its edges [a, b, c].

      y
      /\
     /  \
   a/____\b
   /\    /\
  /  \  /  \
 /____\/____\
x      c     z
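A NumPy sketch of the edge extraction itself (without the face-to-edge mapping option; illustrative only):

```python
import numpy as np

faces = np.array([[0, 1, 2], [0, 2, 3]])
# Each face [x, y, z] contributes the edges (x, y), (y, z) and (x, z);
# sorting each pair lets np.unique discard duplicates shared by faces.
edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [0, 2]]])
edges = np.unique(np.sort(edges, axis=1), axis=0)
# Two triangles sharing the edge (0, 2) yield 5 unique edges
```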

unique_sets#

dipy.core.sphere.unique_sets(sets, *, return_inverse=False)[source]#

Remove duplicate sets.

Parameters:
setsarray (N, k)

N sets of size k.

return_inversebool

If True, also returns the indices of unique_sets that can be used to reconstruct sets (the original ordering of each set may not be preserved).

Returns:
unique_setsarray

Unique sets.

inversearray (N,)

The indices to reconstruct sets from unique_sets.

disperse_charges#

dipy.core.sphere.disperse_charges(hemi, iters, *, const=0.2)[source]#

Models electrostatic repulsion on the unit sphere

Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.

Parameters:
hemiHemiSphere

Points on a unit sphere.

itersint

Number of iterations to run.

constfloat

Using a smaller const could provide a more accurate result, but will need more iterations to converge.

Returns:
hemiHemiSphere

Distributed points on a unit sphere.

potentialndarray

The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.

Notes

This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and because the distance between the two charges will be zero, the result will be unstable.
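One iteration of such a scheme can be sketched as follows (a simplified illustration of the idea, not DIPY's exact update; repulsion_step is a hypothetical helper):

```python
import numpy as np

def repulsion_step(points, const=0.2):
    # Coulomb-like repulsion from every other charge and its antipode,
    # followed by re-projection onto the unit sphere.
    forces = np.zeros_like(points)
    for i in range(len(points)):
        for j in range(len(points)):
            if i == j:
                continue
            for other in (points[j], -points[j]):  # antipodal symmetry
                d = points[i] - other
                forces[i] += d / (d @ d) ** 1.5    # inverse-square force
    new = points + const * forces
    return new / np.linalg.norm(new, axis=1, keepdims=True)

pts = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts = repulsion_step(pts, const=0.05)
```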

fibonacci_sphere#

dipy.core.sphere.fibonacci_sphere(n_points, *, hemisphere=False, randomize=True, rng=None)[source]#

Generate points on the surface of a sphere using Fibonacci Spiral.

Parameters:
n_pointsint

The number of points to generate on the sphere surface.

hemispherebool, optional

If True, generate points only on the upper hemisphere. Default is False.

randomizebool, optional

If True, randomize the starting point on the sphere.

rngnp.random.Generator, optional

If None creates random generator in function.

Returns:
pointsndarray

An array of 3D points representing coordinates on the sphere surface.
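A hedged NumPy sketch of the golden-angle spiral construction (offsets, randomization and ordering may differ from the library's version; the helper name is illustrative):

```python
import numpy as np

def fibonacci_sphere_sketch(n_points):
    # Evenly spaced heights combined with golden-angle azimuth steps
    # give a roughly uniform covering of the sphere.
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n_points)
    z = 1.0 - 2.0 * (i + 0.5) / n_points
    theta = golden_angle * i
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta), z])

points = fibonacci_sphere_sketch(100)
```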

disperse_charges_alt#

dipy.core.sphere.disperse_charges_alt(init_pointset, iters, *, tol=0.001)[source]#

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

Parameters:
init_pointset(N, 3) ndarray

Points on a unit sphere.

itersint

Number of iterations to run.

tolfloat

Tolerance for the optimization.

Returns:
array-like (N, 3)

Distributed points on a unit sphere.

euler_characteristic_check#

dipy.core.sphere.euler_characteristic_check(sphere, *, chi=2)[source]#

Checks the Euler characteristic of a sphere

If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere. More generally, the function checks whether \(f - e + v = \chi\), where \(\chi\) is the Euler characteristic of the mesh.

  • Open chain (track) has \(\chi=1\)

  • Closed chain (loop) has \(\chi=0\)

  • Disk has \(\chi=1\)

  • Sphere has \(\chi=2\)

  • HemiSphere has \(\chi=1\)

Parameters:
sphereSphere

A Sphere instance with vertices, edges and faces attributes.

chiint, optional

The Euler characteristic of the mesh to be checked

Returns:
checkbool

True if the mesh has Euler characteristic \(\chi\)

Examples

>>> euler_characteristic_check(unit_octahedron)
True
>>> hemisphere = HemiSphere.from_sphere(unit_icosahedron)
>>> euler_characteristic_check(hemisphere, chi=1)
True
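The sphere case can be verified by hand with the counts of a unit octahedron:

```python
# A unit octahedron mesh has 8 triangular faces, 12 edges and 6 vertices
f, e, v = 8, 12, 6
chi = f - e + v  # Euler characteristic of a sphere mesh
```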

random_uniform_on_sphere#

dipy.core.sphere_stats.random_uniform_on_sphere(*, n=1, coords='xyz')[source]#

Random unit vectors from a uniform distribution on the sphere.

Parameters:
nint

Number of random vectors

coords{‘xyz’, ‘radians’, ‘degrees’}

‘xyz’ for cartesian form ‘radians’ for spherical form in rads ‘degrees’ for spherical form in degrees

Returns:
Xarray, shape (n,3) if coords=’xyz’ or shape (n,2) otherwise

Uniformly distributed vectors on the unit sphere.

Notes

The uniform distribution on the sphere, parameterized by spherical coordinates \((\theta, \phi)\), should verify \(\phi\sim U[0,2\pi]\), while \(z=\cos(\theta)\sim U[-1,1]\).

References

Examples

>>> from dipy.core.sphere_stats import random_uniform_on_sphere
>>> X = random_uniform_on_sphere(n=4, coords='radians')
>>> X.shape == (4, 2)
True
>>> X = random_uniform_on_sphere(n=4, coords='xyz')
>>> X.shape == (4, 3)
True
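The sampling rule from the Notes translates directly to NumPy (a standalone sketch of the 'xyz' case, not the DIPY implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
phi = rng.uniform(0.0, 2.0 * np.pi, n)  # azimuth ~ U[0, 2*pi]
z = rng.uniform(-1.0, 1.0, n)           # cos(theta) ~ U[-1, 1]
s = np.sqrt(1.0 - z ** 2)
X = np.column_stack([s * np.cos(phi), s * np.sin(phi), z])
```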

eigenstats#

dipy.core.sphere_stats.eigenstats(points, *, alpha=0.05)[source]#

Principal direction and confidence ellipse

Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.

Parameters:
pointsarray_like (N,3)

array of points on the sphere of radius 1 in \(\mathbb{R}^3\)

alphareal or None

1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95% coverage.

Returns:
centrevector (3,)

centre of ellipsoid

b1vector (2,)

lengths of semi-axes of ellipsoid

compare_orientation_sets#

dipy.core.sphere_stats.compare_orientation_sets(S, T)[source]#

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

Parameters:
Sarray, shape (m,d)

First set of vectors.

Tarray, shape (n,d)

Second set of vectors.

Returns:
max_mean_cosinefloat

Maximum mean cosine distance.

Examples

>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> compare_orientation_sets(S,T)
1.0

angular_similarity#

dipy.core.sphere_stats.angular_similarity(S, T)[source]#

Computes the cosine distance of the best match between points of two sets of vectors S and T

Parameters:
Sarray, shape (m,d)
Tarray, shape (n,d)
Returns:
max_cosine_distance:float

Examples

>>> import numpy as np
>>> from dipy.core.sphere_stats import angular_similarity
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> angular_similarity(S,T)
3.0
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
>>> angular_similarity(S,T)
2.7071067811865475
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0]])
>>> angular_similarity(S,T)
1.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,0,1]])
>>> angular_similarity(S,T)
0.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])

Now we use print to reduce the precision of the printed output (so the doctests don't detect unimportant differences)

>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0],[0,0,1]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187

create_unit_sphere#

dipy.core.subdivide_octahedron.create_unit_sphere(*, recursion_level=2)[source]#

Creates a unit sphere by subdividing a unit octahedron.

Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere.

Parameters:
recursion_levelint

Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \(4^{recursion\_level}+2\) vertices.

Returns:
Sphere

The unit sphere.

See also

create_unit_hemisphere, Sphere

create_unit_hemisphere#

dipy.core.subdivide_octahedron.create_unit_hemisphere(*, recursion_level=2)[source]#

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.

Parameters:
recursion_levelint

Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \((4^{recursion\_level}+2)/2\) vertices.

Returns:
HemiSphere

Half of a unit sphere.

See also

create_unit_sphere, Sphere, HemiSphere

cshift3D#

dipy.core.wavelet.cshift3D(x, m, d)[source]#

3D Circular Shift

Parameters:
x3D ndarray

N1 by N2 by N3 array

mint

amount of shift

dint

dimension of shift (d = 1,2,3)

Returns:
y3D ndarray

array x will be shifted by m samples down along dimension d
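For reference, NumPy's np.roll performs the same circular shift (with d = 1, 2, 3 mapping to axis = 0, 1, 2); whether a positive m shifts forward or backward here is an assumption about the library's sign convention:

```python
import numpy as np

x = np.arange(24).reshape(2, 3, 4)
# Circular shift by 2 samples along the third dimension
y = np.roll(x, 2, axis=2)
```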

permutationinverse#

dipy.core.wavelet.permutationinverse(perm)[source]#

Function generating inverse of the permutation

Parameters:
perm1D array
Returns:
inverse1D array

permutation inverse of the input
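The inverse can be computed with one fancy-indexing assignment (a standalone NumPy sketch):

```python
import numpy as np

perm = np.array([2, 0, 3, 1])
inverse = np.empty_like(perm)
inverse[perm] = np.arange(len(perm))  # inverse[perm[i]] = i
# Composing either way recovers the identity permutation
```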

afb3D_A#

dipy.core.wavelet.afb3D_A(x, af, d)[source]#

3D Analysis Filter Bank

(along one dimension only)

Parameters:
x3D ndarray
N1xN2xN3 matrix, where min(N1,N2,N3) > 2*length(filter)

(Ni are even)

af2D ndarray

analysis filters for the columns: af[:, 1] - lowpass filter; af[:, 2] - highpass filter

dint

dimension of filtering (d = 1, 2 or 3)

Returns:
lo1D array

lowpass subbands

hi1D array

highpass subbands

sfb3D_A#

dipy.core.wavelet.sfb3D_A(lo, hi, sf, d)[source]#

3D Synthesis Filter Bank

(along single dimension only)

Parameters:
lo1D array

lowpass subbands

hi1D array

highpass subbands

sf2D ndarray

synthesis filters

dint

dimension of filtering

Returns:
y3D ndarray

the N1xN2xN3 matrix

sfb3D#

dipy.core.wavelet.sfb3D(lo, hi, sf1, *, sf2=None, sf3=None)[source]#

3D Synthesis Filter Bank

Parameters:
lo1D array

lowpass subbands

hi1D array

highpass subbands

sfi2D ndarray

synthesis filters for dimension i

Returns:
y3D ndarray

output array

afb3D#

dipy.core.wavelet.afb3D(x, af1, *, af2=None, af3=None)[source]#

3D Analysis Filter Bank

Parameters:
x3D ndarray

N1 by N2 by N3 array, where 1) N1, N2, N3 are all even, 2) N1 >= 2*len(af1), 3) N2 >= 2*len(af2), 4) N3 >= 2*len(af3)

afi2D ndarray

analysis filters for dimension i: afi[:, 1] - lowpass filter; afi[:, 2] - highpass filter

Returns:
lo1D array

lowpass subband

hi1D array

highpass subbands, h[d]- d = 1..7

dwt3D#

dipy.core.wavelet.dwt3D(x, J, af)[source]#

3-D Discrete Wavelet Transform

Parameters:
x3D ndarray

N1 x N2 x N3 matrix 1) Ni all even 2) min(Ni) >= 2^(J-1)*length(af)

Jint

number of stages

af2D ndarray

analysis filters

Returns:
wcell array

wavelet coefficients

idwt3D#

dipy.core.wavelet.idwt3D(w, J, sf)[source]#

Inverse 3-D Discrete Wavelet Transform

Parameters:
wcell array

wavelet coefficients

Jint

number of stages

sf2D ndarray

synthesis filters

Returns:
y3D ndarray

output array