core#

Core objects

Module: core.geometry#

Utility functions for algebra, etc.

_TUPLE2AXES

Module-level dictionary constant used by euler_matrix; the docstring shown by introspection is inherited from the built-in dict type.

sphere2cart(r, theta, phi)

Spherical to Cartesian coordinates

cart2sphere(x, y, z)

Return angles for Cartesian 3D coordinates x, y, and z. See doc for sphere2cart for angle conventions and derivation of the formulae.

sph2latlon(theta, phi)

Convert spherical coordinates to latitude and longitude.

normalized_vector(vec[, axis])

Return vector divided by its Euclidean (L2) norm

vector_norm(vec[, axis, keepdims])

Return vector Euclidean (L2) norm

rodrigues_axis_rotation(r, theta)

Rodrigues formula

nearest_pos_semi_def(B)

Least squares positive semi-definite tensor estimation

sphere_distance(pts1, pts2[, radius, ...])

Distance across sphere surface between pts1 and pts2

cart_distance(pts1, pts2)

Cartesian distance between pts1 and pts2

vector_cosine(vecs1, vecs2)

Cosine of angle between two (sets of) vectors

lambert_equal_area_projection_polar(theta, phi)

Lambert Equal Area Projection from polar sphere to plane. Return positions in the (y1, y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

lambert_equal_area_projection_cart(x, y, z)

Lambert Equal Area Projection from cartesian vector to plane. Return positions in the \((y_1,y_2)\) plane corresponding to the directions of the vectors with cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161).

euler_matrix(ai, aj, ak[, axes])

Return homogeneous rotation matrix from Euler angles and axis sequence.

compose_matrix([scale, shear, angles, ...])

Return 4x4 transformation matrix from sequence of transformations.

decompose_matrix(matrix)

Return sequence of transformations from transformation matrix.

circumradius(a, b, c)

a, b and c are 3-dimensional vectors which are the vertices of a triangle.

vec2vec_rotmat(u, v)

rotation matrix from 2 unit vectors

compose_transformations(*mats)

Compose multiple 4x4 affine transformations in one 4x4 matrix

perpendicular_directions(v[, num, half])

Computes n evenly spaced perpendicular directions relative to a given vector v

dist_to_corner(affine)

Calculate the maximal distance from the center to a corner of a voxel, given an affine

is_hemispherical(vecs)

Test whether all points on a unit sphere lie in the same hemisphere.

Module: core.gradients#

GradientTable(gradients[, big_delta, ...])

Diffusion gradient information

logger

Instances of the Logger class represent a single logging channel.

unique_bvals(bvals[, bmag, rbvals])

Gives the unique rounded b-values of the data. Deprecated from version 1.2 in favour of dipy.core.gradients.unique_bvals_magnitude; raises dipy.utils.deprecator.ExpiredDeprecationError as of version 1.4.

b0_threshold_empty_gradient_message(bvals, ...)

Message about the b0_threshold value resulting in no gradient selection.

b0_threshold_update_slicing_message(slice_start)

Message for b0 threshold value update for slicing.

mask_non_weighted_bvals(bvals, b0_threshold)

Create a diffusion gradient-weighting mask for the b-values according to the provided b0 threshold value.

gradient_table_from_bvals_bvecs(bvals, bvecs)

Creates a GradientTable from a bvals array and a bvecs array

gradient_table_from_qvals_bvecs(qvals, ...)

A general function for creating diffusion MR gradients.

gradient_table_from_gradient_strength_bvecs(...)

A general function for creating diffusion MR gradients.

gradient_table(bvals[, bvecs, big_delta, ...])

A general function for creating diffusion MR gradients.

reorient_bvecs(gtab, affines[, atol])

Reorient the directions in a GradientTable.

generate_bvecs(N[, iters, rng])

Generates N bvectors.

round_bvals(bvals[, bmag])

"This function rounds the b-values Parameters ---------- bvals : ndarray Array containing the b-values bmag : int The order of magnitude to round the b-values.

unique_bvals_tolerance(bvals[, tol])

Gives the unique b-values of the data, within a tolerance gap

get_bval_indices(bvals, bval[, tol])

Get indices where the b-value is bval

unique_bvals_magnitude(bvals[, bmag, rbvals])

This function gives the unique rounded b-values of the data.

check_multi_b(gtab, n_bvals[, non_zero, bmag])

Check if you have enough different b-values in your gradient table.

btens_to_params(btens[, ztol])

Compute trace, anisotropy and asymmetry parameters from b-tensors.

params_to_btens(bval, bdelta, b_eta)

Compute b-tensor from trace, anisotropy and asymmetry parameters.

ornt_mapping(ornt1, ornt2)

Calculate the mapping needed to get from ornt1 to ornt2.

reorient_vectors(bvecs, current_ornt, new_ornt)

Change the orientation of gradients or other vectors.

reorient_on_axis(bvecs, current_ornt, new_ornt)

orientation_from_string(string_ornt)

Return an array representation of an ornt string.

orientation_to_string(ornt)

Return a string representation of a 3d ornt.

Module: core.graph#

A simple graph class

Graph()

A simple graph class

Module: core.histeq#

histeq(arr[, num_bins])

Performs a histogram equalization on arr.

Module: core.interpolation#

Interpolator(data, voxel_size)

Class to be subclassed by different interpolator types

NearestNeighborInterpolator(data, voxel_size)

Interpolates data using nearest neighbor interpolation

OutsideImage

TriLinearInterpolator(data, voxel_size)

Interpolates data using trilinear interpolation

interp_rbf(data, sphere_origin, sphere_target)

Interpolate data on the sphere, using radial basis functions.

interpolate_scalar_2d(image, locations)

Bilinear interpolation of a 2D scalar image

interpolate_scalar_3d(image, locations)

Trilinear interpolation of a 3D scalar image

interpolate_scalar_nn_2d(image, locations)

Nearest neighbor interpolation of a 2D scalar image

interpolate_scalar_nn_3d(image, locations)

Nearest neighbor interpolation of a 3D scalar image

interpolate_vector_2d(field, locations)

Bilinear interpolation of a 2D vector field

interpolate_vector_3d(field, locations)

Trilinear interpolation of a 3D vector field

map_coordinates_trilinear_iso(data, points, ...)

Trilinear interpolation (isotropic voxel size)

nearestneighbor_interpolate(data, point)

trilinear_interp(data, index, voxel_size)

Interpolates vector from 4D data at 3D point given by index

trilinear_interpolate4d(data, point[, out])

Tri-linear interpolation along the last dimension of a 4d array

Module: core.ndindex#

ndindex(shape)

An N-dimensional iterator object to index arrays.

Module: core.onetime#

Descriptor support for NIPY.

Copyright (c) 2006-2011, NIPY Developers All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright

    notice, this list of conditions and the following disclaimer.

  • Redistributions in binary form must reproduce the above

    copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

  • Neither the name of the NIPY Developers nor the names of any

    contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Utilities to support special Python descriptors [1,2], in particular the use of a useful pattern for properties we call ‘one time properties’. These are object attributes which are declared as properties, but become regular attributes once they’ve been read the first time. They can thus be evaluated later in the object’s life cycle, but once evaluated they become normal, static attributes with no function call overhead on access or any other constraints.

A special ResetMixin class is provided to add a .reset() method to users who may want to have their objects capable of resetting these computed properties to their ‘untriggered’ state.
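As a brief, hedged illustration of this pattern (assuming the auto_attr / ResetMixin behavior described above), an attribute decorated with auto_attr is computed on first access and then cached as a plain attribute until reset() is called:

>>> from dipy.core.onetime import ResetMixin, auto_attr
>>> class Magnitude(ResetMixin):
...     @auto_attr
...     def y(self):
...         # expensive computation happens only on first access
...         print("computing y")
...         return 42
>>> m = Magnitude()
>>> m.y          # first access triggers the computation
computing y
42
>>> m.y          # now a plain attribute; no recomputation
42
>>> m.reset()    # back to the 'untriggered' state
>>> m.y
computing y
42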

References#

[1] How-To Guide for Descriptors, Raymond Hettinger. http://users.rcn.com/python/download/Descriptor.htm

[2] Python data model, https://docs.python.org/reference/datamodel.html

ResetMixin()

A Mixin class to add a .reset() method to users of OneTimeProperty.

OneTimeProperty(func)

A descriptor to make special properties that become normal attributes.

auto_attr(func)

Decorator to create OneTimeProperty attributes.

Module: core.optimize#

A unified interface for performing and debugging optimization problems.

Optimizer(fun, x0[, args, method, jac, ...])

SKLearnLinearSolver(*args, **kwargs)

Provide a sklearn-like uniform interface to algorithms that solve problems of the form \(y = Ax\) for \(x\). Sub-classes of SKLearnLinearSolver should provide a 'fit' method with the signature SKLearnLinearSolver.fit(X, y), which sets an attribute SKLearnLinearSolver.coef_ of shape (X.shape[1],), such that an estimate of y can be calculated as y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)

NonNegativeLeastSquares(*args, **kwargs)

A sklearn-like interface to scipy.optimize.nnls

PositiveDefiniteLeastSquares(m[, A, L])

spdot(A, B)

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

sparse_nnls(y, X[, momentum, step_size, ...])

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

Module: core.profile#

Class for profiling cython code

Profiler([call])

Profile python/cython files or functions

Module: core.rng#

Random number generation utilities.

WichmannHill2006([ix, iy, iz, it])

Wichmann Hill (2006) random number generator.

WichmannHill1982([ix, iy, iz])

Algorithm AS 183 from Applied Statistics (1982).

LEcuyer([s1, s2])

Return a LEcuyer random number generator.

Module: core.sphere#

Sphere([x, y, z, theta, phi, xyz, faces, edges])

Points on the unit sphere.

HemiSphere([x, y, z, theta, phi, xyz, ...])

Points on the unit sphere.

faces_from_sphere_vertices(vertices)

Triangulate a set of vertices on the sphere.

unique_edges(faces[, return_mapping])

Extract all unique edges from given triangular faces.

unique_sets(sets[, return_inverse])

Remove duplicate sets.

disperse_charges(hemi, iters[, const])

Models electrostatic repulsion on the unit sphere

fibonacci_sphere(n_points[, randomize])

Generate points on the surface of a sphere using Fibonacci Spiral.

disperse_charges_alt(init_pointset, iters[, tol])

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

euler_characteristic_check(sphere[, chi])

Checks the Euler characteristic of a sphere. If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere.

octahedron_vertices

Vertex coordinates of the unit octahedron (ndarray).

octahedron_faces

Face (triangle) indices of the unit octahedron (ndarray).

icosahedron_vertices

Vertex coordinates of the unit icosahedron (ndarray).

icosahedron_faces

Face (triangle) indices of the unit icosahedron (ndarray).

unit_octahedron

Points on the unit sphere.

unit_icosahedron

Points on the unit sphere.

hemi_icosahedron

Points on the unit sphere.

Module: core.sphere_stats#

Statistics on spheres

random_uniform_on_sphere([n, coords])

Random unit vectors from a uniform distribution on the sphere.

eigenstats(points[, alpha])

Principal direction and confidence ellipse. Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.

compare_orientation_sets(S, T)

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

angular_similarity(S, T)

Computes the cosine distance of the best match between points of two sets of vectors S and T

Module: core.subdivide_octahedron#

Create a unit sphere by subdividing all triangles of an octahedron recursively.

The unit sphere has a radius of 1, which also means that all points in this sphere (assumed to have centre at [0, 0, 0]) have an absolute value (modulus) of 1. Another feature of the unit sphere is that the unit normals of this sphere are exactly the same as the vertices.

This recursive method will avoid the common problem of the polar singularity, produced by 2d (lon-lat) parameterization methods.

create_unit_sphere([recursion_level])

Creates a unit sphere by subdividing a unit octahedron.

create_unit_hemisphere([recursion_level])

Creates a unit sphere by subdividing a unit octahedron, returns half the sphere.
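A minimal hedged check of create_unit_sphere, based on the properties stated above (all vertices of the returned Sphere lie at unit distance from the origin); the recursion_level keyword follows the signature listed above:

>>> import numpy as np
>>> from dipy.core.subdivide_octahedron import create_unit_sphere
>>> sphere = create_unit_sphere(recursion_level=2)
>>> bool(np.allclose(np.linalg.norm(sphere.vertices, axis=1), 1))
True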

Module: core.wavelet#

cshift3D(x, m, d)

3D Circular Shift

permutationinverse(perm)

Function generating inverse of the permutation

afb3D_A(x, af, d)

3D Analysis Filter Bank

sfb3D_A(lo, hi, sf, d)

3D Synthesis Filter Bank

sfb3D(lo, hi, sf1[, sf2, sf3])

3D Synthesis Filter Bank

afb3D(x, af1[, af2, af3])

3D Analysis Filter Bank

dwt3D(x, J, af)

3-D Discrete Wavelet Transform

idwt3D(w, J, sf)

Inverse 3-D Discrete Wavelet Transform

_TUPLE2AXES#

dipy.core.geometry._TUPLE2AXES()#

Module-level dictionary constant used by euler_matrix to validate axis-sequence arguments. The docstring reported by introspection is inherited from the built-in dict type:

dict() -> new empty dictionary

dict(mapping) -> new dictionary initialized from a mapping object's (key, value) pairs

dict(iterable) -> new dictionary initialized as if via: d = {}; for k, v in iterable: d[k] = v

dict(**kwargs) -> new dictionary initialized with the name=value pairs in the keyword argument list. For example: dict(one=1, two=2)

sphere2cart#

dipy.core.geometry.sphere2cart(r, theta, phi)#

Spherical to Cartesian coordinates

This is the standard physics convention where theta is the inclination (polar) angle, and phi is the azimuth angle.

Imagine a sphere with center (0,0,0). Orient it with the z axis running south-north, the y axis running west-east and the x axis from posterior to anterior. theta (the inclination angle) is the angle to rotate from the z-axis (the zenith) around the y-axis, towards the x axis. Thus the rotation is counter-clockwise from the point of view of positive y. phi (azimuth) gives the angle of rotation around the z-axis towards the y axis. The rotation is counter-clockwise from the point of view of positive z.

Equivalently, given a point P on the sphere, with coordinates x, y, z, theta is the angle between P and the z-axis, and phi is the angle between the projection of P onto the XY plane, and the X axis.

Geographical nomenclature designates theta as ‘co-latitude’, and phi as ‘longitude’

Parameters#

r : array_like

radius

theta : array_like

inclination or polar angle

phi : array_like

azimuth angle

Returns#

x : array

x coordinate(s) in Cartesian space

y : array

y coordinate(s) in Cartesian space

z : array

z coordinate

Notes#

See the Wikipedia page on the spherical coordinate system for an excellent discussion of the many different conventions possible. Here we use the physics convention, as used on that page.

Derivations of the formulae are simple. Consider a vector x, y, z of length r (norm of x, y, z). The inclination angle (theta) can be found from: cos(theta) == z / r -> z == r * cos(theta). This gives the hypotenuse of the projection onto the XY plane, which we will call Q. Q == r*sin(theta). Now x / Q == cos(phi) -> x == r * sin(theta) * cos(phi) and so on.

We have deliberately named this function sphere2cart rather than sph2cart to distinguish it from the Matlab function of that name, because the Matlab function uses an unusual convention for the angles that we did not want to replicate. The Matlab function is trivial to implement with the formulae given in the Matlab help.
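A small hedged example of the convention described above (not from the original docstring): with inclination theta = pi/2 and azimuth phi = 0, a unit-radius point lands on the positive x axis.

>>> import numpy as np
>>> from dipy.core.geometry import sphere2cart
>>> x, y, z = sphere2cart(1, np.pi / 2, 0)
>>> bool(np.allclose([x, y, z], [1, 0, 0]))
True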

cart2sphere#

dipy.core.geometry.cart2sphere(x, y, z)#

Return angles for Cartesian 3D coordinates x, y, and z.

See doc for sphere2cart for angle conventions and derivation of the formulae. \(0\le\theta\mathrm{(theta)}\le\pi\) and \(-\pi\le\phi\mathrm{(phi)}\le\pi\)

Parameters#

x : array_like

x coordinate in Cartesian space

y : array_like

y coordinate in Cartesian space

z : array_like

z coordinate

Returns#

r : array

radius

theta : array

inclination (polar) angle

phi : array

azimuth angle
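As a hedged round-trip check of the conventions above (illustrative, not from the original docstring), converting a Cartesian point to spherical coordinates and back should recover the original point:

>>> import numpy as np
>>> from dipy.core.geometry import cart2sphere, sphere2cart
>>> r, theta, phi = cart2sphere(1.0, 1.0, 1.0)
>>> bool(np.allclose(sphere2cart(r, theta, phi), [1.0, 1.0, 1.0]))
True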

sph2latlon#

dipy.core.geometry.sph2latlon(theta, phi)#

Convert spherical coordinates to latitude and longitude.

Returns#

lat, lon : ndarray

Latitude and longitude.

normalized_vector#

dipy.core.geometry.normalized_vector(vec, axis=-1)#

Return vector divided by its Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters#

vec : array_like shape (3,)

Returns#

nvec : array, shape (3,)

vector divided by L2 norm

Examples#

>>> vec = [1, 2, 3]
>>> l2n = np.sqrt(np.dot(vec, vec))
>>> nvec = normalized_vector(vec)
>>> np.allclose(np.array(vec) / l2n, nvec)
True
>>> vec = np.array([[1, 2, 3]])
>>> vec.shape == (1, 3)
True
>>> normalized_vector(vec).shape == (1, 3)
True

vector_norm#

dipy.core.geometry.vector_norm(vec, axis=-1, keepdims=False)#

Return vector Euclidean (L2) norm

See unit vector and Euclidean norm

Parameters#

vec : array_like

Vectors to norm.

axis : int

Axis over which to norm. By default norm over last axis. If axis is None, vec is flattened then normed.

keepdims : bool

If True, the output will have the same number of dimensions as vec, with shape 1 on axis.

Returns#

norm : array

Euclidean norms of vectors.

Examples#

>>> import numpy as np
>>> vec = [[8, 15, 0], [0, 36, 77]]
>>> vector_norm(vec)
array([ 17.,  85.])
>>> vector_norm(vec, keepdims=True)
array([[ 17.],
       [ 85.]])
>>> vector_norm(vec, axis=0)
array([  8.,  39.,  77.])

rodrigues_axis_rotation#

dipy.core.geometry.rodrigues_axis_rotation(r, theta)#

Rodrigues formula

Rotation matrix for rotation around axis r for angle theta.

The rotation matrix is given by the Rodrigues formula:

R = Id + sin(theta)*Sn + (1-cos(theta))*Sn^2

with:

       0  -nz  ny
Sn =   nz   0 -nx
      -ny  nx   0

where n = r / ||r||

In case the angle ||r|| is very small, the above formula may lead to numerical instabilities. We instead use a Taylor expansion around theta=0:

R = I + sin(theta)/theta * Sr + (1-cos(theta))/theta^2 * Sr^2

leading to:

R = I + (1 - theta^2/6)*Sr + (1/2 - theta^2/24)*Sr^2

Parameters#

r : array_like, shape (3,)

axis

theta : float

angle, in degrees

Returns#

R : array, shape (3,3), rotation matrix

Examples#

>>> import numpy as np
>>> from dipy.core.geometry import rodrigues_axis_rotation
>>> v=np.array([0,0,1])
>>> u=np.array([1,0,0])
>>> R=rodrigues_axis_rotation(v,40)
>>> ur=np.dot(R,u)
>>> np.round(np.rad2deg(np.arccos(np.dot(ur,u))))
40.0

nearest_pos_semi_def#

dipy.core.geometry.nearest_pos_semi_def(B)#

Least squares positive semi-definite tensor estimation

Parameters#

B : (3,3) array_like

B matrix - symmetric. We do not check the symmetry.

Returns#

npds : (3,3) array

Estimated nearest positive semi-definite array to matrix B.

Examples#

>>> B = np.diag([1, 1, -1])
>>> nearest_pos_semi_def(B)
array([[ 0.75,  0.  ,  0.  ],
       [ 0.  ,  0.75,  0.  ],
       [ 0.  ,  0.  ,  0.  ]])


sphere_distance#

dipy.core.geometry.sphere_distance(pts1, pts2, radius=None, check_radius=True)#

Distance across sphere surface between pts1 and pts2

Parameters#

pts1 : (N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2 : (N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

radius : None or float, optional

Radius of sphere. Default is to work out radius from mean of the length of each point vector

check_radius : bool, optional

If True, check if the points are on the sphere surface - i.e check if the vector lengths in pts1 and pts2 are close to radius. Default is True.

Returns#

d : (N,) or (0,) array

Distances between corresponding points in pts1 and pts2 across the spherical surface, i.e. the great circle distance

See Also#

cart_distance : cartesian distance between points vector_cosine : cosine of angle between vectors

Examples#

>>> print('%.4f' % sphere_distance([0,1],[1,0]))
1.5708
>>> print('%.4f' % sphere_distance([0,3],[3,0]))
4.7124

cart_distance#

dipy.core.geometry.cart_distance(pts1, pts2)#

Cartesian distance between pts1 and pts2

If either of pts1 or pts2 is 2D, then we take the first dimension to index points, and the second indexes coordinate. More generally, we take the last dimension to be the coordinate dimension.

Parameters#

pts1 : (N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D)

pts2 : (N,R) or (R,) array_like

where N is the number of points and R is the number of coordinates defining a point (R==3 for 3D). It should be possible to broadcast pts1 against pts2

Returns#

d : (N,) or (0,) array

Cartesian distances between corresponding points in pts1 and pts2

See Also#

sphere_distance : distance between points on sphere surface

Examples#

>>> cart_distance([0,0,0], [0,0,3])
3.0

vector_cosine#

dipy.core.geometry.vector_cosine(vecs1, vecs2)#

Cosine of angle between two (sets of) vectors

The cosine of the angle between two vectors v1 and v2 is given by the inner product of v1 and v2 divided by the product of the vector lengths:

v_cos = np.inner(v1, v2) / (np.sqrt(np.sum(v1**2)) *
                            np.sqrt(np.sum(v2**2)))

Parameters#

vecs1 : (N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements.

vecs2 : (N, R) or (R,) array_like

N vectors (as rows) or single vector. Vectors have R elements. It should be possible to broadcast vecs1 against vecs2

Returns#

vcos : (N,) or (0,) array

Vector cosines. To get the angles you will need np.arccos

Notes#

The vector cosine will be the same as the correlation only if all the input vectors have zero mean.
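A hedged example of the formula above (illustrative, not from the original docstring): orthogonal vectors have a cosine of zero, and parallel vectors a cosine of one.

>>> import numpy as np
>>> from dipy.core.geometry import vector_cosine
>>> bool(np.allclose(vector_cosine([1, 0, 0], [0, 1, 0]), 0))
True
>>> bool(np.allclose(vector_cosine([1, 2, 3], [2, 4, 6]), 1))
True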

lambert_equal_area_projection_polar#

dipy.core.geometry.lambert_equal_area_projection_polar(theta, phi)#

Lambert Equal Area Projection from polar sphere to plane.

Return positions in the (y1, y2) plane corresponding to the points with polar coordinates (theta, phi) on the unit sphere, under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161). See doc for sphere2cart for angle conventions:

  • \(0 \le \theta \le \pi\) and \(0 \le \phi \le 2 \pi\)

  • \(|(y_1,y_2)| \le 2\)

The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa.

Parameters#

theta : array_like

theta spherical coordinates

phi : array_like

phi spherical coordinates

Returns#

y : (N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

lambert_equal_area_projection_cart#

dipy.core.geometry.lambert_equal_area_projection_cart(x, y, z)#

Lambert Equal Area Projection from cartesian vector to plane.

Return positions in the \((y_1,y_2)\) plane corresponding to the directions of the vectors with cartesian coordinates xyz under the Lambert Equal Area Projection mapping (see Mardia and Jupp (2000), Directional Statistics, p. 161). The Lambert EAP maps the upper hemisphere to the planar disc of radius 1 and the lower hemisphere to the planar annulus between radii 1 and 2, and vice versa. See doc for sphere2cart for angle conventions.

Parameters#

x : array_like

x coordinate in Cartesian space

y : array_like

y coordinate in Cartesian space

z : array_like

z coordinate

Returns#

y : (N,2) array

planar coordinates of points following mapping by Lambert’s EAP.

euler_matrix#

dipy.core.geometry.euler_matrix(ai, aj, ak, axes='sxyz')#

Return homogeneous rotation matrix from Euler angles and axis sequence.

Code modified from the work of Christoph Gohlke (cgohlke/transformations).

Parameters#

ai, aj, ak : Euler’s roll, pitch and yaw angles

axes : One of 24 axis sequences as string or encoded tuple

Returns#

matrix : ndarray (4, 4)


Examples#

>>> import math
>>> import numpy
>>> from dipy.core.geometry import euler_matrix, _AXES2TUPLE, _TUPLE2AXES
>>> R = euler_matrix(1, 2, 3, 'syxz')
>>> numpy.allclose(numpy.sum(R[0]), -1.34786452)
True
>>> R = euler_matrix(1, 2, 3, (0, 1, 0, 1))
>>> numpy.allclose(numpy.sum(R[0]), -0.383436184)
True
>>> ai, aj, ak = (4.0*math.pi) * (numpy.random.random(3) - 0.5)
>>> for axes in _AXES2TUPLE.keys():
...    _ = euler_matrix(ai, aj, ak, axes)
>>> for axes in _TUPLE2AXES.keys():
...    _ = euler_matrix(ai, aj, ak, axes)

compose_matrix#

dipy.core.geometry.compose_matrix(scale=None, shear=None, angles=None, translate=None, perspective=None)#

Return 4x4 transformation matrix from sequence of transformations.

Code modified from the work of Christoph Gohlke (cgohlke/transformations).

This is the inverse of the decompose_matrix function.

Parameters#

scale : (3,) array_like

Scaling factors.

shear : array_like

Shear factors for x-y, x-z, y-z axes.

angles : array_like

Euler angles about static x, y, z axes.

translate : array_like

Translation vector along x, y, z axes.

perspective : array_like

Perspective partition of matrix.

Returns#

matrix : 4x4 array

Examples#

>>> import math
>>> import numpy as np
>>> import dipy.core.geometry as gm
>>> scale = np.random.random(3) - 0.5
>>> shear = np.random.random(3) - 0.5
>>> angles = (np.random.random(3) - 0.5) * (2*math.pi)
>>> trans = np.random.random(3) - 0.5
>>> persp = np.random.random(4) - 0.5
>>> M0 = gm.compose_matrix(scale, shear, angles, trans, persp)

decompose_matrix#

dipy.core.geometry.decompose_matrix(matrix)#

Return sequence of transformations from transformation matrix.

Code modified from the excellent work of Christoph Gohlke (cgohlke/transformations).

Parameters#

matrix : array_like

Non-degenerate homogeneous transformation matrix

Returns#

scale : (3,) ndarray

Three scaling factors.

shear : (3,) ndarray

Shear factors for x-y, x-z, y-z axes.

angles : (3,) ndarray

Euler angles about static x, y, z axes.

translate : (3,) ndarray

Translation vector along x, y, z axes.

perspective : ndarray

Perspective partition of matrix.

Raises#

ValueError

If matrix is of wrong type or degenerate.

Examples#

>>> import numpy as np
>>> T0=np.diag([2,1,1,1])
>>> scale, shear, angles, trans, persp = decompose_matrix(T0)

circumradius#

dipy.core.geometry.circumradius(a, b, c)#

a, b and c are 3-dimensional vectors which are the vertices of a triangle. The function returns the circumradius of the triangle, i.e the radius of the smallest circle that can contain the triangle. In the degenerate case when the 3 points are collinear it returns half the distance between the furthest apart points.

Parameters#

a, b, c : (3,) array_like

the three vertices of the triangle

Returns#

circumradius : float

the desired circumradius
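A hedged example (not from the original docstring): for a right triangle the circumradius is half the hypotenuse, so a triangle with unit legs along x and y has circumradius sqrt(2)/2.

>>> import numpy as np
>>> from dipy.core.geometry import circumradius
>>> a = np.array([0.0, 0.0, 0.0])
>>> b = np.array([1.0, 0.0, 0.0])
>>> c = np.array([0.0, 1.0, 0.0])
>>> bool(np.allclose(circumradius(a, b, c), np.sqrt(2) / 2))
True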

vec2vec_rotmat#

dipy.core.geometry.vec2vec_rotmat(u, v)#

rotation matrix from 2 unit vectors

u, v being unit 3d vectors, return a 3x3 rotation matrix R that aligns u to v.

In general there are many rotations that will map u to v. If S is any rotation using v as an axis then S.R will also map u to v since (S.R)u = S(Ru) = Sv = v. The rotation R returned by vec2vec_rotmat leaves fixed the perpendicular to the plane spanned by u and v.

The transpose of R will align v to u.

Parameters#

u : array, shape (3,)

v : array, shape (3,)

Returns#

R : array, shape(3,3)

Examples#

>>> import numpy as np
>>> from dipy.core.geometry import vec2vec_rotmat
>>> u=np.array([1,0,0])
>>> v=np.array([0,1,0])
>>> R=vec2vec_rotmat(u,v)
>>> np.dot(R,u)
array([ 0.,  1.,  0.])
>>> np.dot(R.T,v)
array([ 1.,  0.,  0.])

compose_transformations#

dipy.core.geometry.compose_transformations(*mats)#

Compose multiple 4x4 affine transformations in one 4x4 matrix

Parameters#

mat1 : array, (4, 4)

mat2 : array, (4, 4)

…

matN : array, (4, 4)

Returns#

matN x … x mat2 x mat1 : array, (4, 4)
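A hedged example composing two pure translations (for which the order does not matter), based on the matN x … x mat1 convention stated above; not from the original docstring.

>>> import numpy as np
>>> from dipy.core.geometry import compose_transformations
>>> A = np.eye(4); A[:3, 3] = [1, 0, 0]   # translate by +1 along x
>>> B = np.eye(4); B[:3, 3] = [0, 2, 0]   # translate by +2 along y
>>> C = compose_transformations(A, B)
>>> bool(np.allclose(C[:3, 3], [1, 2, 0]))
True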

perpendicular_directions#

dipy.core.geometry.perpendicular_directions(v, num=30, half=False)#

Computes n evenly spaced perpendicular directions relative to a given vector v

Parameters#

v : array (3,)

Array containing the three cartesian coordinates of vector v

num : int, optional

Number of perpendicular directions to generate

half : bool, optional

If half is True, perpendicular directions are sampled on half of the unit circumference perpendicular to v, otherwise perpendicular directions are sampled on the full circumference. Default of half is False

Returns#

psamples : array (n, 3)

array of vectors perpendicular to v

Notes#

Perpendicular directions are estimated using the following two step procedure:

1) the perpendicular directions are first sampled in a unit circumference parallel to the plane normal to the x-axis.

2) Samples are then rotated and aligned to the plane normal to vector v. The rotational matrix for this rotation is constructed as reference frame basis which axis are the following:

  • The first axis is vector v.

  • The second axis is defined as the normalized vector given by the cross product between vector v and the unit vector aligned to the x-axis.

  • The third axis is defined as the cross product between the previously computed vector and vector v.

Following this two steps, coordinates of the final perpendicular directions are given as:

\[\left [ -\sin(a_{i}) \sqrt{{v_{y}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{x}v_{y}\sin(a_{i})-v_{z}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \; , \; \frac{v_{x}v_{z}\sin(a_{i})-v_{y}\cos(a_{i})} {\sqrt{{v_{y}}^{2}+{v_{z}}^{2}}} \right ]\]

This procedure has a singularity when vector v is aligned to the x-axis. To solve this singularity, perpendicular directions in procedure’s step 1 are defined in the plane normal to y-axis and the second axis of the rotated frame of reference is computed as the normalized vector given by the cross product between vector v and the unit vector aligned to the y-axis. Following this, the coordinates of the perpendicular directions are given as:

\[\left [ -\frac{\left (v_{x}v_{y}\sin(a_{i})+v_{z}\cos(a_{i}) \right )}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \; , \; \sin(a_{i}) \sqrt{{v_{x}}^{2}+{v_{z}}^{2}} \; , \; \frac{v_{y}v_{z}\sin(a_{i})+v_{x}\cos(a_{i})}{\sqrt{{v_{x}}^{2}+{v_{z}}^{2}}} \right ]\]

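A hedged usage check based on the description above (illustrative, not from the original docstring): every returned direction should be orthogonal to v.

>>> import numpy as np
>>> from dipy.core.geometry import perpendicular_directions
>>> v = np.array([0.0, 0.0, 1.0])
>>> psamples = perpendicular_directions(v, num=4)
>>> psamples.shape == (4, 3)
True
>>> bool(np.allclose(np.dot(psamples, v), 0))   # all directions perpendicular to v
True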

dist_to_corner#

dipy.core.geometry.dist_to_corner(affine)#

Calculate the maximal distance from the center to a corner of a voxel, given an affine

Parameters#

affine : 4 by 4 array

The spatial transformation from the measurement to the scanner space.

Returns#

dist: float

The maximal distance to the corner of a voxel, given voxel size encoded in the affine.
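A hedged example (not from the original docstring), assuming an identity affine, i.e. 1 mm isotropic voxels: the farthest corner of a unit voxel lies sqrt(3)/2 away from its center.

>>> import numpy as np
>>> from dipy.core.geometry import dist_to_corner
>>> # assumes the result equals half the voxel diagonal for an identity affine
>>> bool(np.allclose(dist_to_corner(np.eye(4)), np.sqrt(3) / 2))
True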

is_hemispherical#

dipy.core.geometry.is_hemispherical(vecs)#

Test whether all points on a unit sphere lie in the same hemisphere.

Parameters#

vecs : numpy.ndarray

2D numpy array with shape (N, 3) where N is the number of points. All points must lie on the unit sphere.

Returns#

is_hemi : bool

If True, one can find a hemisphere that contains all the points. If False, then the points do not lie in any hemisphere

pole : numpy.ndarray

If is_hemi == True, then pole is the “central” pole of the input vectors. Otherwise, pole is the zero vector.
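A hedged example with three unit vectors that clearly fit in one hemisphere (illustrative, not from the original docstring):

>>> import numpy as np
>>> from dipy.core.geometry import is_hemispherical
>>> vecs = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
>>> is_hemi, pole = is_hemispherical(vecs)
>>> bool(is_hemi)
True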

References#

https://rstudio-pubs-static.s3.amazonaws.com/27121_a22e51b47c544980bad594d5e0bb2d04.html

GradientTable#

class dipy.core.gradients.GradientTable(gradients, big_delta=None, small_delta=None, b0_threshold=50, btens=None)#

Bases: object

Diffusion gradient information

Parameters#

gradients : array_like (N, 3)

Diffusion gradients. The direction of each of these vectors corresponds to the b-vector, and the length corresponds to the b-value.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

Attributes#

gradients : (N,3) ndarray

diffusion gradients

bvals : (N,) ndarray

The b-value, or magnitude, of each gradient direction.

qvals : (N,) ndarray

The q-value for each gradient direction. Needs big and small delta.

bvecs : (N,3) ndarray

The direction, represented as a unit vector, of each gradient.

b0s_mask : (N,) ndarray

Boolean array indicating which gradients have no diffusion weighting, ie b-value is close to 0.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

btens : (N,3,3) ndarray

The b-tensor of each gradient direction.

See Also#

gradient_table

Notes#

The GradientTable object is immutable. Do NOT assign attributes. If you have your gradient table in a bval & bvec format, we recommend using the factory function gradient_table

__init__(gradients, big_delta=None, small_delta=None, b0_threshold=50, btens=None)#

Constructor for GradientTable class

b0s_mask()#
bvals()#
bvecs()#
gradient_strength()#
property info#
qvals()#
tau()#
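A hedged usage sketch of the attributes above, built with the gradient_table factory recommended in the Notes (the directions reuse the 7-direction example that appears later on this page):

>>> import numpy as np
>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> int(gt.b0s_mask.sum())   # one volume is below the default b0_threshold
1
>>> gt.bvals.shape == (7,) and gt.bvecs.shape == (7, 3)
True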

logger#

dipy.core.gradients.logger()#

Instances of the Logger class represent a single logging channel. A “logging channel” indicates an area of an application. Exactly how an “area” is defined is up to the application developer. Since an application can have any number of areas, logging channels are identified by a unique string. Application areas can be nested (e.g. an area of “input processing” might include sub-areas “read CSV files”, “read XLS files” and “read Gnumeric files”). To cater for this natural nesting, channel names are organized into a namespace hierarchy where levels are separated by periods, much like the Java or Python package namespace. So in the instance given above, channel names might be “input” for the upper level, and “input.csv”, “input.xls” and “input.gnu” for the sub-levels. There is no arbitrary limit to the depth of nesting.

unique_bvals#

dipy.core.gradients.unique_bvals(bvals, bmag=None, rbvals=False)#

This function gives the unique rounded b-values of the data.

dipy.core.gradients.unique_bvals is deprecated. Please use dipy.core.gradients.unique_bvals_magnitude instead.

  • deprecated from version: 1.2

  • Raises <class 'dipy.utils.deprecator.ExpiredDeprecationError'> as of version: 1.4

Parameters#

bvals : ndarray

Array containing the b-values

bmag : int

The order of magnitude that the b-values have to differ to be considered a unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

rbvals : bool, optional

If True, the function also returns all individual rounded b-values. Default: False

Returns#

ubvals : ndarray

Array containing the rounded unique b-values

b0_threshold_empty_gradient_message#

dipy.core.gradients.b0_threshold_empty_gradient_message(bvals, idx, b0_threshold)#

Message about the b0_threshold value resulting in no gradient selection.

Parameters#

bvals : (N,) ndarray

The b-value, or magnitude, of each gradient direction.

idx : ndarray

Indices of the gradients to be selected.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

Returns#

str

Message.

b0_threshold_update_slicing_message#

dipy.core.gradients.b0_threshold_update_slicing_message(slice_start)#

Message for b0 threshold value update for slicing.

Parameters#

slice_start : int

Starting index for slicing.

Returns#

str

Message.

mask_non_weighted_bvals#

dipy.core.gradients.mask_non_weighted_bvals(bvals, b0_threshold)#

Create a diffusion gradient-weighting mask for the b-values according to the provided b0 threshold value.

Parameters#

bvals : (N,) ndarray

The b-value, or magnitude, of each gradient direction.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting.

Returns#

ndarray

Gradient-weighting mask: True for all b-value indices whose value is smaller or equal to b0_threshold; False otherwise.
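A hedged example following the Returns description above (True where the b-value is at or below the threshold); not from the original docstring.

>>> import numpy as np
>>> from dipy.core.gradients import mask_non_weighted_bvals
>>> bvals = np.array([0.0, 5.0, 1000.0, 2000.0])
>>> mask_non_weighted_bvals(bvals, 50).tolist()
[True, True, False, False]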

gradient_table_from_bvals_bvecs#

dipy.core.gradients.gradient_table_from_bvals_bvecs(bvals, bvecs, b0_threshold=50, atol=0.01, btens=None, **kwargs)#

Creates a GradientTable from a bvals array and a bvecs array

Parameters#

bvals : array_like (N,)

The b-value, or magnitude, of each gradient direction.

bvecs : array_like (N, 3)

The direction, represented as a unit vector, of each gradient.

b0_threshold : float

Gradients with b-value less than or equal to b0_threshold are considered to not have diffusion weighting. If its value is equal to or larger than all values in b-vals, then it is assumed that no thresholding is requested.

atol : float

Each vector in bvecs must be a unit vector up to a tolerance of atol.

btens : can be any of three options
  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed.

Other Parameters#

**kwargs : dict

Other keyword inputs are passed to GradientTable.

Returns#

gradients : GradientTable

A GradientTable with all the gradient information.

See Also#

GradientTable, gradient_table

gradient_table_from_qvals_bvecs#

dipy.core.gradients.gradient_table_from_qvals_bvecs(qvals, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)#

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters#

qvals : an array of shape (N,)

q-value given in 1/mm

bvecs : can be any of two options

  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_delta : float or array of shape (N,)

acquisition pulse separation time in seconds

small_delta : float

acquisition pulse duration time in seconds

b0_threshold : float

All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns#

gradients : GradientTable

A GradientTable with all the gradient information.

Examples#

>>> from dipy.core.gradients import gradient_table_from_qvals_bvecs
>>> qvals = 30. * np.ones(7)
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> qvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_qvals_bvecs(qvals, bvecs,
...                                      big_delta, small_delta)

Notes#

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

gradient_table_from_gradient_strength_bvecs#

dipy.core.gradients.gradient_table_from_gradient_strength_bvecs(gradient_strength, bvecs, big_delta, small_delta, b0_threshold=50, atol=0.01)#

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters#

gradient_strength : an array of shape (N,)

gradient strength given in T/mm

bvecs : can be any of two options

  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_delta : float or array of shape (N,)

acquisition pulse separation time in seconds

small_delta : float

acquisition pulse duration time in seconds

b0_threshold : float

All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

Returns#

gradients : GradientTable

A GradientTable with all the gradient information.

Examples#

>>> from dipy.core.gradients import (
...    gradient_table_from_gradient_strength_bvecs)
>>> gradient_strength = .03e-3 * np.ones(7)  # clinical strength at 30 mT/m
>>> big_delta = .03  # pulse separation of 30ms
>>> small_delta = 0.01  # pulse duration of 10ms
>>> gradient_strength[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table_from_gradient_strength_bvecs(
...     gradient_strength, bvecs, big_delta, small_delta)

Notes#

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

gradient_table#

dipy.core.gradients.gradient_table(bvals, bvecs=None, big_delta=None, small_delta=None, b0_threshold=50, atol=0.01, btens=None)#

A general function for creating diffusion MR gradients.

It reads, loads and prepares scanner parameters like the b-values and b-vectors so that they can be useful during the reconstruction process.

Parameters#

bvals : can be any of the four options

  1. an array of shape (N,) or (1, N) or (N, 1) with the b-values.

  2. a path for the file which contains an array like the above (1).

  3. an array of shape (N, 4) or (4, N). Then this parameter is considered to be a b-table which contains both bvals and bvecs. In this case the next parameter is skipped.

  4. a path for the file which contains an array like the one at (3).

bvecs : can be any of two options

  1. an array of shape (N, 3) or (3, N) with the b-vectors.

  2. a path for the file which contains an array like the previous.

big_delta : float

acquisition pulse separation time in seconds (default None)

small_delta : float

acquisition pulse duration time in seconds (default None)

b0_threshold : float

All b-values with values less than or equal to b0_threshold are considered as b0s i.e. without diffusion weighting.

atol : float

All b-vectors need to be unit vectors up to a tolerance.

btens : can be any of three options

  1. a string specifying the shape of the encoding tensor for all volumes in data. Options: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  2. an array of strings of shape (N,), (N, 1), or (1, N) specifying encoding tensor shape for each volume separately. N corresponds to the number volumes in data. Options for elements in array: ‘LTE’, ‘PTE’, ‘STE’, ‘CTE’ corresponding to linear, planar, spherical, and “cigar-shaped” tensor encoding. Tensors are rotated so that linear and cigar tensors are aligned with the corresponding gradient direction and the planar tensor’s normal is aligned with the corresponding gradient direction. Magnitude is scaled to match the b-value.

  3. an array of shape (N,3,3) specifying the b-tensor of each volume exactly. N corresponds to the number volumes in data. No rotation or scaling is performed.

Returns#

gradients : GradientTable

A GradientTable with all the gradient information.

Examples#

>>> from dipy.core.gradients import gradient_table
>>> bvals = 1500 * np.ones(7)
>>> bvals[0] = 0
>>> sq2 = np.sqrt(2) / 2
>>> bvecs = np.array([[0, 0, 0],
...                   [1, 0, 0],
...                   [0, 1, 0],
...                   [0, 0, 1],
...                   [sq2, sq2, 0],
...                   [sq2, 0, sq2],
...                   [0, sq2, sq2]])
>>> gt = gradient_table(bvals, bvecs)
>>> gt.bvecs.shape == bvecs.shape
True
>>> gt = gradient_table(bvals, bvecs.T)
>>> gt.bvecs.shape == bvecs.T.shape
False

Notes#

  1. Often b0s (b-values which correspond to images without diffusion weighting) have 0 values; however, in some cases the scanner cannot provide b0s of an exact 0 value and gives slightly higher values, e.g. 6 or 12. This is the purpose of the b0_threshold in the __init__.

  2. We assume that the minimum number of b-values is 7.

  3. B-vectors should be unit vectors.

reorient_bvecs#

dipy.core.gradients.reorient_bvecs(gtab, affines, atol=0.01)#

Reorient the directions in a GradientTable.

When correcting for motion, rotation of the diffusion-weighted volumes might cause systematic bias in rotationally invariant measures, such as FA and MD, and also cause characteristic biases in tractography, unless the gradient directions are appropriately reoriented to compensate for this effect [Leemans2009].

Parameters#

gtab : GradientTable

The nominal gradient table with which the data were acquired.

affines : list or ndarray of shape (4, 4, n) or (3, 3, n)

Each entry in this list or array contain either an affine transformation (4,4) or a rotation matrix (3, 3). In both cases, the transformations encode the rotation that was applied to the image corresponding to one of the non-zero gradient directions (ordered according to their order in gtab.bvecs[~gtab.b0s_mask])

atol: see gradient_table()

Returns#

gtab : a GradientTable class instance with the reoriented directions

References#

[Leemans2009]

The B-Matrix Must Be Rotated When Correcting for Subject Motion in DTI Data. Leemans, A. and Jones, D.K. (2009). MRM, 61: 1336-1349

generate_bvecs#

dipy.core.gradients.generate_bvecs(N, iters=5000, rng=None)#

Generates N bvectors.

Uses dipy.core.sphere.disperse_charges to model electrostatic repulsion on a unit sphere.

Parameters#

N : int

The number of bvectors to generate. This should be equal to the number of bvals used.

iters : int

Number of iterations to run.

rng : numpy.random.Generator, optional

Numpy’s random number generator. If None, the generator is created. Default is None.

Returns#

bvecs : (N,3) ndarray

The generated directions, represented as a unit vector, of each gradient.
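A hedged usage check (not from the original docstring): the output should contain one unit vector per requested direction.

>>> import numpy as np
>>> from dipy.core.gradients import generate_bvecs
>>> bvecs = generate_bvecs(5, iters=100)
>>> bvecs.shape == (5, 3)
True
>>> bool(np.allclose(np.linalg.norm(bvecs, axis=1), 1))   # unit vectors
True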

round_bvals#

dipy.core.gradients.round_bvals(bvals, bmag=None)#

“This function rounds the b-values Parameters ———- bvals : ndarray Array containing the b-values bmag : int The order of magnitude to round the b-values. If not given b-values will be rounded relative to the order of magnitude \(bmag = (bmagmax - 1)\), where bmaxmag is the magnitude order of the larger b-value. Returns ——- rbvals : ndarray Array containing the rounded b-values

unique_bvals_tolerance#

dipy.core.gradients.unique_bvals_tolerance(bvals, tol=20)#

Gives the unique b-values of the data, within a tolerance gap

The b-values must be regrouped in clusters easily separated by a distance greater than the tolerance gap. If all the b-values of a cluster fit within the tolerance gap, the highest b-value is kept.

Parameters#

bvals : ndarray

Array containing the b-values

tol : int

The tolerated gap between the b-values to extract and the actual b-values.

Returns#

ubvals : ndarray

Array containing the unique b-values using the median value for each cluster
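A hedged example (not from the original docstring): with a tolerance of 20, the b-values below group into three shells.

>>> import numpy as np
>>> from dipy.core.gradients import unique_bvals_tolerance
>>> bvals = np.array([0.0, 995.0, 1000.0, 1005.0, 1995.0, 2005.0])
>>> ubvals = unique_bvals_tolerance(bvals, tol=20)
>>> len(ubvals)   # three clusters: 0, ~1000, ~2000
3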

get_bval_indices#

dipy.core.gradients.get_bval_indices(bvals, bval, tol=20)#

Get indices where the b-value is bval

Parameters#

bvals: ndarray

Array containing the b-values

bval: float or int

b-value to extract indices

tol: int

The tolerated gap between the b-values to extract and the actual b-values.

Returns#

Array of indices where the b-value is bval

unique_bvals_magnitude#

dipy.core.gradients.unique_bvals_magnitude(bvals, bmag=None, rbvals=False)#

This function gives the unique rounded b-values of the data.

Parameters#

bvals : ndarray

Array containing the b-values

bmag : int

The order of magnitude that the b-values have to differ to be considered a unique b-value. B-values are also rounded up to this order of magnitude. Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

rbvals : bool, optional

If True, the function also returns all individual rounded b-values. Default: False

Returns#

ubvals : ndarray

Array containing the rounded unique b-values

check_multi_b#

dipy.core.gradients.check_multi_b(gtab, n_bvals, non_zero=True, bmag=None)#

Check if you have enough different b-values in your gradient table.

Parameters#

gtab : GradientTable class instance

n_bvals : int

The number of different b-values you are checking for.

non_zero : bool

Whether to check only non-zero bvalues. In this case, we will require at least n_bvals non-zero b-values (where non-zero is defined depending on the gtab object’s b0_threshold attribute)

bmag : int

The order of magnitude of the b-values used. The function will normalize the b-values relative to \(10^{bmag}\). Default: derive this value from the maximal b-value provided: \(bmag=log_{10}(max(bvals)) - 1\).

Returns#

bool

Whether there are at least n_bvals different b-values in the gradient table used.
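A hedged example (not from the original docstring), assuming a table with one b=0 volume plus two non-zero shells:

>>> import numpy as np
>>> from dipy.core.gradients import check_multi_b, generate_bvecs, gradient_table
>>> bvals = np.array([0.0, 1000.0, 1000.0, 1000.0, 2000.0, 2000.0, 2000.0])
>>> gtab = gradient_table(bvals, generate_bvecs(7, iters=100))
>>> bool(check_multi_b(gtab, 2, non_zero=True))   # two non-zero shells present
True
>>> bool(check_multi_b(gtab, 3, non_zero=True))   # but not three
False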

btens_to_params#

dipy.core.gradients.btens_to_params(btens, ztol=1e-10)#

Compute trace, anisotropy and asymmetry parameters from b-tensors.

Parameters#

btens : (3, 3) OR (N, 3, 3) numpy.ndarray

input b-tensor, or b-tensors, where N = number of b-tensors

ztol : float

Any parameters smaller than this value are considered to be 0

Returns#

bval: numpy.ndarray

b-value(s) (trace(s))

bdelta: numpy.ndarray

normalized tensor anisotropy(s)

b_eta: numpy.ndarray

tensor asymmetry(s)

Notes#

This function can be used to get b-tensor parameters directly from the GradientTable btens attribute.

Examples#

>>> lte = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> bval, bdelta, b_eta = btens_to_params(lte)
>>> print("bval={}; bdelta={}; b_eta={}".format(bdelta, bval, b_eta))
bval=[ 1.]; bdelta=[ 1.]; b_eta=[ 0.]

params_to_btens#

dipy.core.gradients.params_to_btens(bval, bdelta, b_eta)#

Compute b-tensor from trace, anisotropy and asymmetry parameters.

Parameters#

bval: int or float

b-value (>= 0)

bdelta: int or float

normalized tensor anisotropy (>= -0.5 and <= 1)

b_eta: int or float

tensor asymmetry (>= 0 and <= 1)

Returns#

(3, 3) numpy.ndarray

output b-tensor

Notes#

Implements eq. 7.11. p. 231 in [1].
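A hedged round-trip sketch (not from the original docstring), pairing this function with btens_to_params described above:

>>> import numpy as np
>>> from dipy.core.gradients import btens_to_params, params_to_btens
>>> btens = params_to_btens(1, 1, 0)   # bdelta=1, b_eta=0: linear tensor encoding
>>> bval, bdelta, b_eta = btens_to_params(btens)
>>> bool(np.allclose(bval, 1) and np.allclose(bdelta, 1) and np.allclose(b_eta, 0))
True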

References#

[1] … anisotropy, in: R. Valiullin (Ed.), Diffusion NMR of Confined Systems: Fluid Transport in Porous Solids and Heterogeneous Materials, Royal Society of Chemistry, Cambridge, UK, 2016.

ornt_mapping#

dipy.core.gradients.ornt_mapping(ornt1, ornt2)#

Calculate the mapping needed to get from ornt1 to ornt2.

reorient_vectors#

dipy.core.gradients.reorient_vectors(bvecs, current_ornt, new_ornt, axis=0)#

Change the orientation of gradients or other vectors.

Moves vectors, stored along axis, from current_ornt to new_ornt. For example the vector [x, y, z] in “RAS” will be [-x, -y, z] in “LPS”.

R: Right, A: Anterior, S: Superior, L: Left, P: Posterior, I: Inferior.

reorient_on_axis#

dipy.core.gradients.reorient_on_axis(bvecs, current_ornt, new_ornt, axis=0)#

orientation_from_string#

dipy.core.gradients.orientation_from_string(string_ornt)#

Return an array representation of an ornt string.

orientation_to_string#

dipy.core.gradients.orientation_to_string(ornt)#

Return a string representation of a 3d ornt.

Graph#

class dipy.core.graph.Graph#

Bases: object

A simple graph class

__init__()#

A graph class with nodes and edges :-)

This class allows us to:

  1. find the shortest path

  2. find all paths

  3. add/delete nodes and edges

  4. get parent & children nodes

Examples#

>>> from dipy.core.graph import Graph
>>> g=Graph()
>>> g.add_node('a',5)
>>> g.add_node('b',6)
>>> g.add_node('c',10)
>>> g.add_node('d',11)
>>> g.add_edge('a','b')
>>> g.add_edge('b','c')
>>> g.add_edge('c','d')
>>> g.add_edge('b','d')
>>> g.up_short('d')
['d', 'b', 'a']
add_edge(n, m, ws=True, wp=True)#
add_node(n, attr=None)#
all_paths(graph, start, end=None, path=None)#
children(n)#
del_node(n)#
del_node_and_edges(n)#
down(n)#
down_short(n)#
parents(n)#
shortest_path(graph, start, end=None, path=None)#
up(n)#
up_short(n)#

histeq#

dipy.core.histeq.histeq(arr, num_bins=256)#

Performs a histogram equalization on arr. Taken from: http://www.janeriksolem.net/2009/06/histogram-equalization-with-python-and.html

Parameters#

arrndarray

Image on which to perform histogram equalization.

num_binsint

Number of bins used to construct the histogram.

Returns#

resultndarray

Histogram equalized image.
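A hedged usage sketch (not from the original docstring), assuming the equalized image keeps the input shape:

>>> import numpy as np
>>> from dipy.core.histeq import histeq
>>> rng = np.random.default_rng(0)
>>> arr = rng.random((10, 10))
>>> out = histeq(arr, num_bins=64)
>>> out.shape == arr.shape   # shape preservation is assumed here
True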

Interpolator#

class dipy.core.interpolation.Interpolator(data, voxel_size)#

Bases: object

Class to be subclassed by different interpolator types

__init__(data, voxel_size)#

NearestNeighborInterpolator#

class dipy.core.interpolation.NearestNeighborInterpolator(data, voxel_size)#

Bases: Interpolator

Interpolates data using nearest neighbor interpolation

__init__(data, voxel_size)#

OutsideImage#

class dipy.core.interpolation.OutsideImage#

Bases: Exception

__init__(*args, **kwargs)#

TriLinearInterpolator#

class dipy.core.interpolation.TriLinearInterpolator(data, voxel_size)#

Bases: Interpolator

Interpolates data using trilinear interpolation

Interpolates a 4D diffusion volume using 3 indices, i.e. data[x, y, z].

__init__(data, voxel_size)#

interp_rbf#

dipy.core.interpolation.interp_rbf(data, sphere_origin, sphere_target, function='multiquadric', epsilon=None, smooth=0.1, norm='angle')#

Interpolate data on the sphere, using radial basis functions.

Parameters#

data : (N,) ndarray

Function values on the unit sphere.

sphere_origin : Sphere

Positions of data values.

sphere_target : Sphere

M target positions for which to interpolate.

function : {‘multiquadric’, ‘inverse’, ‘gaussian’}

Radial basis function.

epsilon : float

Radial basis function spread parameter. Defaults to approximate average distance between nodes (a good start).

smooth : float

values greater than zero increase the smoothness of the approximation with 0 as pure interpolation. Default: 0.1

norm : str

A string indicating the function that returns the “distance” between two points. ‘angle’ - The angle between two vectors. ‘euclidean_norm’ - The Euclidean distance

Returns#

v : (M,) ndarray

Interpolated values.

See Also#

scipy.interpolate.Rbf

interpolate_scalar_2d#

dipy.core.interpolation.interpolate_scalar_2d(image, locations)#

Bilinear interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_2d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with bilinear interpolation

Parameters#

imagearray, shape (S, R)

the 2D image to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns#

outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0
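
For illustration, a minimal sketch assuming float64 inputs (consistent with the scipy.ndimage.map_coordinates equivalence noted above); the second location deliberately falls outside the image:

>>> import numpy as np
>>> from dipy.core.interpolation import interpolate_scalar_2d
>>> image = np.arange(25, dtype=np.float64).reshape(5, 5)
>>> locations = np.array([[1.5, 2.5], [9.0, 9.0]])
>>> values, inside = interpolate_scalar_2d(image, locations)  # inside is 1 for the first point, 0 for the second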

interpolate_scalar_3d#

dipy.core.interpolation.interpolate_scalar_3d(image, locations)#

Trilinear interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_3d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with trilinear interpolation

Parameters#

imagearray, shape (S, R, C)

the 3D image to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns#

outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_nn_2d#

dipy.core.interpolation.interpolate_scalar_nn_2d(image, locations)#

Nearest neighbor interpolation of a 2D scalar image

Interpolates the 2D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_2d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation

Parameters#

imagearray, shape (S, R)

the 2D image to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the image at

Returns#

outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_scalar_nn_3d#

dipy.core.interpolation.interpolate_scalar_nn_3d(image, locations)#

Nearest neighbor interpolation of a 3D scalar image

Interpolates the 3D image at the given locations. This function is a wrapper for _interpolate_scalar_nn_3d for testing purposes, it is equivalent to scipy.ndimage.map_coordinates with nearest neighbor interpolation

Parameters#

imagearray, shape (S, R, C)

the 3D image to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the image at

Returns#

outarray, shape (n,)

out[i], 0<=i<n will be the interpolated scalar at coordinates locations[i,:], or 0 if locations[i,:] is outside the image

insidearray, (n,)

if locations[i,:] is inside the image then inside[i]=1, else inside[i]=0

interpolate_vector_2d#

dipy.core.interpolation.interpolate_vector_2d(field, locations)#

Bilinear interpolation of a 2D vector field

Interpolates the 2D vector field at the given locations. This function is a wrapper for _interpolate_vector_2d for testing purposes, it is equivalent to using scipy.ndimage.map_coordinates with bilinear interpolation at each vector component

Parameters#

fieldarray, shape (S, R, 2)

the 2D vector field to be interpolated

locationsarray, shape (n, 2)

(locations[i,0], locations[i,1]), 0<=i<n must contain the row and column coordinates to interpolate the vector field at

Returns#

outarray, shape (n, 2)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0) if locations[i,:] is outside the field

insidearray, (n,)

if (locations[i,0], locations[i,1]) is inside the vector field then inside[i]=1, else inside[i]=0

interpolate_vector_3d#

dipy.core.interpolation.interpolate_vector_3d(field, locations)#

Trilinear interpolation of a 3D vector field

Interpolates the 3D vector field at the given locations. This function is a wrapper for _interpolate_vector_3d for testing purposes, it is equivalent to using scipy.ndimage.map_coordinates with trilinear interpolation at each vector component

Parameters#

fieldarray, shape (S, R, C, 3)

the 3D vector field to be interpolated

locationsarray, shape (n, 3)

(locations[i,0], locations[i,1], locations[i,2]), 0<=i<n must contain the coordinates to interpolate the vector field at

Returns#

outarray, shape (n, 3)

out[i,:], 0<=i<n will be the interpolated vector at coordinates locations[i,:], or (0,0,0) if locations[i,:] is outside the field

insidearray, (n,)

if locations[i,:] is inside the vector field then inside[i]=1, else inside[i]=0
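
For illustration, a minimal sketch assuming a float64 vector field; the field is constant, so the interpolated vector equals that constant inside the volume:

>>> import numpy as np
>>> from dipy.core.interpolation import interpolate_vector_3d
>>> field = np.zeros((10, 10, 10, 3), dtype=np.float64)
>>> field[..., 0] = 1.0                          # constant field along the first axis
>>> locations = np.array([[4.3, 5.7, 2.1]], dtype=np.float64)
>>> vectors, inside = interpolate_vector_3d(field, locations)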

map_coordinates_trilinear_iso#

dipy.core.interpolation.map_coordinates_trilinear_iso(data, points, data_strides, len_points, result)#

Trilinear interpolation (isotropic voxel size)

Has similar behavior to map_coordinates from scipy.ndimage

Parameters#

data : array, float64 shape (X, Y, Z)

points : array, float64 shape (N, 3)

data_strides : array, npy_intp shape (3,)

Strides sequence for data array

len_pointscnp.npy_intp

Number of points to interpolate

resultarray, float64 shape(N)

The output array. This array should be initialized before you call this function. On exit it will contain the interpolated values from data at points given by points.

Returns#

None

Notes#

The output array result is filled in-place.

nearestneighbor_interpolate#

dipy.core.interpolation.nearestneighbor_interpolate(data, point)#

trilinear_interp#

dipy.core.interpolation.trilinear_interp(data, index, voxel_size)#

Interpolates vector from 4D data at 3D point given by index

Interpolates a vector of length T from a 4D volume of shape (I, J, K, T), given point (x, y, z) where (x, y, z) are the coordinates of the point in real units (not yet adjusted for voxel size).

trilinear_interpolate4d#

dipy.core.interpolation.trilinear_interpolate4d(data, point, out=None)#

Tri-linear interpolation along the last dimension of a 4d array

Parameters#

data4d array

Data to be interpolated.

point1d array (3,)

3 doubles representing a 3d point in space. If point has integer values [i, j, k], the result will be the same as data[i, j, k].

out1d array, optional

The output array for the result of the interpolation.

Returns#

out1d array

The result of interpolation.
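
For illustration, a minimal sketch using a random float64 volume; at integer coordinates the result reduces to the corresponding voxel, as noted above:

>>> import numpy as np
>>> from dipy.core.interpolation import trilinear_interpolate4d
>>> data = np.random.rand(5, 5, 5, 6)              # e.g. 6 values per voxel
>>> point = np.array([2.0, 3.0, 1.0])
>>> values = trilinear_interpolate4d(data, point)  # equals data[2, 3, 1] here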

ndindex#

dipy.core.ndindex.ndindex(shape)#

An N-dimensional iterator object to index arrays.

Given the shape of an array, an ndindex instance iterates over the N-dimensional index of the array. At each iteration a tuple of indices is returned; the last dimension is iterated over first.

Parameters#

shapetuple of ints

The dimensions of the array.

Examples#

>>> from dipy.core.ndindex import ndindex
>>> shape = (3, 2, 1)
>>> for index in ndindex(shape):
...     print(index)
(0, 0, 0)
(0, 1, 0)
(1, 0, 0)
(1, 1, 0)
(2, 0, 0)
(2, 1, 0)

ResetMixin#

class dipy.core.onetime.ResetMixin#

Bases: object

A Mixin class to add a .reset() method to users of OneTimeProperty.

By default, auto attributes once computed, become static. If they happen to depend on other parts of an object and those parts change, their values may now be invalid.

This class offers a .reset() method that users can call explicitly when they know the state of their objects may have changed and they want to ensure that all their special attributes should be invalidated. Once reset() is called, all their auto attributes are reset to their OneTimeProperty descriptors, and their accessor functions will be triggered again.

Warning

If a class has a set of attributes that are OneTimeProperty, but that can be initialized from any one of them, do NOT use this mixin! For instance, UniformTimeSeries can be initialized with only sampling_rate and t0, sampling_interval and time are auto-computed. But if you were to reset() a UniformTimeSeries, it would lose all 4, and there would be then no way to break the circular dependency chains.

If this becomes a problem in practice (for our analyzer objects it isn’t, as they don’t have the above pattern), we can extend reset() to check for a _no_reset set of names in the instance which are meant to be kept protected. But for now this is NOT done, so caveat emptor.

Examples#

>>> class A(ResetMixin):
...     def __init__(self,x=1.0):
...         self.x = x
...
...     @auto_attr
...     def y(self):
...         print('*** y computation executed ***')
...         return self.x / 2.0
...
>>> a = A(10)

About to access y twice, the second time no computation is done:

>>> a.y
*** y computation executed ***
5.0
>>> a.y
5.0

Changing x:

>>> a.x = 20

a.y doesn’t change to 10, since it is a static attribute:

>>> a.y
5.0

We now reset a, and this will then force all auto attributes to recompute the next time we access them:

>>> a.reset()

About to access y twice again after reset():

>>> a.y
*** y computation executed ***
10.0
>>> a.y
10.0

__init__()#
reset()#

Reset all OneTimeProperty attributes that may have fired already.

OneTimeProperty#

class dipy.core.onetime.OneTimeProperty(func)#

Bases: object

A descriptor to make special properties that become normal attributes.

This is meant to be used mostly by the auto_attr decorator in this module.

__init__(func)#

Create a OneTimeProperty instance.

Parameters#

func : method

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

auto_attr#

dipy.core.onetime.auto_attr(func)#

Decorator to create OneTimeProperty attributes.

Parameters#

funcmethod

The method that will be called the first time to compute a value. Afterwards, the method’s name will be a standard attribute holding the value of this computation.

Examples#

>>> class MagicProp:
...     @auto_attr
...     def a(self):
...         return 99
...
>>> x = MagicProp()
>>> 'a' in x.__dict__
False
>>> x.a
99
>>> 'a' in x.__dict__
True

Optimizer#

class dipy.core.optimize.Optimizer(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)#

Bases: object

__init__(fun, x0, args=(), method='L-BFGS-B', jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None, evolution=False)#

A class for handling minimization of scalar function of one or more variables.

Parameters#

funcallable

Objective function.

x0ndarray

Initial guess.

argstuple, optional

Extra arguments passed to the objective function and its derivatives (Jacobian, Hessian).

methodstr, optional

Type of solver. Should be one of

  • ‘Nelder-Mead’

  • ‘Powell’

  • ‘CG’

  • ‘BFGS’

  • ‘Newton-CG’

  • ‘Anneal’

  • ‘L-BFGS-B’

  • ‘TNC’

  • ‘COBYLA’

  • ‘SLSQP’

  • ‘dogleg’

  • ‘trust-ncg’

jacbool or callable, optional

Jacobian of objective function. Only for CG, BFGS, Newton-CG, dogleg, trust-ncg. If jac is a Boolean and is True, fun is assumed to return the value of Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of the objective. In this case, it must accept the same arguments as fun.

hess, hesspcallable, optional

Hessian of objective function or Hessian of objective function times an arbitrary vector p. Only for Newton-CG, dogleg, trust-ncg. Only one of hessp or hess needs to be given. If hess is provided, then hessp will be ignored. If neither hess nor hessp is provided, then the hessian product will be approximated using finite differences on jac. hessp must compute the Hessian times an arbitrary vector.

boundssequence, optional

Bounds for variables (only for L-BFGS-B, TNC and SLSQP). (min, max) pairs for each element in x, defining the bounds on that parameter. Use None for one of min or max when there is no bound in that direction.

constraintsdict or sequence of dict, optional

Constraints definition (only for COBYLA and SLSQP). Each constraint is defined in a dictionary with fields:

typestr

Constraint type: ‘eq’ for equality, ‘ineq’ for inequality.

funcallable

The function defining the constraint.

jaccallable, optional

The Jacobian of fun (only for SLSQP).

argssequence, optional

Extra arguments to be passed to the function and Jacobian.

Equality constraint means that the constraint function result is to be zero whereas inequality means that it is to be non-negative. Note that COBYLA only supports inequality constraints.

tolfloat, optional

Tolerance for termination. For detailed control, use solver-specific options.

callbackcallable, optional

Called after each iteration, as callback(xk), where xk is the current parameter vector. Only available using Scipy >= 0.12.

optionsdict, optional

A dictionary of solver options. All methods accept the following generic options:

maxiterint

Maximum number of iterations to perform.

dispbool

Set to True to print convergence messages.

For method-specific options, see show_options(‘minimize’, method).

evolutionbool, optional

save history of x for each iteration. Only available using Scipy >= 0.12.

See Also#

scipy.optimize.minimize

property evolution#
property fopt#
property message#
property nfev#
property nit#
print_summary()#
property xopt#
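
For illustration, a brief sketch minimizing a simple quadratic; any SciPy-compatible objective can be substituted:

>>> import numpy as np
>>> from dipy.core.optimize import Optimizer
>>> def f(x):
...     return np.sum((x - 3.0) ** 2)
...
>>> opt = Optimizer(f, x0=np.zeros(2), method='L-BFGS-B')
>>> bool(np.allclose(opt.xopt, [3.0, 3.0], atol=1e-4))
True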

SKLearnLinearSolver#

class dipy.core.optimize.SKLearnLinearSolver(*args, **kwargs)#

Bases: object

Provide a sklearn-like uniform interface to algorithms that solve problems of the form \(y = Ax\) for \(x\).

Sub-classes of SKLearnLinearSolver should provide a ‘fit’ method with the following signature: SKLearnLinearSolver.fit(X, y), which sets an attribute SKLearnLinearSolver.coef_ of shape (X.shape[1],), such that an estimate of y can be calculated as: y_hat = np.dot(X, SKLearnLinearSolver.coef_.T)

__init__(*args, **kwargs)#
abstract fit(X, y)#

Implement for all derived classes

predict(X)#

Predict using the result of the model

Parameters#

Xarray-like (n_samples, n_features)

Samples.

Returns#

Carray, shape = (n_samples,)

Predicted values.

NonNegativeLeastSquares#

class dipy.core.optimize.NonNegativeLeastSquares(*args, **kwargs)#

Bases: SKLearnLinearSolver

A sklearn-like interface to scipy.optimize.nnls

__init__(*args, **kwargs)#
fit(X, y)#

Fit the NonNegativeLeastSquares linear model to data

Parameters#

Xarray-like (n_samples, n_features)

Samples (design matrix).

yarray-like (n_samples,)

Target values.

PositiveDefiniteLeastSquares#

class dipy.core.optimize.PositiveDefiniteLeastSquares(m, A=None, L=None)#

Bases: object

__init__(m, A=None, L=None)#

Regularized least squares with linear matrix inequality constraints.

Generate a CVXPY representation of a regularized least squares optimization problem subject to linear matrix inequality constraints.

Parameters#

m : int

Positive int indicating the number of regressors.

A : array (t = m + k + 1, p, p) (optional)

Constraint matrices \(A\).

L : array (m, m) (optional)

Regularization matrix \(L\). Default: None.

Notes#

The basic problem is to solve for \(h\) the minimization of \(c=\|X h - y\|^2 + \|L h\|^2\), where \(X\) is an (m, m) upper triangular design matrix and \(y\) is a set of m measurements, subject to the constraint that \(M=A_0+\sum_{i=0}^{m-1} h_i A_{i+1}+\sum_{j=0}^{k-1} s_j A_{m+j+1}>0\), where \(s_j\) are slack variables and where the inequality sign denotes positive definiteness of the matrix \(M\). The sparsity pattern and size of \(X\) and \(y\) are fixed, because every design matrix and set of measurements can be reduced to an equivalent (minimal) formulation of this type.

This formulation is used here mainly to enforce polynomial sum-of-squares constraints on various models, as described in [1].

References#

[1] Dela Haije et al. “Enforcing necessary non-negativity constraints for common diffusion MRI models using sum of squares programming”. NeuroImage 209, 2020, 116405.

solve(design_matrix, measurements, check=False, **kwargs)#

Solve the CVXPY problem.

Solve a CVXPY problem instance for a given design matrix and a given set of observations, and return the optimum.

Parameters#

design_matrix : array (n, m)

Design matrix.

measurements : array (n)

Measurements.

check : boolean (optional)

If True, check whether the unconstrained optimization solution already satisfies the constraints before running the constrained optimization. This adds overhead, but can avoid unnecessary constrained optimization calls. Default: False

kwargs : keyword arguments

Arguments passed to the CVXPY solve method.

Returns#

h : array (m)

Estimated optimum for problem variables \(h\).

spdot#

dipy.core.optimize.spdot(A, B)#

The same as np.dot(A, B), except it works even if A or B or both are sparse matrices.

Parameters#

A, B : arrays of shape (m, n), (n, k)

Returns#

The matrix product AB. If both A and B are sparse, the result will be a sparse matrix. Otherwise, a dense result is returned

See discussion here: http://mail.scipy.org/pipermail/scipy-user/2010-November/027700.html
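
For illustration, a small sketch multiplying a random sparse matrix by a dense one (the random matrix is purely illustrative):

>>> import numpy as np
>>> import scipy.sparse as sps
>>> from dipy.core.optimize import spdot
>>> A = sps.random(5, 4, density=0.5, format='csr', random_state=0)
>>> B = np.ones((4, 2))
>>> spdot(A, B).shape
(5, 2)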

sparse_nnls#

dipy.core.optimize.sparse_nnls(y, X, momentum=1, step_size=0.01, non_neg=True, check_error_iter=10, max_error_checks=10, converge_on_sse=0.99)#

Solve y=Xh for h, using gradient descent, with X a sparse matrix.

Parameters#

y1-d array of shape (N)

The data. Needs to be dense.

Xndarray. May be either sparse or dense. Shape (N, M)

The regressors

momentumfloat, optional (default: 1).

The persistence of the gradient.

step_sizefloat, optional (default: 0.01).

The increment of parameter update in each iteration

non_negBoolean, optional (default: True)

Whether to enforce non-negativity of the solution.

check_error_iterint (default:10)

How many rounds to run between error evaluation for convergence-checking.

max_error_checksint (default: 10)

Don’t check errors more than this number of times if no improvement in r-squared is seen.

converge_on_ssefloat (default: 0.99)

a percentage improvement in SSE that is required each time to say that things are still going well.

Returns#

h_best : The best estimate of the parameters.
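
For illustration, a hedged sketch recovering a non-negative coefficient vector from noiseless synthetic data (the accuracy reached depends on the step size and the convergence settings):

>>> import numpy as np
>>> import scipy.sparse as sps
>>> from dipy.core.optimize import sparse_nnls
>>> rng = np.random.default_rng(0)
>>> X = sps.csr_matrix(rng.random((100, 5)))
>>> h_true = np.array([0.5, 0.0, 1.5, 0.0, 2.0])   # non-negative ground truth
>>> y = X @ h_true
>>> h_best = sparse_nnls(y, X)                     # gradient-descent estimate of h_true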

Profiler#

class dipy.core.profile.Profiler(call=None, *args)#

Bases: object

Profile python/cython files or functions

If you are profiling Cython code, you need to add # cython: profile=True at the top of your .pyx file,

and for the functions that you do not want to profile you can use this decorator in your cython files

@cython.profile(False)

Parameters#

call : file or function call

args : function arguments

Attributes#

stats : function, stats.print_stats(10) will print the 10 slowest functions

Examples#

from dipy.core.profile import Profiler
import numpy as np
p = Profiler(np.sum, np.random.rand(1000000, 3))
fname = 'test.py'
p = Profiler(fname)
p.print_stats(10)
p.print_stats('det')

References#

https://docs.cython.org/src/tutorial/profiling_tutorial.html https://docs.python.org/library/profile.html rkern/line_profiler

__init__(call=None, *args)#
print_stats(N=10)#

Print stats for profiling

You can use it in all the different ways developed in pstats. For example, print_stats(10) will give you the 10 slowest calls, and print_stats(‘function_name’) will give you the stats for all the calls with name ‘function_name’.

Parameters#

N : stats.print_stats argument

WichmannHill2006#

dipy.core.rng.WichmannHill2006(ix=100001, iy=200002, iz=300003, it=400004)#

Wichmann Hill (2006) random number generator.

B.A. Wichmann, I.D. Hill, Generating good pseudo-random numbers, Computational Statistics & Data Analysis, Volume 51, Issue 3, 1 December 2006, Pages 1614-1622, ISSN 0167-9473, DOI: 10.1016/j.csda.2006.05.019 (https://www.sciencedirect.com/science/article/abs/pii/S0167947306001836). See this reference for advice on generating many sequences for use together, and on alternative algorithms and codes.

Parameters#

ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

it: int

Fourth seed value. Should not be null. (default 400004)

Returns#

r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples#

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill2006() for i in range(N)]

WichmannHill1982#

dipy.core.rng.WichmannHill1982(ix=100001, iy=200002, iz=300003)#

Algorithm AS 183 Appl. Statist. (1982) vol.31, no.2.

Returns a pseudo-random number rectangularly distributed between 0 and 1. The cycle length is 6.95E+12 (See page 123 of Applied Statistics (1984) vol.33), not as claimed in the original article.

ix, iy and iz should be set to integer values between 1 and 30000 before the first entry.

Integer arithmetic up to 5212632 is required.

Parameters#

ix: int

First seed value. Should not be null. (default 100001)

iy: int

Second seed value. Should not be null. (default 200002)

iz: int

Third seed value. Should not be null. (default 300003)

Returns#

r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples#

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.WichmannHill1982() for i in range(N)]

LEcuyer#

dipy.core.rng.LEcuyer(s1=100001, s2=200002)#

Return a LEcuyer random number generator.

Generate uniformly distributed random numbers using the 32-bit generator from figure 3 of:

L’Ecuyer, P. Efficient and portable combined random number generators, C.A.C.M., vol. 31, 742-749 & 774-?, June 1988.

The cycle length is claimed to be 2.30584E+18

Parameters#

s1: int

First seed value. Should not be null. (default 100001)

s2: int

Second seed value. Should not be null. (default 200002)

Returns#

r_numberfloat

pseudo-random number uniformly distributed in [0, 1]

Examples#

>>> from dipy.core import rng
>>> N = 1000
>>> a = [rng.LEcuyer() for i in range(N)]

Sphere#

class dipy.core.sphere.Sphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)#

Bases: object

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)

Parameters#

x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None)#
edges()#
faces()#
find_closest(xyz)#

Find the index of the vertex in the Sphere closest to the input vector

Parameters#
xyzarray-like, 3 elements

A unit vector

Returns#
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

subdivide(n=1)#

Subdivides each face of the sphere into four new faces.

New vertices are created at a, b, and c. Then each face [x, y, z] is divided into faces [x, a, c], [y, a, b], [z, b, c], and [a, b, c].

      y
      /\
     /  \
   a/____\b
   /\    /\
  /  \  /  \
 /____\/____\
x      c     z
Parameters#
nint, optional

The number of subdivisions to perform.

Returns#
new_sphereSphere

The subdivided sphere.

vertices()#
property x#
property y#
property z#
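
For illustration, a short sketch starting from the module-level unit_octahedron and refining it (the subdivision count and query vector are arbitrary):

>>> from dipy.core.sphere import Sphere, unit_octahedron
>>> sphere = unit_octahedron.subdivide(n=1)
>>> sphere.vertices.shape
(18, 3)
>>> idx = sphere.find_closest([0, 0, 1])     # index of the vertex closest (in angle) to +z
>>> rebuilt = Sphere(xyz=sphere.vertices)    # same points, faces recomputed on demand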

HemiSphere#

class dipy.core.sphere.HemiSphere(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)#

Bases: Sphere

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)

Parameters#

x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tolfloat

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See Also#

Sphere

__init__(x=None, y=None, z=None, theta=None, phi=None, xyz=None, faces=None, edges=None, tol=1e-05)#

Create a HemiSphere from points

faces()#
find_closest(xyz)#

Find the index of the vertex in the Sphere closest to the input vector, taking into account antipodal symmetry

Parameters#
xyzarray-like, 3 elements

A unit vector

Returns#
idxint

The index into the Sphere.vertices array that gives the closest vertex (in angle).

classmethod from_sphere(sphere, tol=1e-05)#

Create instance from a Sphere

mirror()#

Create a full Sphere from a HemiSphere

subdivide(n=1)#

Create a more subdivided HemiSphere

See Sphere.subdivide for full documentation.
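
For illustration, a short sketch built from the module-level unit_icosahedron, whose 12 vertices collapse to 6 antipodal representatives on the hemisphere:

>>> from dipy.core.sphere import HemiSphere, unit_icosahedron
>>> hemi = HemiSphere.from_sphere(unit_icosahedron)
>>> hemi.vertices.shape
(6, 3)
>>> full = hemi.mirror()                     # back to a full Sphere with antipodal pairs
>>> full.vertices.shape
(12, 3)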

faces_from_sphere_vertices#

dipy.core.sphere.faces_from_sphere_vertices(vertices)#

Triangulate a set of vertices on the sphere.

Parameters#

vertices(M, 3) ndarray

XYZ coordinates of vertices on the sphere.

Returns#

faces(N, 3) ndarray

Indices into vertices; forms triangular faces.

unique_edges#

dipy.core.sphere.unique_edges(faces, return_mapping=False)#

Extract all unique edges from given triangular faces.

Parameters#

faces(N, 3) ndarray

Vertex indices forming triangular faces.

return_mappingbool

If true, a mapping to the edges of each face is returned.

Returns#

edges(N, 2) ndarray

Unique edges.

mapping(N, 3)

For each face, [x, y, z], a mapping to its edges [a, b, c].

       y
       /\
      /  \
    a/    \b
    /      \
   /        \
  /__________\
 x     c      z

unique_sets#

dipy.core.sphere.unique_sets(sets, return_inverse=False)#

Remove duplicate sets.

Parameters#

setsarray (N, k)

N sets of size k.

return_inversebool

If True, also returns the indices of unique_sets that can be used to reconstruct sets (the original ordering of each set may not be preserved).

Returns#

unique_setsarray

Unique sets.

inversearray (N,)

The indices to reconstruct sets from unique_sets.

disperse_charges#

dipy.core.sphere.disperse_charges(hemi, iters, const=0.2)#

Models electrostatic repulsion on the unit sphere

Places charges on a sphere and simulates the repulsive forces felt by each one. Allows the charges to move for some number of iterations and returns their final location as well as the total potential of the system at each step.

Parameters#

hemiHemiSphere

Points on a unit sphere.

itersint

Number of iterations to run.

constfloat

Using a smaller const could provide a more accurate result, but will need more iterations to converge.

Returns#

hemiHemiSphere

Distributed points on a unit sphere.

potentialndarray

The electrostatic potential at each iteration. This can be useful to check if the repulsion converged to a minimum.

Notes#

This function is meant to be used with diffusion imaging so antipodal symmetry is assumed. Therefore, each charge must not only be unique, but if there is a charge at +x, there cannot be a charge at -x. These are treated as the same location and because the distance between the two charges will be zero, the result will be unstable.
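
For illustration, a hedged sketch following the usual pattern of seeding a HemiSphere with random points and letting the charges relax (the point count and iteration count are arbitrary):

>>> import numpy as np
>>> from dipy.core.sphere import HemiSphere, disperse_charges
>>> rng = np.random.default_rng(1)
>>> theta = np.pi * rng.random(64)
>>> phi = 2 * np.pi * rng.random(64)
>>> hemi = HemiSphere(theta=theta, phi=phi)
>>> hemi_updated, potential = disperse_charges(hemi, iters=100, const=0.2)  # potential should decrease over the iterations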

fibonacci_sphere#

dipy.core.sphere.fibonacci_sphere(n_points, randomize=True)#

Generate points on the surface of a sphere using Fibonacci Spiral.

Parameters#

n_pointsint

The number of points to generate on the sphere surface.

randomizebool, optional

If True, randomize the starting point on the sphere. Default is True.

Returns#

pointsndarray

An array of 3D points representing coordinates on the sphere surface.
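
For illustration, a minimal sketch; the generated points lie (up to floating point error) on the unit sphere:

>>> from dipy.core.sphere import fibonacci_sphere
>>> points = fibonacci_sphere(100, randomize=False)
>>> points.shape
(100, 3)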

disperse_charges_alt#

dipy.core.sphere.disperse_charges_alt(init_pointset, iters, tol=0.001)#

Reimplementation of disperse_charges making use of scipy.optimize.fmin_slsqp.

Parameters#

init_pointset(N, 3) ndarray

Points on a unit sphere.

itersint

Number of iterations to run.

tolfloat

Tolerance for the optimization.

Returns#

array-like (N, 3)

Distributed points on a unit sphere.

euler_characteristic_check#

dipy.core.sphere.euler_characteristic_check(sphere, chi=2)#

Checks the Euler characteristic of a sphere.

If \(f\) = number of faces, \(e\) = number of edges and \(v\) = number of vertices, the Euler formula says \(f-e+v = 2\) for a mesh on a sphere. More generally, this checks whether \(f - e + v == \chi\), where \(\chi\) is the Euler characteristic of the mesh.

  • Open chain (track) has \(\chi=1\)

  • Closed chain (loop) has \(\chi=0\)

  • Disk has \(\chi=1\)

  • Sphere has \(\chi=2\)

  • HemiSphere has \(\chi=1\)

Parameters#

sphere : Sphere

A Sphere instance with vertices, edges and faces attributes.

chi : int, optional

The Euler characteristic of the mesh to be checked.

Returns#

check : bool

True if the mesh has Euler characteristic \(\chi\).

Examples#

>>> euler_characteristic_check(unit_octahedron)
True
>>> hemisphere = HemiSphere.from_sphere(unit_icosahedron)
>>> euler_characteristic_check(hemisphere, chi=1)
True

octahedron_vertices#

dipy.core.sphere.octahedron_vertices()#
ndarray(shape, dtype=float, buffer=None, offset=0,

strides=None, order=None)

An array object represents a multidimensional, homogeneous array of fixed-size items. An associated data-type object describes the format of each element in the array (its byte-order, how many bytes it occupies in memory, whether it is an integer, a floating point number, or something else, etc.)

Arrays should be constructed using array, zeros or empty (refer to the See Also section below). The parameters given here refer to a low-level method (ndarray(…)) for instantiating an array.

For more information, refer to the numpy module and examine the methods and attributes of an array.

Parameters#

(for the __new__ method; see Notes below)

shapetuple of ints

Shape of created array.

dtypedata-type, optional

Any object that can be interpreted as a numpy data type.

bufferobject exposing buffer interface, optional

Used to fill the array with data.

offsetint, optional

Offset of array data in buffer.

stridestuple of ints, optional

Strides of data in memory.

order{‘C’, ‘F’}, optional

Row-major (C-style) or column-major (Fortran-style) order.

Attributes#

Tndarray

Transpose of the array.

databuffer

The array’s elements, in memory.

dtypedtype object

Describes the format of the elements in the array.

flagsdict

Dictionary containing information related to memory use, e.g., ‘C_CONTIGUOUS’, ‘OWNDATA’, ‘WRITEABLE’, etc.

flatnumpy.flatiter object

Flattened version of the array as an iterator. The iterator allows assignments, e.g., x.flat = 3 (See ndarray.flat for assignment examples; TODO).

imagndarray

Imaginary part of the array.

realndarray

Real part of the array.

sizeint

Number of elements in the array.

itemsizeint

The memory use of each array element in bytes.

nbytesint

The total number of bytes required to store the array data, i.e., itemsize * size.

ndimint

The array’s number of dimensions.

shapetuple of ints

Shape of the array.

stridestuple of ints

The step-size required to move from one element to the next in memory. For example, a contiguous (3, 4) array of type int16 in C-order has strides (8, 2). This implies that to move from element to element in memory requires jumps of 2 bytes. To move from row-to-row, one needs to jump 8 bytes at a time (2 * 4).

ctypesctypes object

Class containing properties of the array needed for interaction with ctypes.

basendarray

If the array is a view into another array, that array is its base (unless that array is also a view). The base array is where the array data is actually stored.

See Also#

array : Construct an array. zeros : Create an array, each element of which is zero. empty : Create an array, but leave its allocated memory unchanged (i.e.,

it contains “garbage”).

dtype : Create a data-type. numpy.typing.NDArray : An ndarray alias generic

w.r.t. its dtype.type <numpy.dtype.type>.

Notes#

There are two modes of creating an array using __new__:

  1. If buffer is None, then only shape, dtype, and order are used.

  2. If buffer is an object exposing the buffer interface, then all keywords are interpreted.

No __init__ method is needed because the array is fully initialized after the __new__ method.

Examples#

These examples illustrate the low-level ndarray constructor. Refer to the See Also section above for easier ways of constructing an ndarray.

First mode, buffer is None:

>>> np.ndarray(shape=(2,2), dtype=float, order='F')
array([[0.0e+000, 0.0e+000], # random
       [     nan, 2.5e-323]])

Second mode:

>>> np.ndarray((2,), buffer=np.array([1,2,3]),
...            offset=np.int_().itemsize,
...            dtype=int) # offset = 1*itemsize, i.e. skip first element
array([2, 3])

octahedron_faces#

dipy.core.sphere.octahedron_faces()#
numpy ndarray constant; see the ndarray description under octahedron_vertices above.

icosahedron_vertices#

dipy.core.sphere.icosahedron_vertices()#
numpy ndarray constant; see the ndarray description under octahedron_vertices above.

icosahedron_faces#

dipy.core.sphere.icosahedron_faces()#
numpy ndarray constant; see the ndarray description under octahedron_vertices above.

unit_octahedron#

dipy.core.sphere.unit_octahedron()#

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)

Parameters#

x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

unit_icosahedron#

dipy.core.sphere.unit_icosahedron()#

Points on the unit sphere.

The sphere can be constructed using one of three conventions:

Sphere(x, y, z)
Sphere(xyz=xyz)
Sphere(theta=theta, phi=phi)

Parameters#

x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

hemi_icosahedron#

dipy.core.sphere.hemi_icosahedron()#

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)

Parameters#

x, y, z1-D array_like

Vertices as x-y-z coordinates.

theta, phi1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz(N, 3) ndarray

Vertices as x-y-z coordinates.

faces(N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges(N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tolfloat

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See Also#

Sphere

random_uniform_on_sphere#

dipy.core.sphere_stats.random_uniform_on_sphere(n=1, coords='xyz')#

Random unit vectors from a uniform distribution on the sphere.

Parameters#

n : int

Number of random vectors.

coords : {‘xyz’, ‘radians’, ‘degrees’}

‘xyz’ for cartesian form, ‘radians’ for spherical form in rads, ‘degrees’ for spherical form in degrees.

Returns#

X : array, shape (n,3) if coords=’xyz’ or shape (n,2) otherwise

Uniformly distributed vectors on the unit sphere.

Notes#

The uniform distribution on the sphere, parameterized by spherical coordinates \((\theta, \phi)\), should verify \(\phi\sim U[0,2\pi]\), while \(z=\cos(\theta)\sim U[-1,1]\).

References#

[1] https://mathworld.wolfram.com/SpherePointPicking.html

Examples#

>>> from dipy.core.sphere_stats import random_uniform_on_sphere
>>> X = random_uniform_on_sphere(4, 'radians')
>>> X.shape == (4, 2)
True
>>> X = random_uniform_on_sphere(4, 'xyz')
>>> X.shape == (4, 3)
True

eigenstats#

dipy.core.sphere_stats.eigenstats(points, alpha=0.05)#

Principal direction and confidence ellipse.

Implements equations in section 6.3.1(ii) of Fisher, Lewis and Embleton, supplemented by equations in section 3.2.5.

Parameters#

points : array_like (N,3)

Array of points on the sphere of radius 1 in \(\mathbb{R}^3\).

alpha : real or None

1 minus the coverage for the confidence ellipsoid, e.g. 0.05 for 95% coverage.

Returns#

centre : vector (3,)

Centre of ellipsoid.

b1 : vector (2,)

Lengths of semi-axes of ellipsoid.

compare_orientation_sets#

dipy.core.sphere_stats.compare_orientation_sets(S, T)#

Computes the mean cosine distance of the best match between points of two sets of vectors S and T (angular similarity)

Parameters#

Sarray, shape (m,d)

First set of vectors.

Tarray, shape (n,d)

Second set of vectors.

Returns#

max_mean_cosinefloat

Maximum mean cosine distance.

Examples#

>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> compare_orientation_sets(S,T)
1.0
>>> from dipy.core.sphere_stats import compare_orientation_sets
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> compare_orientation_sets(S,T)
1.0

angular_similarity#

dipy.core.sphere_stats.angular_similarity(S, T)#

Computes the cosine distance of the best match between points of two sets of vectors S and T

Parameters#

S : array, shape (m,d)

T : array, shape (n,d)

Returns#

max_cosine_distance:float

Examples#

>>> import numpy as np
>>> from dipy.core.sphere_stats import angular_similarity
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,0,1]])
>>> angular_similarity(S,T)
2.0
>>> S=np.array([[-1,0,0],[0,1,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,0,-1]])
>>> angular_similarity(S,T)
2.0
>>> T=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> S=np.array([[1,0,0],[0,1,0],[0,0,1]])
>>> angular_similarity(S,T)
3.0
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0],[0,np.sqrt(2)/2.,np.sqrt(2)/2.],[0,0,1]])
>>> angular_similarity(S,T)
2.7071067811865475
>>> S=np.array([[0,1,0],[1,0,0],[0,0,1]])
>>> T=np.array([[1,0,0]])
>>> angular_similarity(S,T)
1.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,0,1]])
>>> angular_similarity(S,T)
0.0
>>> S=np.array([[0,1,0],[1,0,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])

Now we use print to reduce the precision of the printed output (so the doctests don’t detect unimportant differences)

>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187
>>> S=np.array([[0,1,0],[0,0,1]])
>>> T=np.array([[0,np.sqrt(2)/2.,np.sqrt(2)/2.]])
>>> print('%.12f' % angular_similarity(S,T))
0.707106781187

create_unit_sphere#

dipy.core.subdivide_octahedron.create_unit_sphere(recursion_level=2)#

Creates a unit sphere by subdividing a unit octahedron.

Starts with a unit octahedron and subdivides the faces, projecting the resulting points onto the surface of a unit sphere.

Parameters#

recursion_level : int

Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The sphere will have \(4^{recursion\_level}+2\) vertices.

Returns#

Sphere :

The unit sphere.

See Also#

create_unit_hemisphere, Sphere

create_unit_hemisphere#

dipy.core.subdivide_octahedron.create_unit_hemisphere(recursion_level=2)#

Creates a unit sphere by subdividing a unit octahedron and returns half of the sphere.

Parameters#

recursion_level : int

Level of subdivision; recursion_level=1 will return an octahedron, anything bigger will return a more subdivided sphere. The hemisphere will have \((4^{recursion\_level}+2)/2\) vertices.

Returns#

HemiSphere :

Half of a unit sphere.

See Also#

create_unit_sphere, Sphere, HemiSphere
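
For illustration, a short sketch; the vertex counts follow the \(4^{recursion\_level}+2\) formula quoted above:

>>> from dipy.core.subdivide_octahedron import create_unit_sphere, create_unit_hemisphere
>>> sphere = create_unit_sphere(recursion_level=3)
>>> sphere.vertices.shape
(66, 3)
>>> hemi = create_unit_hemisphere(recursion_level=3)
>>> hemi.vertices.shape
(33, 3)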

cshift3D#

dipy.core.wavelet.cshift3D(x, m, d)#

3D Circular Shift

Parameters#

x3D ndarray

N1 by N2 by N3 array

mint

amount of shift

dint

dimension of shift (d = 1,2,3)

Returns#

y3D ndarray

array x will be shifted by m samples down along dimension d

permutationinverse#

dipy.core.wavelet.permutationinverse(perm)#

Function generating inverse of the permutation

Parameters#

perm : 1D array

Returns#

inverse1D array

permutation inverse of the input

afb3D_A#

dipy.core.wavelet.afb3D_A(x, af, d)#
3D Analysis Filter Bank

(along one dimension only)

Parameters#

x3D ndarray
N1xN2xN3 matrix, where min(N1,N2,N3) > 2*length(filter)

(Ni are even)

af2D ndarray

analysis filter for the columns:

af[:, 1] - lowpass filter

af[:, 2] - highpass filter

dint

dimension of filtering (d = 1, 2 or 3)

Returns#

lo1D array

lowpass subbands

hi1D array

highpass subbands

sfb3D_A#

dipy.core.wavelet.sfb3D_A(lo, hi, sf, d)#
3D Synthesis Filter Bank

(along single dimension only)

Parameters#

lo1D array

lowpass subbands

hi1D array

highpass subbands

sf2D ndarray

synthesis filters

dint

dimension of filtering

Returns#

y3D ndarray

the N1xN2xN3 matrix

sfb3D#

dipy.core.wavelet.sfb3D(lo, hi, sf1, sf2=None, sf3=None)#

3D Synthesis Filter Bank

Parameters#

lo1D array

lowpass subbands

hi1D array

highpass subbands

sfi2D ndarray

synthesis filters for dimension i

Returns#

y3D ndarray

output array

afb3D#

dipy.core.wavelet.afb3D(x, af1, af2=None, af3=None)#

3D Analysis Filter Bank

Parameters#

x3D ndarray

N1 by N2 by N3 array, where:

  1. N1, N2, N3 are all even

  2. N1 >= 2*len(af1)

  3. N2 >= 2*len(af2)

  4. N3 >= 2*len(af3)

afi2D ndarray

analysis filters for dimension i:

afi[:, 1] - lowpass filter

afi[:, 2] - highpass filter

Returns#

lo1D array

lowpass subband

hi1D array

highpass subbands, h[d]- d = 1..7

dwt3D#

dipy.core.wavelet.dwt3D(x, J, af)#

3-D Discrete Wavelet Transform

Parameters#

x3D ndarray

N1 x N2 x N3 matrix, where:

  1. Ni are all even

  2. min(Ni) >= 2^(J-1)*length(af)

Jint

number of stages

af2D ndarray

analysis filters

Returns#

wcell array

wavelet coefficients

idwt3D#

dipy.core.wavelet.idwt3D(w, J, sf)#

Inverse 3-D Discrete Wavelet Transform

Parameters#

wcell array

wavelet coefficient

Jint

number of stages

sf2D ndarray

synthesis filters

Returns#

y3D ndarray

output array