data#

Read test or example data.

DataError

loads_compat(byte_data)

DATA_DIR

Path to the directory containing DIPY's bundled test and example data files (a str constant).

get_sim_voxels([name])

Provide some simulated voxel data.

get_skeleton([name])

Provide skeletons generated from Local Skeleton Clustering (LSC).

get_sphere([name])

Provide triangulated spheres.

default_sphere

Points on the unit sphere.

small_sphere

Points on the unit sphere.

get_3shell_gtab()

get_isbi2013_2shell_gtab()

get_gtab_taiwan_dsi()

dsi_voxels()

dsi_deconv_voxels()

mrtrix_spherical_functions()

Spherical functions represented by spherical harmonic coefficients and evaluated on a discrete sphere.

get_cmap(name)

Make a callable, similar to matplotlib.pyplot.get_cmap.

two_cingulum_bundles()

matlab_life_results()

load_sdp_constraints(model_name[, order])

Import semidefinite programming constraint matrices for different models, generated as described for example in [1]_.

Module: data.fetcher#

copyfileobj_withprogress(fsrc, fdst, ...[, ...])

check_md5(filename[, stored_md5])

Compute the md5 checksum of filename and check whether it matches the supplied md5 string.

fetch_data(files, folder[, data_size])

Downloads files to folder and checks their md5 checksums

fetch_isbi2013_2shell()

Download a 2-shell software phantom dataset

fetch_stanford_labels()

Download reduced FreeSurfer aparc image from the Stanford website.

fetch_sherbrooke_3shell()

Download a 3-shell HARDI dataset with 192 gradient directions.

fetch_stanford_hardi()

Download a HARDI dataset with 160 gradient directions

fetch_resdnn_weights()

Download ResDNN model weights for Nath et al. 2018.

fetch_synb0_weights()

Download Synb0 model weights for Schilling et al. 2019.

fetch_synb0_test()

Download Synb0 test data for Schilling et al. 2019.

fetch_deepn4_weights()

Download DeepN4 model weights for Kanakaraj et al. 2024.

fetch_deepn4_test()

Download DeepN4 test data for Kanakaraj et al. 2024.

fetch_evac_weights()

Download EVAC+ model weights for Park et al. 2022.

fetch_evac_test()

Download EVAC+ test data for Park et al. 2022.

fetch_stanford_t1()

fetch_stanford_pve_maps()

fetch_stanford_tracks()

Download Stanford tracks for examples.

fetch_taiwan_ntu_dsi()

Download a DSI dataset with 203 gradient directions

fetch_syn_data()

Download t1 and b0 volumes from the same session

fetch_mni_template()

Fetch the MNI 2009a T1 and T2, and 2009c T1 and T1 mask files.

fetch_scil_b0()

Download b=0 datasets from multiple MR systems (GE, Philips, Siemens) and different magnetic fields (1.5T and 3T)

fetch_bundles_2_subjects()

Download 2 subjects from the SNAIL dataset with their bundles

fetch_ivim()

Download IVIM dataset

fetch_cfin_multib()

Download CFIN multi b-value diffusion data

fetch_file_formats()

Download 5 bundles in various file formats and their reference

fetch_bundle_atlas_hcp842()

Download atlas tractogram from the hcp842 dataset with 80 bundles

fetch_target_tractogram_hcp()

Download tractogram of one of the hcp dataset subjects

fetch_bundle_fa_hcp()

Download map of FA within two bundles in one of the hcp dataset subjects.

fetch_qtdMRI_test_retest_2subjects()

Downloads test-retest qt-dMRI acquisitions of two C57Bl6 mice.

fetch_gold_standard_io()

Downloads the gold standard for streamlines io testing.

fetch_qte_lte_pte()

Download QTE data with linear and planar tensor encoding.

fetch_cti_rat1()

Download Rat Brain DDE data for CTI reconstruction (Rat #1 data from Henriques et al. MRM 2021).

fetch_fury_surface()

Surface for testing and examples

fetch_DiB_70_lte_pte_ste()

Download QTE data with linear, planar, and spherical tensor encoding.

fetch_DiB_217_lte_pte_ste()

Download QTE data with linear, planar, and spherical tensor encoding.

fetch_ptt_minimal_dataset()

Download FOD and seeds for PTT testing and examples

fetch_bundle_warp_dataset()

Download Bundle Warp dataset

get_fnames([name])

Provide full paths to example or test datasets.

read_qtdMRI_test_retest_2subjects()

Load test-retest qt-dMRI acquisitions of two C57Bl6 mice.

read_scil_b0()

Load GE 3T b0 image from the scil b0 dataset.

read_siemens_scil_b0()

Load Siemens 1.5T b0 image from the scil b0 dataset.

read_isbi2013_2shell()

Load ISBI 2013 2-shell synthetic dataset.

read_sherbrooke_3shell()

Load Sherbrooke 3-shell HARDI dataset.

read_stanford_labels()

Read Stanford HARDI data and label map.

read_stanford_hardi()

Load Stanford HARDI dataset.

read_stanford_t1()

read_stanford_pve_maps()

read_taiwan_ntu_dsi()

Load Taiwan NTU dataset.

read_syn_data()

Load t1 and b0 volumes from the same session.

fetch_tissue_data()

Download images to be used for tissue classification

read_tissue_data([contrast])

Load images to be used for tissue classification

read_mni_template([version, contrast])

Read the MNI template from disk.

fetch_cenir_multib([with_raw])

Fetch 'HCP-like' data, collected at multiple b-values.

read_cenir_multib([bvals])

Read CENIR multi b-value data.

read_bundles_2_subjects([subj_id, metrics, ...])

Read images and streamlines from 2 subjects of the SNAIL dataset.

read_ivim()

Load IVIM dataset.

read_cfin_dwi()

Load CFIN multi b-value DWI data.

read_cfin_t1()

Load CFIN T1-weighted data.

get_file_formats()

Return bundles_list (list of all bundles) and ref_anat (the reference anatomy).

get_bundle_atlas_hcp842()

Return two file paths, file1 and file2 (strings).

get_two_hcp842_bundles()

Return two file paths, file1 and file2 (strings).

get_target_tractogram_hcp()

Returns file1 : string

read_qte_lte_pte()

Read q-space trajectory encoding data with linear and planar tensor encoding.

read_DiB_70_lte_pte_ste()

Read q-space trajectory encoding data with 70 measurements split between linear, planar, and spherical tensor encoding.

read_DiB_217_lte_pte_ste()

Read q-space trajectory encoding data with 217 measurements split between linear, planar, and spherical tensor encoding.

extract_example_tracts(out_dir)

Extract 5 trk files ('AF_L', 'CST_R' and 'CC_ForcepsMajor') into the out_dir folder.

read_five_af_bundles()

Load 5 small left arcuate fasciculus bundles.

to_bids_description(path[, fname, BIDSVersion])

Dump a dict into a BIDS description file at the given location.

fetch_hcp(subjects[, hcp_bucket, ...])

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS [1]_ specification.

fetch_hbn(subjects[, path])

Fetch preprocessed data from the Healthy Brain Network POD2 study [1, 2]_.

DataError#

class dipy.data.DataError#

Bases: Exception

__init__(*args, **kwargs)#

loads_compat#

dipy.data.loads_compat(byte_data)#

DATA_DIR#

dipy.data.DATA_DIR#

A str constant holding the path to the directory that contains DIPY's bundled test and example data files.

get_sim_voxels#

dipy.data.get_sim_voxels(name='fib1')#

Provide some simulated voxel data.

Parameters#

name : str, which file?

‘fib0’, ‘fib1’ or ‘fib2’

Returns#

dix : dictionary, where dix[‘data’] returns a 2d array

where every row is a simulated voxel with different orientation

Examples#

>>> from dipy.data import get_sim_voxels
>>> sv=get_sim_voxels('fib1')
>>> sv['data'].shape == (100, 102)
True
>>> sv['fibres']
'1'
>>> sv['gradients'].shape == (102, 3)
True
>>> sv['bvals'].shape == (102,)
True
>>> sv['snr']
'60'
>>> sv2=get_sim_voxels('fib2')
>>> sv2['fibres']
'2'
>>> sv2['snr']
'80'

Notes#

These sim voxels were provided by M.M. Correia using Rician noise.

get_skeleton#

dipy.data.get_skeleton(name='C1')#

Provide skeletons generated from Local Skeleton Clustering (LSC).

Parameters#

name : str, ‘C1’ or ‘C3’

Returns#

dix : dictionary

Examples#

>>> from dipy.data import get_skeleton
>>> C=get_skeleton('C1')
>>> len(C.keys())
117
>>> for c in C: break
>>> sorted(C[c].keys())
['N', 'hidden', 'indices', 'most']

get_sphere#

dipy.data.get_sphere(name='symmetric362')#

Provide triangulated spheres.

Parameters#

name : str

which sphere - one of: ‘symmetric362’, ‘symmetric642’, ‘symmetric724’, ‘repulsion724’, ‘repulsion100’, ‘repulsion200’

Returns#

sphere : a dipy.core.sphere.Sphere class instance

Examples#

>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere('symmetric362')
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere('not a sphere name') 
Traceback (most recent call last):
    ...
DataError: No sphere called "not a sphere name"

default_sphere#

dipy.data.default_sphere#

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)

Parameters#

x, y, z : 1-D array_like

Vertices as x-y-z coordinates.

theta, phi : 1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz : (N, 3) ndarray

Vertices as x-y-z coordinates.

faces : (N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges : (N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tol : float

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See Also#

Sphere
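A minimal sketch of typical use, assuming only the attributes described above:

>>> from dipy.data import default_sphere
>>> # vertices are x-y-z unit vectors; faces index triangles into them
>>> default_sphere.vertices.shape[1] == 3
True
>>> default_sphere.faces.shape[1] == 3
True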

small_sphere#

dipy.data.small_sphere#

Points on the unit sphere.

A HemiSphere is similar to a Sphere but it takes antipodal symmetry into account. Antipodal symmetry means that point v on a HemiSphere is the same as the point -v. Duplicate points are discarded when constructing a HemiSphere (including antipodal duplicates). edges and faces are remapped to the remaining points as closely as possible.

The HemiSphere can be constructed using one of three conventions:

HemiSphere(x, y, z)
HemiSphere(xyz=xyz)
HemiSphere(theta=theta, phi=phi)

Parameters#

x, y, z : 1-D array_like

Vertices as x-y-z coordinates.

theta, phi : 1-D array_like

Vertices as spherical coordinates. Theta and phi are the inclination and azimuth angles respectively.

xyz : (N, 3) ndarray

Vertices as x-y-z coordinates.

faces : (N, 3) ndarray

Indices into vertices that form triangular faces. If unspecified, the faces are computed using a Delaunay triangulation.

edges : (N, 2) ndarray

Edges between vertices. If unspecified, the edges are derived from the faces.

tol : float

Angle in degrees. Vertices that are less than tol degrees apart are treated as duplicates.

See Also#

Sphere

get_3shell_gtab#

dipy.data.get_3shell_gtab()#

get_isbi2013_2shell_gtab#

dipy.data.get_isbi2013_2shell_gtab()#

get_gtab_taiwan_dsi#

dipy.data.get_gtab_taiwan_dsi()#

dsi_voxels#

dipy.data.dsi_voxels()#

dsi_deconv_voxels#

dipy.data.dsi_deconv_voxels()#

mrtrix_spherical_functions#

dipy.data.mrtrix_spherical_functions()#

Spherical functions represented by spherical harmonic coefficients and evaluated on a discrete sphere.

Returns#

func_coef : array (2, 3, 4, 45)

Functions represented by the coefficients associated with the mrtrix spherical harmonic basis of maximal order (l) 8.

func_discrete : array (2, 3, 4, 81)

Functions evaluated on sphere.

sphere : Sphere

The discrete sphere, points on the surface of a unit sphere, used to evaluate the functions.

Notes#

These coefficients were obtained by using the dwi2SH command of mrtrix.
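A short sketch, assuming the return order listed above:

>>> from dipy.data import mrtrix_spherical_functions
>>> func_coef, func_discrete, sphere = mrtrix_spherical_functions()
>>> func_coef.shape == (2, 3, 4, 45)
True
>>> func_discrete.shape == (2, 3, 4, 81)
True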

get_cmap#

dipy.data.get_cmap(name)#

Make a callable, similar to matplotlib.pyplot.get_cmap.

two_cingulum_bundles#

dipy.data.two_cingulum_bundles()#

matlab_life_results#

dipy.data.matlab_life_results()#

load_sdp_constraints#

dipy.data.load_sdp_constraints(model_name, order=None)#

Import semidefinite programming constraint matrices for different models, generated as described for example in [1]_.

Parameters#

model_name : string

A string identifying the model that is to be constrained.

order : unsigned int, optional

A non-negative integer that represents the order or instance of the model. Default: None.

Returns#

sdp_constraints : array

Constraint matrices

Notes#

The constraints will be loaded from a file named ‘id_constraint_order.npz’.

References#

copyfileobj_withprogress#

dipy.data.fetcher.copyfileobj_withprogress(fsrc, fdst, total_length, length=16384)#

check_md5#

dipy.data.fetcher.check_md5(filename, stored_md5=None)#

Compute the md5 checksum of filename and check whether it matches the supplied md5 string.

Parameters#

filename : string

Path to a file.

stored_md5 : string

Known md5 of filename to check against. If None (default), checking is skipped.
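An illustrative call; the path and digest below are placeholders, not a real file:

>>> from dipy.data.fetcher import check_md5
>>> # compare a local file against a known checksum (hypothetical path and md5)
>>> check_md5('/tmp/example.nii.gz', stored_md5='d41d8cd98f00b204e9800998ecf8427e')  # doctest: +SKIP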

fetch_data#

dipy.data.fetcher.fetch_data(files, folder, data_size=None)#

Downloads files to folder and checks their md5 checksums

Parameters#

files : dictionary

For each file in files the value should be (url, md5). The file will be downloaded from url if the file does not already exist or if the file exists but the md5 checksum does not match.

folder : str

The directory in which to save the files; it will be created if it does not already exist.

data_size : str, optional

A string describing the size of the data (e.g. “91 MB”) to be logged to the screen. Default does not produce any information about data size.

Raises#

FetcherError

Raises if the md5 checksum of the file does not match the expected value. The downloaded file is not deleted when this error is raised.
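An illustrative sketch of the expected files mapping; the URL and md5 below are placeholders, not a real dataset:

>>> from dipy.data.fetcher import fetch_data
>>> files = {'example.nii.gz': ('https://example.org/example.nii.gz',
...                             'd41d8cd98f00b204e9800998ecf8427e')}
>>> fetch_data(files, '/tmp/dipy_example', data_size='1 MB')  # doctest: +SKIP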

fetch_isbi2013_2shell#

dipy.data.fetcher.fetch_isbi2013_2shell()#

Download a 2-shell software phantom dataset

fetch_stanford_labels#

dipy.data.fetcher.fetch_stanford_labels()#

Download reduced FreeSurfer aparc image from the Stanford website.

fetch_sherbrooke_3shell#

dipy.data.fetcher.fetch_sherbrooke_3shell()#

Download a 3-shell HARDI dataset with 192 gradient directions.

fetch_stanford_hardi#

dipy.data.fetcher.fetch_stanford_hardi()#

Download a HARDI dataset with 160 gradient directions

fetch_resdnn_weights#

dipy.data.fetcher.fetch_resdnn_weights()#

Download ResDNN model weights for Nath et al. 2018.

fetch_synb0_weights#

dipy.data.fetcher.fetch_synb0_weights()#

Download Synb0 model weights for Schilling et al. 2019.

fetch_synb0_test#

dipy.data.fetcher.fetch_synb0_test()#

Download Synb0 test data for Schilling et al. 2019.

fetch_deepn4_weights#

dipy.data.fetcher.fetch_deepn4_weights()#

Download DeepN4 model weights for Kanakaraj et al. 2024.

fetch_deepn4_test#

dipy.data.fetcher.fetch_deepn4_test()#

Download DeepN4 test data for Kanakaraj et al. 2024.

fetch_evac_weights#

dipy.data.fetcher.fetch_evac_weights()#

Download EVAC+ model weights for Park et al. 2022.

fetch_evac_test#

dipy.data.fetcher.fetch_evac_test()#

Download EVAC+ test data for Park et al. 2022.

fetch_stanford_t1#

dipy.data.fetcher.fetch_stanford_t1()#

fetch_stanford_pve_maps#

dipy.data.fetcher.fetch_stanford_pve_maps()#

fetch_stanford_tracks#

dipy.data.fetcher.fetch_stanford_tracks()#

Download Stanford tracks for examples.

fetch_taiwan_ntu_dsi#

dipy.data.fetcher.fetch_taiwan_ntu_dsi()#

Download a DSI dataset with 203 gradient directions

fetch_syn_data#

dipy.data.fetcher.fetch_syn_data()#

Download t1 and b0 volumes from the same session

fetch_mni_template#

dipy.data.fetcher.fetch_mni_template()#

Fetch the MNI 2009a T1 and T2, and 2009c T1 and T1 mask files.

Notes#

The templates were downloaded from the MNI (McGill University) website in July 2015.

The following publications should be referenced when using these templates:

License for the MNI templates:

Copyright (C) 1993-2004, Louis Collins McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies. The authors and McGill University make no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty. The authors are not responsible for any data loss, equipment damage, property loss, or injury to subjects or patients resulting from the use or misuse of this software package.

fetch_scil_b0#

dipy.data.fetcher.fetch_scil_b0()#

Download b=0 datasets from multiple MR systems (GE, Philips, Siemens) and different magnetic fields (1.5T and 3T)

fetch_bundles_2_subjects#

dipy.data.fetcher.fetch_bundles_2_subjects()#

Download 2 subjects from the SNAIL dataset with their bundles

fetch_ivim#

dipy.data.fetcher.fetch_ivim()#

Download IVIM dataset

fetch_cfin_multib#

dipy.data.fetcher.fetch_cfin_multib()#

Download CFIN multi b-value diffusion data

fetch_file_formats#

dipy.data.fetcher.fetch_file_formats()#

Download 5 bundles in various file formats and their reference

fetch_bundle_atlas_hcp842#

dipy.data.fetcher.fetch_bundle_atlas_hcp842()#

Download atlas tractogram from the hcp842 dataset with 80 bundles

fetch_target_tractogram_hcp#

dipy.data.fetcher.fetch_target_tractogram_hcp()#

Download tractogram of one of the hcp dataset subjects

fetch_bundle_fa_hcp#

dipy.data.fetcher.fetch_bundle_fa_hcp()#

Download map of FA within two bundles in one of the hcp dataset subjects.

fetch_qtdMRI_test_retest_2subjects#

dipy.data.fetcher.fetch_qtdMRI_test_retest_2subjects()#

Downloads test-retest qt-dMRI acquisitions of two C57Bl6 mice.

fetch_gold_standard_io#

dipy.data.fetcher.fetch_gold_standard_io()#

Downloads the gold standard for streamlines io testing.

fetch_qte_lte_pte#

dipy.data.fetcher.fetch_qte_lte_pte()#

Download QTE data with linear and planar tensor encoding.

fetch_cti_rat1#

dipy.data.fetcher.fetch_cti_rat1()#

Download Rat Brain DDE data for CTI reconstruction (Rat #1 data from Henriques et al. MRM 2021).

fetch_fury_surface#

dipy.data.fetcher.fetch_fury_surface()#

Surface for testing and examples

fetch_DiB_70_lte_pte_ste#

dipy.data.fetcher.fetch_DiB_70_lte_pte_ste()#

Download QTE data with linear, planar, and spherical tensor encoding. If using this data please cite F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and spherical tensor-valued diffusion MRI data by free waveform encoding in healthy brain, water, oil and liquid crystals. Data in Brief (2019), DOI: https://doi.org/10.1016/j.dib.2019.104208

fetch_DiB_217_lte_pte_ste#

dipy.data.fetcher.fetch_DiB_217_lte_pte_ste()#

Download QTE data with linear, planar, and spherical tensor encoding. If using this data please cite F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and spherical tensor-valued diffusion MRI data by free waveform encoding in healthy brain, water, oil and liquid crystals. Data in Brief (2019), DOI: https://doi.org/10.1016/j.dib.2019.104208

fetch_ptt_minimal_dataset#

dipy.data.fetcher.fetch_ptt_minimal_dataset()#

Download FOD and seeds for PTT testing and examples

fetch_bundle_warp_dataset#

dipy.data.fetcher.fetch_bundle_warp_dataset()#

Download Bundle Warp dataset

get_fnames#

dipy.data.fetcher.get_fnames(name='small_64D')#

Provide full paths to example or test datasets.

Parameters#

name : str

The filename(s) of the dataset to return, one of:

  • ‘small_64D’ small region of interest nifti, bvecs, bvals 64 directions

  • ‘small_101D’ small region of interest nifti, bvecs, bvals 101 directions

  • ‘aniso_vox’ volume with anisotropic voxel size as Nifti

  • ‘fornix’ 300 tracks in Trackvis format (from Pittsburgh Brain Competition)

  • ‘gqi_vectors’ the scanner wave vectors needed for a GQI acquisition of 101 directions tested on Siemens 3T Trio

  • ‘small_25’ small ROI (10x8x2) DTI data (b value 2000, 25 directions)

  • ‘test_piesno’ slice of N=8, K=14 diffusion data

  • ‘reg_c’ small 2D image used for validating registration

  • ‘reg_o’ small 2D image used for validating registration

  • ‘cb_2’ two vectorized cingulum bundles

Returns#

fnames : tuple

filenames for dataset

Examples#

>>> import numpy as np
>>> from dipy.io.image import load_nifti
>>> from dipy.data import get_fnames
>>> fimg, fbvals, fbvecs = get_fnames('small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> data, affine = load_nifti(fimg)
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True

read_qtdMRI_test_retest_2subjects#

dipy.data.fetcher.read_qtdMRI_test_retest_2subjects()#

Load test-retest qt-dMRI acquisitions of two C57Bl6 mice.

These datasets were used to study test-retest reproducibility of time-dependent q-space indices (qτ-indices) in the corpus callosum of two mice [1]. The data itself and its details are publicly available and can be cited at [2].

The test-retest diffusion MRI spin echo sequences were acquired from two C57Bl6 wild-type mice on an 11.7 Tesla Bruker scanner. The test and retest acquisition were taken 48 hours from each other. The (processed) data consists of 80x160x5 voxels of size 110x110x500μm. Each data set consists of 515 Diffusion-Weighted Images (DWIs) spread over 35 acquisition shells. The shells are spread over 7 gradient strength shells with a maximum gradient strength of 491 mT/m, 5 pulse separation shells between [10.8 - 20.0]ms, and a pulse length of 5ms. We manually created a brain mask and corrected the data from eddy currents and motion artifacts using FSL’s eddy. A region of interest was then drawn in the middle slice in the corpus callosum, where the tissue is reasonably coherent.

Returns#

data : list of length 4

Contains the dwi datasets ordered as (subject1_test, subject1_retest, subject2_test, subject2_retest).

cc_masks : list of length 4

Contains the corpus callosum masks, ordered in the same order as data.

gtabs : list of length 4

Contains the qt-dMRI gradient tables of the data sets.

References#

[1] Fick, Rutger HJ, et al. “Non-Parametric GraphNet-Regularized Representation of dMRI in Space and Time”, Medical Image Analysis, 2017.

[2] Wassermann, Demian, et al., “Test-Retest qt-dMRI datasets for ‘Non-Parametric GraphNet-Regularized Representation of dMRI in Space and Time’”. doi:10.5281/zenodo.996889, 2017.
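A minimal usage sketch, assuming the return values listed above; the call is skipped here because it downloads data on first use:

>>> from dipy.data import read_qtdMRI_test_retest_2subjects
>>> data, cc_masks, gtabs = read_qtdMRI_test_retest_2subjects()  # doctest: +SKIP
>>> len(data), len(cc_masks), len(gtabs)  # doctest: +SKIP
(4, 4, 4)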

read_scil_b0#

dipy.data.fetcher.read_scil_b0()#

Load GE 3T b0 image from the scil b0 dataset.

Returns#

img : obj,

Nifti1Image

read_siemens_scil_b0#

dipy.data.fetcher.read_siemens_scil_b0()#

Load Siemens 1.5T b0 image from the scil b0 dataset.

Returns#

img : obj,

Nifti1Image

read_isbi2013_2shell#

dipy.data.fetcher.read_isbi2013_2shell()#

Load ISBI 2013 2-shell synthetic dataset.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable

read_sherbrooke_3shell#

dipy.data.fetcher.read_sherbrooke_3shell()#

Load Sherbrooke 3-shell HARDI dataset.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable

read_stanford_labels#

dipy.data.fetcher.read_stanford_labels()#

Read Stanford HARDI data and label map.

read_stanford_hardi#

dipy.data.fetcher.read_stanford_hardi()#

Load Stanford HARDI dataset.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable
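A minimal usage sketch; the data is fetched on first use, so the calls are skipped here:

>>> from dipy.data import read_stanford_hardi
>>> img, gtab = read_stanford_hardi()  # doctest: +SKIP
>>> data = img.get_fdata()             # doctest: +SKIP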

read_stanford_t1#

dipy.data.fetcher.read_stanford_t1()#

read_stanford_pve_maps#

dipy.data.fetcher.read_stanford_pve_maps()#

read_taiwan_ntu_dsi#

dipy.data.fetcher.read_taiwan_ntu_dsi()#

Load Taiwan NTU dataset.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable

read_syn_data#

dipy.data.fetcher.read_syn_data()#

Load t1 and b0 volumes from the same session.

Returns#

t1 : obj,

Nifti1Image

b0 : obj,

Nifti1Image

fetch_tissue_data#

dipy.data.fetcher.fetch_tissue_data()#

Download images to be used for tissue classification

read_tissue_data#

dipy.data.fetcher.read_tissue_data(contrast='T1')#

Load images to be used for tissue classification

Parameters#

contrast : str

‘T1’, ‘T1 denoised’ or ‘Anisotropic Power’

Returns#

image : obj,

Nifti1Image

read_mni_template#

dipy.data.fetcher.read_mni_template(version='a', contrast='T2')#

Read the MNI template from disk.

Parameters#

version : string

There are two MNI templates 2009a and 2009c, so options available are: “a” and “c”.

contrast : list or string, optional

Which of the contrast templates to read. For version “a” two contrasts are available: “T1” and “T2”. Similarly for version “c” there are two options, “T1” and “mask”. You can input contrast as a string or a list

Returns#

list : contains the nibabel.Nifti1Image objects requested, according to the

order they were requested in the input.

Examples#

>>> # Get only the T1 file for version c:
>>> T1 = read_mni_template("c", contrast = "T1") 
>>> # Get both files in this order for version a:
>>> T1, T2 = read_mni_template(contrast = ["T1", "T2"]) 

Notes#

The templates were downloaded from the MNI (McGill University) website in July 2015.

The following publications should be referenced when using these templates:

License for the MNI templates:

Copyright (C) 1993-2004, Louis Collins McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies. The authors and McGill University make no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty. The authors are not responsible for any data loss, equipment damage, property loss, or injury to subjects or patients resulting from the use or misuse of this software package.

fetch_cenir_multib#

dipy.data.fetcher.fetch_cenir_multib(with_raw=False)#

Fetch ‘HCP-like’ data, collected at multiple b-values.

Parameters#

with_raw : bool

Whether to fetch the raw data. By default this is False, which means that only eddy-current/motion corrected data is fetched.

Notes#

Details of the acquisition and processing, and additional meta-data are available through UW researchworks:

https://digital.lib.washington.edu/researchworks/handle/1773/33311

read_cenir_multib#

dipy.data.fetcher.read_cenir_multib(bvals=None)#

Read CENIR multi b-value data.

Parameters#

bvals : list or int

The b-values to read from file (200, 400, 1000, 2000, 3000).

Returns#

gtab : a GradientTable class instance

img : nibabel.Nifti1Image

Notes#

Details of the acquisition and processing, and additional meta-data are available through UW researchworks:

https://digital.lib.washington.edu/researchworks/handle/1773/33311
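A minimal sketch of reading a subset of shells, assuming the return order listed above; skipped because the data must be fetched first:

>>> from dipy.data import read_cenir_multib
>>> gtab, img = read_cenir_multib(bvals=[200, 400])  # doctest: +SKIP
>>> data = img.get_fdata()                           # doctest: +SKIP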

read_bundles_2_subjects#

dipy.data.fetcher.read_bundles_2_subjects(subj_id='subj_1', metrics=('fa',), bundles=('af.left', 'cst.right', 'cc_1'))#

Read images and streamlines from 2 subjects of the SNAIL dataset.

Parameters#

subj_id : string

Either subj_1 or subj_2.

metrics : array-like

Either [‘fa’] or [‘t1’] or [‘fa’, ‘t1’].

bundles : array-like

E.g., [‘af.left’, ‘cst.right’, ‘cc_1’]. See all the available bundles in the exp_bundles_maps/bundles_2_subjects directory of your $HOME/.dipy folder.

Returns#

dix : dict

Dictionary with data of the metrics and the bundles as keys.

Notes#

If you are using these datasets please cite the following publications.

References#

[1] Renauld, E., M. Descoteaux, M. Bernier, E. Garyfallidis, K. Whittingstall, “Morphology of thalamus, LGN and optic radiation do not influence EEG alpha waves”, Plos One (under submission), 2015.

[2] Garyfallidis, E., O. Ocegueda, D. Wassermann, M. Descoteaux. Robust and efficient linear registration of fascicles in the space of streamlines, Neuroimage, 117:124-140, 2015.
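A minimal usage sketch, assuming the metric and bundle names are used as dictionary keys as described above; skipped because it requires the downloaded dataset:

>>> from dipy.data import read_bundles_2_subjects
>>> dix = read_bundles_2_subjects(subj_id='subj_1', metrics=['fa'],
...                               bundles=['af.left'])  # doctest: +SKIP
>>> fa_map = dix['fa']                                  # doctest: +SKIP
>>> af_left = dix['af.left']                            # doctest: +SKIP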

read_ivim#

dipy.data.fetcher.read_ivim()#

Load IVIM dataset.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable

read_cfin_dwi#

dipy.data.fetcher.read_cfin_dwi()#

Load CFIN multi b-value DWI data.

Returns#

img : obj,

Nifti1Image

gtab : obj,

GradientTable

read_cfin_t1#

dipy.data.fetcher.read_cfin_t1()#

Load CFIN T1-weighted data.

Returns#

img : obj,

Nifti1Image

get_file_formats#

dipy.data.fetcher.get_file_formats()#

Returns#

bundles_list : all bundles (list)

ref_anat : reference

get_bundle_atlas_hcp842#

dipy.data.fetcher.get_bundle_atlas_hcp842()#

Returns#

file1 : string

file2 : string

get_two_hcp842_bundles#

dipy.data.fetcher.get_two_hcp842_bundles()#

Returns#

file1 : string

file2 : string

get_target_tractogram_hcp#

dipy.data.fetcher.get_target_tractogram_hcp()#

Returns#

file1 : string

read_qte_lte_pte#

dipy.data.fetcher.read_qte_lte_pte()#

Read q-space trajectory encoding data with linear and planar tensor encoding.

Returns#

data_img : nibabel.nifti1.Nifti1Image

dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image

Brain mask image.

gtab : dipy.core.gradients.GradientTable

Gradient table.
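A minimal usage sketch, assuming the return order listed above; skipped because the data is fetched on first use:

>>> from dipy.data import read_qte_lte_pte
>>> data_img, mask_img, gtab = read_qte_lte_pte()  # doctest: +SKIP
>>> data = data_img.get_fdata()                    # doctest: +SKIP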

read_DiB_70_lte_pte_ste#

dipy.data.fetcher.read_DiB_70_lte_pte_ste()#

Read q-space trajectory encoding data with 70 measurements split between linear, planar, and spherical tensor encoding.

Returns#

data_img : nibabel.nifti1.Nifti1Image

dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image

Brain mask image.

gtab : dipy.core.gradients.GradientTable

Gradient table.

read_DiB_217_lte_pte_ste#

dipy.data.fetcher.read_DiB_217_lte_pte_ste()#

Read q-space trajectory encoding data with 217 measurements split between linear, planar, and spherical tensor encoding.

Returns#

data_img : nibabel.nifti1.Nifti1Image

dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image

Brain mask image.

gtab : dipy.core.gradients.GradientTable

Gradient table.

extract_example_tracts#

dipy.data.fetcher.extract_example_tracts(out_dir)#

Extract 5 trk files (‘AF_L’, ‘CST_R’ and ‘CC_ForcepsMajor’) into the out_dir folder.

Parameters#

out_dir : str

Folder in which to extract the files.

read_five_af_bundles#

dipy.data.fetcher.read_five_af_bundles()#

Load 5 small left arcuate fasciculus bundles.

Returns#

bundles: list of ArraySequence

List with loaded bundles.

to_bids_description#

dipy.data.fetcher.to_bids_description(path, fname='dataset_description.json', BIDSVersion='1.4.0', **kwargs)#

Dump a dict into a BIDS description file at the given location.
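An illustrative sketch; the path is hypothetical, and the extra Name keyword is assumed to be merged into the written description:

>>> from dipy.data.fetcher import to_bids_description
>>> # writes dataset_description.json under the given directory
>>> to_bids_description('/tmp/my_bids_dataset', Name='Example dataset')  # doctest: +SKIP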

fetch_hcp#

dipy.data.fetcher.fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)#

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS [1]_ specification.

Parameters#

subjects : list

Each item is an integer, identifying one of the HCP subjects

hcp_bucket : string, optional

The name of the HCP S3 bucket. Default: “hcp-openaccess”

profile_name : string, optional

The name of the AWS profile used for access. Default: “hcp”

path : string, optional

Path to save files into. Default: ‘~/.dipy’

study : string, optional

Which HCP study to grab. Default: ‘HCP_1200’

aws_access_key_id : string, optional

AWS credentials to HCP AWS S3. Will only be used if profile_name is set to False.

aws_secret_access_key : string, optional

AWS credentials to HCP AWS S3. Will only be used if profile_name is set to False.

Returns#

dict with remote and local names of these files, path to BIDS derivative dataset

Notes#

To use this function with its default setting, you need to have a file ‘~/.aws/credentials’ that includes a section:

[hcp]
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX

The keys are credentials that you can get from HCP (see https://wiki.humanconnectome.org/display/PublicData/How+To+Connect+to+Connectome+Data+via+AWS)

Local filenames are changed to match our expected conventions.
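A minimal sketch, assuming the two return values listed above; the subject ID is illustrative and valid AWS credentials are required:

>>> from dipy.data.fetcher import fetch_hcp
>>> files, bids_path = fetch_hcp([100307])  # doctest: +SKIP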

fetch_hbn#

dipy.data.fetcher.fetch_hbn(subjects, path=None)#

Fetch preprocessed data from the Healthy Brain Network POD2 study [1, 2]_.

Parameters#

subjects : list

Identifiers of the subjects to download. For example: [“NDARAA948VFH”, “NDAREK918EC2”].

path : string, optional

Path to save files into. Default: ‘~/.dipy’

Returns#

dict with remote and local names of these files, path to BIDS derivative dataset

Notes#
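A minimal sketch, using one of the example subject identifiers from the docstring above and assuming the two return values listed; requires network access:

>>> from dipy.data.fetcher import fetch_hbn
>>> files, bids_path = fetch_hbn(["NDARAA948VFH"])  # doctest: +SKIP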