data#

Read test or example data.

DataError

loads_compat(byte_data)

get_sim_voxels(*[, name])

Provide some simulated voxel data.

get_skeleton(*[, name])

Provide skeletons generated from Local Skeleton Clustering (LSC).

get_sphere(*[, name])

Provide triangulated spheres.

get_3shell_gtab()

get_isbi2013_2shell_gtab()

get_gtab_taiwan_dsi()

dsi_voxels()

dsi_deconv_voxels()

mrtrix_spherical_functions()

Spherical functions represented by spherical harmonic coefficients and evaluated on a discrete sphere.

get_cmap(name)

Make a callable, similar to matplotlib.pyplot.get_cmap.

two_cingulum_bundles()

matlab_life_results()

load_sdp_constraints(model_name, *[, order])

Import semidefinite programming constraint matrices for different models.

Module: data.fetcher#

copyfileobj_withprogress(fsrc, fdst, ...[, ...])

check_md5(filename, *[, stored_md5])

Compute the MD5 of filename and check whether it matches the supplied MD5 string.

fetch_data(files, folder, *[, data_size, ...])

Download files to folder and check their MD5 checksums.

fetch_isbi2013_2shell()

Download a 2-shell software phantom dataset

fetch_stanford_labels()

Download a reduced FreeSurfer aparc image from the Stanford website.

fetch_sherbrooke_3shell()

Download a 3-shell HARDI dataset with 192 gradient directions.

fetch_stanford_hardi()

Download a HARDI dataset with 160 gradient directions

fetch_resdnn_tf_weights()

Download ResDNN TensorFlow model weights for Nath et al. (2018).

fetch_resdnn_torch_weights()

Download ResDNN PyTorch model weights for Nath et al. (2018).

fetch_synb0_weights()

Download Synb0 model weights for Schilling et al. (2019).

fetch_synb0_test()

Download Synb0 test data for Schilling et al. (2019).

fetch_deepn4_weights()

Download DeepN4 model weights for Kanakaraj et al. (2024).

fetch_deepn4_test()

Download DeepN4 test data for Kanakaraj et al. (2024).

fetch_evac_tf_weights()

Download EVAC+ model weights for Park et al. (2022).

fetch_evac_torch_weights()

Download EVAC+ model weights for Park et al. (2022).

fetch_evac_test()

Download EVAC+ test data for Park et al. (2022).

fetch_stanford_t1()

fetch_stanford_pve_maps()

fetch_stanford_tracks()

Download Stanford track for examples.

fetch_taiwan_ntu_dsi()

Download a DSI dataset with 203 gradient directions

fetch_syn_data()

Download T1 and b0 volumes from the same session.

fetch_mni_template()

Fetch the MNI 2009a T1 and T2, and 2009c T1 and T1 mask files.

fetch_scil_b0()

Download b=0 datasets from multiple MR systems (GE, Philips, Siemens) and different magnetic fields (1.5T and 3T)

fetch_bundles_2_subjects()

Download 2 subjects from the SNAIL dataset with their bundles

fetch_ivim()

Download IVIM dataset

fetch_cfin_multib()

Download CFIN multi b-value diffusion data

fetch_file_formats()

Download 5 bundles in various file formats and their reference

fetch_bundle_atlas_hcp842()

Download atlas tractogram from the hcp842 dataset with 80 bundles

fetch_30_bundle_atlas_hcp842()

Download atlas tractogram from the hcp842 dataset with 30 bundles

fetch_target_tractogram_hcp()

Download tractogram of one of the hcp dataset subjects

fetch_bundle_fa_hcp()

Download map of FA within two bundles in one of the hcp dataset subjects

fetch_qtdMRI_test_retest_2subjects()

Downloads test-retest qt-dMRI acquisitions of two C57Bl6 mice.

fetch_gold_standard_io()

Downloads the gold standard for streamlines io testing.

fetch_qte_lte_pte()

Download QTE data with linear and planar tensor encoding.

fetch_cti_rat1()

Download Rat Brain DDE data for CTI reconstruction (Rat #1 data from Henriques et al. MRM 2021).

fetch_fury_surface()

Surface for testing and examples

fetch_DiB_70_lte_pte_ste()

Download QTE data with linear, planar, and spherical tensor encoding.

fetch_DiB_217_lte_pte_ste()

Download QTE data with linear, planar, and spherical tensor encoding.

fetch_ptt_minimal_dataset()

Download FOD and seeds for PTT testing and examples

fetch_bundle_warp_dataset()

Download Bundle Warp dataset

fetch_disco1_dataset()

Download DISCO 1 dataset: The Diffusion-Simulated Connectivity Dataset.

fetch_disco2_dataset()

Download DISCO 2 dataset: The Diffusion-Simulated Connectivity Dataset.

fetch_disco3_dataset()

Download DISCO 3 dataset: The Diffusion-Simulated Connectivity Dataset.

fetch_disco_dataset()

Download All DISCO datasets.

get_fnames(*[, name])

Provide full paths to example or test datasets.

read_qtdMRI_test_retest_2subjects()

Load test-retest qt-dMRI acquisitions of two C57Bl6 mice.

read_scil_b0()

Load GE 3T b0 image from the scil b0 dataset.

read_siemens_scil_b0()

Load Siemens 1.5T b0 image from the scil b0 dataset.

read_isbi2013_2shell()

Load ISBI 2013 2-shell synthetic dataset.

read_sherbrooke_3shell()

Load Sherbrooke 3-shell HARDI dataset.

read_stanford_labels()

Read Stanford HARDI data and label map.

read_stanford_hardi()

Load Stanford HARDI dataset.

read_stanford_t1()

read_stanford_pve_maps()

read_taiwan_ntu_dsi()

Load Taiwan NTU dataset.

read_syn_data()

Load T1 and b0 volumes from the same session.

fetch_tissue_data()

Download images to be used for tissue classification

read_tissue_data(*[, contrast])

Load images to be used for tissue classification

read_mni_template(*[, version, contrast])

Read the MNI template from disk.

fetch_cenir_multib(*[, with_raw])

Fetch 'HCP-like' data, collected at multiple b-values.

read_cenir_multib(*[, bvals])

Read CENIR multi b-value data.

read_bundles_2_subjects(*[, subj_id, ...])

Read images and streamlines from 2 subjects of the SNAIL dataset.

read_ivim()

Load IVIM dataset.

read_cfin_dwi()

Load CFIN multi b-value DWI data.

read_cfin_t1()

Load CFIN T1-weighted data.

get_file_formats()

get_bundle_atlas_hcp842(*[, size])

get_two_hcp842_bundles()

get_target_tractogram_hcp()

read_qte_lte_pte()

Read q-space trajectory encoding data with linear and planar tensor encoding.

read_DiB_70_lte_pte_ste()

Read q-space trajectory encoding data with 70 measurements distributed between linear, planar, and spherical tensor encoding.

read_DiB_217_lte_pte_ste()

Read q-space trajectory encoding data with 217 measurements distributed between linear, planar, and spherical tensor encoding.

extract_example_tracts(out_dir)

Extract 5 'AF_L', 'CST_R' and 'CC_ForcepsMajor' trk files into the out_dir folder.

read_five_af_bundles()

Load 5 small left arcuate fasciculus bundles.

to_bids_description(path, *[, fname, ...])

Dump a dict into a BIDS description at the given location.

fetch_hcp(subjects[, hcp_bucket, ...])

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS specification.

fetch_hbn(subjects, *[, path, include_afq])

Fetch preprocessed data from the Healthy Brain Network POD2 study.

DataError#

class dipy.data.DataError[source]#

Bases: Exception

Attributes:
args

Methods

add_note

Exception.add_note(note) -- add a note to the exception

with_traceback

Exception.with_traceback(tb) -- set self.__traceback__ to tb and return self.

loads_compat#

dipy.data.loads_compat(byte_data)[source]#

get_sim_voxels#

dipy.data.get_sim_voxels(*, name='fib1')[source]#

Provide some simulated voxel data.

Parameters:
name : str
    Which file: ‘fib0’, ‘fib1’ or ‘fib2’.

Returns:
dix : dictionary
    dix[‘data’] returns a 2D array where every row is a simulated voxel with a different orientation.

Notes

These sim voxels were provided by M.M. Correia using Rician noise.

Examples

>>> from dipy.data import get_sim_voxels
>>> sv=get_sim_voxels(name='fib1')
>>> sv['data'].shape == (100, 102)
True
>>> sv['fibres']
'1'
>>> sv['gradients'].shape == (102, 3)
True
>>> sv['bvals'].shape == (102,)
True
>>> sv['snr']
'60'
>>> sv2=get_sim_voxels(name='fib2')
>>> sv2['fibres']
'2'
>>> sv2['snr']
'80'

get_skeleton#

dipy.data.get_skeleton(*, name='C1')[source]#

Provide skeletons generated from Local Skeleton Clustering (LSC).

Parameters:
name : str
    ‘C1’ or ‘C3’.

Returns:
dix : dictionary

Examples

>>> from dipy.data import get_skeleton
>>> C=get_skeleton(name='C1')
>>> len(C.keys())
117
>>> for c in C: break
>>> sorted(C[c].keys())
['N', 'hidden', 'indices', 'most']

get_sphere#

dipy.data.get_sphere(*, name='symmetric362')[source]#

Provide triangulated spheres.

Parameters:
name : str
    Which sphere; one of: ‘symmetric362’, ‘symmetric642’, ‘symmetric724’, ‘repulsion724’, ‘repulsion100’ or ‘repulsion200’.

Returns:
sphere : a dipy.core.sphere.Sphere class instance

Examples

>>> import numpy as np
>>> from dipy.data import get_sphere
>>> sphere = get_sphere(name="symmetric362")
>>> verts, faces = sphere.vertices, sphere.faces
>>> verts.shape == (362, 3)
True
>>> faces.shape == (720, 3)
True
>>> verts, faces = get_sphere(name="not a sphere name") 
Traceback (most recent call last):
    ...
DataError: No sphere called "not a sphere name"

get_3shell_gtab#

dipy.data.get_3shell_gtab()#

get_isbi2013_2shell_gtab#

dipy.data.get_isbi2013_2shell_gtab()#

get_gtab_taiwan_dsi#

dipy.data.get_gtab_taiwan_dsi()#

dsi_voxels#

dipy.data.dsi_voxels()[source]#

dsi_deconv_voxels#

dipy.data.dsi_deconv_voxels()[source]#

mrtrix_spherical_functions#

dipy.data.mrtrix_spherical_functions()[source]#

Spherical functions represented by spherical harmonic coefficients and evaluated on a discrete sphere.

Returns:
func_coef : array (2, 3, 4, 45)
    Functions represented by the coefficients associated with the mrtrix spherical harmonic basis of maximal order (l) 8.

func_discrete : array (2, 3, 4, 81)
    Functions evaluated on the sphere.

sphere : Sphere
    The discrete sphere, points on the surface of a unit sphere, used to evaluate the functions.

Notes

These coefficients were obtained by using the dwi2SH command of mrtrix.

get_cmap#

dipy.data.get_cmap(name)[source]#

Make a callable, similar to matplotlib.pyplot.get_cmap.

two_cingulum_bundles#

dipy.data.two_cingulum_bundles()[source]#

matlab_life_results#

dipy.data.matlab_life_results()[source]#

load_sdp_constraints#

dipy.data.load_sdp_constraints(model_name, *, order=None)[source]#

Import semidefinite programming constraint matrices for different models.

Generated as described for example in [1].

Parameters:
model_name : string
    A string identifying the model that is to be constrained.

order : unsigned int, optional
    A non-negative integer representing the order or instance of the model. Default: None.

Returns:
sdp_constraints : array
    Constraint matrices.

Notes

The constraints will be loaded from a file named ‘id_constraint_order.npz’.
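The note above implies the constraint matrices live in a NumPy .npz archive. Loading one looks like the following generic sketch; this is not DIPY's loader, and the array key name 'sdp' is hypothetical:

```python
import io

import numpy as np

# Save a toy constraint matrix the way an .npz archive would hold it.
# The key name 'sdp' is hypothetical, chosen for illustration only.
buf = io.BytesIO()
np.savez(buf, sdp=np.eye(3))
buf.seek(0)

# np.load on an .npz returns a lazy archive; index it by key to get the array.
with np.load(buf) as archive:
    constraints = archive["sdp"]

assert constraints.shape == (3, 3)
```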

References

copyfileobj_withprogress#

dipy.data.fetcher.copyfileobj_withprogress(fsrc, fdst, total_length, *, length=16384)[source]#
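This helper carries no summary in its docstring; from the signature, it copies fsrc to fdst in fixed-size chunks while tracking progress against total_length. A self-contained sketch of that pattern (the actual progress display in DIPY may differ):

```python
import io

def copy_with_progress(fsrc, fdst, total_length, length=16384):
    """Copy fsrc to fdst in fixed-size chunks, tracking progress (sketch)."""
    copied = 0
    while True:
        chunk = fsrc.read(length)
        if not chunk:
            break
        fdst.write(chunk)
        copied += len(chunk)
        # A real fetcher would update a progress bar here; we only compute it.
        percent = 100.0 * copied / total_length
    return copied

# Demo on in-memory file objects.
src = io.BytesIO(b"x" * 40000)
dst = io.BytesIO()
copied = copy_with_progress(src, dst, total_length=40000)
assert copied == 40000 and dst.getvalue() == b"x" * 40000
```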

check_md5#

dipy.data.fetcher.check_md5(filename, *, stored_md5=None)[source]#

Compute the MD5 of filename and check whether it matches the supplied MD5 string.

Parameters:
filename : string
    Path to a file.

stored_md5 : string
    Known MD5 of filename to check against. If None (default), checking is skipped.
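The described behavior — hash the file in chunks and compare against a known digest, skipping the check when none is supplied — can be sketched with the standard library. This is an illustration of the pattern, not DIPY's implementation:

```python
import hashlib
import os
import tempfile

def md5_of(filename, block_size=2**20):
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(block_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

def check_md5(filename, stored_md5=None):
    """Raise on a digest mismatch; skip the check entirely when stored_md5 is None."""
    if stored_md5 is None:
        return  # checking is skipped, as documented
    if md5_of(filename) != stored_md5:
        raise ValueError(f"MD5 mismatch for {filename}")

# Demo: a temporary file whose digest we know.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    name = f.name
check_md5(name, stored_md5=hashlib.md5(b"hello").hexdigest())  # passes silently
os.remove(name)
```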

fetch_data#

dipy.data.fetcher.fetch_data(files, folder, *, data_size=None, use_headers=False)[source]#

Downloads files to folder and checks their md5 checksums

Parameters:
files : dictionary
    For each file in files the value should be (url, md5). The file will be downloaded from url if the file does not already exist, or if the file exists but the md5 checksum does not match.

folder : str
    The directory in which to save the file; the directory will be created if it does not already exist.

data_size : str, optional
    A string describing the size of the data (e.g. “91 MB”) to be logged to the screen. By default, no information about data size is produced.

use_headers : bool, optional
    Whether to use headers when downloading files.

Raises:
FetcherError

Raises if the md5 checksum of the file does not match the expected value. The downloaded file is not deleted when this error is raised.
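The download policy described above — fetch when the file is missing or its checksum differs — amounts to a simple decision rule. A stdlib sketch of that rule, with the actual network download stubbed out since the real function fetches from a URL:

```python
import hashlib
import os
import tempfile

def needs_download(fname, expected_md5):
    """True if the file is absent or its MD5 differs from the expected value."""
    if not os.path.exists(fname):
        return True
    with open(fname, "rb") as f:
        return hashlib.md5(f.read()).hexdigest() != expected_md5

# Demo on a throwaway file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"payload")
    fname = f.name
good = hashlib.md5(b"payload").hexdigest()

assert not needs_download(fname, good)            # present and matching: keep it
assert needs_download(fname, "0" * 32)            # checksum mismatch: re-download
assert needs_download(fname + ".missing", good)   # absent: download
os.remove(fname)
```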

fetch_isbi2013_2shell#

dipy.data.fetcher.fetch_isbi2013_2shell()#

Download a 2-shell software phantom dataset

fetch_stanford_labels#

dipy.data.fetcher.fetch_stanford_labels()#

Download a reduced FreeSurfer aparc image from the Stanford website.

fetch_sherbrooke_3shell#

dipy.data.fetcher.fetch_sherbrooke_3shell()#

Download a 3-shell HARDI dataset with 192 gradient directions.

fetch_stanford_hardi#

dipy.data.fetcher.fetch_stanford_hardi()#

Download a HARDI dataset with 160 gradient directions

fetch_resdnn_tf_weights#

dipy.data.fetcher.fetch_resdnn_tf_weights()#

Download ResDNN TensorFlow model weights for Nath et al. (2018).

fetch_resdnn_torch_weights#

dipy.data.fetcher.fetch_resdnn_torch_weights()#

Download ResDNN PyTorch model weights for Nath et al. (2018).

fetch_synb0_weights#

dipy.data.fetcher.fetch_synb0_weights()#

Download Synb0 model weights for Schilling et al. (2019).

fetch_synb0_test#

dipy.data.fetcher.fetch_synb0_test()#

Download Synb0 test data for Schilling et al. (2019).

fetch_deepn4_weights#

dipy.data.fetcher.fetch_deepn4_weights()#

Download DeepN4 model weights for Kanakaraj et al. (2024).

fetch_deepn4_test#

dipy.data.fetcher.fetch_deepn4_test()#

Download DeepN4 test data for Kanakaraj et al. (2024).

fetch_evac_tf_weights#

dipy.data.fetcher.fetch_evac_tf_weights()#

Download EVAC+ model weights for Park et al. (2022).

fetch_evac_torch_weights#

dipy.data.fetcher.fetch_evac_torch_weights()#

Download EVAC+ model weights for Park et al. (2022).

fetch_evac_test#

dipy.data.fetcher.fetch_evac_test()#

Download EVAC+ test data for Park et al. (2022).

fetch_stanford_t1#

dipy.data.fetcher.fetch_stanford_t1()#

fetch_stanford_pve_maps#

dipy.data.fetcher.fetch_stanford_pve_maps()#

fetch_stanford_tracks#

dipy.data.fetcher.fetch_stanford_tracks()#

Download Stanford track for examples.

fetch_taiwan_ntu_dsi#

dipy.data.fetcher.fetch_taiwan_ntu_dsi()#

Download a DSI dataset with 203 gradient directions

fetch_syn_data#

dipy.data.fetcher.fetch_syn_data()#

Download T1 and b0 volumes from the same session.

fetch_mni_template#

dipy.data.fetcher.fetch_mni_template()#

Fetch the MNI 2009a T1 and T2, and 2009c T1 and T1 mask files.

Notes

The templates were downloaded from the MNI (McGill University) website in July 2015.

The following publications should be referenced when using these templates:

  • Fonov et al.[2]

  • Fonov et al.[3]

License for the MNI templates:

Copyright (C) 1993-2004, Louis Collins McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies. The authors and McGill University make no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty. The authors are not responsible for any data loss, equipment damage, property loss, or injury to subjects or patients resulting from the use or misuse of this software package.

References

fetch_scil_b0#

dipy.data.fetcher.fetch_scil_b0()#

Download b=0 datasets from multiple MR systems (GE, Philips, Siemens) and different magnetic fields (1.5T and 3T)

fetch_bundles_2_subjects#

dipy.data.fetcher.fetch_bundles_2_subjects()#

Download 2 subjects from the SNAIL dataset with their bundles

fetch_ivim#

dipy.data.fetcher.fetch_ivim()#

Download IVIM dataset

fetch_cfin_multib#

dipy.data.fetcher.fetch_cfin_multib()#

Download CFIN multi b-value diffusion data

fetch_file_formats#

dipy.data.fetcher.fetch_file_formats()#

Download 5 bundles in various file formats and their reference

fetch_bundle_atlas_hcp842#

dipy.data.fetcher.fetch_bundle_atlas_hcp842()#

Download atlas tractogram from the hcp842 dataset with 80 bundles

fetch_30_bundle_atlas_hcp842#

dipy.data.fetcher.fetch_30_bundle_atlas_hcp842()#

Download atlas tractogram from the hcp842 dataset with 30 bundles

fetch_target_tractogram_hcp#

dipy.data.fetcher.fetch_target_tractogram_hcp()#

Download tractogram of one of the hcp dataset subjects

fetch_bundle_fa_hcp#

dipy.data.fetcher.fetch_bundle_fa_hcp()#

Download map of FA within two bundles in one of the hcp dataset subjects

fetch_qtdMRI_test_retest_2subjects#

dipy.data.fetcher.fetch_qtdMRI_test_retest_2subjects()#

Downloads test-retest qt-dMRI acquisitions of two C57Bl6 mice.

fetch_gold_standard_io#

dipy.data.fetcher.fetch_gold_standard_io()#

Downloads the gold standard for streamlines io testing.

fetch_qte_lte_pte#

dipy.data.fetcher.fetch_qte_lte_pte()#

Download QTE data with linear and planar tensor encoding.

fetch_cti_rat1#

dipy.data.fetcher.fetch_cti_rat1()#

Download Rat Brain DDE data for CTI reconstruction (Rat #1 data from Henriques et al. MRM 2021).

fetch_fury_surface#

dipy.data.fetcher.fetch_fury_surface()#

Surface for testing and examples

fetch_DiB_70_lte_pte_ste#

dipy.data.fetcher.fetch_DiB_70_lte_pte_ste()#

Download QTE data with linear, planar, and spherical tensor encoding. If using this data, please cite: F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and spherical tensor-valued diffusion MRI data by free waveform encoding in healthy brain, water, oil and liquid crystals. Data in Brief (2019), DOI: https://doi.org/10.1016/j.dib.2019.104208

fetch_DiB_217_lte_pte_ste#

dipy.data.fetcher.fetch_DiB_217_lte_pte_ste()#

Download QTE data with linear, planar, and spherical tensor encoding. If using this data, please cite: F Szczepankiewicz, S Hoge, C-F Westin. Linear, planar and spherical tensor-valued diffusion MRI data by free waveform encoding in healthy brain, water, oil and liquid crystals. Data in Brief (2019), DOI: https://doi.org/10.1016/j.dib.2019.104208

fetch_ptt_minimal_dataset#

dipy.data.fetcher.fetch_ptt_minimal_dataset()#

Download FOD and seeds for PTT testing and examples

fetch_bundle_warp_dataset#

dipy.data.fetcher.fetch_bundle_warp_dataset()#

Download Bundle Warp dataset

fetch_disco1_dataset#

dipy.data.fetcher.fetch_disco1_dataset()#

Download DISCO 1 dataset: The Diffusion-Simulated Connectivity Dataset. DOI: 10.17632/fgf86jdfg6.3

fetch_disco2_dataset#

dipy.data.fetcher.fetch_disco2_dataset()#

Download DISCO 2 dataset: The Diffusion-Simulated Connectivity Dataset. DOI: 10.17632/fgf86jdfg6.3

fetch_disco3_dataset#

dipy.data.fetcher.fetch_disco3_dataset()#

Download DISCO 3 dataset: The Diffusion-Simulated Connectivity Dataset. DOI: 10.17632/fgf86jdfg6.3

fetch_disco_dataset#

dipy.data.fetcher.fetch_disco_dataset()[source]#

Download All DISCO datasets.

Notes

see DOI: 10.17632/fgf86jdfg6.3

get_fnames#

dipy.data.fetcher.get_fnames(*, name='small_64D')[source]#

Provide full paths to example or test datasets.

Parameters:
name : str
    The filename(s) of the dataset to return; one of:

  • ‘small_64D’ small region of interest nifti,bvecs,bvals 64 directions

  • ‘small_101D’ small region of interest nifti, bvecs, bvals 101 directions

  • ‘aniso_vox’ volume with anisotropic voxel size as Nifti

  • ‘fornix’ 300 tracks in Trackvis format (from Pittsburgh Brain Competition)

  • ‘gqi_vectors’ the scanner wave vectors needed for a GQI acquisition of 101 directions, tested on a Siemens 3T Trio

  • ‘small_25’ small ROI (10x8x2) DTI data (b value 2000, 25 directions)

  • ‘test_piesno’ slice of N=8, K=14 diffusion data

  • ‘reg_c’ small 2D image used for validating registration

  • ‘reg_o’ small 2D image used for validating registration

  • ‘cb_2’ two vectorized cingulum bundles

Returns:
fnames : tuple
    Filenames for the dataset.

Examples

>>> import numpy as np
>>> from dipy.io.image import load_nifti
>>> from dipy.data import get_fnames
>>> fimg, fbvals, fbvecs = get_fnames(name='small_101D')
>>> bvals=np.loadtxt(fbvals)
>>> bvecs=np.loadtxt(fbvecs).T
>>> data, affine = load_nifti(fimg)
>>> data.shape == (6, 10, 10, 102)
True
>>> bvals.shape == (102,)
True
>>> bvecs.shape == (102, 3)
True

read_qtdMRI_test_retest_2subjects#

dipy.data.fetcher.read_qtdMRI_test_retest_2subjects()[source]#

Load test-retest qt-dMRI acquisitions of two C57Bl6 mice.

These datasets were used to study test-retest reproducibility of time-dependent q-space indices (qτ-indices) in the corpus callosum of two mice [4]. The data itself and its details are publicly available and can be cited as [5].

The test-retest diffusion MRI spin echo sequences were acquired from two C57Bl6 wild-type mice on an 11.7 Tesla Bruker scanner. The test and retest acquisitions were taken 48 hours apart. The (processed) data consist of 80x160x5 voxels of size 110x110x500 μm. Each data set consists of 515 Diffusion-Weighted Images (DWIs) spread over 35 acquisition shells. The shells are spread over 7 gradient strength shells with a maximum gradient strength of 491 mT/m, 5 pulse separation shells between [10.8 - 20.0] ms, and a pulse length of 5 ms. We manually created a brain mask and corrected the data for eddy currents and motion artifacts using FSL’s eddy. A region of interest was then drawn in the middle slice of the corpus callosum, where the tissue is reasonably coherent.

Returns:
data : list of length 4
    Contains the dwi datasets ordered as (subject1_test, subject1_retest, subject2_test, subject2_retest).

cc_masks : list of length 4
    Contains the corpus callosum masks, in the same order as data.

gtabs : list of length 4
    Contains the qt-dMRI gradient tables of the data sets.

References

read_scil_b0#

dipy.data.fetcher.read_scil_b0()[source]#

Load GE 3T b0 image from the scil b0 dataset.

Returns:
img : obj
    Nifti1Image

read_siemens_scil_b0#

dipy.data.fetcher.read_siemens_scil_b0()[source]#

Load Siemens 1.5T b0 image from the scil b0 dataset.

Returns:
img : obj
    Nifti1Image

read_isbi2013_2shell#

dipy.data.fetcher.read_isbi2013_2shell()[source]#

Load ISBI 2013 2-shell synthetic dataset.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_sherbrooke_3shell#

dipy.data.fetcher.read_sherbrooke_3shell()[source]#

Load Sherbrooke 3-shell HARDI dataset.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_stanford_labels#

dipy.data.fetcher.read_stanford_labels()[source]#

Read Stanford HARDI data and label map.

read_stanford_hardi#

dipy.data.fetcher.read_stanford_hardi()[source]#

Load Stanford HARDI dataset.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_stanford_t1#

dipy.data.fetcher.read_stanford_t1()[source]#

read_stanford_pve_maps#

dipy.data.fetcher.read_stanford_pve_maps()[source]#

read_taiwan_ntu_dsi#

dipy.data.fetcher.read_taiwan_ntu_dsi()[source]#

Load Taiwan NTU dataset.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_syn_data#

dipy.data.fetcher.read_syn_data()[source]#

Load T1 and b0 volumes from the same session.

Returns:
t1 : obj
    Nifti1Image

b0 : obj
    Nifti1Image

fetch_tissue_data#

dipy.data.fetcher.fetch_tissue_data()[source]#

Download images to be used for tissue classification

read_tissue_data#

dipy.data.fetcher.read_tissue_data(*, contrast='T1')[source]#

Load images to be used for tissue classification

Parameters:
contrast : str
    ‘T1’, ‘T1 denoised’ or ‘Anisotropic Power’.

Returns:
image : obj
    Nifti1Image

read_mni_template#

dipy.data.fetcher.read_mni_template(*, version='a', contrast='T2')[source]#

Read the MNI template from disk.

Parameters:
version : string
    There are two MNI templates, 2009a and 2009c, so the available options are “a” and “c”.

contrast : list or string, optional
    Which of the contrast templates to read. For version “a” two contrasts are available: “T1” and “T2”. Similarly, for version “c” there are two options, “T1” and “mask”. You can input contrast as a string or a list.

Returns:
list
    Contains the nibabel.Nifti1Image objects requested, in the order they were requested in the input.

Notes

The templates were downloaded from the MNI (McGill University) website in July 2015.

The following publications should be referenced when using these templates:

  • Fonov et al.[2]

  • Fonov et al.[3]

License for the MNI templates:

Copyright (C) 1993-2004, Louis Collins McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies. The authors and McGill University make no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty. The authors are not responsible for any data loss, equipment damage, property loss, or injury to subjects or patients resulting from the use or misuse of this software package.

References

Examples

>>> # Get only the T1 file for version c:
>>> T1 = read_mni_template(version="c", contrast="T1") 
>>> # Get both files in this order for version a:
>>> T1, T2 = read_mni_template(contrast=["T1", "T2"]) 

fetch_cenir_multib#

dipy.data.fetcher.fetch_cenir_multib(*, with_raw=False)[source]#

Fetch ‘HCP-like’ data, collected at multiple b-values.

Parameters:
with_raw : bool
    Whether to fetch the raw data. By default this is False, which means that only eddy-current/motion-corrected data is fetched.

Notes

Details of the acquisition and processing, and additional meta-data are available through UW researchworks:

https://digital.lib.washington.edu/researchworks/handle/1773/33311

read_cenir_multib#

dipy.data.fetcher.read_cenir_multib(*, bvals=None)[source]#

Read CENIR multi b-value data.

Parameters:
bvals : list or int
    The b-values to read from file (200, 400, 1000, 2000, 3000).

Returns:
gtab : a GradientTable class instance
img : nibabel.Nifti1Image

Notes

Details of the acquisition and processing, and additional meta-data are available through UW researchworks:

https://digital.lib.washington.edu/researchworks/handle/1773/33311

read_bundles_2_subjects#

dipy.data.fetcher.read_bundles_2_subjects(*, subj_id='subj_1', metrics=('fa',), bundles=('af.left', 'cst.right', 'cc_1'))[source]#

Read images and streamlines from 2 subjects of the SNAIL dataset.

See [6] and [7] for further details about the dataset and processing pipeline.

Parameters:
subj_id : string
    Either subj_1 or subj_2.

metrics : array-like
    Either [‘fa’] or [‘t1’] or [‘fa’, ‘t1’].

bundles : array-like
    E.g., [‘af.left’, ‘cst.right’, ‘cc_1’]. See all the available bundles in the exp_bundles_maps/bundles_2_subjects directory of your DIPY_HOME (by default $HOME/.dipy) folder.

Returns:
dix : dict
    Dictionary with the data of the metrics and the bundles as keys.

Notes

If you are using these datasets please cite the following publications.

References

read_ivim#

dipy.data.fetcher.read_ivim()[source]#

Load IVIM dataset.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_cfin_dwi#

dipy.data.fetcher.read_cfin_dwi()[source]#

Load CFIN multi b-value DWI data.

Returns:
img : obj
    Nifti1Image

gtab : obj
    GradientTable

read_cfin_t1#

dipy.data.fetcher.read_cfin_t1()[source]#

Load CFIN T1-weighted data.

Returns:
img : obj
    Nifti1Image

get_file_formats#

dipy.data.fetcher.get_file_formats()[source]#
Returns:
bundles_list : all bundles (list)
ref_anat : reference

get_bundle_atlas_hcp842#

dipy.data.fetcher.get_bundle_atlas_hcp842(*, size=80)[source]#
Returns:
file1 : string
file2 : string

get_two_hcp842_bundles#

dipy.data.fetcher.get_two_hcp842_bundles()[source]#
Returns:
file1 : string
file2 : string

get_target_tractogram_hcp#

dipy.data.fetcher.get_target_tractogram_hcp()[source]#
Returns:
file1 : string

read_qte_lte_pte#

dipy.data.fetcher.read_qte_lte_pte()[source]#

Read q-space trajectory encoding data with linear and planar tensor encoding.

Returns:
data_img : nibabel.nifti1.Nifti1Image
    dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image
    Brain mask image.

gtab : dipy.core.gradients.GradientTable
    Gradient table.

read_DiB_70_lte_pte_ste#

dipy.data.fetcher.read_DiB_70_lte_pte_ste()[source]#

Read q-space trajectory encoding data with 70 measurements distributed between linear, planar, and spherical tensor encoding.

Returns:
data_img : nibabel.nifti1.Nifti1Image
    dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image
    Brain mask image.

gtab : dipy.core.gradients.GradientTable
    Gradient table.

read_DiB_217_lte_pte_ste#

dipy.data.fetcher.read_DiB_217_lte_pte_ste()[source]#

Read q-space trajectory encoding data with 217 measurements distributed between linear, planar, and spherical tensor encoding.

Returns:
data_img : nibabel.nifti1.Nifti1Image
    dMRI data image.

mask_img : nibabel.nifti1.Nifti1Image
    Brain mask image.

gtab : dipy.core.gradients.GradientTable
    Gradient table.

extract_example_tracts#

dipy.data.fetcher.extract_example_tracts(out_dir)[source]#

Extract 5 ‘AF_L’, ‘CST_R’ and ‘CC_ForcepsMajor’ trk files into the out_dir folder.

Parameters:
out_dir : str
    Folder in which to extract the files.

read_five_af_bundles#

dipy.data.fetcher.read_five_af_bundles()[source]#

Load 5 small left arcuate fasciculus bundles.

Returns:
bundles : list of ArraySequence
    List with loaded bundles.

to_bids_description#

dipy.data.fetcher.to_bids_description(path, *, fname='dataset_description.json', BIDSVersion='1.4.0', **kwargs)[source]#

Dump a dict into a BIDS description at the given location.
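A minimal sketch of what dumping such a description amounts to, using only the json standard library. This is an illustration, not DIPY's implementation; the "Name" field passed in the demo is a hypothetical extra keyword, while fname and BIDSVersion come from the signature above:

```python
import json
import os
import tempfile

def to_bids_description(path, fname="dataset_description.json",
                        BIDSVersion="1.4.0", **kwargs):
    """Write a BIDS-style description dict as JSON at the given location (sketch)."""
    desc = {"BIDSVersion": BIDSVersion, **kwargs}
    with open(os.path.join(path, fname), "w") as f:
        json.dump(desc, f)

# Demo: write and read back a description in a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    to_bids_description(d, Name="my_dataset")  # "Name" is a hypothetical field
    with open(os.path.join(d, "dataset_description.json")) as f:
        desc = json.load(f)
    assert desc["BIDSVersion"] == "1.4.0"
    assert desc["Name"] == "my_dataset"
```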

fetch_hcp#

dipy.data.fetcher.fetch_hcp(subjects, hcp_bucket='hcp-openaccess', profile_name='hcp', path=None, study='HCP_1200', aws_access_key_id=None, aws_secret_access_key=None)[source]#

Fetch HCP diffusion data and arrange it in a manner that resembles the BIDS specification.

See [8] for details about the BIDS specification.

Parameters:
subjects : list
    Each item is an integer identifying one of the HCP subjects.

hcp_bucket : string, optional
    The name of the HCP S3 bucket.

profile_name : string, optional
    The name of the AWS profile used for access.

path : string, optional
    Path to save files into. Defaults to the value of the DIPY_HOME environment variable if it is set; otherwise, defaults to $HOME/.dipy.

study : string, optional
    Which HCP study to grab.

aws_access_key_id : string, optional
    AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.

aws_secret_access_key : string, optional
    AWS credentials for HCP AWS S3. Will only be used if profile_name is set to False.

Returns:
dict with remote and local names of these files,
path to BIDS derivative dataset

Notes

To use this function with its default setting, you need to have a file ‘~/.aws/credentials’, that includes a section:

[hcp]
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX

The keys are credentials that you can get from HCP (see https://wiki.humanconnectome.org/display/PublicData/How+To+Connect+to+Connectome+Data+via+AWS)

Local filenames are changed to match our expected conventions.
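The credentials file described in the Notes is standard INI format, so its shape can be checked with the stdlib configparser. A sketch for verifying that the expected [hcp] section is present, with placeholder key values; this is not part of DIPY:

```python
import configparser

# The [hcp] section described above, as an in-memory INI document.
creds = """\
[hcp]
AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXX
AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXX
"""

config = configparser.ConfigParser()
config.read_string(creds)  # for a real file, use config.read("~/.aws/credentials")

assert config.has_section("hcp")
# configparser lowercases option names by default.
assert "aws_access_key_id" in config["hcp"]
assert "aws_secret_access_key" in config["hcp"]
```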

References

fetch_hbn#

dipy.data.fetcher.fetch_hbn(subjects, *, path=None, include_afq=False)[source]#

Fetch preprocessed data from the Healthy Brain Network POD2 study.

See [9] and [10] for further details about the study and processing pipeline.

Parameters:
subjects : list
    Identifiers of the subjects to download. For example: [“NDARAA948VFH”, “NDAREK918EC2”].

path : string, optional
    Path to save files into. Defaults to the value of the DIPY_HOME environment variable if it is set; otherwise, defaults to $HOME/.dipy.

include_afq : bool, optional
    Whether to include pyAFQ derivatives.

Returns:
dict with remote and local names of these files,
path to BIDS derivative dataset

References