├── docs ├── requirements.txt ├── source │ ├── images │ │ └── giga_connectome.png │ ├── api.rst │ ├── index.rst │ ├── installation.md │ ├── workflow.md │ ├── conf.py │ ├── changes.md │ ├── contributing.md │ ├── outputs.md │ └── usage.md ├── Makefile └── make.bat ├── .gitattributes ├── .hadolint.yaml ├── giga_connectome ├── data │ ├── test_data │ │ └── bids_filter.json │ ├── denoise_strategy │ │ ├── simple.json │ │ ├── acompcor50.json │ │ ├── icaaroma.json │ │ ├── simple+gsr.json │ │ ├── scrubbing.2.json │ │ ├── scrubbing.5.json │ │ ├── scrubbing.2+gsr.json │ │ └── scrubbing.5+gsr.json │ ├── README.md │ ├── atlas │ │ ├── MIST.json │ │ ├── HarvardOxfordCortical.json │ │ ├── HarvardOxfordSubcortical.json │ │ ├── HarvardOxfordCorticalSymmetricSplit.json │ │ ├── DiFuMo.json │ │ └── Schaefer2018.json │ ├── bids_entities.json │ └── methods │ │ └── template.jinja ├── __init__.py ├── tests │ ├── test_methods.py │ ├── test_atlas.py │ ├── test_mask.py │ ├── test_denoise.py │ ├── test_connectome.py │ ├── test_cli.py │ └── test_utils.py ├── logger.py ├── methods.py ├── workflow.py ├── connectome.py ├── run.py ├── atlas.py ├── denoise.py ├── postprocess.py ├── utils.py └── mask.py ├── .gitmodules ├── .flake8 ├── .git_archival.txt ├── .readthedocs.yaml ├── .github ├── dependabot.yml └── workflows │ ├── validate_cff.yml │ ├── draft-paper.yml │ ├── test.yml │ └── docker.yml ├── Dockerfile ├── codecov.yml ├── requirements.txt ├── CITATION.cff ├── LICENSE ├── tox.ini ├── .pre-commit-config.yaml ├── tools └── download_templates.py ├── .gitignore ├── .dockerignore ├── .all-contributorsrc ├── pyproject.toml ├── CODE_OF_CONDUCT.md ├── paper ├── paper.md └── paper.bib └── README.md /docs/requirements.txt: -------------------------------------------------------------------------------- 1 | .[docs] 2 | -------------------------------------------------------------------------------- /.gitattributes: -------------------------------------------------------------------------------- 1 | .git_archival.txt export-subst 2 | -------------------------------------------------------------------------------- /.hadolint.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | ignored: 3 | - DL3008 4 | -------------------------------------------------------------------------------- /giga_connectome/data/test_data/bids_filter.json: -------------------------------------------------------------------------------- 1 | {"bold": {"task": "probabilisticclassification"}} 2 | -------------------------------------------------------------------------------- /docs/source/images/giga_connectome.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bids-apps/giga_connectome/HEAD/docs/source/images/giga_connectome.png -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "tools/mist2templateflow"] 2 | path = tools/mist2templateflow 3 | url = https://github.com/SIMEXP/mist2templateflow.git 4 | -------------------------------------------------------------------------------- /.flake8: -------------------------------------------------------------------------------- 1 | [flake8] 2 | doctests = False 3 | exclude = 4 | **/__init__.py 5 | **/tests/* 6 | *build/ 7 | giga_connectome/_version.py 8 | -------------------------------------------------------------------------------- /.git_archival.txt: 
-------------------------------------------------------------------------------- 1 | node: 84301d6d24ca615ede2879ee05a21ae3dcf08fa0 2 | node-date: 2025-12-18T16:56:30-05:00 3 | describe-name: 0.6.0-19-g84301d6 4 | ref-names: HEAD -> main 5 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/simple.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "simple", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "simple" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/acompcor50.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "acompcor50", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "compcor" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/icaaroma.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "icaaroma", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "ica_aroma" 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /giga_connectome/data/README.md: -------------------------------------------------------------------------------- 1 | Include data files here. 2 | 3 | This file may be accessed with: 4 | 5 | ```Python 6 | from pkg_resources import resource_filename 7 | data_file = resource_filename("giga_connectome", "data/README.md") 8 | ``` 9 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/simple+gsr.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "simple+gsr", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "simple", 6 | "global_signal": "basic" 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /.readthedocs.yaml: -------------------------------------------------------------------------------- 1 | version: 2 2 | 3 | build: 4 | os: "ubuntu-22.04" 5 | tools: 6 | python: "3.10" 7 | 8 | sphinx: 9 | configuration: docs/source/conf.py 10 | 11 | python: 12 | install: 13 | - requirements: docs/requirements.txt 14 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/scrubbing.2.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "scrubbing.2", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "scrubbing", 6 | "fd_threshold": 0.2, 7 | "std_dvars_threshold": null 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/scrubbing.5.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "scrubbing.5", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "scrubbing", 6 | "fd_threshold": 0.5, 7 | "std_dvars_threshold": null 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /giga_connectome/__init__.py: 
-------------------------------------------------------------------------------- 1 | __packagename__ = "giga_connectome" 2 | __copyright__ = "2025, BIDS-Apps" 3 | 4 | try: 5 | from ._version import __version__ 6 | except ImportError: 7 | pass 8 | 9 | 10 | __all__ = [ 11 | "__copyright__", 12 | "__packagename__", 13 | "__version__", 14 | ] 15 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Documentation 3 | # https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file 4 | version: 2 5 | updates: 6 | - package-ecosystem: github-actions 7 | directory: / 8 | schedule: 9 | interval: monthly 10 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/MIST.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "MIST", 3 | "parameters": { 4 | "atlas": "BASC", 5 | "template": "MNI152NLin2009bAsym", 6 | "resolution": "03", 7 | "suffix": "dseg" 8 | }, 9 | "desc": [7, 12, 20, 36, 64, 122, 197, 325, 122, 444], 10 | "templateflow_dir" : null 11 | } 12 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/scrubbing.2+gsr.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "scrubbing.2+gsr", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "scrubbing", 6 | "global_signal": "basic", 7 | "fd_threshold": 0.2, 8 | "std_dvars_threshold": null 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /giga_connectome/data/denoise_strategy/scrubbing.5+gsr.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "scrubbing.5+gsr", 3 | "function": "load_confounds_strategy", 4 | "parameters": { 5 | "denoise_strategy": "scrubbing", 6 | "global_signal": "basic", 7 | "fd_threshold": 0.5, 8 | "std_dvars_threshold": null 9 | } 10 | } 11 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/HarvardOxfordCortical.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "HarvardOxfordCortical", 3 | "parameters": { 4 | "atlas": "HOCPA", 5 | "template": "MNI152NLin2009cAsym", 6 | "resolution": "02", 7 | "suffix": "dseg" 8 | }, 9 | "desc": [ 10 | "th25", 11 | "th50" 12 | ], 13 | "templateflow_dir" : null 14 | } 15 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/HarvardOxfordSubcortical.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "HarvardOxfordSubcortical", 3 | "parameters": { 4 | "atlas": "HOSPA", 5 | "template": "MNI152NLin2009cAsym", 6 | "resolution": "02", 7 | "suffix": "dseg" 8 | }, 9 | "desc": [ 10 | "th25", 11 | "th50" 12 | ], 13 | "templateflow_dir" : null 14 | } 15 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/HarvardOxfordCorticalSymmetricSplit.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "HarvardOxfordCorticalSymmetricSplit", 3 | "parameters": { 4 | "atlas": "HOCPAL", 5 | "template": 
"MNI152NLin2009cAsym", 6 | "resolution": "02", 7 | "suffix": "dseg" 8 | }, 9 | "desc": [ 10 | "th25", 11 | "th50" 12 | ], 13 | "templateflow_dir" : null 14 | } 15 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_methods.py: -------------------------------------------------------------------------------- 1 | from giga_connectome import methods 2 | 3 | 4 | def test_generate_method_section(tmp_path): 5 | methods.generate_method_section( 6 | output_dir=tmp_path, 7 | atlas="DiFuMo", 8 | smoothing_fwhm=5, 9 | standardize="psc", 10 | strategy="simple", 11 | mni_space="MNI152NLin6Asym", 12 | average_correlation=True, 13 | ) 14 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/DiFuMo.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "DiFuMo", 3 | "parameters": { 4 | "atlas": "DiFuMo", 5 | "template": "MNI152NLin2009cAsym", 6 | "resolution": "03", 7 | "suffix": "probseg" 8 | }, 9 | "desc": [ 10 | "64dimensions", 11 | "128dimensions", 12 | "256dimensions", 13 | "512dimensions", 14 | "1024dimensions"], 15 | "templateflow_dir" : null 16 | } 17 | -------------------------------------------------------------------------------- /.github/workflows/validate_cff.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: validate CITATION.cff 3 | 4 | on: 5 | push: 6 | branches: main 7 | pull_request: 8 | branches: ['*'] 9 | 10 | jobs: 11 | validate_cff: 12 | runs-on: ubuntu-latest 13 | steps: 14 | - uses: actions/checkout@v6 15 | - name: Check whether the citation metadata from CITATION.cff is valid 16 | uses: citation-file-format/cffconvert-github-action@2.0.0 17 | with: 18 | args: --validate 19 | -------------------------------------------------------------------------------- /giga_connectome/data/atlas/Schaefer2018.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Schaefer2018", 3 | "parameters": { 4 | "atlas": "Schaefer2018", 5 | "template": "MNI152NLin2009cAsym", 6 | "resolution": "02", 7 | "suffix": "dseg" 8 | }, 9 | "desc": [ 10 | "100Parcels7Networks", 11 | "200Parcels7Networks", 12 | "300Parcels7Networks", 13 | "400Parcels7Networks", 14 | "500Parcels7Networks", 15 | "600Parcels7Networks", 16 | "800Parcels7Networks"], 17 | "templateflow_dir" : null 18 | } 19 | -------------------------------------------------------------------------------- /giga_connectome/logger.py: -------------------------------------------------------------------------------- 1 | """General logger for the cohort_creator package.""" 2 | 3 | from __future__ import annotations 4 | 5 | import logging 6 | 7 | from rich.logging import RichHandler 8 | 9 | 10 | def gc_logger(log_level: str = "INFO") -> logging.Logger: 11 | # FORMAT = '\n%(asctime)s - %(name)s - %(levelname)s\n\t%(message)s\n' 12 | FORMAT = "%(message)s" 13 | 14 | logging.basicConfig( 15 | level=log_level, 16 | format=FORMAT, 17 | datefmt="[%X]", 18 | handlers=[RichHandler()], 19 | ) 20 | 21 | return logging.getLogger("giga_connectome") 22 | -------------------------------------------------------------------------------- /docs/source/api.rst: -------------------------------------------------------------------------------- 1 | API 2 | === 3 | 4 | atlas 5 | ::::: 6 | 7 | .. automodule:: giga_connectome.atlas 8 | :members: 9 | 10 | connectome 11 | :::::::::: 12 | 13 | .. 
automodule:: giga_connectome.connectome 14 | :members: 15 | 16 | denoise 17 | ::::::: 18 | 19 | .. automodule:: giga_connectome.denoise 20 | :members: 21 | 22 | mask 23 | :::: 24 | 25 | .. automodule:: giga_connectome.mask 26 | :members: 27 | 28 | postprocess 29 | ::::::::::: 30 | 31 | .. automodule:: giga_connectome.postprocess 32 | :members: 33 | 34 | utils 35 | ::::: 36 | 37 | .. automodule:: giga_connectome.utils 38 | :members: 39 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line, and also 5 | # from the environment for the first two. 6 | SPHINXOPTS ?= 7 | SPHINXBUILD ?= sphinx-build 8 | SOURCEDIR = source 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | -------------------------------------------------------------------------------- /giga_connectome/data/bids_entities.json: -------------------------------------------------------------------------------- 1 | { 2 | "images": { 3 | "scope": "derivatives", 4 | "space": ["MNI152NLin6Asym", "MNI152NLin2009cAsym"], 5 | "desc": ["preproc", "smoothAROMAnonaggr"], 6 | "suffix": "bold", 7 | "extension":"nii.gz" 8 | }, 9 | "confounds": { 10 | "scope": "derivatives", 11 | "desc": "confounds", 12 | "suffix": "timeseries", 13 | "extension": "tsv" 14 | }, 15 | "masks": { 16 | "scope": "derivatives", 17 | "space": ["MNI152NLin6Asym", "MNI152NLin2009cAsym"], 18 | "desc": "brain", 19 | "suffix": "mask", 20 | "extension": "nii.gz" 21 | }, 22 | "extra_bids_entities": ["subject", "session", "task"] 23 | } 24 | -------------------------------------------------------------------------------- /docs/source/index.rst: -------------------------------------------------------------------------------- 1 | .. giga_connectome documentation master file, created by 2 | sphinx-quickstart on Wed Aug 23 14:35:15 2023. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | .. include:: ../../README.md 7 | :parser: myst_parser.sphinx_ 8 | 9 | .. toctree:: 10 | :maxdepth: 1 11 | :caption: Contents 12 | 13 | installation.md 14 | usage.md 15 | workflow.md 16 | outputs.md 17 | 18 | .. 
toctree:: 19 | :maxdepth: 1 20 | :caption: Contribution and maintenance 21 | 22 | contributing.md 23 | api.rst 24 | changes.md 25 | 26 | Indices and tables 27 | ================== 28 | 29 | * :ref:`genindex` 30 | * :ref:`modindex` 31 | * :ref:`search` 32 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_atlas.py: -------------------------------------------------------------------------------- 1 | from giga_connectome.atlas import load_atlas_setting 2 | import pytest 3 | from pkg_resources import resource_filename 4 | 5 | 6 | def test_load_atlas_setting(): 7 | # use Schaefer2018 when updating 0.7.0 8 | atlas_config = load_atlas_setting("Schaefer20187Networks") 9 | assert atlas_config["name"] == "Schaefer2018" 10 | atlas_config = load_atlas_setting("Schaefer2018") 11 | assert atlas_config["name"] == "Schaefer2018" 12 | atlas_config = load_atlas_setting("HarvardOxfordCortical") 13 | assert atlas_config["name"] == "HarvardOxfordCortical" 14 | pytest.raises(FileNotFoundError, load_atlas_setting, "blah") 15 | json_path = resource_filename("giga_connectome", "data/atlas/DiFuMo.json") 16 | atlas_config = load_atlas_setting(json_path) 17 | assert atlas_config["name"] == "DiFuMo" 18 | -------------------------------------------------------------------------------- /.github/workflows/draft-paper.yml: -------------------------------------------------------------------------------- 1 | on: 2 | pull_request: 3 | push: 4 | branches: [main] 5 | paths: ['paper/*'] 6 | 7 | jobs: 8 | paper: 9 | runs-on: ubuntu-latest 10 | name: Paper Draft 11 | steps: 12 | - name: Checkout 13 | uses: actions/checkout@v6 14 | - name: Build draft PDF 15 | uses: openjournals/openjournals-draft-action@master 16 | with: 17 | journal: joss 18 | # This should be the path to the paper within your repo. 19 | paper-path: paper/paper.md 20 | - name: Upload 21 | uses: actions/upload-artifact@v5 22 | with: 23 | name: paper 24 | # This is the output path where Pandoc will write the compiled 25 | # PDF. Note, this should be the same directory as the input 26 | # paper.md 27 | path: paper/paper.pdf 28 | -------------------------------------------------------------------------------- /docs/source/installation.md: -------------------------------------------------------------------------------- 1 | # Installation 2 | 3 | ## Quick start (container) 4 | 5 | Pull the latest image from Docker Hub, available for versions above `0.5.0` (recommended). 6 | 7 | Apptainer (recommended): 8 | 9 | ```bash 10 | apptainer build giga_connectome.simg docker://bids/giga_connectome:latest 11 | ``` 12 | 13 | Docker: 14 | ```bash 15 | docker pull bids/giga_connectome:latest 16 | ``` 17 | 18 | ## Install as a python package (not recommended) 19 | 20 | The project is written as an installable Python package; however, 21 | this installation route is not recommended for non-contributors. 22 | 23 | If you wish to install giga-connectome as a Python package, 24 | please follow the full instructions in 25 | [Setting up your environment for development](./contributing.md#setting-up-your-environment-for-development), steps 1 to 4. 26 | These steps ensure the installed package retains the same functionality as the container image.
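Once the image has been pulled or built as above, a typical run might look like the sketch below. This is only an illustration: the bind-mount paths and participant label are placeholders, and the options shown are the ones exercised in `giga_connectome/tests/test_cli.py`; see `usage.md` for the full command line reference.

```bash
# Hypothetical invocation of the BIDS app through Apptainer; adjust paths to your data.
apptainer run --cleanenv \
    --bind /path/to/fmriprep_derivatives:/data/input:ro \
    --bind /path/to/results:/data/output \
    giga_connectome.simg \
    /data/input /data/output participant \
    --participant-label 1 \
    --atlas Schaefer2018 \
    --denoise-strategy simple
```

The three positional arguments (fMRIPrep derivatives directory, output directory, and the `participant` analysis level) follow the usual BIDS app convention used throughout the test suite.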
27 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | # https://hub.docker.com/layers/library/python/3.9-slim-bullseye/images/sha256-de58dcff6a8ccd752899e667aded074ad3e8f5fd552969ec11276adcb18930a4 2 | FROM python@sha256:de58dcff6a8ccd752899e667aded074ad3e8f5fd552969ec11276adcb18930a4 3 | 4 | ARG DEBIAN_FRONTEND="noninteractive" 5 | 6 | RUN apt-get update -qq && \ 7 | apt-get install -y -qq --no-install-recommends \ 8 | git && \ 9 | rm -rf /var/lib/apt/lists/* 10 | 11 | ARG TEMPLATEFLOW_HOME="/templateflow" 12 | 13 | WORKDIR /code 14 | 15 | COPY [".", "/code"] 16 | 17 | RUN pip3 install --no-cache-dir pip==24.0 && \ 18 | pip3 install --no-cache-dir --requirement requirements.txt && \ 19 | pip3 --no-cache-dir install . 20 | 21 | ENV TEMPLATEFLOW_HOME=${TEMPLATEFLOW_HOME} 22 | 23 | RUN git submodule update --init --recursive && python3 /code/tools/download_templates.py 24 | 25 | ENTRYPOINT ["/usr/local/bin/giga_connectome"] 26 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=source 11 | set BUILDDIR=build 12 | 13 | %SPHINXBUILD% >NUL 2>NUL 14 | if errorlevel 9009 ( 15 | echo. 16 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 17 | echo.installed, then set the SPHINXBUILD environment variable to point 18 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 19 | echo.may add the Sphinx directory to PATH. 20 | echo. 21 | echo.If you don't have Sphinx installed, grab it from 22 | echo.https://www.sphinx-doc.org/ 23 | exit /b 1 24 | ) 25 | 26 | if "%1" == "" goto help 27 | 28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 29 | goto end 30 | 31 | :help 32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O% 33 | 34 | :end 35 | popd 36 | -------------------------------------------------------------------------------- /codecov.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # similar to scikit-learn .codecov.yml 3 | # coverage: 4 | # status: 5 | # project: 6 | # default: 7 | # # Commits pushed to main should not make the overall 8 | # # project coverage decrease by more than 1%: 9 | # target: auto 10 | # threshold: 1% 11 | # patch: 12 | # default: 13 | # # Be tolerant on slight code coverage diff on PRs to limit 14 | # # noisy red coverage status on github PRs. 
15 | # target: auto 16 | # threshold: 1% 17 | comment: # this is a top-level key 18 | layout: reach, diff, flags, files 19 | behavior: default 20 | require_changes: false # if true: only post the comment if coverage changes 21 | require_base: no # [yes :: must have a base report to post] 22 | require_head: yes # [yes :: must have a head report to post] 23 | 24 | ignore: 25 | - '*/tests/' # ignore folders related to testing 26 | - '*/data/' 27 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | astor==0.8.1 2 | bids-validator==1.14.4 3 | certifi==2024.2.2 4 | charset-normalizer==3.3.2 5 | click==8.1.7 6 | contourpy==1.2.0 7 | cycler==0.12.1 8 | docopt==0.6.2 9 | fonttools==4.50.0 10 | formulaic==0.5.2 11 | idna==3.6 12 | interface-meta==1.3.0 13 | Jinja2==3.1.3 14 | joblib==1.3.2 15 | kaleido==0.2.1 16 | kiwisolver==1.4.5 17 | lxml==5.1.1 18 | markdown-it-py==3.0.0 19 | MarkupSafe==2.1.5 20 | matplotlib==3.8.3 21 | mdurl==0.1.2 22 | nibabel==5.2.1 23 | nilearn==0.10.3 24 | num2words==0.5.13 25 | numpy==1.26.4 26 | packaging==24.0 27 | pandas==2.2.1 28 | pillow==10.2.0 29 | plotly==5.20.0 30 | pybids==0.15.6 31 | Pygments==2.17.2 32 | pyparsing==3.1.2 33 | python-dateutil==2.9.0.post0 34 | pytz==2024.1 35 | requests==2.31.0 36 | rich==13.7.1 37 | scikit-learn==1.4.1.post1 38 | scipy==1.12.0 39 | six==1.16.0 40 | SQLAlchemy==1.3.24 41 | templateflow==0.8.1 42 | tenacity==8.2.3 43 | threadpoolctl==3.4.0 44 | tqdm==4.66.2 45 | typing_extensions==4.10.0 46 | tzdata==2024.1 47 | urllib3==2.2.1 48 | wrapt==1.16.0 49 | -------------------------------------------------------------------------------- /CITATION.cff: -------------------------------------------------------------------------------- 1 | cff-version: 1.2.0 2 | 3 | title: "giga_connectome" 4 | 5 | abstract: 6 | "Generate time series and connectomes from fMRIPrep outputs." 7 | 8 | message: "If you use this software, please cite it as below." 
9 | 10 | repository-code: "https://github.com/bids-apps/giga_connectome" 11 | 12 | identifiers: 13 | - type: doi 14 | value: 10.21105/joss.07061 15 | 16 | license: MIT 17 | 18 | contact: 19 | - email: htwangtw@gmail.com 20 | family-names: Wang 21 | given-names: Hao-Ting 22 | 23 | authors: 24 | - family-names: Wang 25 | given-names: Hao-Ting 26 | email: htwangtw@gmail.com 27 | orcid: https://orcid.org/0000-0003-4078-2038 28 | - family-names: Gau 29 | given-names: Rémi 30 | orcid: https://orcid.org/0000-0002-1535-9767 31 | - family-names: Clarke 32 | given-names: Natasha 33 | orcid: https://orcid.org/0000-0003-2455-3614 34 | - family-names: Dessain 35 | given-names: Quentin 36 | orcid: https://orcid.org/0000-0002-7314-0413 37 | - family-names: Bellec 38 | given-names: Lune 39 | orcid: https://orcid.org/0000-0002-9111-0699 40 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2021 - 2023 SIMEXP; 2024 - 2025 BIDS Apps 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
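Referring back to the `CITATION.cff` metadata above, which asks users to cite the software: below is a minimal sketch of validating that file and converting it to BibTeX with `cffconvert`, the same tool the `validate_cff.yml` workflow relies on in CI. The flags are assumed from cffconvert 2.x and the output filename is a placeholder.

```bash
# Assumes cffconvert 2.x; run from the repository root, next to CITATION.cff.
pip install cffconvert
cffconvert --validate                                      # same check as the CI workflow
cffconvert --format bibtex --outfile giga_connectome.bib   # write a BibTeX entry
```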
22 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | ; See https://tox.wiki/en 2 | [tox] 3 | requires = 4 | tox>=4 5 | ; run lint by default when just calling "tox" 6 | env_list = lint 7 | 8 | ; ENVIRONMENTS 9 | ; ------------ 10 | [style] 11 | description = common environment for style checkers (rely on pre-commit hooks) 12 | skip_install = true 13 | deps = 14 | pre-commit 15 | 16 | ; COMMANDS 17 | ; -------- 18 | [testenv:lint] 19 | description = run all linters and formatters 20 | skip_install = true 21 | deps = 22 | {[style]deps} 23 | commands = 24 | pre-commit run --all-files --show-diff-on-failure {posargs:} 25 | 26 | [testenv:test_data] 27 | description = install test data 28 | skip_install = true 29 | allowlist_externals = 30 | mkdir 31 | wget 32 | tar 33 | rm 34 | commands = 35 | mkdir -p giga_connectome/data/test_data 36 | wget --retry-connrefused \ 37 | --waitretry=5 \ 38 | --read-timeout=20 \ 39 | --timeout=15 \ 40 | -t 0 \ 41 | -q \ 42 | -O giga_connectome/data/test_data/ds000017.tar.gz \ 43 | "https://zenodo.org/record/8091903/files/ds000017-fmriprep22.0.1-downsampled-nosurface.tar.gz?download=1" 44 | tar -xzf giga_connectome/data/test_data/ds000017.tar.gz -C giga_connectome/data/test_data/ 45 | rm giga_connectome/data/test_data/ds000017.tar.gz 46 | -------------------------------------------------------------------------------- /docs/source/workflow.md: -------------------------------------------------------------------------------- 1 | # Workflow 2 | 3 | The current workflow is implemented with the following principles in mind: 4 | 5 | - Use existing open source projects as key infrastructure. 6 | - Implement the denoising steps as closely as possible to what the fMRIPrep documentation and the benchmark literature suggest; hence, 7 | denoising strategies are limited to implementations of the upstream function `nilearn.interfaces.fmriprep.load_confounds`. 8 | - Feature extraction methods (such as connectome generation and preprocessing details) are limited to implementations available through nilearn. 9 | - Perform any potential image resampling before signal extraction and processing, to make the process transparent and reduce redundant computation. 10 | 11 | ## Details of the workflow 12 | 13 | Here is a description of the process and a flowchart illustrating the relationships amongst the files used during the process. 14 | 15 | 1. Create subject specific grey matter mask in MNI space. 16 | 17 | 2. Sample the atlas to the space of subject specific grey matter mask in MNI space. 18 | 19 | 3. Calculate the conjunction of the customised grey matter mask and resampled atlas to find valid parcels. 20 | 21 | 4. Use the new subject specific grey matter mask and atlas to extract time series and connectomes for each subject. 22 | 23 | 5. If applied, calculate intranetwork correlation of each parcel. The values replace the diagonal of the connectomes. 24 | 25 | ![](./images/giga_connectome.png) 26 | -------------------------------------------------------------------------------- /docs/source/conf.py: -------------------------------------------------------------------------------- 1 | # Configuration file for the Sphinx documentation builder.
2 | # 3 | # For the full list of built-in configuration values, see the documentation: 4 | # https://www.sphinx-doc.org/en/master/usage/configuration.html 5 | import os 6 | import sys 7 | 8 | from giga_connectome import __copyright__, __packagename__, __version__ 9 | 10 | sys.path.insert(0, os.path.abspath("..")) 11 | # -- Project information ----------------------------------------------------- 12 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information 13 | 14 | project = __packagename__ 15 | copyright = __copyright__ 16 | author = "Hao-Ting Wang" 17 | release = __version__ 18 | 19 | # -- General configuration --------------------------------------------------- 20 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration 21 | 22 | extensions = [ 23 | "sphinx_rtd_theme", 24 | "myst_parser", 25 | "sphinx.ext.autodoc", 26 | "sphinx.ext.autosummary", 27 | "sphinx.ext.napoleon", 28 | "sphinxarg.ext", 29 | ] 30 | 31 | templates_path = ["_templates"] 32 | exclude_patterns = [] 33 | 34 | 35 | # -- Options for HTML output ------------------------------------------------- 36 | # https://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-html-output 37 | 38 | html_theme = "sphinx_rtd_theme" 39 | 40 | # -- Options for myst_parser ------------------------------------------------- 41 | myst_enable_extensions = ["colon_fence"] 42 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | # See https://pre-commit.com for more information 2 | # See https://pre-commit.com/hooks.html for more hooks 3 | repos: 4 | - repo: https://github.com/pre-commit/pre-commit-hooks 5 | rev: v6.0.0 6 | hooks: 7 | - id: trailing-whitespace 8 | - id: end-of-file-fixer 9 | exclude_types: [svg] 10 | - id: check-yaml 11 | - id: check-added-large-files 12 | args: ['--maxkb=1000'] 13 | - repo: https://github.com/pre-commit/mirrors-mypy 14 | rev: v1.19.1 15 | hooks: 16 | - id: mypy 17 | additional_dependencies: [pandas-stubs, types-tqdm, types-setuptools, types-Jinja2] 18 | args: [--config-file=pyproject.toml] 19 | - repo: https://github.com/psf/black-pre-commit-mirror 20 | rev: 25.12.0 21 | hooks: 22 | - id: black 23 | - repo: https://github.com/codespell-project/codespell 24 | rev: v2.4.1 25 | hooks: 26 | - id: codespell 27 | additional_dependencies: 28 | - tomli 29 | args: ["--skip=*.svg"] 30 | exclude: 'paper/.*' 31 | - repo: https://github.com/PyCQA/flake8 32 | rev: 7.3.0 33 | hooks: 34 | - id: flake8 35 | - repo: https://github.com/hadolint/hadolint 36 | rev: v2.14.0 37 | hooks: 38 | - id: hadolint-docker 39 | name: Lint Dockerfiles 40 | description: Runs hadolint Docker image to lint Dockerfiles 41 | language: docker_image 42 | types: [dockerfile] 43 | entry: ghcr.io/hadolint/hadolint hadolint 44 | ci: 45 | skip: [hadolint-docker] 46 | -------------------------------------------------------------------------------- /giga_connectome/methods.py: -------------------------------------------------------------------------------- 1 | """Module responsible for generating method section.""" 2 | 3 | from pathlib import Path 4 | 5 | from jinja2 import Environment, FileSystemLoader, select_autoescape 6 | from nilearn import __version__ as nilearn_version 7 | from templateflow import __version__ as templateflow_version 8 | 9 | from giga_connectome import __version__ 10 | 11 | 12 | def generate_method_section( 13 | output_dir: 
Path, 14 | atlas: str, 15 | smoothing_fwhm: float, 16 | strategy: str, 17 | standardize: str, 18 | mni_space: str, 19 | average_correlation: bool, 20 | ) -> None: 21 | env = Environment( 22 | loader=FileSystemLoader(Path(__file__).parent), 23 | autoescape=select_autoescape(), 24 | lstrip_blocks=True, 25 | trim_blocks=True, 26 | ) 27 | 28 | template = env.get_template("data/methods/template.jinja") 29 | 30 | output_file = output_dir / "logs" / "CITATION.md" 31 | output_file.parent.mkdir(parents=True, exist_ok=True) 32 | 33 | data = { 34 | "version": __version__, 35 | "nilearn_version": nilearn_version, 36 | "templateflow_version": templateflow_version, 37 | "atlas": atlas, 38 | "smoothing_fwhm": smoothing_fwhm, 39 | "strategy": strategy, 40 | "standardize": ( 41 | "percent signal change" if standardize == "psc" else standardize 42 | ), 43 | "mni_space": mni_space, 44 | "average_correlation": average_correlation, 45 | } 46 | 47 | with open(output_file, "w") as f: 48 | print(template.render(data=data), file=f) 49 | -------------------------------------------------------------------------------- /giga_connectome/data/methods/template.jinja: -------------------------------------------------------------------------------- 1 | These results were generated with 2 | giga_connectome (version {{ data.version }}, https://giga-connectome.readthedocs.io/en/latest/) 3 | using Nilearn (version {{ data.nilearn_version }}, RRID:SCR_001362) 4 | and TemplateFlow (version {{ data.templateflow_version }}). 5 | 6 | The following steps were followed. 7 | 8 | 1. Retrieve subject specific grey matter mask in MNI space ({{ data.mni_space }}). 9 | 10 | 1. Sampled the {{ data.atlas }} atlas to the space of subject specific grey matter mask in MNI space. 11 | 12 | 1. Calculated the conjunction of the subject specific grey matter mask and resampled atlas to find valid parcels. 13 | 14 | 1. Used the subject specific grey matter mask and atlas to extract time series and connectomes for each subject. 15 | The time series data were denoised as follows: 16 | 17 | - Time series extractions through label or map maskers are performed on the denoised nifti file. 18 | - Denoising steps were performed on the voxel level: 19 | - spatial smoothing (FWHM: {{ data.smoothing_fwhm }} mm) 20 | - detrending, only if high pass filter was not applied through confounds 21 | - regressing out confounds (using a {{ data.strategy }} strategy) 22 | - standardized (using {{ data.standardize }}) 23 | - Extracted time series from atlas 24 | - Computed correlation matrix (Pearson's correlation with LedoitWolf covariance estimator) 25 | 26 | {% if data.average_correlation %} 27 | 1. Calculate intranetwork correlation of each parcel. The values replace the diagonal of the connectomes. 28 | {% endif %} 29 | -------------------------------------------------------------------------------- /tools/download_templates.py: -------------------------------------------------------------------------------- 1 | """ 2 | Set up templateflow with customised atlases. 3 | Download atlases that are relevant.
4 | """ 5 | 6 | import importlib.util 7 | import shutil 8 | import sys 9 | 10 | from pathlib import Path 11 | 12 | import templateflow as tf 13 | 14 | from giga_connectome.logger import gc_logger 15 | 16 | gc_log = gc_logger() 17 | 18 | 19 | def fetch_tpl_atlas() -> None: 20 | """Download datasets from templateflow.""" 21 | atlases = ["Schaefer2018", "DiFuMo", "HOSPA", "HOCPA", "HOCPAL"] 22 | for atlas in atlases: 23 | tf_path = tf.api.get("MNI152NLin2009cAsym", atlas=atlas) 24 | if isinstance(tf_path, list) and len(tf_path) > 0: 25 | gc_log.info(f"{atlas} exists.") 26 | else: 27 | gc_log.error(f"{atlas} does not exist.") 28 | # download MNI grey matter template 29 | tf.api.get("MNI152NLin2009cAsym", label="GM") 30 | 31 | 32 | def download_mist() -> None: 33 | """Download mist atlas and convert to templateflow format.""" 34 | tf_path = tf.api.get("MNI152NLin2009bAsym", atlas="BASC") 35 | if isinstance(tf_path, list) and len(tf_path) > 0: 36 | gc_log.info("BASC / MIST atlas exists.") 37 | return 38 | 39 | # download and convert 40 | spec = importlib.util.spec_from_file_location( 41 | "mist2templateflow", 42 | Path(__file__).parent / "mist2templateflow/mist2templateflow.py", 43 | ) 44 | mist2templateflow = importlib.util.module_from_spec(spec) 45 | sys.modules["module.name"] = mist2templateflow 46 | spec.loader.exec_module(mist2templateflow) 47 | mist2templateflow.convert_basc( 48 | tf.conf.TF_HOME, Path(__file__).parent / "tmp" 49 | ) 50 | shutil.rmtree(Path(__file__).parent / "tmp") 51 | 52 | 53 | def main() -> None: 54 | fetch_tpl_atlas() 55 | download_mist() 56 | 57 | 58 | if __name__ == "__main__": 59 | main() 60 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_mask.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | import pytest 3 | from nibabel import Nifti1Image 4 | from nilearn import datasets 5 | 6 | from giga_connectome import mask 7 | 8 | 9 | def test_generate_subject_gm_mask(): 10 | """Generate group epi grey matter mask and resample atlas.""" 11 | # use different subject in the test, should work the same 12 | data = datasets.fetch_development_fmri(n_subjects=3) 13 | imgs = data.func 14 | 15 | group_epi_mask = mask.generate_subject_gm_mask(imgs) 16 | # match the post processing details: https://osf.io/wjtyq 17 | assert group_epi_mask.shape == (50, 59, 50) 18 | diff_tpl = mask.generate_subject_gm_mask( 19 | imgs, template="MNI152NLin2009aAsym" 20 | ) 21 | assert diff_tpl.shape == (50, 59, 50) 22 | 23 | # test bad inputs 24 | with pytest.raises( 25 | ValueError, match="TemplateFlow does not supply template blah" 26 | ): 27 | mask.generate_subject_gm_mask(imgs, template="blah") 28 | 29 | 30 | def test_check_mask_affine(): 31 | """Check odd affine detection.""" 32 | 33 | img_base = np.zeros([5, 5, 6]) 34 | processed_vol = img_base.copy() 35 | processed_vol[2:4, 2:4, 2:4] += 1 36 | processed = Nifti1Image(processed_vol, np.eye(4)) 37 | weird = Nifti1Image(processed_vol, np.eye(4) * np.array([1, 1, 1.5, 1]).T) 38 | weird2 = Nifti1Image(processed_vol, np.eye(4) * np.array([1, 1, 1.6, 1]).T) 39 | exclude = mask._check_mask_affine( 40 | [processed, processed, processed, processed, weird, weird, weird2] 41 | ) 42 | assert len(exclude) == 3 43 | assert exclude == [4, 5, 6] 44 | 45 | 46 | def test_get_consistent_masks(): 47 | """Check odd affine detection.""" 48 | mask_imgs = [ 49 | f"sub-{i + 1:2d}_task-rest_space-MNI_desc-brain_mask.nii.gz" 50 | for i in range(10) 51 
| ] 52 | exclude = [1, 2, 5] 53 | ( 54 | cleaned_func_masks, 55 | weird_mask_identifiers, 56 | ) = mask._get_consistent_masks(mask_imgs, exclude) 57 | assert len(cleaned_func_masks) == 7 58 | assert len(weird_mask_identifiers) == 3 59 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_denoise.py: -------------------------------------------------------------------------------- 1 | from giga_connectome.denoise import denoise_meta_data, get_denoise_strategy 2 | from pkg_resources import resource_filename 3 | from numpy import testing 4 | 5 | 6 | def test_denoise_nifti_voxel(): 7 | img_file = resource_filename( 8 | "giga_connectome", 9 | "data/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface/sub-1/ses-timepoint1/func/sub-1_ses-timepoint1_task-probabilisticclassification_run-01_space-MNI152NLin2009cAsym_res-2_desc-preproc_bold.nii.gz", 10 | ) 11 | strategy = get_denoise_strategy("scrubbing.2") 12 | meta_data = denoise_meta_data( 13 | strategy=strategy, 14 | img=img_file, 15 | ) 16 | assert len(meta_data["ConfoundRegressors"]) == 36 17 | assert meta_data["NumberOfVolumesDiscardedByMotionScrubbing"] == 12 18 | assert meta_data["NumberOfVolumesDiscardedByNonsteadyStatesDetector"] == 2 19 | testing.assert_almost_equal( 20 | meta_data["MeanFramewiseDisplacement"], 0.107, decimal=3 21 | ) 22 | 23 | strategy = get_denoise_strategy("simple") 24 | meta_data = denoise_meta_data( 25 | strategy=strategy, 26 | img=img_file, 27 | ) 28 | assert len(meta_data["ConfoundRegressors"]) == 30 29 | assert meta_data["NumberOfVolumesDiscardedByMotionScrubbing"] == 0 30 | assert meta_data["NumberOfVolumesDiscardedByNonsteadyStatesDetector"] == 2 31 | testing.assert_almost_equal( 32 | meta_data["MeanFramewiseDisplacement"], 0.107, decimal=3 33 | ) 34 | 35 | img_file = resource_filename( 36 | "giga_connectome", 37 | "data/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface/sub-1/ses-timepoint1/func/sub-1_ses-timepoint1_task-probabilisticclassification_run-01_space-MNI152NLin6Asym_desc-smoothAROMAnonaggr_bold.nii.gz", 38 | ) 39 | strategy = get_denoise_strategy("icaaroma") 40 | meta_data = denoise_meta_data( 41 | strategy=strategy, 42 | img=img_file, 43 | ) 44 | assert len(meta_data["ConfoundRegressors"]) == 6 45 | assert len(meta_data["ICAAROMANoiseComponents"]) == 9 46 | assert meta_data["NumberOfVolumesDiscardedByMotionScrubbing"] == 0 47 | assert meta_data["NumberOfVolumesDiscardedByNonsteadyStatesDetector"] == 2 48 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | */_version.py 2 | */data/test_data/ 3 | output/ 4 | work/ 5 | 6 | # Byte-compiled / optimized / DLL files 7 | __pycache__/ 8 | *.py[cod] 9 | *$py.class 10 | 11 | # C extensions 12 | *.so 13 | 14 | # Distribution / packaging 15 | .Python 16 | build/ 17 | develop-eggs/ 18 | dist/ 19 | downloads/ 20 | eggs/ 21 | .eggs/ 22 | lib/ 23 | lib64/ 24 | parts/ 25 | sdist/ 26 | var/ 27 | wheels/ 28 | pip-wheel-metadata/ 29 | share/python-wheels/ 30 | *.egg-info/ 31 | .installed.cfg 32 | *.egg 33 | MANIFEST 34 | 35 | # PyInstaller 36 | # Usually these files are written by a python script from a template 37 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
38 | *.manifest 39 | *.spec 40 | 41 | # Installer logs 42 | pip-log.txt 43 | pip-delete-this-directory.txt 44 | 45 | # Unit test / coverage reports 46 | htmlcov/ 47 | .tox/ 48 | .nox/ 49 | .coverage 50 | .coverage.* 51 | .cache 52 | nosetests.xml 53 | coverage.xml 54 | *.cover 55 | *.py,cover 56 | .hypothesis/ 57 | .pytest_cache/ 58 | 59 | # Translations 60 | *.mo 61 | *.pot 62 | 63 | # Django stuff: 64 | *.log 65 | local_settings.py 66 | db.sqlite3 67 | db.sqlite3-journal 68 | 69 | # Flask stuff: 70 | instance/ 71 | .webassets-cache 72 | 73 | # Scrapy stuff: 74 | .scrapy 75 | 76 | # Sphinx documentation 77 | docs/_build/ 78 | 79 | # PyBuilder 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # IPython 86 | profile_default/ 87 | ipython_config.py 88 | 89 | # pyenv 90 | .python-version 91 | 92 | # pipenv 93 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 94 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 95 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 96 | # install all needed dependencies. 97 | #Pipfile.lock 98 | 99 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 100 | __pypackages__/ 101 | 102 | # Celery stuff 103 | celerybeat-schedule 104 | celerybeat.pid 105 | 106 | # SageMath parsed files 107 | *.sage.py 108 | 109 | # Environments 110 | .env 111 | .venv 112 | env/ 113 | venv/ 114 | ENV/ 115 | env.bak/ 116 | venv.bak/ 117 | 118 | # Spyder project settings 119 | .spyderproject 120 | .spyproject 121 | 122 | # Rope project settings 123 | .ropeproject 124 | 125 | # mkdocs documentation 126 | /site 127 | 128 | # mypy 129 | .mypy_cache/ 130 | .dmypy.json 131 | dmypy.json 132 | 133 | # Pyre type checker 134 | .pyre/ 135 | 136 | *.code-workspace 137 | -------------------------------------------------------------------------------- /.dockerignore: -------------------------------------------------------------------------------- 1 | */_version.py 2 | ds000017-fmriprep22.0.1-downsampled-nosurface 3 | output/ 4 | work/ 5 | 6 | # Byte-compiled / optimized / DLL files 7 | __pycache__/ 8 | *.py[cod] 9 | *$py.class 10 | 11 | # C extensions 12 | *.so 13 | 14 | # Distribution / packaging 15 | .Python 16 | build/ 17 | develop-eggs/ 18 | dist/ 19 | downloads/ 20 | eggs/ 21 | .eggs/ 22 | lib/ 23 | lib64/ 24 | parts/ 25 | sdist/ 26 | var/ 27 | wheels/ 28 | pip-wheel-metadata/ 29 | share/python-wheels/ 30 | *.egg-info/ 31 | .installed.cfg 32 | *.egg 33 | MANIFEST 34 | 35 | # PyInstaller 36 | # Usually these files are written by a python script from a template 37 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
38 | *.manifest 39 | *.spec 40 | 41 | # Installer logs 42 | pip-log.txt 43 | pip-delete-this-directory.txt 44 | 45 | # Unit test / coverage reports 46 | htmlcov/ 47 | .tox/ 48 | .nox/ 49 | .coverage 50 | .coverage.* 51 | .cache 52 | nosetests.xml 53 | coverage.xml 54 | *.cover 55 | *.py,cover 56 | .hypothesis/ 57 | .pytest_cache/ 58 | 59 | # Translations 60 | *.mo 61 | *.pot 62 | 63 | # Django stuff: 64 | *.log 65 | local_settings.py 66 | db.sqlite3 67 | db.sqlite3-journal 68 | 69 | # Flask stuff: 70 | instance/ 71 | .webassets-cache 72 | 73 | # Scrapy stuff: 74 | .scrapy 75 | 76 | # Sphinx documentation 77 | docs/_build/ 78 | 79 | # PyBuilder 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # IPython 86 | profile_default/ 87 | ipython_config.py 88 | 89 | # pyenv 90 | .python-version 91 | 92 | # pipenv 93 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 94 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 95 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 96 | # install all needed dependencies. 97 | #Pipfile.lock 98 | 99 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow 100 | __pypackages__/ 101 | 102 | # Celery stuff 103 | celerybeat-schedule 104 | celerybeat.pid 105 | 106 | # SageMath parsed files 107 | *.sage.py 108 | 109 | # Environments 110 | .env 111 | .venv 112 | env/ 113 | venv/ 114 | ENV/ 115 | env.bak/ 116 | venv.bak/ 117 | 118 | # Spyder project settings 119 | .spyderproject 120 | .spyproject 121 | 122 | # Rope project settings 123 | .ropeproject 124 | 125 | # mkdocs documentation 126 | /site 127 | 128 | # mypy 129 | .mypy_cache/ 130 | .dmypy.json 131 | dmypy.json 132 | 133 | # Pyre type checker 134 | .pyre/ 135 | 136 | *.code-workspace 137 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_connectome.py: -------------------------------------------------------------------------------- 1 | import numpy as np 2 | from nibabel import Nifti1Image 3 | from nilearn.connectome import ConnectivityMeasure 4 | from nilearn.maskers import NiftiLabelsMasker, NiftiMasker 5 | 6 | from giga_connectome.connectome import generate_timeseries_connectomes 7 | 8 | 9 | def _extract_time_series_voxel(img, mask, confounds=None, smoothing_fwhm=None): 10 | masker = NiftiMasker( 11 | standardize=True, mask_img=mask, smoothing_fwhm=smoothing_fwhm 12 | ) 13 | time_series_voxel = masker.fit_transform(img, confounds=confounds) 14 | return time_series_voxel, masker 15 | 16 | 17 | def _simulate_img(): 18 | """Simulate data with one 'spot'""" 19 | data = np.zeros([8, 8, 8, 100]) 20 | time_series = np.random.randn(1, 1, 3, data.shape[3]) 21 | # parcel 1 with intra correlation 22 | data[4, 4, 3, :] = time_series[0, 0, 0, :] 23 | data[4, 4, 4, :] = time_series[0, 0, 0, :] + time_series[0, 0, 1, :] 24 | # parcel 2 with intra correlation (and some correlation with parcel 1) 25 | data[4, 4, 5, :] = time_series[0, 0, 1, :] 26 | data[4, 4, 6, :] = time_series[0, 0, 1, :] + time_series[0, 0, 2, :] 27 | corr = np.corrcoef(data[4, 4, 3:7, :]) 28 | img = Nifti1Image(data, np.eye(4)) 29 | 30 | mask_v = np.zeros(data.shape[0:3]) 31 | mask_v[4, 4, 3:7] = 1 32 | mask = Nifti1Image(mask_v, np.eye(4)) 33 | 34 | atlas = np.zeros(data.shape[0:3]) 35 | atlas[4, 4, 3:5] = 1 36 | atlas[4, 4, 5:7] = 2 37 | atlas = Nifti1Image(atlas, np.eye(4)) 38 | return img, mask, atlas, (corr[1, 0], corr[2, 3]) 39 | 40 
| 41 | def test_calculate_intranetwork_correlation(): 42 | img, mask, atlas, corr = _simulate_img() 43 | # brute force version of intranetwork_correlation 44 | time_series_voxel, masker_voxel = _extract_time_series_voxel(img, mask) 45 | assert ( 46 | abs(np.corrcoef(time_series_voxel.transpose())[1, 0] - corr[0]) < 1e-10 47 | ) 48 | assert ( 49 | abs(np.corrcoef(time_series_voxel.transpose())[2, 3] - corr[1]) < 1e-10 50 | ) 51 | 52 | denoised_img = masker_voxel.inverse_transform(time_series_voxel) 53 | 54 | correlation_measure = ConnectivityMeasure( 55 | kind="correlation", vectorize=False, discard_diagonal=False 56 | ) 57 | conn, _, _ = generate_timeseries_connectomes( 58 | masker=NiftiLabelsMasker(labels_img=atlas), 59 | denoised_img=denoised_img, 60 | group_mask=mask, 61 | correlation_measure=correlation_measure, 62 | calculate_average_correlation=True, 63 | ) # output of the function is in float32 64 | assert np.abs(corr[0] - conn[0, 0]) < 1e-6 65 | assert np.abs(corr[1] - conn[1, 1]) < 1e-6 66 | -------------------------------------------------------------------------------- /.all-contributorsrc: -------------------------------------------------------------------------------- 1 | { 2 | "projectName": "giga_connectome", 3 | "projectOwner": "bids-apps", 4 | "files": [ 5 | "README.md" 6 | ], 7 | "commitType": "docs", 8 | "commitConvention": "angular", 9 | "contributorsPerLine": 7, 10 | "contributors": [ 11 | { 12 | "login": "htwangtw", 13 | "name": "Hao-Ting Wang", 14 | "avatar_url": "https://avatars.githubusercontent.com/u/13743617?v=4", 15 | "profile": "https://wanghaoting.com/", 16 | "contributions": [ 17 | "ideas", 18 | "research", 19 | "code", 20 | "test" 21 | ] 22 | }, 23 | { 24 | "login": "Hyedryn", 25 | "name": "Quentin Dessain", 26 | "avatar_url": "https://avatars.githubusercontent.com/u/5383293?v=4", 27 | "profile": "https://github.com/Hyedryn", 28 | "contributions": [ 29 | "userTesting", 30 | "platform" 31 | ] 32 | }, 33 | { 34 | "login": "clarkenj", 35 | "name": "Natasha Clarke", 36 | "avatar_url": "https://avatars.githubusercontent.com/u/57987005?v=4", 37 | "profile": "https://github.com/clarkenj", 38 | "contributions": [ 39 | "userTesting", 40 | "example", 41 | "bug" 42 | ] 43 | }, 44 | { 45 | "login": "Remi-Gau", 46 | "name": "Remi Gau", 47 | "avatar_url": "https://avatars.githubusercontent.com/u/6961185?v=4", 48 | "profile": "https://remi-gau.github.io/", 49 | "contributions": [ 50 | "infra", 51 | "maintenance" 52 | ] 53 | }, 54 | { 55 | "login": "pbellec", 56 | "name": "Pierre Lune Bellec", 57 | "avatar_url": "https://avatars.githubusercontent.com/u/1670887?v=4", 58 | "profile": "http://simexp.github.io", 59 | "contributions": [ 60 | "ideas", 61 | "financial" 62 | ] 63 | }, 64 | { 65 | "login": "shnizzedy", 66 | "name": "Jon Cluce", 67 | "avatar_url": "https://avatars.githubusercontent.com/u/5974438?v=4", 68 | "profile": "https://github.com/shnizzedy", 69 | "contributions": [ 70 | "bug" 71 | ] 72 | }, 73 | { 74 | "login": "emullier", 75 | "name": "Emeline Mullier", 76 | "avatar_url": "https://avatars.githubusercontent.com/u/43587002?v=4", 77 | "profile": "https://github.com/emullier", 78 | "contributions": [ 79 | "bug" 80 | ] 81 | }, 82 | { 83 | "login": "jdkent", 84 | "name": "James Kent", 85 | "avatar_url": "https://avatars.githubusercontent.com/u/12564882?v=4", 86 | "profile": "https://jdkent.github.io/", 87 | "contributions": [ 88 | "bug", 89 | "doc" 90 | ] 91 | }, 92 | { 93 | "login": "mstimberg", 94 | "name": "Marcel Stimberg", 95 | "avatar_url": 
"https://avatars.githubusercontent.com/u/1381982?v=4", 96 | "profile": "https://marcel.stimberg.info", 97 | "contributions": [ 98 | "userTesting", 99 | "doc", 100 | "bug" 101 | ] 102 | } 103 | ] 104 | } 105 | -------------------------------------------------------------------------------- /giga_connectome/workflow.py: -------------------------------------------------------------------------------- 1 | """ 2 | Process fMRIPrep outputs to timeseries based on denoising strategy. 3 | """ 4 | 5 | from __future__ import annotations 6 | 7 | import argparse 8 | 9 | from giga_connectome.mask import generate_gm_mask_atlas 10 | from giga_connectome.atlas import load_atlas_setting 11 | from giga_connectome.denoise import get_denoise_strategy 12 | from giga_connectome import methods, utils 13 | from giga_connectome.postprocess import run_postprocessing_dataset 14 | 15 | from giga_connectome.logger import gc_logger 16 | 17 | gc_log = gc_logger() 18 | 19 | 20 | def set_verbosity(verbosity: int | list[int]) -> None: 21 | if isinstance(verbosity, list): 22 | verbosity = verbosity[0] 23 | if verbosity == 0: 24 | gc_log.setLevel("ERROR") 25 | elif verbosity == 1: 26 | gc_log.setLevel("WARNING") 27 | elif verbosity == 2: 28 | gc_log.setLevel("INFO") 29 | elif verbosity == 3: 30 | gc_log.setLevel("DEBUG") 31 | 32 | 33 | def workflow(args: argparse.Namespace) -> None: 34 | gc_log.info(vars(args)) 35 | 36 | # set file paths 37 | bids_dir = args.bids_dir 38 | output_dir = args.output_dir 39 | atlases_dir = args.atlases_dir 40 | standardize = True # always standardising the time series 41 | smoothing_fwhm = args.smoothing_fwhm 42 | calculate_average_correlation = ( 43 | args.calculate_intranetwork_average_correlation 44 | ) 45 | subjects = utils.get_subject_lists(args.participant_label, bids_dir) 46 | strategy = get_denoise_strategy(args.denoise_strategy) 47 | 48 | atlas = load_atlas_setting(args.atlas) 49 | user_bids_filter = utils.parse_bids_filter(args.bids_filter_file) 50 | 51 | # get template information and update BIDS filters 52 | template, bids_filters = utils.prepare_bidsfilter_and_template( 53 | strategy, user_bids_filter 54 | ) 55 | 56 | set_verbosity(args.verbosity) 57 | 58 | # check output path 59 | output_dir.mkdir(parents=True, exist_ok=True) 60 | atlases_dir.mkdir(parents=True, exist_ok=True) 61 | 62 | gc_log.info(f"Indexing BIDS directory:\n\t{bids_dir}") 63 | 64 | utils.create_ds_description(output_dir) 65 | utils.create_sidecar(output_dir / "meas-PearsonCorrelation_relmat.json") 66 | methods.generate_method_section( 67 | output_dir=output_dir, 68 | atlas=atlas["name"], 69 | smoothing_fwhm=smoothing_fwhm, 70 | standardize="zscore", 71 | strategy=args.denoise_strategy, 72 | mni_space=template, 73 | average_correlation=calculate_average_correlation, 74 | ) 75 | 76 | for subject in subjects: 77 | subj_data, _ = utils.get_bids_images( 78 | [subject], template, bids_dir, args.reindex_bids, bids_filters 79 | ) 80 | subject_mask_nii, subject_seg_niis = generate_gm_mask_atlas( 81 | atlases_dir, atlas, template, subj_data["mask"] 82 | ) 83 | 84 | gc_log.info(f"Generate subject level connectomes: sub-{subject}") 85 | 86 | run_postprocessing_dataset( 87 | strategy, 88 | atlas, 89 | subject_seg_niis, 90 | subj_data["bold"], 91 | subject_mask_nii, 92 | standardize, 93 | smoothing_fwhm, 94 | output_dir, 95 | calculate_average_correlation, 96 | ) 97 | return 98 | -------------------------------------------------------------------------------- /pyproject.toml: 
-------------------------------------------------------------------------------- 1 | [build-system] 2 | requires = ["hatchling", "hatch-vcs"] 3 | build-backend = "hatchling.build" 4 | 5 | [project] 6 | name = "giga_connectome" 7 | description = "Generate connectome from fMRIPrep outputs" 8 | readme = "README.md" 9 | requires-python = ">=3.9" 10 | license = { file="LICENSE" } 11 | authors = [ 12 | { name="Hao-Ting Wang", email="htwangtw@gmail.com" }, 13 | ] 14 | classifiers = [ 15 | "Programming Language :: Python :: 3", 16 | "Programming Language :: Python :: 3.9", 17 | "Programming Language :: Python :: 3.10", 18 | "Programming Language :: Python :: 3.11", 19 | "Programming Language :: Python :: 3.12", 20 | "Programming Language :: Python :: 3.13", 21 | ] 22 | dependencies = [ 23 | "nilearn[plotting] >=0.10.3", 24 | "pybids >=0.15.0, <0.16.0", 25 | "templateflow < 23.0.0", 26 | "setuptools", 27 | "jinja2 >= 2.0", 28 | "rich", 29 | ] 30 | dynamic = ["version"] 31 | 32 | [project.scripts] 33 | giga_connectome = "giga_connectome.run:main" 34 | 35 | [project.optional-dependencies] 36 | dev = [ 37 | "black", 38 | "flake8", 39 | "pre-commit", 40 | "giga_connectome[test]", 41 | 'tox', 42 | 'mypy', 43 | 'types-setuptools', 44 | 'pandas-stubs', 45 | 'types-tqdm' 46 | ] 47 | test = [ 48 | "pytest", 49 | "pytest-cov", 50 | ] 51 | docs = [ 52 | "sphinx", 53 | "sphinx_rtd_theme", 54 | "myst-parser", 55 | "sphinx-argparse" 56 | ] 57 | # Aliases 58 | tests = ["giga_connectome[test]"] 59 | 60 | [project.urls] 61 | "Homepage" = "https://github.com/bids-apps/giga_connectome" 62 | "Documentation" = "https://giga-connectome.readthedocs.io/en/latest/" 63 | 64 | [tool.hatch.version] 65 | source = "vcs" 66 | 67 | [tool.hatch.build.hooks.vcs] 68 | version-file = "giga_connectome/_version.py" 69 | 70 | [tool.hatch.build.targets.sdist] 71 | exclude = [".git_archival.txt"] 72 | 73 | [tool.hatch.build.targets.wheel] 74 | packages = ["giga_connectome"] 75 | exclude = [ 76 | ".github", 77 | "giga_connectome/data/test_data" 78 | ] 79 | 80 | [tool.black] 81 | target-version = ['py38'] 82 | exclude = "giga_connectome/_version.py" 83 | line-length = 79 84 | 85 | [tool.mypy] 86 | check_untyped_defs = true 87 | disallow_any_generics = true 88 | disallow_incomplete_defs = true 89 | disallow_untyped_defs = true 90 | enable_error_code = ["ignore-without-code", "redundant-expr"] # "truthy-bool" 91 | no_implicit_optional = true 92 | show_error_codes = true 93 | # strict = true 94 | warn_redundant_casts = true 95 | warn_unreachable = true 96 | warn_unused_ignores = true 97 | 98 | [[tool.mypy.overrides]] 99 | ignore_missing_imports = true 100 | module = [ 101 | "bids.*", 102 | "giga_connectome._version", 103 | "h5py.*", 104 | "nibabel.*", 105 | "nilearn.*", 106 | "nilearn.connectome.*", 107 | "nilearn.image.*", 108 | "nilearn.interfaces.*", 109 | "nilearn.maskers.*", 110 | "nilearn.masking.*", 111 | "rich.*", 112 | "scipy.ndimage.*", 113 | "templateflow.*", 114 | ] 115 | 116 | [[tool.mypy.overrides]] 117 | ignore_errors = true 118 | module = [ 119 | 'giga_connectome.tests.*', 120 | 'download_templates', 121 | 'conf', 122 | ] 123 | 124 | [tool.pytest.ini_options] 125 | minversion = "7" 126 | log_cli_level = "INFO" 127 | xfail_strict = true 128 | testpaths = ["giga_connectome/tests"] 129 | addopts = ["-ra", "--strict-config", "--strict-markers", "--doctest-modules", "-v"] 130 | markers = [ 131 | "smoke: smoke tests that will run on a downsampled real dataset (deselect with '-m \"not smoke\"')", 132 | ] 133 | # filterwarnings = 
["error"] 134 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_cli.py: -------------------------------------------------------------------------------- 1 | """ 2 | Simple code to smoke test the functionality. 3 | """ 4 | 5 | from pathlib import Path 6 | 7 | import json 8 | import pytest 9 | from pkg_resources import resource_filename 10 | 11 | import pandas as pd 12 | 13 | from giga_connectome import __version__ 14 | from giga_connectome.run import main 15 | 16 | 17 | def test_version(capsys): 18 | try: 19 | main(["-v"]) 20 | except SystemExit: 21 | pass 22 | captured = capsys.readouterr() 23 | assert __version__ == captured.out.split()[0] 24 | 25 | 26 | def test_help(capsys): 27 | try: 28 | main(["-h"]) 29 | except SystemExit: 30 | pass 31 | captured = capsys.readouterr() 32 | assert "Generate denoised timeseries" in captured.out 33 | 34 | 35 | @pytest.mark.smoke 36 | def test_smoke(tmp_path, caplog): 37 | bids_dir = resource_filename( 38 | "giga_connectome", 39 | "data/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface", 40 | ) 41 | output_dir = tmp_path / "output" 42 | atlases_dir = tmp_path / "atlases" 43 | work_dir = tmp_path / "work" 44 | 45 | if not Path(output_dir).exists(): 46 | Path(output_dir).mkdir() 47 | 48 | main( 49 | [ 50 | "--participant-label", 51 | "1", 52 | "-w", 53 | str(work_dir), 54 | "-a", 55 | str(atlases_dir), 56 | "--atlas", 57 | "Schaefer2018", 58 | "--denoise-strategy", 59 | "simple", 60 | "--reindex-bids", 61 | "--calculate-intranetwork-average-correlation", 62 | "--bids-filter-file", 63 | str(Path(bids_dir).parent / "bids_filter.json"), 64 | str(bids_dir), 65 | str(output_dir), 66 | "participant", 67 | ] 68 | ) 69 | # check outputs 70 | assert "has been deprecated" in caplog.text.splitlines()[0] 71 | 72 | output_folder = output_dir / "sub-1" / "ses-timepoint1" / "func" 73 | 74 | base = ( 75 | "sub-1_ses-timepoint1_task-probabilisticclassification" 76 | "_run-01_seg-Schaefer2018100Parcels7Networks" 77 | ) 78 | ts_base = ( 79 | "sub-1_ses-timepoint1_task-probabilisticclassification" 80 | "_run-01_desc-denoiseSimple" 81 | ) 82 | relmat_file = output_folder / ( 83 | base + "_meas-PearsonCorrelation" + "_desc-denoiseSimple_relmat.tsv" 84 | ) 85 | assert relmat_file.exists() 86 | relmat = pd.read_csv(relmat_file, sep="\t") 87 | assert len(relmat) == 100 88 | json_file = output_folder / (ts_base + "_timeseries.json") 89 | assert json_file.exists() 90 | with open(json_file, "r") as f: 91 | content = json.load(f) 92 | assert content.get("SamplingFrequency") == 0.5 93 | 94 | timeseries_file = output_folder / ( 95 | base + "_desc-denoiseSimple_timeseries.tsv" 96 | ) 97 | assert timeseries_file.exists() 98 | timeseries = pd.read_csv(timeseries_file, sep="\t") 99 | assert len(timeseries.columns) == 100 100 | 101 | # immediately rerun should cover the case where the output already exists 102 | main( 103 | [ 104 | "--participant-label", 105 | "1", 106 | "-a", 107 | str(atlases_dir), 108 | "--atlas", 109 | "Schaefer2018", 110 | "--denoise-strategy", 111 | "simple", 112 | "--calculate-intranetwork-average-correlation", 113 | "--bids-filter-file", 114 | str(Path(bids_dir).parent / "bids_filter.json"), 115 | str(bids_dir), 116 | str(output_dir), 117 | "participant", 118 | ] 119 | ) 120 | 121 | # delete gm mask to trigger rerunning the atlas generation 122 | 123 | gm_path = ( 124 | atlases_dir 125 | / "sub-1" 126 | / "func" 127 | /
"sub-1_space-MNI152NLin2009cAsym_res-2_label-GM_mask.nii.gz" 128 | ) 129 | # delete gm_path 130 | gm_path.unlink() 131 | # rerun 132 | main( 133 | [ 134 | "--participant-label", 135 | "1", 136 | "-a", 137 | str(atlases_dir), 138 | "--atlas", 139 | "Schaefer2018", 140 | "--denoise-strategy", 141 | "simple", 142 | "--calculate-intranetwork-average-correlation", 143 | "--bids-filter-file", 144 | str(Path(bids_dir).parent / "bids_filter.json"), 145 | str(bids_dir), 146 | str(output_dir), 147 | "participant", 148 | ] 149 | ) 150 | 151 | # rerun but with icaaroma 152 | main( 153 | [ 154 | "--participant-label", 155 | "1", 156 | "-a", 157 | str(atlases_dir), 158 | "--atlas", 159 | "Schaefer2018", 160 | "--denoise-strategy", 161 | "icaaroma", 162 | "--calculate-intranetwork-average-correlation", 163 | "--bids-filter-file", 164 | str(Path(bids_dir).parent / "bids_filter.json"), 165 | str(bids_dir), 166 | str(output_dir), 167 | "participant", 168 | ] 169 | ) 170 | -------------------------------------------------------------------------------- /docs/source/changes.md: -------------------------------------------------------------------------------- 1 | # What’s new 2 | 3 | ## 0.6.1.dev 4 | 5 | **Released MONTH YEAR** 6 | 7 | ### New 8 | 9 | ### Fixes 10 | 11 | ### Enhancements 12 | 13 | - [DOCS] Add JOSS reference to the citations and use the `README.md` as the documentation landing page. (@htwangtw) [#232](https://github.com/bids-apps/giga_connectome/pull/232) 14 | 15 | ### Changes 16 | 17 | 18 | ## 0.6.0 19 | 20 | **Released May 2025** 21 | 22 | ### New 23 | 24 | - [EHN] Default atlas `Schaefer20187Networks` is renamed to `Schaefer2018`. `Schaefer20187Networks` will be deprecated in 0.7.0. (@htwangtw) 25 | - [EHN] `--work-dir` is now renamed to `--atlases-dir`. `--work-dir` will be deprecated in 0.7.0. (@htwangtw) 26 | - [EHN] Add details of the denoising strategy to the metadata of the time series extraction. (@htwangtw) [#144](https://github.com/bids-apps/giga_connectome/issues/144) 27 | - [DOCS] Add instructions to download atlases packaged in the container for contributors. (@htwangtw) 28 | - [DOCS] Add All Contributors bot. (@htwangtw) 29 | 30 | ### Fixes 31 | 32 | - [FIX] Make sure the version of docker images matches the version of the package in the image. (@Remi-Gau) [#169](https://github.com/bids-apps/giga_connectome/issues/169) 33 | - [MAINT] Remove recursive import. (@htwangtw) [#135](https://github.com/bids-apps/giga_connectome/issues/135) 34 | - [DOCS] Remove `meas` entity in timeseries outputs in the documentation. (@htwangtw) [#136](https://github.com/bids-apps/giga_connectome/issues/136) 35 | - [FIX] Incompatible types in assignment of variable `mask_array` in `giga_connectome/mask.py`. (@htwangtw) [#189](https://github.com/bids-apps/giga_connectome/pull/189) 36 | - [FIX] ICA-AROMA implementation. (@htwangtw) [#211](https://github.com/bids-apps/giga_connectome/issues/211) 37 | 38 | ### Enhancements 39 | 40 | - [DOCS] Improve the advanced usage example with test data and executable code. (@htwangtw) [#215](https://github.com/bids-apps/giga_connectome/pull/215) 41 | - [DOCS] Clarify fMRIPrep version support. (@htwangtw) [#227](https://github.com/bids-apps/giga_connectome/pull/227) 42 | 43 | ### Changes 44 | 45 | - [EHN] Merge `atlas-` and the atlas description `desc-` into one field `seg-` defined under 'Derivatives-Image data type' in BIDS.
(@htwangtw) [#143](https://github.com/bids-apps/giga_connectome/issues/143) 46 | - [EHN] Working directory is now renamed to `atlases/` to reflect the atlases directory mentioned in BEP017. 47 | - [EHN] Use hyphens instead of underscores for CLI arguments `participant-label` and `smoothing-fwhm`. The underscore variation will be deprecated in 0.7.0. (@htwangtw) [#190](https://github.com/bids-apps/giga_connectome/pull/190) 48 | 49 | ## 0.5.0 50 | 51 | Released April 2024 52 | 53 | ### New 54 | 55 | - [EHN] Add Harvard-Oxford atlas. (@htwangtw) [#117](https://github.com/bids-apps/giga_connectome/issues/117) 56 | - [DOCS] Improved documentation on using customised configuration files. (@htwangtw) 57 | - [ENH] Use logger instead of print statements. (@Remi-Gau) 58 | 59 | ### Fixes 60 | 61 | - [FIX] Bump nilearn version to 0.10.2 to fix issues [#26](https://github.com/bids-apps/giga_connectome/issues/26) and [#27](https://github.com/bids-apps/giga_connectome/issues/27). (@Remi-Gau) 62 | 63 | ### Enhancements 64 | 65 | - [ENH] Reduce the docker image size. (@htwangtw) 66 | 67 | ### Changes 68 | 69 | - [ENH] Make output more BIDS compliant. (@Remi-Gau) 70 | - [MAINT] Pin dependencies for docker build for better reproducibility. (@Remi-Gau) 71 | - [MAINT] Automate docker build and release. (@Remi-Gau, @htwangtw) 72 | - [DOCS] Update the release and post-release procedure. (@htwangtw) 73 | 74 | ## 0.4.0 75 | 76 | Released August 2023 77 | 78 | ### New 79 | 80 | - [DOCS] Documentation. What you are reading here. (@htwangtw) 81 | - [EHN] BIDS filter. (@htwangtw) 82 | - [EHN] Calculate average intranetwork correlation (NIAK feature). (@htwangtw) 83 | - [EHN] Add TR as an attribute to time series data. (@htwangtw) 84 | - [MAINT] Fully functional CI and documentation. (@htwangtw) 85 | 86 | ### Fixes 87 | 88 | ### Changes 89 | 90 | - [EHN] If an output file already exists, it will be overwritten and a warning is logged. (@Remi-Gau) 91 | - [EHN] Default atlas is now MIST. (@htwangtw) 92 | - [EHN] When using the `participant` analysis level, the output is one file per subject, rather than one file per scan. (@htwangtw) 93 | 94 | ## 0.3.0 95 | 96 | Released June 2023 97 | 98 | ### New 99 | 100 | - [EHN] Expose some preprocessing options: standardization and smoothing. (@htwangtw) 101 | - [EHN] `--version` flag. (@htwangtw) 102 | 103 | ### Fixes 104 | 105 | ### Changes 106 | 107 | ## 0.2.0 108 | 109 | Released June 2023 110 | 111 | ### New 112 | 113 | ### Fixes 114 | 115 | ### Changes 116 | 117 | - [FIX] Detect different affine matrices from the input and use the most common one as the group mask resampling target. (@htwangtw) 118 | 119 | 120 | ## 0.1.1 121 | 122 | Released May 2023. Hot fix for 0.1.0. 123 | 124 | ### New 125 | 126 | ### Fixes 127 | - [FIX] Lock `pybids` and `templateflow` versions as new releases of `pybids` lead to conflicts. (@htwangtw) 128 | 129 | ### Changes 130 | 131 | ## 0.1.0 132 | 133 | Released May 2023 134 | 135 | First working version of the BIDS-app. (@htwangtw) 136 | 137 | ### New 138 | 139 | - [MAINT] Working CI for unit tests (@htwangtw). 140 | - [EHN] Dockerfile (@Hyedryn).
141 | 142 | ### Fixes 143 | 144 | ### Changes 145 | -------------------------------------------------------------------------------- /.github/workflows/test.yml: -------------------------------------------------------------------------------- 1 | --- 2 | on: 3 | push: 4 | branches: 5 | - main 6 | - maint/* 7 | tags: 8 | - '*' 9 | pull_request: 10 | branches: 11 | - main 12 | - maint/* 13 | paths: ['giga_connectome/*', 'pyproject.toml', 'requirements.txt', 'Dockerfile'] 14 | 15 | defaults: 16 | run: 17 | shell: bash 18 | 19 | concurrency: 20 | group: ${{ github.workflow }}-${{ github.ref }} 21 | cancel-in-progress: true 22 | 23 | permissions: 24 | contents: read 25 | 26 | jobs: 27 | check_skip_flags: 28 | name: Check skip flags 29 | runs-on: ubuntu-latest 30 | outputs: 31 | head-commit-message: ${{ steps.get_head_commit_message.outputs.headCommitMsg }} 32 | steps: 33 | - name: Get repo 34 | uses: actions/checkout@v6 35 | with: 36 | ref: ${{ github.event.pull_request.head.sha }} 37 | - name: Print head git commit message 38 | id: get_head_commit_message 39 | run: echo "headCommitMsg=$(git show -s --format=%s)" >> $GITHUB_OUTPUT 40 | 41 | download-test-data: 42 | runs-on: ubuntu-latest 43 | steps: 44 | - uses: actions/checkout@v6 45 | with: 46 | fetch-depth: 0 47 | - uses: actions/setup-python@v6 48 | - name: install tox 49 | run: pip install tox 50 | - uses: actions/cache@v4 51 | id: cache 52 | env: 53 | cache-name: ds000017 54 | with: 55 | path: /home/runner/work/giga_connectome/giga_connectome/giga_connectome/data/test_data 56 | key: ${{ env.cache-name }} 57 | 58 | - if: ${{ steps.cache.outputs.cache-hit != 'true' }} 59 | name: Download fmriprep derivative of ds000017 60 | id: download 61 | run: tox -e test_data 62 | 63 | build: 64 | runs-on: ubuntu-latest 65 | steps: 66 | - uses: actions/checkout@v6 67 | with: 68 | fetch-depth: 0 69 | - uses: actions/setup-python@v6 70 | with: 71 | python-version: 3 72 | - run: pip install --upgrade build twine 73 | - name: Build sdist and wheel 74 | run: python -m build 75 | - run: twine check dist/* 76 | - name: Upload sdist and wheel artifacts 77 | uses: actions/upload-artifact@v5 78 | with: 79 | name: dist 80 | path: dist/ 81 | - name: Build git archive 82 | run: mkdir archive && git archive -v -o archive/archive.tgz HEAD 83 | - name: Upload git archive artifact 84 | uses: actions/upload-artifact@v5 85 | with: 86 | name: archive 87 | path: archive/ 88 | 89 | test-package: 90 | runs-on: ubuntu-latest 91 | needs: [build] 92 | strategy: 93 | matrix: 94 | package: [wheel, sdist, archive] 95 | steps: 96 | - name: Download sdist and wheel artifacts 97 | if: matrix.package != 'archive' 98 | uses: actions/download-artifact@v6 99 | with: 100 | name: dist 101 | path: dist/ 102 | - name: Download git archive artifact 103 | if: matrix.package == 'archive' 104 | uses: actions/download-artifact@v6 105 | with: 106 | name: archive 107 | path: archive/ 108 | - uses: actions/setup-python@v6 109 | with: 110 | python-version: 3 111 | - name: Display Python version 112 | run: python -c "import sys; print(sys.version)" 113 | - name: Update pip 114 | run: pip install --upgrade pip 115 | - name: Install wheel 116 | if: matrix.package == 'wheel' 117 | run: pip install dist/*.whl 118 | - name: Install sdist 119 | if: matrix.package == 'sdist' 120 | run: pip install dist/*.tar.gz 121 | - name: Install archive 122 | if: matrix.package == 'archive' 123 | run: pip install archive/archive.tgz 124 | 125 | test-coverage: 126 | runs-on: ubuntu-latest 127 | needs: [build, 
download-test-data, check_skip_flags] 128 | strategy: 129 | fail-fast: false 130 | matrix: 131 | python-version: ['3.9', '3.10', '3.11', '3.12', '3.13'] 132 | steps: 133 | - uses: actions/checkout@v6 134 | - name: Set up Python ${{ matrix.python-version }} 135 | uses: actions/setup-python@v6 136 | with: 137 | python-version: ${{ matrix.python-version }} 138 | allow-prereleases: true 139 | 140 | - name: Restore cached data ds000017 141 | id: cache 142 | uses: actions/cache/restore@v4 143 | with: 144 | path: /home/runner/work/giga_connectome/giga_connectome/giga_connectome/data/test_data 145 | key: ds000017 146 | 147 | - name: Install build dependencies 148 | run: python -m pip install --upgrade pip 149 | - name: Install task package 150 | run: pip install -e .[test] 151 | 152 | - name: Test with pytest - ignore smoke test. 153 | if: ${{ !contains(needs.check_skip_flags.outputs.head-commit-message, 'full_test') }} 154 | run: | 155 | pytest -m "not smoke" --cov=giga_connectome --cov-report=xml --pyargs giga_connectome 156 | 157 | - name: Full test - run all the test to generate accurate coverage report. 158 | if: ${{ contains(needs.check_skip_flags.outputs.head-commit-message, 'full_test') }} || ${{ github.event.pull_request.merged }} 159 | run: pytest --cov=giga_connectome --cov-report=xml --pyargs giga_connectome 160 | 161 | - uses: codecov/codecov-action@v5 162 | if: ${{ always() }} 163 | with: 164 | token: ${{ secrets.CODECOV_TOKEN }} 165 | -------------------------------------------------------------------------------- /.github/workflows/docker.yml: -------------------------------------------------------------------------------- 1 | --- 2 | name: docker 3 | 4 | on: 5 | push: 6 | branches: ['main'] 7 | tags: ['*'] 8 | pull_request: 9 | branches: ['*'] 10 | paths: 11 | - 'Dockerfile' 12 | - 'requirements.txt' 13 | - '.github/workflows/docker.yml' 14 | - 'tools/*' 15 | release: 16 | types: [published] 17 | 18 | concurrency: 19 | group: ${{ github.workflow }}-${{ github.ref }} 20 | cancel-in-progress: true 21 | 22 | permissions: 23 | contents: read 24 | 25 | env: 26 | USER_NAME: bids 27 | REPO_NAME: giga_connectome 28 | DATA: /home/runner/work/giga_connectome/giga_connectome/giga_connectome/data/test_data 29 | IMAGE: /home/runner/work/giga_connectome/giga_connectome/docker 30 | 31 | jobs: 32 | download-test-data: 33 | runs-on: ubuntu-latest 34 | steps: 35 | - uses: actions/checkout@v6 36 | with: 37 | fetch-depth: 0 38 | - uses: actions/cache@v4 39 | id: cache 40 | with: 41 | path: ${{ env.DATA }} 42 | key: data 43 | - if: ${{ steps.cache.outputs.cache-hit != 'true' }} 44 | uses: actions/setup-python@v6 45 | - if: ${{ steps.cache.outputs.cache-hit != 'true' }} 46 | name: Download fmriprep derivative of ds000017 47 | run: | 48 | pip install tox 49 | tox -e test_data 50 | 51 | docker-build: 52 | runs-on: ubuntu-latest 53 | steps: 54 | - uses: actions/checkout@v6 55 | with: 56 | fetch-depth: 0 57 | - name: Build the Docker image 58 | run: | 59 | docker build . 
--file Dockerfile --tag ${{env.USER_NAME}}/${{env.REPO_NAME}} 60 | mkdir -p ${{ env.IMAGE }} 61 | docker save "${{env.USER_NAME}}/${{env.REPO_NAME}}" > "${{ env.IMAGE }}/image.tar" 62 | - name: Check image size and version 63 | run: | 64 | docker images 65 | docker run --rm ${{env.USER_NAME}}/${{env.REPO_NAME}} --version 66 | - name: Upload docker artifacts 67 | uses: actions/upload-artifact@v5 68 | with: 69 | name: docker 70 | path: ${{ env.IMAGE }} 71 | 72 | docker-run: 73 | runs-on: ubuntu-latest 74 | needs: [download-test-data, docker-build] 75 | strategy: 76 | matrix: 77 | atlas: ['Schaefer2018', 'MIST', 'DiFuMo', 'HarvardOxfordCortical', 'HarvardOxfordCorticalSymmetricSplit', 'HarvardOxfordSubcortical'] 78 | steps: 79 | - uses: actions/checkout@v6 80 | with: 81 | fetch-depth: 0 82 | - name: Restore docker image 83 | uses: actions/download-artifact@v6 84 | with: 85 | name: docker 86 | path: ${{ env.IMAGE }} 87 | - name: Restore cached data ds000017 88 | uses: actions/cache/restore@v4 89 | with: 90 | path: ${{ env.DATA }} 91 | key: data 92 | - name: Test the Docker image 93 | run: | 94 | docker load -i ${{ env.IMAGE }}/image.tar 95 | docker run --rm \ 96 | -v ${{ env.DATA }}:/test_data \ 97 | -v ./outputs:/outputs \ 98 | -v ./outputs/atlases:/atlases \ 99 | ${{env.USER_NAME}}/${{env.REPO_NAME}} \ 100 | /test_data/ds000017-fmriprep22.0.1-downsampled-nosurface \ 101 | /outputs \ 102 | participant \ 103 | -a /atlases \ 104 | --atlas ${{ matrix.atlas }} \ 105 | --participant_label 1 \ 106 | --reindex-bids 107 | 108 | - name: Upload output artifact 109 | uses: actions/upload-artifact@v5 110 | with: 111 | name: connectome_${{ matrix.atlas }} 112 | path: ./outputs/ 113 | 114 | docker-push: 115 | runs-on: ubuntu-latest 116 | needs: [docker-run] 117 | defaults: 118 | run: 119 | shell: bash -el {0} 120 | if: ${{ github.ref == 'refs/heads/main' || github.ref_type == 'tag' }} 121 | steps: 122 | - uses: actions/checkout@v6 123 | with: 124 | fetch-depth: 0 125 | - name: Restore docker image 126 | uses: actions/download-artifact@v6 127 | with: 128 | name: docker 129 | path: ${{ env.IMAGE }} 130 | - name: Log in to Docker Hub 131 | uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef 132 | with: 133 | username: ${{ secrets.DOCKERHUB_USERNAME }} 134 | password: ${{ secrets.DOCKERHUB_TOKEN }} 135 | - name: Load image 136 | run: docker load -i ${{ env.IMAGE }}/image.tar 137 | - name: Push unstable to dockerhub on tags or on main 138 | run: | 139 | echo "Pushing unstable versions to DockerHub" 140 | unstable="${{env.USER_NAME}}/${{env.REPO_NAME}}:unstable" 141 | docker tag "${{env.USER_NAME}}/${{env.REPO_NAME}}" "${unstable}" 142 | docker push "${unstable}" 143 | - name: Push stable release to dockerhub on tags only 144 | if: ${{ github.ref_type == 'tag' }} 145 | run: | 146 | echo "Pushing stable and latest versions to DockerHub for latest and ${{ github.ref_name }}" 147 | 148 | unstable="${{env.USER_NAME}}/${{env.REPO_NAME}}:unstable" 149 | latest="${{env.USER_NAME}}/${{env.REPO_NAME}}:latest" 150 | docker tag "${unstable}" "${latest}" 151 | docker push "${latest}" 152 | 153 | tagged_release="${{env.USER_NAME}}/${{env.REPO_NAME}}:${{ github.ref_name }}" 154 | docker tag "${unstable}" "${tagged_release}" 155 | docker push "${tagged_release}" 156 | -------------------------------------------------------------------------------- /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 
4 | 5 | We as members, contributors, and leaders pledge to make participation in our 6 | community a harassment-free experience for everyone, regardless of age, body 7 | size, visible or invisible disability, ethnicity, sex characteristics, gender 8 | identity and expression, level of experience, education, socioeconomic status, 9 | nationality, personal appearance, race, religion, or sexual identity 10 | and orientation. 11 | 12 | We pledge to act and interact in ways that contribute to an open, welcoming, 13 | diverse, inclusive, and healthy community. 14 | 15 | ## Our Standards 16 | 17 | Examples of behavior that contributes to a positive environment for our 18 | community include: 19 | 20 | * Demonstrating empathy and kindness toward other people 21 | * Being respectful of differing opinions, viewpoints, and experiences 22 | * Giving and gracefully accepting constructive feedback 23 | * Accepting responsibility and apologizing to those affected by our mistakes, 24 | and learning from the experience 25 | * Focusing on what is best not just for us as individuals, but for the 26 | overall community 27 | 28 | Examples of unacceptable behavior include: 29 | 30 | * The use of sexualized language or imagery, and sexual attention or 31 | advances of any kind 32 | * Trolling, insulting or derogatory comments, and personal or political attacks 33 | * Public or private harassment 34 | * Publishing others' private information, such as a physical or email 35 | address, without their explicit permission 36 | * Other conduct which could reasonably be considered inappropriate in a 37 | professional setting 38 | 39 | ## Enforcement Responsibilities 40 | 41 | Community leaders are responsible for clarifying and enforcing our standards of 42 | acceptable behavior and will take appropriate and fair corrective action in 43 | response to any behavior that they deem inappropriate, threatening, offensive, 44 | or harmful. 45 | 46 | Community leaders have the right and responsibility to remove, edit, or reject 47 | comments, commits, code, wiki edits, issues, and other contributions that are 48 | not aligned to this Code of Conduct, and will communicate reasons for moderation 49 | decisions when appropriate. 50 | 51 | ## Scope 52 | 53 | This Code of Conduct applies within all community spaces, and also applies when 54 | an individual is officially representing the community in public spaces. 55 | Examples of representing our community include using an official e-mail address, 56 | posting via an official social media account, or acting as an appointed 57 | representative at an online or offline event. 58 | 59 | ## Enforcement 60 | 61 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 62 | reported to the community leaders responsible for enforcement at 63 | htwangtw@gmail.com. 64 | All complaints will be reviewed and investigated promptly and fairly. 65 | 66 | All community leaders are obligated to respect the privacy and security of the 67 | reporter of any incident. 68 | 69 | ## Enforcement Guidelines 70 | 71 | Community leaders will follow these Community Impact Guidelines in determining 72 | the consequences for any action they deem in violation of this Code of Conduct: 73 | 74 | ### 1. Correction 75 | 76 | **Community Impact**: Use of inappropriate language or other behavior deemed 77 | unprofessional or unwelcome in the community. 
78 | 79 | **Consequence**: A private, written warning from community leaders, providing 80 | clarity around the nature of the violation and an explanation of why the 81 | behavior was inappropriate. A public apology may be requested. 82 | 83 | ### 2. Warning 84 | 85 | **Community Impact**: A violation through a single incident or series 86 | of actions. 87 | 88 | **Consequence**: A warning with consequences for continued behavior. No 89 | interaction with the people involved, including unsolicited interaction with 90 | those enforcing the Code of Conduct, for a specified period of time. This 91 | includes avoiding interactions in community spaces as well as external channels 92 | like social media. Violating these terms may lead to a temporary or 93 | permanent ban. 94 | 95 | ### 3. Temporary Ban 96 | 97 | **Community Impact**: A serious violation of community standards, including 98 | sustained inappropriate behavior. 99 | 100 | **Consequence**: A temporary ban from any sort of interaction or public 101 | communication with the community for a specified period of time. No public or 102 | private interaction with the people involved, including unsolicited interaction 103 | with those enforcing the Code of Conduct, is allowed during this period. 104 | Violating these terms may lead to a permanent ban. 105 | 106 | ### 4. Permanent Ban 107 | 108 | **Community Impact**: Demonstrating a pattern of violation of community 109 | standards, including sustained inappropriate behavior, harassment of an 110 | individual, or aggression toward or disparagement of classes of individuals. 111 | 112 | **Consequence**: A permanent ban from any sort of public interaction within 113 | the community. 114 | 115 | ## Attribution 116 | 117 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], 118 | version 2.0, available at 119 | https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. 120 | 121 | Community Impact Guidelines were inspired by [Mozilla's code of conduct 122 | enforcement ladder](https://github.com/mozilla/diversity). 123 | 124 | [homepage]: https://www.contributor-covenant.org 125 | 126 | For answers to common questions about this code of conduct, see the FAQ at 127 | https://www.contributor-covenant.org/faq. Translations are available at 128 | https://www.contributor-covenant.org/translations. 129 | -------------------------------------------------------------------------------- /giga_connectome/connectome.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | from pathlib import Path 4 | from typing import Any 5 | 6 | import numpy as np 7 | from nibabel import Nifti1Image 8 | from nilearn.connectome import ConnectivityMeasure 9 | from nilearn.image import load_img 10 | from nilearn.maskers import NiftiMasker 11 | 12 | 13 | def build_size_roi( 14 | mask: np.ndarray[Any, Any], labels_roi: np.ndarray[Any, Any] 15 | ) -> np.ndarray[Any, np.dtype[Any]]: 16 | """Extract labels and sizes of ROIs given an atlas. 17 | The atlas parcels must be discrete segmentations. 18 | 19 | Adapted from: 20 | https://github.com/SIMEXP/niak/blob/master/commands/SI_processing/niak_build_size_roi.m 21 | [SIZE_ROI,LABELS_ROI] = BUILD_SIZE_ROI(MASK) 22 | 23 | Parameters 24 | ---------- 25 | mask : np.ndarray 26 | Mask of the ROI. Voxels belonging to no region are coded with 0, 27 | those belonging to region `I` are coded with `I` (`I` being a 28 | positive integer). 
29 | 30 | labels_roi : np.ndarray 31 | Labels of region I. 32 | 33 | Returns 34 | ------- 35 | np.ndarray 36 | An array containing the sizes of the ROIs. 37 | """ 38 | 39 | nb_roi = len(labels_roi) 40 | size_roi = np.zeros([nb_roi, 1]) 41 | 42 | for num_r in range(nb_roi): 43 | size_roi[num_r] = np.count_nonzero(mask == labels_roi[num_r]) 44 | 45 | return size_roi 46 | 47 | 48 | def calculate_intranetwork_correlation( 49 | correlation_matrix: np.ndarray[Any, Any], 50 | masker_labels: np.ndarray[Any, Any], 51 | time_series_atlas: np.ndarray[Any, Any], 52 | group_mask: str | Path | Nifti1Image, 53 | atlas_image: str | Path | Nifti1Image, 54 | ) -> tuple[np.ndarray[Any, Any], np.ndarray[Any, Any]]: 55 | """Calculate the average functional correlation within each parcel. 56 | Currently we only support discrete segmentations. 57 | 58 | Parameters 59 | ---------- 60 | correlation_matrix : np.array 61 | N by N Pearson's correlation matrix. 62 | 63 | masker_labels : np.array 64 | Labels for each parcel in the atlas. 65 | 66 | time_series_atlas : np.array 67 | Time series extracted from each parcel. 68 | 69 | group_mask : str | Path | Nifti1Image 70 | The group grey matter mask. 71 | 72 | atlas_image : str | Path | Nifti1Image 73 | 3D atlas image. 74 | 75 | Returns 76 | ------- 77 | tuple[np.ndarray, np.ndarray] 78 | A tuple containing the modified Pearson's correlation matrix with 79 | the diagonal replaced by the average correlation within each parcel, 80 | and an array of the computed average intranetwork correlations for 81 | each parcel. 82 | """ 83 | if isinstance(atlas_image, (str, Path)): 84 | atlas_image = load_img(atlas_image) 85 | 86 | if len(atlas_image.shape) > 3: 87 | raise NotImplementedError("Only support 3D discrete segmentations.") 88 | # flatten the atlas label image to a vector 89 | atlas_voxel_flatten = NiftiMasker( 90 | standardize=False, mask_img=group_mask 91 | ).fit_transform(atlas_image) 92 | size_parcels = build_size_roi(atlas_voxel_flatten, masker_labels) 93 | # calculate the standard deviation of time series in each parcel 94 | var_parcels = time_series_atlas.var(axis=0) 95 | var_parcels = np.reshape(var_parcels, (var_parcels.shape[0], 1)) 96 | # detect invalid parcels 97 | mask_empty = (size_parcels == 0) | (size_parcels == 1) 98 | 99 | # calculate average functional correlation within each parcel 100 | avg_intranetwork_correlation = ( 101 | (size_parcels * size_parcels) * var_parcels - size_parcels 102 | ) / (size_parcels * (size_parcels - 1)) 103 | avg_intranetwork_correlation[mask_empty] = 0 104 | avg_intranetwork_correlation = avg_intranetwork_correlation.reshape(-1) 105 | # replace the diagonal with average functional correlation 106 | idx_diag = np.diag_indices(correlation_matrix.shape[0]) 107 | correlation_matrix[idx_diag] = avg_intranetwork_correlation 108 | return correlation_matrix, avg_intranetwork_correlation 109 | 110 | 111 | def generate_timeseries_connectomes( 112 | masker: NiftiMasker, 113 | denoised_img: Nifti1Image, 114 | group_mask: str | Path, 115 | correlation_measure: ConnectivityMeasure, 116 | calculate_average_correlation: bool, 117 | ) -> tuple[np.ndarray[Any, Any], np.ndarray[Any, Any], NiftiMasker]: 118 | """Generate timeseries-based connectomes from functional data. 119 | 120 | Parameters 121 | ---------- 122 | masker : NiftiMasker 123 | NiftiMasker instance for extracting time series. 124 | 125 | denoised_img : Nifti1Image 126 | Denoised functional image. 127 | 128 | group_mask : str | Path 129 | Path to the group grey matter mask. 
130 | 131 | correlation_measure : ConnectivityMeasure 132 | Connectivity measure for computing correlations. 133 | 134 | calculate_average_correlation : bool 135 | Flag indicating whether to calculate average parcel correlations. 136 | 137 | Returns 138 | ------- 139 | tuple[np.ndarray, np.ndarray] 140 | A tuple containing the correlation matrix and time series atlas. 141 | """ 142 | time_series_atlas = masker.fit_transform(denoised_img) 143 | correlation_matrix = correlation_measure.fit_transform( 144 | [time_series_atlas] 145 | )[0] 146 | masker_labels = masker.labels_ 147 | # average correlation within each parcel 148 | if calculate_average_correlation: 149 | ( 150 | correlation_matrix, 151 | _, 152 | ) = calculate_intranetwork_correlation( 153 | correlation_matrix, 154 | masker_labels, 155 | time_series_atlas, 156 | group_mask, 157 | masker.labels_img_, 158 | ) 159 | # convert to float 32 instead of 64 160 | time_series_atlas = time_series_atlas.astype(np.float32) 161 | correlation_matrix = correlation_matrix.astype(np.float32) 162 | return correlation_matrix, time_series_atlas, masker 163 | -------------------------------------------------------------------------------- /giga_connectome/run.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | from typing import Any 3 | import argparse 4 | from pathlib import Path 5 | from typing import Sequence 6 | 7 | from giga_connectome import __version__ 8 | from giga_connectome.atlas import get_atlas_labels 9 | from giga_connectome.logger import gc_logger 10 | 11 | gc_log = gc_logger() 12 | 13 | preset_atlas = get_atlas_labels() 14 | deprecations = { 15 | # parser attribute name: 16 | # (replacement flag, version slated to be removed in) 17 | "work-dir": ("--atlases-dir", "0.7.0"), 18 | } 19 | 20 | 21 | class DeprecatedAction(argparse.Action): 22 | def __call__( 23 | self, 24 | parser: argparse.ArgumentParser, 25 | namespace: argparse.Namespace, 26 | values: str | Sequence[Any] | None, 27 | option_string: str | None = None, 28 | ) -> None: 29 | new_opt, rem_vers = deprecations.get(self.dest, (None, None)) 30 | msg = ( 31 | f"{self.option_strings} has been deprecated and will be removed " 32 | f"in {rem_vers or 'a later version'}." 33 | ) 34 | if new_opt: 35 | msg += f" Please use `{new_opt}` instead." 36 | gc_log.warning(msg) 37 | delattr(namespace, self.dest) 38 | 39 | 40 | def global_parser() -> argparse.ArgumentParser: 41 | parser = argparse.ArgumentParser( 42 | formatter_class=argparse.RawTextHelpFormatter, 43 | description=( 44 | "Generate denoised timeseries and Pearson's correlation based " 45 | "connectomes from fmriprep processed dataset." 46 | ), 47 | ) 48 | parser.add_argument( 49 | "bids_dir", 50 | action="store", 51 | type=Path, 52 | help="The directory with the input dataset formatted according to the " 53 | "BIDS standard (i.e. unaltered fMRIPrep derivative directory).", 54 | ) 55 | parser.add_argument( 56 | "output_dir", 57 | action="store", 58 | type=Path, 59 | help="The directory where the output files should be stored.", 60 | ) 61 | parser.add_argument( 62 | "analysis_level", 63 | help="Level of the analysis that will be performed. 
Only participant" 64 | "level is available.", 65 | choices=["participant"], 66 | ) 67 | parser.add_argument( 68 | "-v", "--version", action="version", version=__version__ 69 | ) 70 | parser.add_argument( 71 | "--participant-label", 72 | "--participant_label", 73 | help="The label(s) of the participant(s) that should be analyzed. The " 74 | "label corresponds to sub- from the BIDS spec (so " 75 | "it does not include 'sub-'). If this parameter is not provided all " 76 | "subjects should be analyzed. Multiple participants can be specified " 77 | "with a space separated list.", 78 | nargs="+", 79 | ) 80 | parser.add_argument( 81 | "-a", 82 | "--atlases-dir", 83 | action="store", 84 | type=Path, 85 | default=Path("./atlases"), 86 | help="Path where subject specific segmentations are stored.", 87 | ) 88 | parser.add_argument( 89 | "-w", 90 | "--work-dir", 91 | action=DeprecatedAction, 92 | help="This argument is deprecated. Please use --atlases-dir instead.", 93 | ) 94 | parser.add_argument( 95 | "--atlas", 96 | help="The choice of atlas for time series extraction. Default atlas " 97 | f"choices are: {preset_atlas}. User can pass " 98 | "a path to a json file containing configuration for their own choice " 99 | "of atlas. The default is 'Schaefer2018'.", 100 | default="Schaefer2018", 101 | ) 102 | parser.add_argument( 103 | "--denoise-strategy", 104 | help="The choice of post-processing for denoising. The default " 105 | "choices are: 'simple', 'simple+gsr', 'scrubbing.2', " 106 | "'scrubbing.2+gsr', 'scrubbing.5', 'scrubbing.5+gsr', 'acompcor50', " 107 | "'icaaroma'. User can pass a path to a json file containing " 108 | "configuration for their own choice of denoising strategy. The default" 109 | "is 'simple'.", 110 | default="simple", 111 | ) 112 | parser.add_argument( 113 | "--smoothing_fwhm", 114 | "--smoothing-fwhm", 115 | help="Size of the full-width at half maximum in millimeters of " 116 | "the spatial smoothing to apply to the signal. The default is 5.0.", 117 | type=float, 118 | default=5.0, 119 | ) 120 | parser.add_argument( 121 | "--reindex-bids", 122 | help="Reindex BIDS data set, even if layout has already been created.", 123 | action="store_true", 124 | ) 125 | parser.add_argument( 126 | "--bids-filter-file", 127 | type=Path, 128 | help="A JSON file describing custom BIDS input filters using PyBIDS. " 129 | "We use the same format as described in fMRIPrep documentation: " 130 | "https://fmriprep.org/en/latest/faq.html#" 131 | "how-do-i-select-only-certain-files-to-be-input-to-fmriprep " 132 | "\nHowever, the query filed should always be 'bold'", 133 | ) 134 | parser.add_argument( 135 | "--calculate-intranetwork-average-correlation", 136 | help="Calculate average correlation within each network. This is a " 137 | "python implementation of the matlab code from the NIAK connectome " 138 | "pipeline (option A). The default is False.", 139 | action="store_true", 140 | ) 141 | parser.add_argument( 142 | "--verbosity", 143 | help=""" 144 | Verbosity level. 
145 | """, 146 | required=False, 147 | choices=[0, 1, 2, 3], 148 | default=2, 149 | type=int, 150 | nargs=1, 151 | ) 152 | return parser 153 | 154 | 155 | def main(argv: None | Sequence[str] = None) -> None: 156 | """Entry point.""" 157 | parser = global_parser() 158 | 159 | args = parser.parse_args(argv) 160 | 161 | # local import to speed up CLI response 162 | # when just asking for --help or --version 163 | from giga_connectome.workflow import workflow 164 | 165 | workflow(args) 166 | 167 | 168 | if __name__ == "__main__": 169 | raise RuntimeError( 170 | "run.py should not be run directly;\n" 171 | "Please `pip install` and use the `giga_connectome` command" 172 | ) 173 | -------------------------------------------------------------------------------- /docs/source/contributing.md: -------------------------------------------------------------------------------- 1 | # Contribution 2 | 3 | ## Setting up your environment for development 4 | 5 | 1. Fork the repository from GitHub and clone your fork locally (see [here](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account) to set up your SSH key): 6 | 7 | ```bash 8 | git clone git@github.com:/giga_connectome.git 9 | ``` 10 | 11 | 2. Pull the submodules for fetching atlases. 12 | 13 | ```bash 14 | git submodule update --init --recursive 15 | ``` 16 | 17 | 3. Set up a virtual environment to work in using whichever environment management tool you're used to and activate it. For example: 18 | 19 | ```bash 20 | python3 -m venv giga_connectome 21 | source giga_connectome/bin/activate 22 | ``` 23 | 24 | 4. Install the development version of the project. This will include all the dependencies necessary for developers. For details on the packages installed, see `pyproject.toml`. 25 | 26 | ```bash 27 | pip install -e .[dev] 28 | ``` 29 | 30 | 5. Get the atlases packaged in the container. 31 | 32 | ```bash 33 | python ./tools/download_templates.py 34 | ``` 35 | 36 | You will find the files in the local user's home directory under `~/.cache/templateflow/`. 37 | 38 | 6. Install the data required for testing from Zenodo. 39 | 40 | This can be done using tox by running: 41 | 42 | ```bash 43 | tox -e test_data 44 | ``` 45 | 46 | 7. Install pre-commit hooks to run all the checks before each commit. 47 | 48 | ```bash 49 | pre-commit install 50 | ``` 51 | 52 | ## Contributing to code 53 | 54 | This is a very generic workflow. 55 | 56 | 1. Comment on an existing issue or open a new issue referencing your addition. 57 | 58 | :::{tip} 59 | Review and discussion on new code can begin well before the work is complete, and the more discussion the better! 60 | The development team may prefer a different path than you've outlined, so it's better to discuss it and get approval at the early stage of your work. 61 | ::: 62 | 63 | 2. On your fork, create a new branch from main: 64 | 65 | ```bash 66 | git checkout -b your_branch 67 | ``` 68 | 69 | 3. Make the changes, lint, and format. 70 | 71 | 4. Commit your changes on this branch. 72 | 73 | If you want to make sure all the tests will be run by GitHub continuous integration, 74 | make sure that your commit message contains `full_test`. 75 | 76 | 5. Run the tests locally; you can run specific tests to speed up the process: 77 | 78 | ```bash 79 | pytest -v giga_connectome/tests/test_connectome.py::test_calculate_intranetwork_correlation 80 | ``` 81 | 82 | 6. Push your changes to your online fork.
If this is the first commit, you might want to set up the remote tracking: 83 | 84 | ```bash 85 | git push origin HEAD --set-upstream 86 | ``` 87 | In the future you can simply do: 88 | 89 | ```bash 90 | git push 91 | ``` 92 | 7. Submit a pull request from your fork of the repository. 93 | 94 | 8. Check that all continuous integration tests pass. 95 | 96 | ## Contributing to documentation 97 | 98 | The workflow is the same as code contributions, with some minor differences. 99 | 100 | 1. Install the `[docs]` dependencies. 101 | 102 | ```bash 103 | pip install -e '.[docs]' 104 | ``` 105 | 106 | 2. After making changes, build the docs locally: 107 | 108 | ```bash 109 | cd docs 110 | make html 111 | ``` 112 | 113 | 3. Submit your changes. 114 | 115 | ## Writing a PR 116 | 117 | When opening a pull request, please use one of the following prefixes: 118 | 119 | - **[ENH]** for enhancements 120 | - **[FIX]** for bug fixes 121 | - **[TEST]** for new or updated tests 122 | - **[DOCS]** for new or updated documentation 123 | - **[STYL]** for stylistic changes 124 | - **[MAINT]** for refactoring existing code and any maintenance-related changes 125 | 126 | Pull requests should be submitted early and often (please don't mix too many unrelated changes within one PR)! 127 | If your pull request is not yet ready to be merged, please submit it as a draft PR. 128 | This tells the development team that your pull request is a "work-in-progress", and that you plan to continue working on it. 129 | 130 | Once your PR is ready, a member of the development team will review your changes to confirm that they can be merged into the main codebase. 131 | 132 | Once the PR is ready to merge, we will add you to our contributors page via the [All Contributors bot](https://allcontributors.org/) to celebrate your contributions. 133 | 134 | See the [All Contributors bot](https://allcontributors.org/docs/en/bot/usage) usage notes for adding contributors. 135 | 136 | ### Running the demo 137 | 138 | You can run a demo of the BIDS app by downloading some test data and atlases. 139 | 140 | Run the following from the root of the repository. 141 | 142 | ```bash 143 | pip install tox 144 | tox -e test_data 145 | python ./tools/download_templates.py 146 | ``` 147 | 148 | ```bash 149 | giga_connectome \ 150 | --atlas Schaefer2018 \ 151 | --denoise-strategy simple \ 152 | --bids-filter-file giga_connectome/data/test_data/bids_filter.json \ 153 | --reindex-bids \ 154 | --calculate-intranetwork-average-correlation \ 155 | giga_connectome/data/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface \ 156 | giga_connectome/data/test_data/output \ 157 | participant 158 | ``` 159 | 160 | ## Prepare a release 161 | 162 | Currently this project is not pushed to PyPI. 163 | We simply tag the version on the repository so users can reference the version for installation. 164 | The release process will trigger a new tagged docker build of the software. 165 | 166 | Switch to a new branch locally: 167 | 168 | ```bash 169 | git checkout -b REL-x.y.z 170 | ``` 171 | First we need to prepare the release by updating the file `giga_connectome/docs/changes.md` to make sure all the new features, enhancements, and bug fixes are included in their respective sections. 172 | 173 | Finally, we need to change the title from x.y.z.dev to x.y.z: 174 | 175 | ```markdown 176 | ## x.y.z 177 | 178 | **Released MONTH YEAR** 179 | 180 | ### New 181 | ...
182 | ``` 183 | Add these changes and submit a PR: 184 | 185 | ```bash 186 | git add docs/ 187 | git commit -m "REL x.y.z" 188 | git push upstream REL-x.y.z 189 | ``` 190 | 191 | Once the PR has been reviewed and merged, pull from master and tag the merge commit: 192 | 193 | ```bash 194 | git checkout main 195 | git pull upstream main 196 | git tag x.y.z 197 | git push upstream --tags 198 | ``` 199 | 200 | ## Post-release 201 | 202 | At this point, the release has been made. 203 | 204 | We also need to create a new section in `giga_connectome/docs/changes.md` with a title and the usual New, Enhancements, Bug Fixes, and Changes sections for the version currently under development: 205 | 206 | ```markdown 207 | 208 | ## x.y.z+1.dev 209 | 210 | **Released MONTH YEAR** 211 | 212 | ### New 213 | 214 | ### Fixes 215 | 216 | ### Enhancements 217 | 218 | ### Changes 219 | ``` 220 | 221 | Based on contributing guidelines from the [STEMMRoleModels](https://github.com/KirstieJane/STEMMRoleModels/blob/gh-pages/CONTRIBUTING.md) project and [Nilearn contribution guidelines](https://nilearn.github.io/stable/development.html). 222 | -------------------------------------------------------------------------------- /giga_connectome/tests/test_utils.py: -------------------------------------------------------------------------------- 1 | from pathlib import Path 2 | 3 | import pytest 4 | from bids.tests import get_test_data_path 5 | from pkg_resources import resource_filename 6 | 7 | from nilearn._utils.data_gen import create_fake_bids_dataset 8 | from giga_connectome.denoise import get_denoise_strategy 9 | 10 | 11 | from giga_connectome import utils 12 | 13 | 14 | def test_prepare_bidsfilter_and_template(): 15 | # regular strategy and no user bids filter 16 | strategy = get_denoise_strategy("simple") 17 | user_bids_filter = None 18 | template, bids_filters = utils.prepare_bidsfilter_and_template( 19 | strategy, user_bids_filter 20 | ) 21 | assert template == "MNI152NLin2009cAsym" 22 | assert bids_filters == {} 23 | 24 | # icaaroma and no user bids filter 25 | strategy = get_denoise_strategy("icaaroma") 26 | template, bids_filters = utils.prepare_bidsfilter_and_template( 27 | strategy, user_bids_filter 28 | ) 29 | assert template == "MNI152NLin6Asym" 30 | assert type(bids_filters) is dict 31 | assert bids_filters["bold"]["desc"] == "smoothAROMAnonaggr" 32 | assert bids_filters["mask"]["space"] == "MNI152NLin2009cAsym" 33 | 34 | # icaaroma and user bids filter 35 | user_bids_filter = { 36 | "bold": {"task": "probabilisticclassification", "run": "1"}, 37 | } 38 | template, bids_filters = utils.prepare_bidsfilter_and_template( 39 | strategy, user_bids_filter 40 | ) 41 | assert template == "MNI152NLin6Asym" 42 | assert type(bids_filters) is dict 43 | assert bids_filters["bold"]["task"] == "probabilisticclassification" 44 | assert bids_filters["bold"]["desc"] == "smoothAROMAnonaggr" 45 | assert bids_filters["mask"]["space"] == "MNI152NLin2009cAsym" 46 | 47 | 48 | def test_get_bids_images(): 49 | subjects = ["1"] 50 | template = "MNI152NLin2009cAsym" 51 | bids_dir = resource_filename( 52 | "giga_connectome", 53 | "data/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface", 54 | ) 55 | reindex_bids = True 56 | user_bids_filters = { 57 | "bold": {"task": "probabilisticclassification", "run": "1"}, 58 | } # task and run should apply to both mask and bold 59 | subj_data, _ = utils.get_bids_images( 60 | subjects, template, bids_dir, reindex_bids, user_bids_filters 61 | ) 62 | assert len(subj_data["bold"]) == 
len(subj_data["mask"]) 63 | 64 | # test for icaaroma 65 | strategy = get_denoise_strategy("icaaroma") 66 | template, bids_filters = utils.prepare_bidsfilter_and_template( 67 | strategy, user_bids_filters 68 | ) 69 | print(bids_filters) 70 | subj_data, _ = utils.get_bids_images( 71 | subjects, template, bids_dir, reindex_bids, bids_filters 72 | ) 73 | print(subj_data) 74 | assert len(subj_data["bold"]) == len(subj_data["mask"]) 75 | assert ( 76 | all( 77 | "desc-smoothAROMAnonaggr_bold.nii.gz" in img.path 78 | for img in subj_data["bold"] 79 | ) 80 | == True 81 | ) 82 | 83 | 84 | def test_check_check_filter(): 85 | """Unit test for utils.check_filter.""" 86 | correct_filter = {"bold": {"suffix": "bold"}} 87 | assert utils.check_filter(correct_filter) == correct_filter 88 | with pytest.raises(ValueError) as msg: 89 | utils.check_filter({"bold": {"suffix": "bold"}, "dseg": {"run": "1"}}) 90 | assert "dseg" in str(msg.value) 91 | 92 | 93 | @pytest.mark.parametrize( 94 | "source_file", 95 | [ 96 | "sub-01_ses-ah_task-rest_run-1_space-MNIfake_res-2_desc-preproc_bold.nii.gz", 97 | "sub-01_ses-ah_task-rest_run-1_space-MNIfake_res-2_desc-brain_mask.nii.gz", 98 | ], 99 | ) 100 | def test_parse_bids_name(source_file): 101 | subject, session, specifier = utils.parse_bids_name(source_file) 102 | assert subject == "sub-01" 103 | assert session == "ses-ah" 104 | assert specifier == "ses-ah_task-rest_run-1" 105 | 106 | 107 | def test_get_subject_lists(): 108 | bids_test = Path(get_test_data_path()) 109 | # strip the sub- prefix 110 | subjects = utils.get_subject_lists(participant_label=["sub-01"]) 111 | assert len(subjects) == 1 112 | assert subjects[0] == "01" 113 | subjects = utils.get_subject_lists( 114 | participant_label=None, bids_dir=bids_test / "ds005_derivs/dummy" 115 | ) 116 | assert len(subjects) == 1 117 | assert subjects[0] == "01" 118 | 119 | 120 | @pytest.mark.parametrize( 121 | "suffix,extension,target", 122 | [ 123 | ( 124 | "timeseries", 125 | "tsv", 126 | "sub-01_ses-ah_task-rest_run-1_seg-fake100_desc-denoiseSimple_timeseries.tsv", 127 | ), 128 | ( 129 | "timeseries", 130 | "json", 131 | "sub-01_ses-ah_task-rest_run-1_desc-denoiseSimple_timeseries.json", 132 | ), 133 | ( 134 | "relmat", 135 | "tsv", 136 | "sub-01_ses-ah_task-rest_run-1_seg-fake100_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv", 137 | ), 138 | ( 139 | "report", 140 | "html", 141 | "sub-01_ses-ah_task-rest_run-1_seg-fake100_desc-denoiseSimple_report.html", 142 | ), 143 | ], 144 | ) 145 | def test_output_filename(suffix, extension, target): 146 | source_file = "sub-01_ses-ah_task-rest_run-1_space-MNIfake_res-2_desc-preproc_bold.nii.gz" 147 | 148 | generated_target = utils.output_filename( 149 | source_file=source_file, 150 | atlas="fake", 151 | suffix=suffix, 152 | extension=extension, 153 | strategy="simple", 154 | atlas_desc="100", 155 | ) 156 | assert target == generated_target 157 | 158 | 159 | @pytest.mark.parametrize( 160 | "source_file,atlas,atlas_desc,suffix,target", 161 | [ 162 | ( 163 | "sub-01_ses-ah_task-rest_run-1_space-MNIfake_res-2_desc-brain_mask.nii.gz", 164 | "fake", 165 | "100", 166 | "dseg", 167 | "sub-01_seg-fake100_dseg.nii.gz", 168 | ), 169 | ( 170 | "sub-01_ses-ah_task-rest_run-1_space-MNIfake_res-2_desc-brain_mask.nii.gz", 171 | "", 172 | "", 173 | "mask", 174 | "sub-01_space-MNIfake_res-2_label-GM_mask.nii.gz", 175 | ), 176 | ( 177 | "sub-01_ses-ah_task-rest_run-1_space-MNIfake_desc-brain_mask.nii.gz", 178 | "", 179 | "", 180 | "mask", 181 | "sub-01_space-MNIfake_label-GM_mask.nii.gz", 
182 | ), 183 | ], 184 | ) 185 | def test_output_filename_seg(source_file, atlas, atlas_desc, suffix, target): 186 | generated_target = utils.output_filename( 187 | source_file=source_file, 188 | atlas=atlas, 189 | suffix=suffix, 190 | extension="nii.gz", 191 | strategy="", 192 | atlas_desc=atlas_desc, 193 | ) 194 | assert target == generated_target 195 | 196 | 197 | def test_desc_entity_recognised(tmp_path): 198 | 199 | create_fake_bids_dataset(tmp_path, n_sub=1, n_ses=1, n_runs=[1, 1]) 200 | 201 | subjects = ["01"] 202 | template = "MNI" 203 | reindex_bids = True 204 | 205 | utils.get_bids_images( 206 | subjects, 207 | template, 208 | tmp_path / "bids_dataset" / "derivatives", 209 | reindex_bids, 210 | bids_filters=None, 211 | ) 212 | -------------------------------------------------------------------------------- /giga_connectome/atlas.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import json 4 | import os 5 | from pathlib import Path 6 | from typing import Any, Dict, List, TypedDict 7 | 8 | import nibabel as nib 9 | from nibabel import Nifti1Image 10 | from nilearn.image import resample_to_img 11 | from pkg_resources import resource_filename 12 | 13 | from giga_connectome.logger import gc_logger 14 | from giga_connectome.utils import progress_bar 15 | 16 | gc_log = gc_logger() 17 | 18 | ATLAS_CONFIG_TYPE = TypedDict( 19 | "ATLAS_CONFIG_TYPE", 20 | { 21 | "name": str, 22 | "parameters": Dict[str, str], 23 | "desc": List[str], 24 | "templateflow_dir": Any, 25 | }, 26 | ) 27 | 28 | ATLAS_SETTING_TYPE = TypedDict( 29 | "ATLAS_SETTING_TYPE", 30 | {"name": str, "file_paths": Dict[str, List[Path]], "type": str}, 31 | ) 32 | 33 | deprecations = { 34 | # deprecated atlas name: 35 | # (replacement, version slated to be removed in) 36 | "Schaefer20187Networks": ("Schaefer2018", "0.7.0"), 37 | } 38 | 39 | 40 | def load_atlas_setting( 41 | atlas: str | Path | dict[str, Any], 42 | ) -> ATLAS_SETTING_TYPE: 43 | """Load atlas details for the templateflow API to fetch. 44 | The setting file can be configured for atlases not included in the 45 | templateflow collections, but the user has to organise their files 46 | following templateflow conventions to use the tool. 47 | 48 | Parameters 49 | ---------- 50 | atlas: str or pathlib.Path or dict 51 | Path to atlas configuration json file or a python dictionary. 52 | It should contain the following fields: 53 | 54 | - name : name of the atlas. 55 | 56 | - parameters : BIDS entities that fit templateflow conventions \ 57 | except desc. 58 | 59 | - desc : templateflow entities description. Can be a list if the user \ 60 | wants to include multiple resolutions of the atlas. 61 | 62 | - templateflow_dir : Path to the templateflow directory. \ 63 | If null, use the system default. 64 | 65 | Returns 66 | ------- 67 | dict 68 | Paths to the atlas files.
69 | """ 70 | atlas_config = _check_altas_config(atlas) 71 | gc_log.info(atlas_config) 72 | 73 | # load template flow 74 | templateflow_dir = atlas_config.get("templateflow_dir") 75 | if isinstance(templateflow_dir, str): 76 | templateflow_dir = Path(templateflow_dir) 77 | if templateflow_dir.exists(): 78 | os.environ["TEMPLATEFLOW_HOME"] = str(templateflow_dir.resolve()) 79 | else: 80 | raise FileNotFoundError 81 | 82 | import templateflow 83 | 84 | parcellation = {} 85 | for d in atlas_config["desc"]: 86 | p = templateflow.api.get( 87 | **atlas_config["parameters"], 88 | raise_empty=True, 89 | desc=d, 90 | extension="nii.gz", 91 | ) 92 | if isinstance(p, Path): 93 | p = [p] 94 | parcellation[d] = p 95 | return { 96 | "name": atlas_config["name"], 97 | "file_paths": parcellation, 98 | "type": atlas_config["parameters"]["suffix"], 99 | } 100 | 101 | 102 | def resample_atlas_collection( 103 | subject_seg_file_names: list[str], 104 | atlas_config: ATLAS_SETTING_TYPE, 105 | subject_mask_dir: Path, 106 | subject_mask: Nifti1Image, 107 | ) -> list[Path]: 108 | """Resample a atlas collection to group grey matter mask. 109 | 110 | Parameters 111 | ---------- 112 | subject_atlas_file_names: list of str 113 | File names of subject atlas segmentations. 114 | 115 | atlas_config: dict 116 | Atlas name. Currently support Schaefer20187Networks, MIST, DiFuMo. 117 | 118 | subject_mask_dir: pathlib.Path 119 | Path to where the outputs are saved. 120 | 121 | subject_mask : nibabel.nifti1.Nifti1Image 122 | EPI (grey matter) mask for the subject. 123 | 124 | Returns 125 | ------- 126 | list of pathlib.Path 127 | Paths to subject specific segmentations created from atlases sampled 128 | to individual grey matter mask. 129 | """ 130 | gc_log.info("Resample atlas to group grey matter mask.") 131 | subject_seg = [] 132 | 133 | with progress_bar(text="Resampling atlases") as progress: 134 | task = progress.add_task( 135 | description="resampling", total=len(atlas_config["file_paths"]) 136 | ) 137 | 138 | for seg_file, desc in zip( 139 | subject_seg_file_names, atlas_config["file_paths"] 140 | ): 141 | parcellation = atlas_config["file_paths"][desc] 142 | parcellation_resampled = resample_to_img( 143 | parcellation, subject_mask, interpolation="nearest" 144 | ) 145 | save_path = subject_mask_dir / seg_file 146 | nib.save(parcellation_resampled, save_path) 147 | subject_seg.append(save_path) 148 | 149 | progress.update(task, advance=1) 150 | 151 | return subject_seg 152 | 153 | 154 | def get_atlas_labels() -> List[str]: 155 | """Get the list of available atlas labels.""" 156 | atlas_dir = resource_filename("giga_connectome", "data/atlas") 157 | return [p.stem for p in Path(atlas_dir).glob("*.json")] 158 | 159 | 160 | def _check_altas_config( 161 | atlas: str | Path | dict[str, Any], 162 | ) -> ATLAS_CONFIG_TYPE: 163 | """Load the configuration file. 164 | 165 | Parameters 166 | ---------- 167 | atlas : str | Path | dict 168 | Atlas name or configuration file path. 169 | 170 | Returns 171 | ------- 172 | dict 173 | valid atlas configuration. 174 | 175 | Raises 176 | ------ 177 | KeyError 178 | atlas configuration not containing the correct keys. 179 | """ 180 | if isinstance(atlas, str) and atlas in deprecations: 181 | new_name, version = deprecations[atlas] 182 | gc_log.warning( 183 | f"{atlas} has been deprecated and will be removed in " 184 | f"{version}. Please use {new_name} instead." 
185 | ) 186 | atlas = new_name 187 | 188 | # load the file first if the input is not already a dictionary 189 | atlas_dir = resource_filename("giga_connectome", "data/atlas") 190 | preset_atlas = [p.stem for p in Path(atlas_dir).glob("*.json")] 191 | 192 | if isinstance(atlas, (str, Path)): 193 | if atlas in preset_atlas: 194 | config_path = Path( 195 | resource_filename( 196 | "giga_connectome", f"data/atlas/{atlas}.json" 197 | ) 198 | ) 199 | elif Path(atlas).exists(): 200 | config_path = Path(atlas) 201 | else: 202 | raise FileNotFoundError( 203 | f"Atlas configuration file {atlas} not found." 204 | ) 205 | 206 | with open(config_path, "r") as file: 207 | atlas_config = json.load(file) 208 | else: 209 | atlas_config = atlas 210 | 211 | minimal_keys = ["name", "parameters", "desc", "templateflow_dir"] 212 | keys = list(atlas_config.keys()) 213 | common_keys = set(minimal_keys).intersection(set(keys)) 214 | if common_keys != set(minimal_keys): 215 | raise KeyError( 216 | "Invalid dictionary input. Input should" 217 | " contain minimally the following keys: 'name', " 218 | "'parameters', 'desc', 'templateflow_dir'. Found " 219 | f"{keys}" 220 | ) 221 | 222 | # cast to list of string 223 | if isinstance(atlas_config["desc"], (str, int)): 224 | desc = [atlas_config["desc"]] 225 | else: 226 | desc = atlas_config["desc"] 227 | atlas_config["desc"] = [str(x) for x in desc] 228 | 229 | return atlas_config 230 | -------------------------------------------------------------------------------- /giga_connectome/denoise.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import json 4 | from pathlib import Path 5 | from typing import Any, Callable, Dict, List, TypedDict, Union 6 | 7 | import numpy as np 8 | import pandas as pd 9 | from nibabel import Nifti1Image 10 | from nilearn.interfaces import fmriprep 11 | from nilearn.interfaces.fmriprep import load_confounds_utils as lc_utils 12 | from nilearn.maskers import NiftiMasker 13 | from pkg_resources import resource_filename 14 | 15 | PRESET_STRATEGIES = [ 16 | "simple", 17 | "simple+gsr", 18 | "scrubbing.2", 19 | "scrubbing.2+gsr", 20 | "scrubbing.5", 21 | "scrubbing.5+gsr", 22 | "acompcor50", 23 | "icaaroma", 24 | ] 25 | 26 | # More refined type not possible with python <= 3.9? 27 | # STRATEGY_TYPE = TypedDict( 28 | # "STRATEGY_TYPE", 29 | # { 30 | # "name": str, 31 | # "function": Callable[ 32 | # ..., tuple[pd.DataFrame, Union[np.ndarray[Any, Any], None]] 33 | # ], 34 | # "parameters": dict[str, str | list[str]], 35 | # }, 36 | # ) 37 | STRATEGY_TYPE = TypedDict( 38 | "STRATEGY_TYPE", 39 | { 40 | "name": str, 41 | "function": Callable[..., Any], 42 | "parameters": Dict[str, Union[str, List[str]]], 43 | }, 44 | ) 45 | 46 | METADATA_TYPE = TypedDict( 47 | "METADATA_TYPE", 48 | { 49 | "ConfoundRegressors": List[str], 50 | "ICAAROMANoiseComponents": List[str], 51 | "NumberOfVolumesDiscardedByMotionScrubbing": int, 52 | "NumberOfVolumesDiscardedByNonsteadyStatesDetector": int, 53 | "MeanFramewiseDisplacement": float, 54 | "SamplingFrequency": float, 55 | }, 56 | ) 57 | 58 | 59 | def get_denoise_strategy( 60 | strategy: str, 61 | ) -> STRATEGY_TYPE: 62 | """ 63 | Select denoise strategies and associated parameters. 64 | The strategy parameters are designed to pass to load_confounds_strategy. 
65 | 66 | Parameter 67 | --------- 68 | 69 | strategy : str 70 | Name of the denoising strategy options: \ 71 | simple, simple+gsr, scrubbing.5, scrubbing.5+gsr, \ 72 | scrubbing.2, scrubbing.2+gsr, acompcor50, icaaroma. 73 | Or the path to a configuration json file. 74 | 75 | Return 76 | ------ 77 | 78 | dict 79 | Denosing strategy parameter to pass to load_confounds_strategy. 80 | """ 81 | if strategy in PRESET_STRATEGIES: 82 | config_path: str | Path = resource_filename( 83 | "giga_connectome", f"data/denoise_strategy/{strategy}.json" 84 | ) 85 | elif Path(strategy).exists(): 86 | config_path = Path(strategy) 87 | else: 88 | raise ValueError(f"Invalid input: {strategy}") 89 | 90 | with open(config_path, "r") as file: 91 | benchmark_strategy = json.load(file) 92 | 93 | lc_function = getattr(fmriprep, benchmark_strategy["function"]) 94 | benchmark_strategy.update({"function": lc_function}) 95 | return benchmark_strategy 96 | 97 | 98 | def is_ica_aroma(strategy: STRATEGY_TYPE) -> bool: 99 | """Check if the current strategy is ICA AROMA. 100 | 101 | Parameters 102 | ---------- 103 | strategy : dict 104 | Denoising strategy dictionary. See :func:`get_denoise_strategy`. 105 | 106 | Returns 107 | ------- 108 | bool 109 | True if the strategy is ICA AROMA. 110 | """ 111 | strategy_preset = strategy["parameters"].get("denoise_strategy", False) 112 | strategy_user_define = strategy["parameters"].get("strategy", False) 113 | if strategy_preset or strategy_user_define: 114 | return strategy_preset == "ica_aroma" 115 | elif isinstance(strategy_user_define, list): 116 | return "ica_aroma" in strategy_user_define 117 | else: 118 | raise ValueError(f"Invalid input dictionary. {strategy['parameters']}") 119 | 120 | 121 | def denoise_meta_data(strategy: STRATEGY_TYPE, img: str) -> METADATA_TYPE: 122 | """Get metadata of the denoising process. 123 | 124 | Including: column names of the confound regressors, number of 125 | volumes discarded by motion scrubbing, number of volumes discarded 126 | by non-steady states detector, mean framewise displacement and 127 | place holder for sampling frequency (1/TR). 128 | 129 | Parameters 130 | ---------- 131 | strategy : dict 132 | Denoising strategy parameter to pass to load_confounds_strategy 133 | or load_confounds. 134 | img : str 135 | Path to the nifti image to denoise. 136 | 137 | Returns 138 | ------- 139 | dict 140 | Metadata of the denoising process. 141 | """ 142 | cf, sm = strategy["function"](img, **strategy["parameters"]) 143 | cf_file = lc_utils.get_confounds_file( 144 | img, flag_full_aroma=is_ica_aroma(strategy) 145 | ) 146 | cf_full = pd.read_csv(cf_file, sep="\t") 147 | framewise_displacement = cf_full["framewise_displacement"] 148 | mean_fd = np.mean(framewise_displacement) 149 | # get non steady state volumes 150 | n_non_steady = len(lc_utils.find_confounds(cf_full, ["non_steady_state"])) 151 | # sample mask = \ 152 | # number of scan - number of scrubbed volumes \ 153 | # - number of non steady states volumes 154 | n_scrub = 0 if sm is None else (cf.shape[0] - sm.shape[0] - n_non_steady) 155 | 156 | # get ICA AROMA confounds regressors 157 | # It will not be used in the denoising process as it was already 158 | # regressed out in the fMRIPrep pipeline. 159 | # But the number of components still contribute to the 160 | # loss of temporal degrees of freedom. 
161 | ica_aroma_components = lc_utils.find_confounds(cf_full, ["aroma"]) 162 | 163 | meta_data: METADATA_TYPE = { 164 | "ConfoundRegressors": cf.columns.tolist(), 165 | "ICAAROMANoiseComponents": ica_aroma_components, 166 | "NumberOfVolumesDiscardedByMotionScrubbing": n_scrub, 167 | "NumberOfVolumesDiscardedByNonsteadyStatesDetector": n_non_steady, 168 | "MeanFramewiseDisplacement": mean_fd, 169 | "SamplingFrequency": np.nan, # place holder 170 | } 171 | return meta_data 172 | 173 | 174 | def denoise_nifti_voxel( 175 | strategy: STRATEGY_TYPE, 176 | group_mask: str | Path, 177 | standardize: bool, 178 | smoothing_fwhm: float, 179 | img: str, 180 | ) -> Nifti1Image | None: 181 | """Denoise voxel level data per nifti image. 182 | 183 | Parameters 184 | ---------- 185 | strategy : dict 186 | Denoising strategy parameter to pass to load_confounds_strategy. 187 | group_mask : str | Path 188 | Path to the group mask. 189 | standardize : bool 190 | Standardize the data. If True, zscore the data. If False, do \ 191 | not standardize. 192 | smoothing_fwhm : float 193 | Smoothing kernel size in mm. 194 | img : str 195 | Path to the nifti image to denoise. 196 | 197 | Returns 198 | ------- 199 | Nifti1Image 200 | Denoised nifti image. 201 | """ 202 | cf, sm = strategy["function"](img, **strategy["parameters"]) 203 | if _check_exclusion(cf, sm): 204 | return None 205 | 206 | # if high pass filter is not applied through cosines regressors, 207 | # then detrend 208 | detrend = "cosine00" not in cf.columns 209 | group_masker = NiftiMasker( 210 | mask_img=group_mask, 211 | detrend=detrend, 212 | standardize=standardize, 213 | smoothing_fwhm=smoothing_fwhm, 214 | ) 215 | time_series_voxel = group_masker.fit_transform( 216 | img, confounds=cf, sample_mask=sm 217 | ) 218 | denoised_img = group_masker.inverse_transform(time_series_voxel) 219 | return denoised_img 220 | 221 | 222 | def _check_exclusion( 223 | reduced_confounds: pd.DataFrame, 224 | sample_mask: np.ndarray[Any, Any] | None, 225 | ) -> bool: 226 | """For scrubbing based strategy, check if regression can be performed.""" 227 | if sample_mask is not None: 228 | kept_vol = len(sample_mask) 229 | else: 230 | kept_vol = reduced_confounds.shape[0] 231 | # if more noise regressors than volume, this data is not denoisable 232 | remove = kept_vol < reduced_confounds.shape[1] 233 | return remove 234 | -------------------------------------------------------------------------------- /paper/paper.md: -------------------------------------------------------------------------------- 1 | --- 2 | title: 'Giga Connectome: a BIDS-app for time series and functional connectome extraction' 3 | tags: 4 | - BIDS 5 | - Python 6 | - fMRI 7 | - functional connectivity 8 | authors: 9 | - name: Hao-Ting Wang 10 | corresponding: true 11 | orcid: 0000-0003-4078-2038 12 | affiliation: "1,2" # (Multiple affiliations must be quoted) 13 | - name: Rémi Gau 14 | orcid: 0000-0002-1535-9767 15 | affiliation: "3,4" 16 | - name: Natasha Clarke 17 | orcid: 0000-0003-2455-3614 18 | affiliation: 1 19 | - name: Quentin Dessain 20 | orcid: 0000-0002-7314-0413 21 | affiliation: 5 22 | - name: Lune Bellec 23 | orcid: 0000-0002-9111-0699 24 | affiliation: "1,2,6" 25 | affiliations: 26 | - name: Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Université de Montréal, Montréal, QC, Canada 27 | index: 1 28 | - name: Département de psychologie, Université de Montréal, Montréal, Canada 29 | index: 2 30 | - name: MIND team, INRIA, CEA, Université Paris-Saclay, Paris, France 31 
| index: 3 32 | - name: Neuro Data Science ORIGAMI Laboratory, McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montréal, Canada. 33 | index: 4 34 | - name: Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium 35 | index: 5 36 | - name: Mila, Université de Montréal, Montréal, Montréal, Canada 37 | index: 6 38 | date: 12 April 2024 39 | bibliography: paper.bib 40 | --- 41 | 42 | # Summary 43 | 44 | Researchers perform two steps before functional magnetic resonance imaging (fMRI) data analysis: 45 | standardised preprocessing and customised denoising. 46 | `fMRIPrep` [@fmriprep\; RRID:SCR_016216], 47 | a popular software in the neuroimaging community, is a common choice for preprocessing. 48 | `fMRIPrep` performs minimal preprocessing, leaving a few steps for the end user: smoothing, denoising, and standardisation. 49 | The present software, `giga-connectome`, 50 | is a Brain Imaging Data Structure [BIDS\; @bids\; RRID:SCR_016124] 51 | compliant container image that aims to perform these steps as well as extract time series signals and generate connectomes for machine learning applications. 52 | All these steps are implemented with functions from `nilearn` [@Nilearn\; RRID:SCR_001362], 53 | a Python library for machine learning in neuroimaging. 54 | 55 | The tool performs smoothing, denoising, and standardisation on voxel level data. 56 | Smoothing is implemented with a 5mm full width at half maximum kernel and the user can change the kernel size based on the voxel size of their fMRI data. 57 | For the denoising step, we built the workflow closely aligned with the design choice of `fmriprep` 58 | and worked with the `fmriprep` developers while implementing a key Application Programming Interface (API), 59 | `load_confounds`, implemented in the software library `nilearn`. 60 | The tool provides some preset strategies based on @wang_continuous_2024 and the current long-term support release of `fMRIPrep`. 61 | Users can implement their own strategy using configuration files to directly interact with the `load_confounds` API. 62 | The details of the process can be found in the [user documentation](https://giga-connectome.readthedocs.io/en/latest/workflow.html). 63 | Finally the data is standardised as z-scores. 64 | 65 | The atlas for time series extraction was retrieved through `templateflow` [@templateflow\; RRID:SCR_021876], 66 | a brain template and atlas naming system with a Python API. 67 | The container image provides some default atlases that are already available in the `templateflow` repository, including 68 | Harvard-Oxford [@makris_decreased_2006; @goldstein_hypothalamic_2007; @frazier_structural_2005; @desikan_automated_2006], 69 | Schaefer [@schaefer_local-global_2018], 70 | MIST [@urchs_mist_2019], 71 | and DiFuMo [@dadi_fine-grain_2020]. 72 | Customised atlases will have to be formatted in `templateflow` convention and supplied using a configuration file. 73 | We aim to include more default atlases when they are included in the `templateflow` repository. 74 | 75 | The time series extraction is implemented with `nilearn` objects `NiftiLabelsMasker` and `NiftiMapsMasker`. 76 | The generated time series are used to construct connectomes calculated as Pearson's correlation with `nilearn` object `ConnectivityMeasure`. 
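
As an illustration, this extraction step reduces to a few `nilearn` calls; the sketch below uses hypothetical file names and a deterministic (label) atlas, while the actual workflow additionally handles denoising, probabilistic atlases, and BIDS-compliant output naming:

```python
from nilearn.connectome import ConnectivityMeasure
from nilearn.maskers import NiftiLabelsMasker

# Hypothetical inputs: a denoised BOLD image and a deterministic (dseg) atlas.
masker = NiftiLabelsMasker(labels_img="atlas_dseg.nii.gz", standardize=False)
time_series = masker.fit_transform("denoised_bold.nii.gz")

# Pearson's correlation between all pairs of regional time series.
correlation_measure = ConnectivityMeasure(kind="correlation")
connectome = correlation_measure.fit_transform([time_series])[0]
```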
77 | 78 | Finally the saved time series and connectomes follow the format of the 79 | [BIDS-connectome specification](https://bids.neuroimaging.io/bep017). 80 | Users can follow the specification to interpret and retrieve the relevant results. 81 | The coverage of the atlas is also included as an HTML visual report, provided by `nilearn.masker` for users to examine the quality of the atlas coverage. 82 | More information about the usage, workflow, and outputs can be found on the 83 | [official documentation](https://giga-connectome.readthedocs.io/en/latest/). 84 | 85 | 86 | # Statement of need 87 | 88 | `giga-connectome` is created for large scale deployment on multiple `fMRIPrep` preprocessed neuroimaging datasets. 89 | We aimed to create a tool that is lightweight in terms of code base complexity, software dependencies, and command line interface (CLI). 90 | The current software follows the BIDS-apps API [@bidsapp] and is the first of its kind that creates outputs following the BIDS-connectome specification. 91 | Both users and developers would benefit from the detailed definition of BIDS-apps API and BIDS-connectome for shared usage documentation and development guidelines. 92 | The key dependencies of the software are Python clients of BIDS or BIDS adjacent projects (`pyBIDS` and `templateflow` Python client) and `nilearn`, 93 | which is an open source library of high quality and with a clear development cycle for future maintenance. 94 | We used configuration files to structure the choice of denoising strategies and brain atlas to avoid crowding the CLI and ensure the choices of the user are traceable. 95 | 96 | We aim to provide a lightweight alternative to other existing post-fMRIPrep processing software such as XCP-D [@xcp-d] 97 | and HALFPipe [@HALFpipe], 98 | or preprocessing software with fMRIPrep support such as C-PAC [@cpac] 99 | and CONN [@conn\; RRID:SCR_009550]. 100 | These tools provide more flexibility and options for denoising and more types of downstream feature extraction for a wider range of fMRI analysis. 101 | `giga-connectome` was intentionally designed with a narrow scope for quick deployment and the ease for machine learning researchers to adopt. 102 | We hope this modular implementation can eventually be included as part of these existing workflows so all `fMRIPrep` outputs can share a time series and connectome extraction tool that is minimal and streamlined. 103 | Furthermore, this lean design choice aims to reduce the barrier to learning the code base and the ease of on-boarding new contributors. 104 | We hope this choice will invite more users to contribute to the tool and benefit from the open source neuroimaging community. 105 | 106 | `giga-connectome` has already been deployed on multiple large neuroimaging datasets such as 107 | ABIDE [@ABIDE], 108 | ADHD200 [@adhd200], 109 | UK Biobank [@ukbiobank], 110 | and more. 111 | The generated time series and connectomes have been included in a registered report [@clarke_2024], 112 | and various work under preparation in the [SIMEXP lab](https://github.com/SIMEXP/) 113 | and [CNeuromod project](https://www.cneuromod.ca/). 114 | The data processing scripts using `giga-connectome` can be found 115 | [here for UK Biobank](https://github.com/Hyedryn/ukbb-scripts/tree/dev) 116 | and [this repository](https://github.com/SIMEXP/giga_preprocess2) for the other datasets. 
117 | 118 | # Acknowledgement 119 | 120 | The project was supported by the following funding: 121 | Digital Alliance Canada Resource Allocation Competition (RAC 1827 and RAC 4455) to LB, 122 | Institut de Valorisation des Données projets de recherche stratégiques 123 | (IVADO PFR3) to LB, 124 | and Canadian Consortium on Neurodegeneration in Aging 125 | (CCNA; team 9 "discovering new biomarkers") to LB, 126 | the Courtois Foundation to LB, 127 | and Institut national de recherche en sciences et technologies du numérique 128 | (INRIA; Programme Équipes Associées - NeuroMind Team DRI-012229) to LB. 129 | HTW and NC are supported by IVADO postdoc fellowships. 130 | QD is a research fellow of the Fonds de la Recherche Scientifique - FNRS of Belgium. 131 | RG is supported by funding from the Chan Zuckerberg Initiative. 132 | LB was funded by Fonds de Recherche du Québec - Santé. 133 | 134 | # Reference 135 | -------------------------------------------------------------------------------- /giga_connectome/postprocess.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import json 4 | from pathlib import Path 5 | from typing import Any, Sequence 6 | 7 | import numpy as np 8 | import pandas as pd 9 | from bids.layout import BIDSImageFile 10 | from nilearn.connectome import ConnectivityMeasure 11 | from nilearn.maskers import NiftiLabelsMasker, NiftiMapsMasker 12 | 13 | from giga_connectome import utils 14 | from giga_connectome.atlas import ATLAS_SETTING_TYPE 15 | from giga_connectome.connectome import generate_timeseries_connectomes 16 | from giga_connectome.denoise import ( 17 | STRATEGY_TYPE, 18 | denoise_nifti_voxel, 19 | denoise_meta_data, 20 | ) 21 | from giga_connectome.logger import gc_logger 22 | from giga_connectome.utils import progress_bar 23 | 24 | gc_log = gc_logger() 25 | 26 | 27 | def run_postprocessing_dataset( 28 | strategy: STRATEGY_TYPE, 29 | atlas: ATLAS_SETTING_TYPE, 30 | resampled_atlases: Sequence[str | Path], 31 | images: Sequence[BIDSImageFile], 32 | group_mask: str | Path, 33 | standardize: bool, 34 | smoothing_fwhm: float, 35 | output_path: Path, 36 | calculate_average_correlation: bool = False, 37 | ) -> None: 38 | """ 39 | Generate subject and group level timeseries and connectomes. 40 | 41 | The time series data is denoised as follow: 42 | 43 | - Time series extractions through label or map maskers are performed \ 44 | on the denoised nifti file. Denoising steps are performed on the \ 45 | voxel level: 46 | 47 | - spatial smoothing 48 | 49 | - detrend, only if high pass filter is not applied through confounds 50 | 51 | - Regress out confounds 52 | 53 | - standardize 54 | 55 | - Extract time series from atlas 56 | 57 | - Compute correlation matrix 58 | 59 | - Optional: average correlation within each parcel. 60 | 61 | - Save timeseries and correlation matrix to h5 file 62 | 63 | - Optional: Create average correlation matrix across subjects when using \ 64 | group level analysis. 65 | 66 | Parameters 67 | ---------- 68 | 69 | strategy : dict 70 | Parameters for `load_confounds_strategy` or `load_confounds`. 71 | 72 | atlas : dict 73 | Atlas settings. 74 | 75 | resampled_atlases : list of str or pathlib.Path 76 | Atlas niftis resampled to the common space of the dataset. 77 | 78 | images : list of BIDSImageFile 79 | Preprocessed Nifti images for post processing 80 | 81 | group_mask : str or pathlib.Path 82 | Group level grey matter mask. 
83 | 84 | standardize : bool 85 | Standardization to zscore or not used in nilearn, passed to nilearn \ 86 | masker. 87 | 88 | smoothing_fwhm : float 89 | Smoothing kernel size, passed to nilearn masker. 90 | 91 | output_path: 92 | Full path to output directory. 93 | 94 | analysis_level : str 95 | Level of analysis, only "participant" is available. 96 | 97 | calculate_average_correlation : bool 98 | Whether to calculate average correlation within each parcel. 99 | """ 100 | atlas_maskers: dict[str, (NiftiLabelsMasker | NiftiMapsMasker)] = {} 101 | connectomes: dict[str, list[np.ndarray[Any, Any]]] = {} 102 | for atlas_path in resampled_atlases: 103 | if isinstance(atlas_path, str): 104 | atlas_path = Path(atlas_path) 105 | seg = atlas_path.name.split("seg-")[-1].split("_")[0] 106 | atlas_maskers[seg] = _get_masker(atlas_path) 107 | connectomes[seg] = [] 108 | 109 | correlation_measure = ConnectivityMeasure( 110 | kind="correlation", vectorize=False, discard_diagonal=False 111 | ) 112 | 113 | # transform data 114 | gc_log.info("Processing subject") 115 | 116 | with progress_bar(text="Processing subject") as progress: 117 | task = progress.add_task( 118 | description="processing subject", total=len(images) 119 | ) 120 | 121 | for img in images: 122 | print() 123 | gc_log.info(f"Processing image:\n{img.filename}") 124 | 125 | # process timeseries 126 | denoised_img = denoise_nifti_voxel( 127 | strategy, group_mask, standardize, smoothing_fwhm, img.path 128 | ) 129 | 130 | # parse file name 131 | subject, session, specifier = utils.parse_bids_name(img.path) 132 | 133 | # folder for this subject output 134 | connectome_path = output_path / subject 135 | if session: 136 | connectome_path = connectome_path / session 137 | connectome_path = connectome_path / "func" 138 | 139 | # All timeseries derivatives of the same scan have the same 140 | # metadata so one json file for them all. 
141 | # see https://bids.neuroimaging.io/bep012 142 | json_filename = connectome_path / utils.output_filename( 143 | source_file=Path(img.filename).stem, 144 | atlas=atlas["name"], 145 | atlas_desc="", 146 | strategy=strategy["name"], 147 | suffix="timeseries", 148 | extension="json", 149 | ) 150 | utils.check_path(json_filename) 151 | if denoised_img: 152 | meta_data = denoise_meta_data(strategy, img.path) 153 | meta_data["SamplingFrequency"] = ( 154 | 1 / img.entities["RepetitionTime"] 155 | ) 156 | with open(json_filename, "w") as f: 157 | json.dump(meta_data, f, indent=4) 158 | 159 | for seg, masker in atlas_maskers.items(): 160 | 161 | if not denoised_img: 162 | time_series_atlas, correlation_matrix = None, None 163 | attribute_name = f"{subject}_{specifier}" f"_seg-{seg}" 164 | gc_log.info(f"{attribute_name}: no volume after scrubbing") 165 | progress.update(task, advance=1) 166 | continue 167 | 168 | # extract timeseries and connectomes 169 | (correlation_matrix, time_series_atlas, masker) = ( 170 | generate_timeseries_connectomes( 171 | masker, 172 | denoised_img, 173 | group_mask, 174 | correlation_measure, 175 | calculate_average_correlation, 176 | ) 177 | ) 178 | 179 | # reverse engineer atlas_desc 180 | desc = seg.split(atlas["name"])[-1] 181 | # dump correlation_matrix to tsv 182 | relmat_filename = connectome_path / utils.output_filename( 183 | source_file=Path(img.filename).stem, 184 | atlas=atlas["name"], 185 | suffix="relmat", 186 | extension="tsv", 187 | strategy=strategy["name"], 188 | atlas_desc=desc, 189 | ) 190 | utils.check_path(relmat_filename) 191 | df = pd.DataFrame(correlation_matrix) 192 | df.to_csv(relmat_filename, sep="\t", index=False) 193 | 194 | # dump timeseries to tsv file 195 | timeseries_filename = connectome_path / utils.output_filename( 196 | source_file=Path(img.filename).stem, 197 | atlas=atlas["name"], 198 | suffix="timeseries", 199 | extension="tsv", 200 | strategy=strategy["name"], 201 | atlas_desc=desc, 202 | ) 203 | utils.check_path(timeseries_filename) 204 | df = pd.DataFrame(time_series_atlas) 205 | df.to_csv(timeseries_filename, sep="\t", index=False) 206 | 207 | report = masker.generate_report() 208 | report_filename = connectome_path / utils.output_filename( 209 | source_file=Path(img.filename).stem, 210 | atlas=atlas["name"], 211 | suffix="report", 212 | extension="html", 213 | strategy=strategy["name"], 214 | atlas_desc=desc, 215 | ) 216 | report.save_as_html(report_filename) 217 | 218 | progress.update(task, advance=1) 219 | 220 | gc_log.info(f"Saved to:\n{connectome_path}") 221 | 222 | 223 | def _get_masker(atlas_path: Path) -> NiftiLabelsMasker | NiftiMapsMasker: 224 | """Get the masker object based on the templateflow file name suffix.""" 225 | atlas_type = atlas_path.name.split("_")[-1].split(".nii")[0] 226 | if atlas_type == "dseg": 227 | atlas_masker = NiftiLabelsMasker( 228 | labels_img=atlas_path, 229 | standardize=False, 230 | cmap="gray", 231 | ) 232 | elif atlas_type == "probseg": 233 | atlas_masker = NiftiMapsMasker( 234 | maps_img=atlas_path, 235 | standardize=False, 236 | cmap="gray", 237 | ) 238 | return atlas_masker 239 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![DOI](https://joss.theoj.org/papers/10.21105/joss.07061/status.svg)](https://doi.org/10.21105/joss.07061) 2 | [![All 
Contributors](https://img.shields.io/github/all-contributors/bids-apps/giga_connectome?color=ee8449&style=flat)](#contributors) 3 | [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) 4 | [![codecov](https://codecov.io/gh/bids-apps/giga_connectome/branch/main/graph/badge.svg?token=P4EGV7NKZ8)](https://codecov.io/gh/bids-apps/giga_connectome) 5 | [![.github/workflows/test.yml](https://github.com/bids-apps/giga_connectome/actions/workflows/test.yml/badge.svg)](https://github.com/bids-apps/giga_connectome/actions/workflows/test.yml) 6 | [![pre-commit.ci status](https://results.pre-commit.ci/badge/github/bids-apps/giga_connectome/main.svg)](https://results.pre-commit.ci/latest/github/bids-apps/giga_connectome/main) 7 | [![Documentation Status](https://readthedocs.org/projects/giga-connectome/badge/?version=stable)](https://giga-connectome.readthedocs.io/en/latest/?badge=stable) 8 | ![https://github.com/psf/black](https://img.shields.io/badge/code%20style-black-000000.svg) 9 | [![Docker pulls](https://img.shields.io/docker/pulls/bids/giga_connectome)](https://hub.docker.com/r/bids/giga_connectome/tags) 10 | 11 | # giga-connectome 12 | 13 | This is a BIDS-App to extract signal from a parcellation with `nilearn`, 14 | typically useful in a context of resting-state data processing. 15 | 16 | You can read our [JOSS paper](https://doi.org/10.21105/joss.07061) for the background of the project and the details of implementations. 17 | 18 | ## Description 19 | 20 | Functional connectivity is a common approach in analysing resting state fMRI data. 21 | The Python tool `Nilearn` provides utilities to extract and denoise time-series on a parcellation. 22 | `Nilearn` also has methods to compute functional connectivity. 23 | While `Nilearn` provides useful methods to generate connectomes, 24 | there is no standalone one stop solution to generate connectomes from `fMRIPrep` outputs. 25 | `giga-connectome` (a BIDS-app!) combines `Nilearn` and `TemplateFlow` to denoise the data, generate timeseries, 26 | and most critically `giga-connectome` generates functional connectomes directly from `fMRIPrep` outputs. 27 | The workflow comes with several built-in denoising strategies and 28 | there are several choices of atlases (MIST, Schaefer 7 networks, DiFuMo, Harvard-Oxford). 29 | Users can customise their own strategies and atlases using the configuration json files. 30 | 31 | 32 | ## Supported `fMRIPrep` versions 33 | 34 | `giga-connectome` fully supports outputs of fMRIPrep LTS (long-term support) 20.2.x. 35 | 36 | For `fMRIPrep` 23.1.0 and later, `giga-connectome` does not support ICA-AROMA denoising, 37 | as the strategy is removed from the `fMRIPrep` workflow. 38 | 39 | ## Quick start 40 | 41 | Pull from `Dockerhub` (Recommended) 42 | 43 | ```bash 44 | docker pull bids/giga_connectome:latest 45 | docker run -ti --rm bids/giga_connectome --help 46 | ``` 47 | 48 | If you want to get the bleeding-edge version of the app, 49 | pull the `unstable` version. 50 | 51 | ```bash 52 | docker pull bids/giga_connectome:unstable 53 | ``` 54 | 55 | ## How to report errors 56 | 57 | Please use the [GitHub issue](https://github.com/bids-apps/giga_connectome/issues) to report errors. 58 | Check out the open issues first to see if we're already working on it. 59 | If not, [open up a new issue](https://github.com/bids-apps/giga_connectome/issues/new)! 
60 |
61 | ## How to contribute
62 |
63 | You can review open [issues](https://github.com/bids-apps/giga_connectome/issues) that we are looking for help with.
64 | If you submit a new pull request, please be as detailed as possible in your comments.
65 | If you have any questions about how to create a pull request, you can check our [documentation for contributors](https://giga-connectome.readthedocs.io/en/latest/contributing.html).
66 |
67 | ## Contributors
68 |
| Contributor | Contributions |
| --- | --- |
| Hao-Ting Wang | 🤔 🔬 💻 ⚠️ |
| Quentin Dessain | 📓 📦 |
| Natasha Clarke | 📓 💡 🐛 |
| Remi Gau | 🚇 🚧 |
| Lune Bellec | 🤔 💵 |
| Jon Cluce | 🐛 |
| Emeline Mullier | 🐛 |
| James Kent | 🐛 📖 |
| Marcel Stimberg | 📓 📖 🐛 |
89 | 90 | 91 | 92 | 93 | 94 | 95 | ## Acknowledgements 96 | 97 | Please cite the following paper if you are using `giga-connectome` in your work: 98 | ```bibtex 99 | @article{Wang2025, 100 | doi = {10.21105/joss.07061}, 101 | url = {https://doi.org/10.21105/joss.07061}, 102 | year = {2025}, publisher = {The Open Journal}, 103 | volume = {10}, 104 | number = {110}, 105 | pages = {7061}, 106 | author = {Hao-Ting Wang and Rémi Gau and Natasha Clarke and Quentin Dessain and Lune Bellec}, 107 | title = {Giga Connectome: a BIDS-app for time series and functional connectome extraction}, 108 | journal = {Journal of Open Source Software} 109 | } 110 | ``` 111 | 112 | `giga-connectome` uses `nilearn` under the hood, 113 | hence please consider cite `nilearn` using the Zenodo DOI: 114 | 115 | ```bibtex 116 | @software{Nilearn, 117 | author = {Nilearn contributors}, 118 | license = {BSD-4-Clause}, 119 | title = {{nilearn}}, 120 | url = {https://github.com/nilearn/nilearn}, 121 | doi = {https://doi.org/10.5281/zenodo.8397156} 122 | } 123 | ``` 124 | Nilearn’s Research Resource Identifier (RRID) is: [RRID:SCR_001362][] 125 | 126 | We acknowledge all the [nilearn developers][] 127 | as well as the [BIDS-Apps team][] 128 | 129 | This is a Python project packaged according to [Contemporary Python Packaging - 2023][]. 130 | 131 | [Contemporary Python Packaging - 2023]: https://effigies.gitlab.io/posts/python-packaging-2023/ 132 | [RRID:SCR_001362]: https://rrid.site/data/record/nlx_144509-1/SCR_001362/resolver?q=nilearn&l=nilearn&i=rrid:scr_001362 133 | [nilearn developers]: https://github.com/nilearn/nilearn/graphs/contributors 134 | [BIDS-Apps team]:https://github.com/orgs/BIDS-Apps/people 135 | -------------------------------------------------------------------------------- /giga_connectome/utils.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import json 4 | from pathlib import Path 5 | from typing import Any 6 | 7 | from bids import BIDSLayout 8 | from bids.layout import BIDSFile, Query 9 | from nilearn.interfaces.bids import parse_bids_filename 10 | from rich.progress import ( 11 | BarColumn, 12 | MofNCompleteColumn, 13 | Progress, 14 | SpinnerColumn, 15 | TaskProgressColumn, 16 | TextColumn, 17 | TimeElapsedColumn, 18 | TimeRemainingColumn, 19 | ) 20 | 21 | from giga_connectome import __version__ 22 | from giga_connectome.logger import gc_logger 23 | from giga_connectome.denoise import is_ica_aroma, STRATEGY_TYPE 24 | 25 | 26 | gc_log = gc_logger() 27 | 28 | 29 | def prepare_bidsfilter_and_template( 30 | strategy: STRATEGY_TYPE, 31 | user_bids_filter: None | dict[str, dict[str, str]], 32 | ) -> tuple[str, dict[str, dict[str, str]]]: 33 | """ 34 | Prepare the template and BIDS filters for ICA AROMA. 35 | This solution only applies to fMRIPrep version < 23.1.0. 36 | For later versions, we will wait for updates in the upstream library 37 | nilearn to support the new layoput for ICA AROMA. 38 | 39 | Parameters 40 | ---------- 41 | strategy : str 42 | The denoising strategy. 43 | bids_filter_file : Path 44 | The path to the BIDS filter file. 45 | 46 | Returns 47 | ------- 48 | tuple[str, None | dict[str, dict[str, str]]] 49 | The template and BIDS filters. 
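    Examples
    --------
    A minimal sketch for a non-AROMA preset strategy (hypothetical
    usage; ``get_denoise_strategy`` lives in
    ``giga_connectome.denoise``):

    >>> strategy = get_denoise_strategy("simple")  # doctest: +SKIP
    >>> template, filters = prepare_bidsfilter_and_template(strategy, None)  # doctest: +SKIP
    >>> template  # doctest: +SKIP
    'MNI152NLin2009cAsym'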
50 | """ 51 | user_bids_filter = check_filter(user_bids_filter) 52 | if is_ica_aroma(strategy): 53 | template = "MNI152NLin6Asym" 54 | bids_filters = { # this only applies to fMRIPrep version < 23.1.0 55 | "bold": { 56 | "desc": "smoothAROMAnonaggr", 57 | "space": "MNI152NLin6Asym", 58 | }, 59 | "mask": { # no brain mask for MNI152NLin6Asym, use the closest 60 | "space": "MNI152NLin2009cAsym", 61 | }, 62 | } 63 | if user_bids_filter: 64 | for suffix, entities in user_bids_filter.items(): 65 | bids_filters[suffix].update(entities) 66 | return template, bids_filters 67 | else: 68 | return "MNI152NLin2009cAsym", user_bids_filter 69 | 70 | 71 | def get_bids_images( 72 | subjects: list[str], 73 | template: str, 74 | bids_dir: Path, 75 | reindex_bids: bool, 76 | bids_filters: None | dict[str, dict[str, str]], 77 | ) -> tuple[dict[str, list[BIDSFile]], BIDSLayout]: 78 | """ 79 | Apply BIDS filter to the base filter we are using. 80 | Modified from fmripprep 81 | """ 82 | 83 | bids_filters = check_filter(bids_filters) 84 | 85 | layout = BIDSLayout( 86 | root=bids_dir, 87 | database_path=bids_dir, 88 | validate=False, 89 | derivatives=False, 90 | reset_database=reindex_bids, 91 | config=["bids", "derivatives"], 92 | ) 93 | 94 | layout_get_kwargs = { 95 | "return_type": "object", 96 | "subject": subjects, 97 | "session": Query.OPTIONAL, 98 | "task": Query.ANY, 99 | "run": Query.OPTIONAL, 100 | "extension": ".nii.gz", 101 | } 102 | queries = { 103 | "bold": { 104 | "desc": "preproc", 105 | "space": template, 106 | "suffix": "bold", 107 | "datatype": "func", 108 | }, 109 | "mask": { 110 | "suffix": "mask", 111 | "space": "MNI152NLin2009cAsym", 112 | "datatype": "func", 113 | }, 114 | } 115 | 116 | # update individual queries first 117 | for suffix, entities in bids_filters.items(): 118 | queries[suffix].update(entities) 119 | 120 | # now go through the shared entities in layout_get_kwargs 121 | for entity in list(layout_get_kwargs.keys()): 122 | for suffix, entities in bids_filters.items(): 123 | if entity in entities: 124 | # avoid clobbering layout.get 125 | layout_get_kwargs.update({entity: entities[entity]}) 126 | del queries[suffix][entity] 127 | 128 | subj_data = { 129 | dtype: layout.get(**layout_get_kwargs, **query) 130 | for dtype, query in queries.items() 131 | } 132 | return subj_data, layout 133 | 134 | 135 | def check_filter( 136 | bids_filters: None | dict[str, dict[str, str]], 137 | ) -> dict[str, dict[str, str]]: 138 | """Should only have bold and mask.""" 139 | if not bids_filters: 140 | return {} 141 | queries = list(bids_filters.keys()) 142 | base = ["bold", "mask"] 143 | all_detected = set(base).union(set(queries)) 144 | if len(all_detected) > len(base): 145 | extra = all_detected.difference(set(base)) 146 | raise ValueError( 147 | "The only meaningful filters for giga-connectome are 'bold' " 148 | f"and 'mask'. We found other filters here: {extra}." 149 | ) 150 | return bids_filters 151 | 152 | 153 | def _filter_pybids_none_any( 154 | dct: dict[str, None | str], 155 | ) -> dict[str, Query.NONE | Query.ANY]: 156 | return { 157 | k: Query.NONE if v is None else (Query.ANY if v == "*" else v) 158 | for k, v in dct.items() 159 | } 160 | 161 | 162 | def parse_bids_filter(value: Path) -> None | dict[str, dict[str, str]]: 163 | """Parse a BIDS filter json file. 164 | Parameters 165 | ---------- 166 | value : Path 167 | Path to the BIDS json file. 168 | 169 | Returns 170 | ------- 171 | dict 172 | Dictionary of BIDS filters. 
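    Examples
    --------
    Assuming a file ``filter.json`` containing
    ``{"bold": {"task": "rest"}}``:

    >>> parse_bids_filter(Path("filter.json"))  # doctest: +SKIP
    {'bold': {'task': 'rest'}}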
173 | """ 174 | 175 | from json import JSONDecodeError, loads 176 | 177 | if not value: 178 | return None 179 | 180 | if not value.exists(): 181 | raise FileNotFoundError(f"Path does not exist: <{value}>.") 182 | try: 183 | tmp = loads( 184 | value.read_text(), 185 | object_hook=_filter_pybids_none_any, 186 | ) 187 | except JSONDecodeError as e: 188 | raise ValueError(f"JSON syntax error in: <{value}>.") from e 189 | return tmp 190 | 191 | 192 | def parse_bids_name(img: str) -> tuple[str, str | None, str]: 193 | """Get subject, session, and specifier for a fMRIPrep output.""" 194 | reference = parse_bids_filename(img) 195 | 196 | subject = f"sub-{reference['sub']}" 197 | 198 | specifier = f"task-{reference['task']}" 199 | run = reference.get("run", None) 200 | if isinstance(run, str): 201 | specifier = f"{specifier}_run-{run}" 202 | 203 | session = reference.get("ses", None) 204 | if isinstance(session, str): 205 | session = f"ses-{session}" 206 | specifier = f"{session}_{specifier}" 207 | 208 | return subject, session, specifier 209 | 210 | 211 | def get_subject_lists( 212 | participant_label: None | list[str] = None, bids_dir: None | Path = None 213 | ) -> list[str]: 214 | """ 215 | Parse subject list from user options. 216 | 217 | Parameters 218 | ---------- 219 | 220 | participant_label : 221 | 222 | A list of BIDS competible subject identifiers. 223 | If the prefix `sub-` is present, it will be removed. 224 | 225 | bids_dir : 226 | 227 | The fMRIPrep derivative output. 228 | 229 | Return 230 | ------ 231 | 232 | list 233 | BIDS subject identifier without `sub-` prefix. 234 | """ 235 | if participant_label: 236 | # TODO: check these IDs exists 237 | checked_labels = [] 238 | for sub_id in participant_label: 239 | if "sub-" in sub_id: 240 | sub_id = sub_id.replace("sub-", "") 241 | checked_labels.append(sub_id) 242 | return checked_labels 243 | # get all subjects, this is quicker than bids... 244 | if bids_dir: 245 | subject_dirs = bids_dir.glob("sub-*/") 246 | return [ 247 | subject_dir.name.split("-")[-1] 248 | for subject_dir in subject_dirs 249 | if subject_dir.is_dir() 250 | ] 251 | return [] 252 | 253 | 254 | def check_path(path: Path) -> None: 255 | """Check if given path (file or dir) already exists. 256 | 257 | If so, a warning is logged and the previous file is deleted. 258 | If the parent path does not exist, it is created. 259 | """ 260 | path = path.absolute() 261 | path.parent.mkdir(parents=True, exist_ok=True) 262 | if path.exists(): 263 | gc_log.warning( 264 | f"Specified path already exists:\n\t{path}\n" 265 | "Old file will be overwritten." 266 | ) 267 | path.unlink() 268 | 269 | 270 | def create_ds_description(output_dir: Path) -> None: 271 | """Create a dataset_description.json file.""" 272 | ds_desc: dict[str, Any] = { 273 | "BIDSVersion": "1.9.0", 274 | "License": None, 275 | "Name": None, 276 | "ReferencesAndLinks": [], 277 | "DatasetDOI": None, 278 | "DatasetType": "derivative", 279 | "GeneratedBy": [ 280 | { 281 | "Name": "giga_connectome", 282 | "Version": __version__, 283 | "CodeURL": "https://github.com/bids-apps/giga_connectome.git", 284 | } 285 | ], 286 | "HowToAcknowledge": ( 287 | "Please refer to our repository: " 288 | "https://github.com/bids-apps/giga_connectome.git." 
289 | ), 290 | } 291 | with open(output_dir / "dataset_description.json", "w") as f: 292 | json.dump(ds_desc, f, indent=4) 293 | 294 | 295 | def create_sidecar(output_path: Path) -> None: 296 | """Create a JSON sidecar for the connectivity data.""" 297 | metadata: dict[str, Any] = { 298 | "Measure": "Pearson correlation", 299 | "MeasureDescription": "Pearson correlation", 300 | "Weighted": False, 301 | "Directed": False, 302 | "ValidDiagonal": True, 303 | "StorageFormat": "Full", 304 | "NonNegative": "", 305 | "Code": "https://github.com/bids-apps/giga_connectome.git", 306 | } 307 | with open(output_path, "w") as f: 308 | json.dump(metadata, f, indent=4) 309 | 310 | 311 | def output_filename( 312 | source_file: str, 313 | atlas: str, 314 | suffix: str, 315 | extension: str, 316 | strategy: str | None = None, 317 | atlas_desc: str | None = None, 318 | ) -> str: 319 | """Generate output filneme.""" 320 | subject, session, specifier = parse_bids_name(source_file) 321 | seg = f"seg-{atlas}{atlas_desc}" 322 | if extension != "nii.gz": 323 | root: str = f"{subject}_{specifier}" 324 | 325 | if extension != "json": 326 | root += f"_{seg}" 327 | 328 | if suffix == "relmat": 329 | root += "_meas-PearsonCorrelation" 330 | 331 | if strategy is None: 332 | strategy = "" 333 | 334 | return ( 335 | f"{root}_desc-denoise{strategy.capitalize()}_{suffix}.{extension}" 336 | ) 337 | 338 | elif suffix == "mask": 339 | reference = parse_bids_filename(source_file) 340 | tpl: str = f"space-{reference['space']}" 341 | tpl += f"_res-{reference['res']}" if "res" in reference else "" 342 | return f"{subject}_{tpl}_label-GM_{suffix}.{extension}" 343 | else: 344 | return f"{subject}_{seg}_{suffix}.{extension}" 345 | 346 | 347 | def progress_bar(text: str, color: str = "green") -> Progress: 348 | return Progress( 349 | TextColumn(f"[{color}]{text}"), 350 | SpinnerColumn("dots"), 351 | TimeElapsedColumn(), 352 | BarColumn(), 353 | MofNCompleteColumn(), 354 | TaskProgressColumn(), 355 | TimeRemainingColumn(), 356 | ) 357 | -------------------------------------------------------------------------------- /giga_connectome/mask.py: -------------------------------------------------------------------------------- 1 | from __future__ import annotations 2 | 3 | import os 4 | import re 5 | from pathlib import Path 6 | from typing import Any, Sequence 7 | 8 | import nibabel as nib 9 | import numpy as np 10 | from bids.layout import BIDSImageFile 11 | from nibabel import Nifti1Image 12 | from nilearn.image import ( 13 | get_data, 14 | load_img, 15 | math_img, 16 | new_img_like, 17 | resample_to_img, 18 | ) 19 | from nilearn.masking import compute_multi_epi_mask 20 | from scipy.ndimage import binary_closing 21 | 22 | from giga_connectome.atlas import ATLAS_SETTING_TYPE, resample_atlas_collection 23 | from giga_connectome.logger import gc_logger 24 | from giga_connectome import utils 25 | 26 | gc_log = gc_logger() 27 | 28 | 29 | def generate_gm_mask_atlas( 30 | atlases_dir: Path, 31 | atlas: ATLAS_SETTING_TYPE, 32 | template: str, 33 | masks: list[BIDSImageFile], 34 | ) -> tuple[Path, list[Path]]: 35 | """ """ 36 | # check masks; isolate this part and make sure to make it a validate 37 | # templateflow template with a config file 38 | subject, _, _ = utils.parse_bids_name(masks[0].path) 39 | subject_mask_dir = atlases_dir / subject / "func" 40 | subject_mask_dir.mkdir(exist_ok=True, parents=True) 41 | target_subject_mask_file_name: str = utils.output_filename( 42 | source_file=masks[0].path, 43 | atlas="", 44 | suffix="mask", 45 | 
extension="nii.gz", 46 | strategy="", 47 | atlas_desc="", 48 | ) 49 | target_subject_seg_file_names: list[str] = [ 50 | utils.output_filename( 51 | source_file=masks[0].path, 52 | atlas=atlas["name"], 53 | suffix=atlas["type"], 54 | extension="nii.gz", 55 | strategy="", 56 | atlas_desc=atlas_desc, 57 | ) 58 | for atlas_desc in atlas["file_paths"] 59 | ] 60 | target_subject_mask, target_subject_seg = _check_pregenerated_masks( 61 | subject_mask_dir, 62 | target_subject_mask_file_name, 63 | target_subject_seg_file_names, 64 | ) 65 | 66 | if not target_subject_mask: 67 | # grey matter group mask is only supplied in MNI152NLin2009c(A)sym 68 | subject_mask_nii = generate_subject_gm_mask( 69 | [m.path for m in masks], "MNI152NLin2009cAsym" 70 | ) 71 | nib.save( 72 | subject_mask_nii, subject_mask_dir / target_subject_mask_file_name 73 | ) 74 | else: 75 | subject_mask_nii = load_img( 76 | subject_mask_dir / target_subject_mask_file_name 77 | ) 78 | 79 | if not target_subject_seg or not target_subject_mask: 80 | # resample if the grey matter mask was not generated 81 | # or the atlas was not present 82 | subject_seg_niis = resample_atlas_collection( 83 | target_subject_seg_file_names, 84 | atlas, 85 | subject_mask_dir, 86 | subject_mask_nii, 87 | ) 88 | else: 89 | subject_seg_niis = [ 90 | subject_mask_dir / i for i in target_subject_seg_file_names 91 | ] 92 | 93 | return subject_mask_dir / target_subject_mask_file_name, subject_seg_niis 94 | 95 | 96 | def generate_subject_gm_mask( 97 | imgs: Sequence[Path | str | Nifti1Image], 98 | template: str = "MNI152NLin2009cAsym", 99 | templateflow_dir: Path | None = None, 100 | n_iter: int = 2, 101 | ) -> Nifti1Image: 102 | """ 103 | Generate a subject EPI grey matter mask, and overlaid with a MNI grey 104 | matter template. 105 | The subject EPI mask will ensure the signal extraction is from the most 106 | overlapping voxels for all scans of the subject. 107 | 108 | Parameters 109 | ---------- 110 | imgs : list of Path or str or Nifti1Image 111 | list of EPI masks or preprocessed BOLD data. 112 | 113 | template : str, Default = MNI152NLin2009cAsym 114 | Template name from TemplateFlow to retrieve the grey matter template. 115 | This template should match the template for the EPI mask. 116 | 117 | templateflow_dir : None or pathlib.Path 118 | TemplateFlow directory. Default to None to download the directory, 119 | otherwise use the templateflow data saved at the given path. 120 | 121 | n_iter: int, optional, Default = 2 122 | Number of repetitions of dilation and erosion steps performed in 123 | scipy.ndimage.binary_closing function. 124 | 125 | Keyword Arguments 126 | ----------------- 127 | Used to filter the cirret 128 | See keyword arguments in templateflow.api module. 129 | 130 | Return 131 | ------ 132 | 133 | nibabel.nifti1.Nifti1Image 134 | EPI (grey matter) mask for the current group of subjects. 
135 | """ 136 | gc_log.debug(f"Found {len(imgs)} masks") 137 | if exclude := _check_mask_affine(imgs): 138 | imgs, __annotations__ = _get_consistent_masks(imgs, exclude) 139 | gc_log.debug(f"Remaining: {len(imgs)} masks") 140 | 141 | # templateflow environment setting to get around network issue 142 | if templateflow_dir and templateflow_dir.exists(): 143 | os.environ["TEMPLATEFLOW_HOME"] = str(templateflow_dir.resolve()) 144 | import templateflow 145 | 146 | # use default nilearn parameters to create the group epi mask 147 | group_epi_mask = compute_multi_epi_mask( 148 | imgs, 149 | lower_cutoff=0.2, 150 | upper_cutoff=0.85, 151 | connected=True, 152 | opening=False, # we should be using fMRIPrep masks 153 | threshold=0.5, 154 | target_affine=None, 155 | target_shape=None, 156 | exclude_zeros=False, 157 | n_jobs=1, 158 | memory=None, 159 | verbose=0, 160 | ) 161 | gc_log.info( 162 | f"Group EPI mask affine:\n{group_epi_mask.affine}" 163 | f"\nshape: {group_epi_mask.shape}" 164 | ) 165 | 166 | # load grey matter mask 167 | check_valid_template = re.match(r"MNI152NLin2009[ac][A]?[sS]ym", template) 168 | if not check_valid_template: 169 | raise ValueError( 170 | f"TemplateFlow does not supply template {template} " 171 | "with grey matter masks. Possible templates: " 172 | "MNI152NLin2009a*, MNI152NLin2009c*." 173 | ) 174 | # preprocessed data don't need high res 175 | # for MNI152NLin2009a* templates, only one resolution is available 176 | gm_res = "02" if template == "MNI152NLin2009cAsym" else "1" 177 | 178 | mni_gm_path = templateflow.api.get( 179 | template, 180 | raise_empty=True, 181 | label="GM", 182 | resolution=gm_res, 183 | ) 184 | 185 | mni_gm = resample_to_img( 186 | source_img=mni_gm_path, 187 | target_img=group_epi_mask, 188 | interpolation="continuous", 189 | ) 190 | # the following steps are take from 191 | # nilearn.images.fetch_icbm152_brain_gm_mask 192 | mni_gm_data = get_data(mni_gm) 193 | # this is a probalistic mask, getting one fifth of the values 194 | mni_gm_mask = (mni_gm_data > 0.2).astype("int8") 195 | mni_gm_mask = binary_closing(mni_gm_mask, iterations=n_iter) 196 | mni_gm_mask_img = new_img_like(mni_gm, mni_gm_mask) 197 | 198 | # now we combine both masks into one 199 | return math_img("img1 & img2", img1=group_epi_mask, img2=mni_gm_mask_img) 200 | 201 | 202 | def _get_consistent_masks( 203 | mask_imgs: Sequence[Path | str | Nifti1Image], exclude: list[int] 204 | ) -> tuple[list[Path | str | Any], list[str]]: 205 | """Create a list of masks that has the same affine. 206 | 207 | Parameters 208 | ---------- 209 | 210 | mask_imgs : 211 | The original list of functional masks 212 | 213 | exclude : 214 | list of index to exclude. 215 | 216 | Returns 217 | ------- 218 | list of str 219 | Functional masks with the same affine. 220 | 221 | list of str 222 | Identifiers of scans with a different affine. 223 | """ 224 | weird_mask_identifiers = [] 225 | odd_masks = np.array(mask_imgs)[np.array(exclude)] 226 | odd_masks = odd_masks.tolist() 227 | for odd_file in odd_masks: 228 | identifier = Path(odd_file).name.split("_space")[0] 229 | weird_mask_identifiers.append(identifier) 230 | cleaned_func_masks = list(set(mask_imgs) - set(odd_masks)) 231 | return cleaned_func_masks, weird_mask_identifiers 232 | 233 | 234 | def _check_mask_affine( 235 | mask_imgs: Sequence[Path | str | Nifti1Image], 236 | ) -> list[int] | None: 237 | """Given a list of input mask images, show the most common affine matrix 238 | and subjects with different values. 
239 | 240 | Parameters 241 | ---------- 242 | mask_imgs : :obj:`list` of Niimg-like objects 243 | See :ref:`extracting_data`. 244 | 3D or 4D EPI image with same affine. 245 | 246 | Returns 247 | ------- 248 | 249 | list or None 250 | Index of masks with odd affine matrix. Return None when all masks have 251 | the same affine matrix. 252 | """ 253 | # save all header and affine info in hashable type... 254 | header_info: dict[str, list[str]] = {"affine": []} 255 | key_to_header = {} 256 | for this_mask in mask_imgs: 257 | img = load_img(this_mask) 258 | affine = img.affine 259 | affine_hashable = str(affine) 260 | header_info["affine"].append(affine_hashable) 261 | if affine_hashable not in key_to_header: 262 | key_to_header[affine_hashable] = affine 263 | 264 | if isinstance(mask_imgs[0], Nifti1Image): 265 | # this looks janky, but this is for mypy to understand the type 266 | mask_arrays = np.array(np.arange(len(mask_imgs)).tolist()) 267 | else: 268 | mask_arrays = np.array(mask_imgs) 269 | 270 | # get most common values 271 | common_affine = max( 272 | set(header_info["affine"]), key=header_info["affine"].count 273 | ) 274 | gc_log.info( 275 | f"We found {len(set(header_info['affine']))} unique affine " 276 | f"matrices. The most common one is " 277 | f"{key_to_header[common_affine]}" 278 | ) 279 | odd_balls = set(header_info["affine"]) - {common_affine} 280 | if not odd_balls: 281 | return None 282 | 283 | exclude = [] 284 | for ob in odd_balls: 285 | ob_index = [ 286 | i for i, aff in enumerate(header_info["affine"]) if aff == ob 287 | ] 288 | gc_log.debug( 289 | "The following subjects has a different affine matrix " 290 | f"({key_to_header[ob]}) comparing to the most common value: " 291 | f"{mask_arrays[ob_index]}." 292 | ) 293 | exclude += ob_index 294 | gc_log.info( 295 | f"{len(exclude)} out of {len(mask_arrays)} has " 296 | "different affine matrix. Ignore when creating group mask." 297 | ) 298 | return sorted(exclude) 299 | 300 | 301 | def _check_pregenerated_masks( 302 | subject_mask_dir: Path, 303 | subject_mask_file_name: str, 304 | subject_seg_file_names: list[str], 305 | ) -> tuple[bool, bool]: 306 | """Check if the working directory is populated with needed files.""" 307 | # subject grey matter mask 308 | if target_subject_mask := ( 309 | subject_mask_dir / subject_mask_file_name 310 | ).exists(): 311 | gc_log.info( 312 | "Found pregenerated group level grey matter mask: " 313 | f"{subject_mask_dir / subject_mask_file_name}" 314 | ) 315 | 316 | # atlas 317 | all_exist = [ 318 | (subject_mask_dir / file_path).exists() 319 | for file_path in subject_seg_file_names 320 | ] 321 | if target_subject_seg := all(all_exist): 322 | gc_log.info( 323 | "Found resampled atlases:\n" 324 | f"{[filepath for filepath in subject_seg_file_names]} " 325 | f"in {subject_mask_dir}." 326 | "\nSkipping individual segmentation generation step." 327 | ) 328 | return target_subject_mask, target_subject_seg 329 | -------------------------------------------------------------------------------- /docs/source/outputs.md: -------------------------------------------------------------------------------- 1 | # Outputs 2 | 3 | The output of this app aims to follow the guideline 4 | of the [BIDS extension proposal 17 - Generic BIDS connectivity data schema](https://bids.neuroimaging.io/bep017). 5 | 6 | Metadata files content is described in this BIDS extension proposal. 
7 | 
8 | :::{note}
9 | A Brain Imaging Data Structure (BIDS) Extension Proposal (BEP) is a community-driven process to add a new modality or set of data types to the BIDS Specification.
10 | 
11 | Although BEPs are not an official part of the main BIDS Specification, they can still be adopted by users for data organisation and development.
12 | 
13 | `giga-connectome` aims to review the changes in a timely manner and update the outputs in accordance with the development of BEP017.
14 | 
15 | To learn more about BIDS Extension Proposals, [see this page](https://bids.neuroimaging.io/extensions/index.html).
16 | :::
17 | 
18 | ## Participant level
19 | 
20 | For each participant that was passed to `--participant-label`
21 | (or all participants under `bids_dir` if no `--participant-label` is passed),
22 | the outputs will be saved in `sub-<sub>/[ses-<ses>]/func`.
23 | 
24 | ### Data files
25 | 
26 | For each input image (that is, preprocessed BOLD time series)
27 | and each atlas, the following data files will be generated:
28 | 
29 | - a `[matches]_seg-{atlas}{atlas_description}_meas-PearsonCorrelation_desc-denoise{denoise_strategy}_relmat.tsv`
30 |   file that contains the correlation matrix between all the regions of the atlas
31 | - a `[matches]_seg-{atlas}{atlas_description}_desc-denoise{denoise_strategy}_timeseries.tsv`
32 |   file that contains the extracted timeseries for each region of the atlas
33 | 
34 | - `{atlas}` refers to the name of the atlas used (for example, `Schaefer2018`)
35 | - `{atlas_description}` refers to the subtype of the atlas used (for example, `100Parcels7Networks`)
36 | - `{denoise_strategy}` refers to the denoise strategy passed to the command line
37 | 
38 | ### Metadata
39 | 
40 | A JSON file is generated in the root of the output dataset (`meas-PearsonCorrelation_relmat.json`)
41 | that contains metadata applicable to all `relmat.tsv` files.
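Both the connectivity matrices and their JSON metadata are plain-text files, so they can be inspected with standard Python tools.
The following is a minimal, illustrative sketch for loading one correlation matrix and the shared metadata file;
the paths follow the naming pattern described above and should be adapted to your own output dataset.

```python
import json

import pandas as pd

# illustrative: the output directory that was passed to the BIDS app
output_dir = "outputs"

# one correlation matrix for a given image, atlas, and denoise strategy
relmat_file = (
    f"{output_dir}/sub-1/ses-timepoint1/func/"
    "sub-1_ses-timepoint1_task-probabilisticclassification_run-01_"
    "seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv"
)
relmat = pd.read_csv(relmat_file, sep="\t")

# metadata shared by all relmat.tsv files of the dataset
with open(f"{output_dir}/meas-PearsonCorrelation_relmat.json") as f:
    relmat_metadata = json.load(f)

print(relmat.shape)
print(relmat_metadata)
```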
42 | 43 | For each input image (that is, preprocessed BOLD time series) 44 | a `[matches]_desc-denoise{denoise_strategy}_timeseries.json` 45 | 46 | ### Example 47 | 48 | ``` 49 | ├── dataset_description.json 50 | ├── logs 51 | │   └── CITATION.md 52 | ├── meas-PearsonCorrelation_relmat.json 53 | ├── sub-1 54 | │   ├── ses-timepoint1 55 | │   │   └── func 56 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_desc-denoiseSimple_timeseries.json 57 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 58 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 59 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 60 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 61 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 62 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 63 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_desc-denoiseSimple_timeseries.json 64 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 65 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 66 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 67 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 68 | │   │   ├── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 69 | │   │   └── sub-1_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 70 | │   └── ses-timepoint2 71 | │   └── func 72 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_desc-denoiseSimple_timeseries.json 73 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 74 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 75 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 76 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 77 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 78 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 79 | │      ├── 
sub-1_ses-timepoint2_task-probabilisticclassification_run-02_desc-denoiseSimple_timeseries.json 80 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 81 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 82 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 83 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 84 | │      ├── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 85 | │      └── sub-1_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 86 | └── sub-2 87 |    ├── ses-timepoint1 88 |    │   └── func 89 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_desc-denoiseSimple_timeseries.json 90 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 91 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 92 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 93 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 94 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 95 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 96 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_desc-denoiseSimple_timeseries.json 97 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 98 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv 99 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 100 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html 101 |    │   ├── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv 102 |    │   └── sub-2_ses-timepoint1_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv 103 |    └── ses-timepoint2 104 |    └── func 105 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_desc-denoiseSimple_timeseries.json 106 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html 107 |       ├── 
sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv
108 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv
109 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html
110 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv
111 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-01_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv
112 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_desc-denoiseSimple_timeseries.json
113 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_report.html
114 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_desc-denoiseSimple_timeseries.tsv
115 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018100Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv
116 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_report.html
117 |       ├── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_desc-denoiseSimple_timeseries.tsv
118 |       └── sub-2_ses-timepoint2_task-probabilisticclassification_run-02_seg-Schaefer2018200Parcels7Networks_meas-PearsonCorrelation_desc-denoiseSimple_relmat.tsv
119 | ```
120 | 
121 | 
122 | ## Atlases
123 | 
124 | The merged grey matter masks per subject and the atlases resampled to the individual EPI data are in the directory specified with `--atlases_dir`.
125 | 
126 | For each subject and each atlas, the following data files will be generated:
127 | 
128 | - a `sub-<sub>_space-MNI152NLin2009cAsym_res-2_label-GM_mask.nii.gz` file:
129 |   the grey matter mask in the given template space for a given subject,
130 |   created by merging all the EPI brain masks of that subject
131 |   and intersecting the result with the grey matter mask of the template space.
132 | - `sub-<sub>_seg-{atlas}{atlas_description}_[dseg|probseg].nii.gz`
133 |   files, where the atlas was resampled to each individual
134 |   `sub-<sub>_space-MNI152NLin2009cAsym_res-2_label-GM_mask.nii.gz` mask.
135 | 136 | - `{atlas}` refers to the name of the atlas used (for example, `Schaefer2018`) 137 | - `{atlas_description}` refers to the sub type of atlas used (for example, `100Parcels7Networks`) 138 | 139 | ### Example 140 | 141 | ``` 142 | └── sub-1 143 | └── func 144 | ├── sub-1_seg-Schaefer2018100Parcels7Networks_dseg.nii.gz 145 | ├── sub-1_seg-Schaefer2018200Parcels7Networks_dseg.nii.gz 146 | ├── sub-1_seg-Schaefer2018300Parcels7Networks_dseg.nii.gz 147 | ├── sub-1_seg-Schaefer2018400Parcels7Networks_dseg.nii.gz 148 | ├── sub-1_seg-Schaefer2018500Parcels7Networks_dseg.nii.gz 149 | ├── sub-1_seg-Schaefer2018600Parcels7Networks_dseg.nii.gz 150 | ├── sub-1_seg-Schaefer2018800Parcels7Networks_dseg.nii.gz 151 | └── sub-1_space-MNI152NLin2009cAsym_res-2_label-GM_mask.nii.gz 152 | ``` 153 | -------------------------------------------------------------------------------- /paper/paper.bib: -------------------------------------------------------------------------------- 1 | @software{Nilearn, 2 | author = {{Nilearn contributors}}, 3 | license = {BSD-4-Clause}, 4 | title = {{nilearn}}, 5 | year = {2024}, 6 | url = {https://github.com/nilearn/nilearn}, 7 | doi = {10.5281/zenodo.8397156} 8 | } 9 | 10 | @article{fmriprep, 11 | title = {{fMRIPrep}: a robust preprocessing pipeline for functional {MRI}}, 12 | volume = {16}, 13 | issn = {1548-7091, 1548-7105}, 14 | doi = {10.1038/s41592-018-0235-4}, 15 | number = {1}, 16 | journal = {Nature Methods}, 17 | author = {Esteban, Oscar and Markiewicz, Christopher J. and Blair, Ross W. and Moodie, Craig A. and Isik, A. Ilkay and Erramuzpe, Asier and Kent, James D. and Goncalves, Mathias and DuPre, Elizabeth and Snyder, Madeleine and Oya, Hiroyuki and Ghosh, Satrajit S. and Wright, Jessey and Durnez, Joke and Poldrack, Russell A. and Gorgolewski, Krzysztof J.}, 18 | month = jan, 19 | year = {2019}, 20 | pages = {111--116}, 21 | } 22 | 23 | @article{bidsapp, 24 | title = {{BIDS} apps: Improving ease of use, accessibility, and reproducibility of neuroimaging data analysis methods}, 25 | volume = {13}, 26 | issn = {1553-7358}, 27 | doi = {10.1371/journal.pcbi.1005209}, 28 | number = {3}, 29 | journal = {PLOS Computational Biology}, 30 | author = {Gorgolewski, Krzysztof J. and Alfaro-Almagro, Fidel and Auer, Tibor and Bellec, Pierre and Capotă, Mihai and Chakravarty, M. Mallar and Churchill, Nathan W. and Cohen, Alexander Li and Craddock, R. Cameron and Devenyi, Gabriel A. and Eklund, Anders and Esteban, Oscar and Flandin, Guillaume and Ghosh, Satrajit S. and Guntupalli, J. Swaroop and Jenkinson, Mark and Keshavan, Anisha and Kiar, Gregory and Liem, Franziskus and Raamana, Pradeep Reddy and Raffelt, David and Steele, Christopher J. and Quirion, Pierre-Olivier and Smith, Robert E. and Strother, Stephen C. and Varoquaux, Gaël and Wang, Yida and Yarkoni, Tal and Poldrack, Russell A.}, 31 | month = mar, 32 | year = {2017}, 33 | pages = {e1005209}, 34 | } 35 | 36 | @article{bids, 37 | title = {The brain imaging data structure, a format for organizing and describing outputs of neuroimaging experiments}, 38 | volume = {3}, 39 | issn = {2052-4463}, 40 | doi = {10.1038/sdata.2016.44}, 41 | number = {1}, 42 | journal = {Scientific Data}, 43 | author = {Gorgolewski, Krzysztof J. and Auer, Tibor and Calhoun, Vince D. and Craddock, R. Cameron and Das, Samir and Duff, Eugene P. and Flandin, Guillaume and Ghosh, Satrajit S. and Glatard, Tristan and Halchenko, Yaroslav O. and Handwerker, Daniel A. 
and Hanke, Michael and Keator, David and Li, Xiangrui and Michael, Zachary and Maumet, Camille and Nichols, B. Nolan and Nichols, Thomas E. and Pellman, John and Poline, Jean-Baptiste and Rokem, Ariel and Schaefer, Gunnar and Sochat, Vanessa and Triplett, William and Turner, Jessica A. and Varoquaux, Gaël and Poldrack, Russell A.}, 44 | month = jun, 45 | year = {2016}, 46 | pages = {160044}, 47 | } 48 | 49 | @article{wang_continuous_2024, 50 | title = {Continuous evaluation of denoising strategies in resting-state {fMRI} connectivity using {fMRIPrep} and {Nilearn}}, 51 | volume = {20}, 52 | issn = {1553-7358}, 53 | doi = {10.1371/journal.pcbi.1011942}, 54 | number = {3}, 55 | urldate = {2024-03-28}, 56 | journal = {PLOS Computational Biology}, 57 | author = {Wang, Hao-Ting and Meisler, Steven L. and Sharmarke, Hanad and Clarke, Natasha and Gensollen, Nicolas and Markiewicz, Christopher J. and Paugam, François and Thirion, Bertrand and Bellec, Pierre}, 58 | month = mar, 59 | year = {2024}, 60 | pages = {e1011942}, 61 | } 62 | 63 | @article{templateflow, 64 | title = {{TemplateFlow}: {FAIR}-sharing of multi-scale, multi-species brain models}, 65 | volume = {19}, 66 | issn = {1548-7105}, 67 | shorttitle = {{TemplateFlow}}, 68 | doi = {10.1038/s41592-022-01681-2}, 69 | language = {en}, 70 | number = {12}, 71 | journal = {Nature Methods}, 72 | author = {Ciric, Rastko and Thompson, William H. and Lorenz, Romy and Goncalves, Mathias and MacNicol, Eilidh E. and Markiewicz, Christopher J. and Halchenko, Yaroslav O. and Ghosh, Satrajit S. and Gorgolewski, Krzysztof J. and Poldrack, Russell A. and Esteban, Oscar}, 73 | month = dec, 74 | year = {2022}, 75 | pages = {1568--1571}, 76 | } 77 | 78 | @article{makris_decreased_2006, 79 | title = {Decreased volume of left and total anterior insular lobule in schizophrenia}, 80 | volume = {83}, 81 | issn = {0920-9964}, 82 | doi = {10.1016/j.schres.2005.11.020}, 83 | number = {2-3}, 84 | journal = {Schizophrenia Research}, 85 | author = {Makris, Nikos and Goldstein, Jill M. and Kennedy, David and Hodge, Steven M. and Caviness, Verne S. and Faraone, Stephen V. and Tsuang, Ming T. and Seidman, Larry J.}, 86 | month = apr, 87 | year = {2006}, 88 | pmid = {16448806}, 89 | pages = {155--171}, 90 | } 91 | 92 | @article{goldstein_hypothalamic_2007, 93 | title = {Hypothalamic abnormalities in schizophrenia: sex effects and genetic vulnerability}, 94 | volume = {61}, 95 | issn = {0006-3223}, 96 | shorttitle = {Hypothalamic abnormalities in schizophrenia}, 97 | doi = {10.1016/j.biopsych.2006.06.027}, 98 | language = {eng}, 99 | number = {8}, 100 | journal = {Biological Psychiatry}, 101 | author = {Goldstein, Jill M. and Seidman, Larry J. and Makris, Nikos and Ahern, Todd and O'Brien, Liam M. and Caviness, Verne S. and Kennedy, David N. and Faraone, Stephen V. and Tsuang, Ming T.}, 102 | month = apr, 103 | year = {2007}, 104 | pmid = {17046727}, 105 | pages = {935--945}, 106 | } 107 | 108 | @article{frazier_structural_2005, 109 | title = {Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder}, 110 | volume = {162}, 111 | issn = {0002-953X}, 112 | doi = {10.1176/appi.ajp.162.7.1256}, 113 | language = {eng}, 114 | number = {7}, 115 | journal = {The American Journal of Psychiatry}, 116 | author = {Frazier, Jean A. and Chiu, Sufen and Breeze, Janis L. and Makris, Nikos and Lange, Nicholas and Kennedy, David N. and Herbert, Martha R. and Bent, Eileen K. and Koneru, Vamsi K. and Dieterich, Megan E. and Hodge, Steven M. 
and Rauch, Scott L. and Grant, P. Ellen and Cohen, Bruce M. and Seidman, Larry J. and Caviness, Verne S. and Biederman, Joseph}, 117 | month = jul, 118 | year = {2005}, 119 | pmid = {15994707}, 120 | pages = {1256--1265}, 121 | } 122 | 123 | @article{desikan_automated_2006, 124 | title = {An automated labeling system for subdividing the human cerebral cortex on {MRI} scans into gyral based regions of interest}, 125 | volume = {31}, 126 | issn = {1053-8119}, 127 | doi = {10.1016/j.neuroimage.2006.01.021}, 128 | language = {eng}, 129 | number = {3}, 130 | journal = {NeuroImage}, 131 | author = {Desikan, Rahul S. and Ségonne, Florent and Fischl, Bruce and Quinn, Brian T. and Dickerson, Bradford C. and Blacker, Deborah and Buckner, Randy L. and Dale, Anders M. and Maguire, R. Paul and Hyman, Bradley T. and Albert, Marilyn S. and Killiany, Ronald J.}, 132 | month = jul, 133 | year = {2006}, 134 | pmid = {16530430}, 135 | pages = {968--980}, 136 | } 137 | 138 | @article{schaefer_local-global_2018, 139 | title = {Local-Global Parcellation of the Human Cerebral Cortex from Intrinsic Functional Connectivity MRI}, 140 | volume = {28}, 141 | issn = {1460-2199}, 142 | doi = {10.1093/cercor/bhx179}, 143 | language = {eng}, 144 | number = {9}, 145 | journal = {Cerebral Cortex (New York, N.Y.: 1991)}, 146 | author = {Schaefer, Alexander and Kong, Ru and Gordon, Evan M. and Laumann, Timothy O. and Zuo, Xi-Nian and Holmes, Avram J. and Eickhoff, Simon B. and Yeo, B. T. Thomas}, 147 | month = sep, 148 | year = {2018}, 149 | pmid = {28981612}, 150 | pmcid = {PMC6095216}, 151 | pages = {3095--3114}, 152 | } 153 | 154 | @article{urchs_mist_2019, 155 | title = {{MIST}: A multi-resolution parcellation of functional brain networks}, 156 | volume = {1}, 157 | issn = {2515-5059}, 158 | shorttitle = {{MIST}}, 159 | doi = {10.12688/mniopenres.12767.2}, 160 | language = {en}, 161 | urldate = {2024-03-28}, 162 | journal = {MNI Open Research}, 163 | author = {Urchs, Sebastian and Armoza, Jonathan and Moreau, Clara and Benhajali, Yassine and St-Aubin, Jolène and Orban, Pierre and Bellec, Pierre}, 164 | month = mar, 165 | year = {2019}, 166 | pages = {3}, 167 | } 168 | 169 | @article{dadi_fine-grain_2020, 170 | title = {Fine-grain atlases of functional modes for {fMRI} analysis}, 171 | volume = {221}, 172 | issn = {1053-8119}, 173 | doi = {10.1016/j.neuroimage.2020.117126}, 174 | urldate = {2024-03-28}, 175 | journal = {NeuroImage}, 176 | author = {Dadi, Kamalaker and Varoquaux, Gaël and Machlouzarides-Shalit, Antonia and Gorgolewski, Krzysztof J. and Wassermann, Demian and Thirion, Bertrand and Mensch, Arthur}, 177 | month = nov, 178 | year = {2020}, 179 | pages = {117126}, 180 | } 181 | 182 | @article{clarke_2024, 183 | title = {Investigating the convergence of resting-state functional connectivity profiles in {Alzheimer}'s disease with neuropsychiatric symptoms and schizophrenia}, 184 | author = {Clarke, Natasha and Moreau, Clara A. 
and Harvey, Annabelle and Wang, Hao-Ting and Orban, Pierre and Zahinoor, Ismail and Jacquemont, Sébastien and Bellec, Pierre}, 185 | year = {2024}, 186 | journal = {Brain Communications}, 187 | doi = {10.17605/OSF.IO/T7JKZ}, 188 | url = {http://osf.io/t7jkz}, 189 | } 190 | 191 | 192 | @article{HALFpipe, 193 | title = {{ENIGMA} {HALFpipe}: Interactive, reproducible, and efficient analysis for resting-state and task-based {fMRI} data}, 194 | volume = {43}, 195 | issn = {1097-0193}, 196 | doi = {10.1002/hbm.25829}, 197 | language = {eng}, 198 | number = {9}, 199 | journal = {Human Brain Mapping}, 200 | author = {Waller, Lea and Erk, Susanne and Pozzi, Elena and Toenders, Yara J. and Haswell, Courtney C. and Büttner, Marc and Thompson, Paul M. and Schmaal, Lianne and Morey, Rajendra A. and Walter, Henrik and Veer, Ilya M.}, 201 | month = jun, 202 | year = {2022}, 203 | pmid = {35305030}, 204 | pmcid = {PMC9120555}, 205 | pages = {2727--2742}, 206 | } 207 | 208 | @misc{cpac, 209 | title = {Moving Beyond Processing and Analysis-Related Variation in Neuroscience}, 210 | doi = {10.1101/2021.12.01.470790}, 211 | language = {en}, 212 | urldate = {2024-03-28}, 213 | publisher = {bioRxiv}, 214 | author = {Li, Xinhui and Esper, Nathalia Bianchini and Ai, Lei and Giavasis, Steve and Jin, Hecheng and Feczko, Eric and Xu, Ting and Clucas, Jon and Franco, Alexandre and Heinsfeld, Anibal Sólon and Adebimpe, Azeez and Vogelstein, Joshua T. and Yan, Chao-Gan and Esteban, Oscar and Poldrack, Russell A. and Craddock, Cameron and Fair, Damien and Satterthwaite, Theodore and Kiar, Gregory and Milham, Michael P.}, 215 | month = jan, 216 | year = {2024}, 217 | } 218 | 219 | @article{conn, 220 | title = {Conn: a functional connectivity toolbox for correlated and anticorrelated brain networks}, 221 | volume = {2}, 222 | issn = {2158-0022}, 223 | doi = {10.1089/brain.2012.0073}, 224 | language = {eng}, 225 | number = {3}, 226 | journal = {Brain Connectivity}, 227 | author = {Whitfield-Gabrieli, Susan and Nieto-Castanon, Alfonso}, 228 | year = {2012}, 229 | pmid = {22642651}, 230 | pages = {125--141}, 231 | } 232 | 233 | @misc{xcp-d, 234 | title = {{XCP}-{D}: A Robust Pipeline for the post-processing of {fMRI} data}, 235 | shorttitle = {{XCP}-{D}}, 236 | doi = {10.1101/2023.11.20.567926}, 237 | language = {en}, 238 | urldate = {2024-03-28}, 239 | publisher = {bioRxiv}, 240 | author = {Mehta, Kahini and Salo, Taylor and Madison, Thomas and Adebimpe, Azeez and Bassett, Danielle S. and Bertolero, Max and Cieslak, Matthew and Covitz, Sydney and Houghton, Audrey and Keller, Arielle S. and Luo, Audrey and Miranda-Dominguez, Oscar and Nelson, Steve M. and Shafiei, Golia and Shanmugan, Sheila and Shinohara, Russell T. and Sydnor, Valerie J. and Feczko, Eric and Fair, Damien A. and Satterthwaite, Theodore D.}, 241 | month = nov, 242 | year = {2023}, 243 | } 244 | 245 | @article{ABIDE, 246 | title = {The Autism Brain Imaging Data Exchange: Towards Large-Scale Evaluation of the Intrinsic Brain Architecture in Autism}, 247 | volume = {19}, 248 | issn = {1359-4184}, 249 | doi = {10.1038/mp.2013.78}, 250 | number = {6}, 251 | journal = {Molecular psychiatry}, 252 | author = {Di Martino, Adriana and Yan, Chao-Gan and Li, Qingyang and Denio, Erin and Castellanos, Francisco X. and Alaerts, Kaat and Anderson, Jeffrey S. and Assaf, Michal and Bookheimer, Susan Y. and Dapretto, Mirella and Deen, Ben and Delmonte, Sonja and Dinstein, Ilan and Ertl-Wagner, Birgit and Fair, Damien A. and Gallagher, Louise and Kennedy, Daniel P. 
and Keown, Christopher L. and Keysers, Christian and Lainhart, Janet E. and Lord, Catherine and Luna, Beatriz and Menon, Vinod and Minshew, Nancy and Monk, Christopher S. and Mueller, Sophia and Müller, Ralph-Axel and Nebel, Mary Beth and Nigg, Joel T. and O’Hearn, Kirsten and Pelphrey, Kevin A. and Peltier, Scott J. and Rudie, Jeffrey D. and Sunaert, Stefan and Thioux, Marc and Tyszka, J. Michael and Uddin, Lucina Q. and Verhoeven, Judith S. and Wenderoth, Nicole and Wiggins, Jillian L. and Mostofsky, Stewart H. and Milham, Michael P.}, 253 | month = jun, 254 | year = {2014}, 255 | pmid = {23774715}, 256 | pmcid = {PMC4162310}, 257 | pages = {659--667}, 258 | } 259 | 260 | @article{ukbiobank, 261 | title = {{UK} {Biobank}: An Open Access Resource for Identifying the Causes of a Wide Range of Complex Diseases of Middle and Old Age}, 262 | volume = {12}, 263 | issn = {1549-1676}, 264 | doi = {10.1371/journal.pmed.1001779}, 265 | language = {en}, 266 | number = {3}, 267 | journal = {PLOS Medicine}, 268 | author = {Sudlow, Cathie and Gallacher, John and Allen, Naomi and Beral, Valerie and Burton, Paul and Danesh, John and Downey, Paul and Elliott, Paul and Green, Jane and Landray, Martin and Liu, Bette and Matthews, Paul and Ong, Giok and Pell, Jill and Silman, Alan and Young, Alan and Sprosen, Tim and Peakman, Tim and Collins, Rory}, 269 | month = mar, 270 | year = {2015}, 271 | pages = {e1001779}, 272 | } 273 | 274 | @article{adhd200, 275 | title = {The {ADHD}-200 consortium: a model to advance the translational potential of neuroimaging in clinical neuroscience}, 276 | volume = {6}, 277 | issn = {1662-5137}, 278 | doi = {10.3389/fnsys.2012.00062}, 279 | journal = {Frontiers in Systems Neuroscience}, 280 | author = {Milham, Michael P. Ph D. and Fair, Damien PA-C. and Mennes, Maarten Ph D. and Mostofsky, Stewart H. M. D.}, 281 | month = sep, 282 | year = {2012} 283 | } 284 | -------------------------------------------------------------------------------- /docs/source/usage.md: -------------------------------------------------------------------------------- 1 | # Usage Notes 2 | 3 | ## Command line interface 4 | 5 | ```{eval-rst} 6 | .. argparse:: 7 | :prog: giga_connectome 8 | :module: giga_connectome.run 9 | :func: global_parser 10 | ``` 11 | 12 | ## General usage 13 | 14 | Here is an example of using the preset denoise strategies and atlases. 15 | 16 | ### Download the example dataset 17 | 18 | We created a downsampled version of Open Neuro dataset 19 | [ds000017](https://openneuro.org/datasets/ds000017/versions/00001) 20 | hosted on [Zenodo page](https://zenodo.org/records/8091903). 21 | You can download this dataset to understand how to use different options of `giga_connectome`. 22 | 23 | You can download the data to any preferred location. 24 | For the purpose of this tutorial, we will save the downloaded dataset under `giga_connectome/test_data` 25 | in your home directory (`~/giga_connectome/test_data`). 
26 | 
27 | Here are the download instructions for Linux/Mac users:
28 | 
29 | ```bash
30 | mkdir -p ~/giga_connectome/test_data
31 | wget --retry-connrefused \
32 |     --waitretry=5 \
33 |     --read-timeout=20 \
34 |     --timeout=15 \
35 |     -t 0 \
36 |     -q \
37 |     -O ~/giga_connectome/test_data/ds000017.tar.gz \
38 |     "https://zenodo.org/record/8091903/files/ds000017-fmriprep22.0.1-downsampled-nosurface.tar.gz?download=1"
39 | tar -xzf ~/giga_connectome/test_data/ds000017.tar.gz -C ~/giga_connectome/test_data/
40 | rm ~/giga_connectome/test_data/ds000017.tar.gz
41 | ```
42 | 
43 | Alternatively, you can go to the [Zenodo page](https://zenodo.org/records/8091903), download
44 | `ds000017-fmriprep22.0.1-downsampled-nosurface.tar.gz`, and uncompress it.
45 | 
46 | Now you will find the dataset at `~/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface`.
47 | Under this directory, you will find a full fMRIPrep output layout of two subjects.
48 | 
49 | ### Running `giga_connectome` with container
50 | 
51 | Given that you have already [installed the container](./installation.md), you can run `giga_connectome` with
52 | [apptainer](https://apptainer.org/) as follows:
53 | 
54 | ```bash
55 | DATA=~/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
56 | mkdir -p outputs/
57 | 
58 | apptainer run \
59 |     --bind ${DATA}:/inputs \
60 |     --bind ./outputs:/outputs \
61 |     --bind ./outputs/atlases:/atlases \
62 |     giga_connectome \
63 |     /inputs \
64 |     /outputs \
65 |     participant \
66 |     -a /atlases \
67 |     --atlas Schaefer2018 \
68 |     --denoise-strategy simple \
69 |     --reindex-bids
70 | ```
71 | 
72 | For Docker:
73 | 
74 | ```bash
75 | DATA=${HOME}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
76 | mkdir -p outputs/
77 | 
78 | docker run --rm \
79 |     -v ${DATA}:/test_data \
80 |     -v ./outputs:/outputs \
81 |     -v ./outputs/atlases:/atlases \
82 |     bids/giga_connectome \
83 |     /test_data \
84 |     /outputs \
85 |     participant \
86 |     -a /atlases \
87 |     --atlas Schaefer2018 \
88 |     --denoise-strategy simple \
89 |     --reindex-bids
90 | ```
91 | 
92 | Now you can navigate the outputs under `outputs`.
93 | 
94 | ## Advanced: Using customised configuration files for denoising strategy and atlas
95 | 
96 | Aside from the preset strategies and atlases, users can supply their own for further customisation.
97 | We encourage users to use the container version of the BIDS app, hence all documentation below reflects the usage of the container.
98 | Users can use the preset templates as examples for creating their own configuration files.
99 | This section will walk through the details of the configuration files and the extra steps needed.
100 | 
101 | All presets can be found in [`giga_connectome/data`](https://github.com/bids-apps/giga_connectome/tree/main/giga_connectome/data).
102 | 
103 | ### Denoising strategy
104 | 
105 | 1. Create the configuration file.
106 | 
107 | The tool uses `nilearn.interfaces.fmriprep.load_confounds` and `nilearn.interfaces.fmriprep.load_confounds_strategy`
108 | to retrieve confounds.
109 | 
110 | In a `json` file, define the customised strategy in the following format:
111 | 
112 | ```
113 | {
114 |     "name": "<name of the strategy>",
115 |     "function": "<load_confounds or load_confounds_strategy>",
116 |     "parameters": {
117 |         "<argument>": "<value>",
118 |         ....
119 |     }
120 | }
121 | ```
122 | 
123 | See examples in [`giga_connectome/data/denoise_strategy`](https://github.com/bids-apps/giga_connectome/tree/main/giga_connectome/data/denoise_strategy).
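Conceptually, the `function` field names which of the two nilearn loaders to call, and the content of `parameters` is passed to that function as keyword arguments.
The snippet below is a minimal, illustrative sketch of that mapping; the configuration file name and the BOLD image path are placeholders, and it assumes a valid fMRIPrep output with its confounds files next to the image.

```python
import json

from nilearn.interfaces.fmriprep import load_confounds, load_confounds_strategy

# illustrative: a strategy file written in the format shown above
with open("denoise_config.json") as f:
    strategy = json.load(f)

# illustrative: one preprocessed BOLD image from an fMRIPrep dataset
bold_path = "sub-1_task-rest_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz"

# the "function" field selects the loader; "parameters" become keyword arguments
loader = (
    load_confounds_strategy
    if strategy["function"] == "load_confounds_strategy"
    else load_confounds
)
confounds, sample_mask = loader(bold_path, **strategy["parameters"])
print(confounds.shape)
```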
124 | 
125 | In the following section, we will use a customised denoising strategy (5 anatomical CompCor components, 6 basic motion parameters, high-pass filter, and global signal).
126 | Run the following in bash to create the denoising configuration file.
127 | 
128 | ```bash
129 | mkdir ${HOME}/customised_denoise
130 | DENOISE_CONFIG=${HOME}/customised_denoise/denoise_config.json
131 | 
132 | # create the denoise configuration file
133 | cat << EOF > ${DENOISE_CONFIG}
134 | {
135 |     "name": "custom_compcor",
136 |     "function": "load_confounds",
137 |     "parameters": {
138 |         "strategy": ["high_pass", "motion", "compcor", "global_signal"],
139 |         "motion": "basic",
140 |         "n_compcor": 5,
141 |         "compcor": "anat_combined",
142 |         "global_signal": "basic",
143 |         "demean": true
144 |     }
145 | }
146 | EOF
147 | ```
148 | 
149 | 2. Mount the path to the configuration file to the container and pass the **mounted path** to `--denoise-strategy`.
150 | 
151 | An example using Apptainer, with data downloaded as described in the [previous section](#download-the-example-dataset):
152 | 
153 | ```bash
154 | # set paths for the input data, outputs, and the denoise configuration
155 | mkdir ${HOME}/customised_denoise/outputs
156 | DATA=${HOME}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
157 | OUTPUT_DIR=${HOME}/customised_denoise/outputs
158 | ATLASES_DIR=${HOME}/customised_denoise/outputs/atlas
159 | DENOISE_CONFIG=${HOME}/customised_denoise/denoise_config.json
160 | 
161 | GIGA_CONNECTOME=${HOME}/giga-connectome.simg # assuming the container is created in $HOME
162 | 
163 | apptainer run \
164 |     --bind ${DATA}:/data/input \
165 |     --bind ${OUTPUT_DIR}:/data/output \
166 |     --bind ${ATLASES_DIR}:/data/atlases \
167 |     --bind ${DENOISE_CONFIG}:/data/denoise_config.json \
168 |     ${GIGA_CONNECTOME} \
169 |     -a /data/atlases \
170 |     --atlas Schaefer2018 \
171 |     --denoise-strategy /data/denoise_config.json \
172 |     --reindex-bids \
173 |     /data/input \
174 |     /data/output \
175 |     participant
176 | ```
177 | 
178 | For Docker:
179 | 
180 | ```bash
181 | # set paths for the input data, outputs, and the denoise configuration
182 | mkdir ${HOME}/customised_denoise/outputs
183 | DATA=${HOME}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
184 | OUTPUT_DIR=${HOME}/customised_denoise/outputs
185 | ATLASES_DIR=${HOME}/customised_denoise/outputs/atlas
186 | DENOISE_CONFIG=${HOME}/customised_denoise/denoise_config.json
187 | 
188 | docker run --rm \
189 |     -v ${DATA}:/data/input \
190 |     -v ${OUTPUT_DIR}:/data/output \
191 |     -v ${ATLASES_DIR}:/data/atlases \
192 |     -v ${DENOISE_CONFIG}:/data/denoise_config.json \
193 |     bids/giga_connectome:unstable \
194 |     /data/input \
195 |     /data/output \
196 |     participant \
197 |     -a /data/atlases \
198 |     --reindex-bids \
199 |     --atlas Schaefer2018 \
200 |     --denoise-strategy /data/denoise_config.json
201 | ```
202 | 
203 | ### Atlas
204 | 
205 | 1. Organise the atlas according to the [TemplateFlow](https://www.templateflow.org/python-client/0.7.1/naming.html) convention.
206 | 
207 | :::{warning}
208 | `giga-connectome` and its upstream project `nilearn` have not been explicitly tested on non-standard templates, such as templates compiled from specific datasets or individual templates.
209 | :::
210 | 
211 | A minimal set up should look like this:
212 | 
213 | ```
214 | /path/to/my_atlas/
215 | └── tpl-CustomisedTemplate/  # template directory of a valid template name
216 |     ├── tpl-CustomisedTemplate_res-02_atlas-coolatlas_desc-256dimensions_probseg.nii.gz
217 |     ├── tpl-CustomisedTemplate_res-02_atlas-coolatlas_desc-512dimensions_probseg.nii.gz
218 |     └── tpl-CustomisedTemplate_res-02_atlas-coolatlas_desc-64dimensions_probseg.nii.gz
219 | ```
220 | 
221 | 2. Update your TemplateFlow directory while you have a network connection.
222 | 
223 | We will store our customised atlas under `${HOME}/customised_atlas/templateflow`.
224 | This is an extremely important step in order to run the BIDS-app correctly without a network connection.
225 | 
226 | ```bash
227 | mkdir -p ${HOME}/customised_atlas/templateflow
228 | python3 -c "import os; from pathlib import Path; os.environ['TEMPLATEFLOW_HOME'] = f'{Path.home()}/customised_atlas/templateflow'; from templateflow.api import get; get(['MNI152NLin2009cAsym', 'MNI152NLin6Asym'])"
229 | ```
230 | 
231 | If your `CustomisedTemplate` is an existing TemplateFlow template, it should be added to the above line for download.
232 | For example, if you have a template in `MNI152NLin2009aAsym`:
233 | 
234 | ```bash
235 | mkdir -p ${HOME}/customised_atlas/templateflow
236 | python3 -c "import os; from pathlib import Path; os.environ['TEMPLATEFLOW_HOME'] = f'{Path.home()}/customised_atlas/templateflow'; from templateflow.api import get; get(['MNI152NLin2009cAsym', 'MNI152NLin6Asym', 'MNI152NLin2009aAsym'])"
237 | ```
238 | 
239 | 3. Create your config file.
240 | 
241 | In a `json` file, define the customised atlas. We will use the atlas above as an example.
242 | It's very important to specify `templateflow_dir`, otherwise the BIDS-app will search the default TemplateFlow directory already downloaded in the container.
243 | `templateflow_dir` should be the target path you will be mounting to the BIDS-app container, rather than the location on your disk.
244 | 
245 | Example:
246 | ```
247 | {
248 |     "name": "<name of the atlas>", # for simplicity, one can use the 'atlas' field of the file name
249 |     "parameters": { # the fields in this section should all be present and consistent with your atlas, except 'desc'
250 |         "atlas": "coolatlas", # this should match the 'atlas' field of the file name
251 |         "template": "CustomisedTemplate",
252 |         "resolution": "02",
253 |         "suffix": "probseg"
254 |     },
255 |     "desc": [ # entity desc of the atlases
256 |         "64dimensions",
257 |         "256dimensions",
258 |         "512dimensions"],
259 |     "templateflow_dir": "/data/atlas" # the target path you will be mounting to the BIDS-app container
260 | }
261 | ```
262 | 
263 | See examples in [`giga_connectome/data/atlas/`](https://github.com/bids-apps/giga_connectome/tree/main/giga_connectome/data/atlas).
264 | 
265 | In the following section, we will generate an atlas on the test data using `nilearn.regions.Parcellations` with the ward clustering method.
266 | Run the following python code to generate the atlas and save it to `${HOME}/customised_atlas/templateflow` following the TemplateFlow convention.
267 | 
268 | ```python
269 | from pathlib import Path
270 | from nilearn.regions import Parcellations
271 | from nilearn.datasets import load_mni152_gm_mask
272 | 
273 | data_paths = f"{Path.home()}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface/sub-*/ses-*/func/*space-MNI152NLin2009cAsym_res-2_desc-preproc_bold.nii.gz"
274 | gm = load_mni152_gm_mask(resolution=4, threshold=0.5)
275 | ward = Parcellations(
276 |     method="ward",
277 |     n_parcels=50,
278 |     mask=gm,
279 |     smoothing_fwhm=8.0,  # the example dataset is heavily downsampled
280 |     standardize=False,
281 |     memory="nilearn_cache",
282 |     memory_level=1,
283 |     verbose=1
284 | )
285 | ward.fit(data_paths)  # nilearn can expand wildcard patterns in file paths
286 | ward_labels_img = ward.labels_img_
287 | 
288 | # ward_labels_img is a Nifti1Image object; it can be saved to file
289 | # with the following code:
290 | 
291 | tpl_dir = Path.home() / "customised_atlas" / "templateflow" / "tpl-MNI152NLin2009cAsym"
292 | tpl_dir.mkdir(exist_ok=True, parents=True)
293 | print(f"Output will be saved to: {tpl_dir}")
294 | ward_labels_img.to_filename(tpl_dir / "tpl-MNI152NLin2009cAsym_res-02_atlas-wardclustering_desc-50_dseg.nii.gz")
295 | ```
296 | 
297 | Create the configuration file:
298 | 
299 | ```bash
300 | ATLAS_CONFIG=${HOME}/customised_atlas/ward_config.json
301 | 
302 | # create the atlas configuration file
303 | cat << EOF > ${ATLAS_CONFIG}
304 | {
305 |     "name": "wardclustering",
306 |     "parameters": {
307 |         "atlas": "wardclustering",
308 |         "template": "MNI152NLin2009cAsym",
309 |         "resolution": "02",
310 |         "suffix": "dseg"
311 |     },
312 |     "desc": ["50"],
313 |     "templateflow_dir": "/data/atlas"
314 | }
315 | EOF
316 | ```
317 | 
318 | 4. Mount the path to the configuration file to the container and pass the **mounted path** to `--atlas`.
319 | The path specified under `templateflow_dir` in your configuration file should also be exposed to the container as the `TEMPLATEFLOW_HOME` environment variable.
320 | 
321 | An example using Apptainer:
322 | 
323 | ```bash
324 | mkdir ${HOME}/customised_atlas/outputs
325 | DATA=${HOME}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
326 | OUTPUT_DIR=${HOME}/customised_atlas/outputs
327 | OUTPUT_ATLASES_DIR=${HOME}/customised_atlas/outputs/atlas
328 | ATLASES_DIR=${HOME}/customised_atlas/templateflow
329 | ATLAS_CONFIG=${HOME}/customised_atlas/ward_config.json
330 | 
331 | GIGA_CONNECTOME=${HOME}/giga-connectome.simg # assuming the container is created in $HOME
332 | 
333 | export APPTAINERENV_TEMPLATEFLOW_HOME=/data/atlas
334 | 
335 | apptainer run \
336 |     --bind ${DATA}:/data/input \
337 |     --bind ${OUTPUT_DIR}:/data/output \
338 |     --bind ${ATLASES_DIR}:/data/atlas \
339 |     --bind ${ATLAS_CONFIG}:/data/atlas_config.json \
340 |     ${GIGA_CONNECTOME} \
341 |     -a /data/atlas \
342 |     --atlas /data/atlas_config.json \
343 |     --denoise-strategy simple \
344 |     /data/input \
345 |     /data/output \
346 |     participant
347 | ```
348 | 
349 | For Docker:
350 | 
351 | ```bash
352 | mkdir ${HOME}/customised_atlas/outputs
353 | DATA=${HOME}/giga_connectome/test_data/ds000017-fmriprep22.0.1-downsampled-nosurface
354 | OUTPUT_DIR=${HOME}/customised_atlas/outputs
355 | OUTPUT_ATLASES_DIR=${HOME}/customised_atlas/outputs/atlas
356 | TFL_DIR=${HOME}/customised_atlas/templateflow
357 | ATLAS_CONFIG=${HOME}/customised_atlas/ward_config.json
358 | 
359 | docker run --rm \
360 |     -e TEMPLATEFLOW_HOME=/data/atlas \
361 |     -v ${TFL_DIR}:/data/atlas \
362 |     -v ${DATA}:/data/input \
363 |     -v ${OUTPUT_DIR}:/data/output \
364 |     -v ${OUTPUT_ATLASES_DIR}:/data/output/atlas \
365 |     -v ${ATLAS_CONFIG}:/data/atlas_config.json \
366 |     bids/giga_connectome:unstable \
367 |     /data/input \
368 |     /data/output \
369 |     participant \
370 |     -a /data/output/atlas \
371 |     --atlas /data/atlas_config.json \
372 |     --denoise-strategy simple
373 | ```
374 | 
--------------------------------------------------------------------------------