├── .gitignore
├── .travis.yml
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── DEVELOPMENT_GUIDE.md
├── LICENSE
├── PULL_REQUEST_TEMPLATE.md
├── README.md
├── docs
│   ├── Makefile
│   └── source
│       ├── conf.py
│       ├── create-network.rst
│       ├── global-analysis.rst
│       ├── index.rst
│       ├── introduction.rst
│       ├── local-analysis.rst
│       ├── plotting.rst
│       ├── sc-matrices.rst
│       ├── scona.datasets.NSPN_WhitakerVertes_PNAS2016.rst
│       ├── scona.datasets.rst
│       ├── scona.rst
│       ├── scona.scripts.rst
│       └── scona.wrappers.rst
├── push.sh
├── requirements.txt
├── scona
│   ├── __init__.py
│   ├── bits_and_bobs
│   │   ├── Corr_pathology_SummaryFig_Cost_10.png
│   │   ├── NSPN_BehaviouralGraphs.ipynb
│   │   └── visualising_behavioural_networks.py
│   ├── classes.py
│   ├── datasets
│   │   ├── NSPN_WhitakerVertes_PNAS2016
│   │   │   ├── 500.centroids.txt
│   │   │   ├── 500.names.txt
│   │   │   ├── PARC_500aparc_thickness_behavmerge.csv
│   │   │   └── __init__.py
│   │   └── __init__.py
│   ├── graph_measures.py
│   ├── make_corr_matrices.py
│   ├── make_figures.py
│   ├── make_graphs.py
│   ├── nilearn_plotting.py
│   ├── permute_groups.py
│   ├── scripts
│   │   ├── __init__.py
│   │   ├── make_figures.py
│   │   ├── useful_functions.py
│   │   └── visualisation_commands.py
│   ├── stats_functions.py
│   ├── visualisations.py
│   ├── visualisations_helpers.py
│   └── wrappers
│       ├── __init__.py
│       ├── corrmat_from_regionalmeasures.py
│       └── network_analysis_from_corrmat.py
├── setup.py
├── tests
│   ├── .fixture_hash
│   ├── __init__.py
│   ├── graph_measures_test.py
│   ├── make_corr_matrices_test.py
│   ├── make_graphs_test.py
│   ├── regression_test.py
│   └── write_fixtures.py
└── tutorials
    ├── global_measures_viz.ipynb
    ├── global_network_viz.ipynb
    ├── interactive_viz_tutorial.ipynb
    ├── introductory_tutorial.ipynb
    ├── prepare_data.ipynb
    └── tutorial.ipynb
/.gitignore:
--------------------------------------------------------------------------------
1 | # Ignore the figures directory that is created when you run
2 | # some of the tutorials. We don't want those outputs to be
3 | # pushed up to the GitHub repository.
4 | tutorials/figures
5 |
6 | # Byte-compiled / optimized / DLL files
7 | __pycache__/
8 | *.py[cod]
9 | *$py.class
10 | .pytest_cache/
11 |
12 | # C extensions
13 | *.so
14 |
15 | # Distribution / packaging
16 | .Python
17 | env/
18 | #build/
19 | develop-eggs/
20 | dist/
21 | downloads/
22 | eggs/
23 | .eggs/
24 | lib/
25 | lib64/
26 | parts/
27 | sdist/
28 | var/
29 | *.egg-info/
30 | .installed.cfg
31 | *.egg
32 |
33 | # PyInstaller
34 | # Usually these files are written by a python script from a template
35 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
36 | *.manifest
37 | *.spec
38 |
39 | # Installer logs
40 | pip-log.txt
41 | pip-delete-this-directory.txt
42 |
43 | # Unit test / coverage reports
44 | htmlcov/
45 | .tox/
46 | .coverage
47 | .coverage.*
48 | .cache
49 | nosetests.xml
50 | coverage.xml
51 | *.cover
52 | .hypothesis/
53 |
54 | # Translations
55 | *.mo
56 | *.pot
57 |
58 | # Django stuff:
59 | *.log
60 | local_settings.py
61 |
62 | # Flask stuff:
63 | instance/
64 | .webassets-cache
65 |
66 | # Scrapy stuff:
67 | .scrapy
68 |
69 | # Sphinx documentation
70 | #docs/_build/
71 |
72 | # PyBuilder
73 | target/
74 |
75 | # IPython Notebook
76 | .ipynb_checkpoints
77 |
78 | # pyenv
79 | .python-version
80 |
81 | # celery beat schedule file
82 | celerybeat-schedule
83 |
84 | # dotenv
85 | .env
86 |
87 | # virtualenv
88 | venv/
89 | ENV/
90 |
91 | # Spyder project settings
92 | .spyderproject
93 |
94 | # Rope project settings
95 | .ropeproject
96 |
97 | # files generated by jupyter demo
98 | corrmat_file.txt
99 | corrmat_picture.png
100 | corrmat_picture_LowRes.png
101 | network_analysis/
102 |
103 | # ignore vscode settings
104 | .vscode
105 |
106 | # ignore the built sphinx docs
107 | docs/source/_build/
108 |
109 | # ignore temporary test files
110 | # These are created when you run the regression tests
111 | # locally. If the tests run to the end they'll be deleted
112 | # but if you've ended up with an error halfway through then they'll
113 | # stick around. Either way we don't want them in the main
114 | # repository so let's ignore the directory.
115 | temporary_test_fixtures/
116 |
117 | # ignore the src folder which appears on Kirstie's Windows
118 | # machine (we're not sure where it came from!)
119 | src/
120 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 | - "3.7"
4 | env:
5 | matrix:
6 | - NETWORKX_VERSION=2.4
7 | global:
8 | secure: QvKbq/nKWPcVk5RrOD5rLUUioaM0iZ9PblW/ucd3zjPWdeJTKSG4BqFf+fWkLN+3aE9k+EeFsQ0voyKriKrSOHHOoHGIbROOdjM8+Uc06N8h1noJ/LLW1BBDbICtCMma70//QWYG+eqIGSBozp3tZ5QGbDZB5Nt5fNDSbO6rYJzfBF26jWAWjurB6E/6v+Dp3nbFRZ2Eju2/RG059w9Y5IK87n/GKUmiS3kRgLJaeSwvji8f6/29xVYsKhorw2GlQA0Es/rnXFHBF5tMqX1CrCjWGN2XH/MIRtrI7FxUz691LBXkGivOl8E6zjyKN/aTvjKe+PZHdF+VPCzjWMUklT2L5p4W6qbjxR+Uw/7ecfBeiTMkubzFig4kqpkz9skhfOiYCeEld+4dzl5mGFhpxwSXKLTQtVvwh7PGuuc22LWK4O5LyESv3jHHPyxEhB48GDc5illzalMqYw7+m/H1A8i4RSPx5xloNSMHmS1naNXq851kZmdLspqd67ECBNnLRN/AMfQIqltTlheFTYZk2hbD9g+fntjr1aboYIxyB1f33WSAJuyTBFagM/la/tVDUGM57WiV7dvLRXF1gXr4U7DzD7mUpflXmn75wCiuyNB4rWDv7LxEpgBR0iXJultzP0ewVPgAU31k8/rMapzfWwTX4z+kXgVNYJqe9XhBudY=
9 | install:
10 | - pip install -q networkx==$NETWORKX_VERSION
11 | - pip install -q -r requirements.txt
12 | - pip install sphinx sphinxcontrib-napoleon sphinx_rtd_theme
13 | script:
14 | - python3 -m pytest -v
15 | - cd docs
16 | - make html
17 | - touch .nojekyll
18 | after_success:
19 | - cd ..
20 | - ./push.sh
21 | notifications:
22 | email: false
23 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | # Code of Conduct
2 |
3 | We value the participation of every member of our community and want to ensure that every contributor has an enjoyable and fulfilling experience. Accordingly, everyone who participates in the `scona` project is expected to show respect and courtesy to other community members at all times.
4 |
5 | [Kirstie Whitaker](https://whitakerlab.github.io/about/#head-honcho-dr-kirstie-whitaker), founder and lead developer of the `scona` project, members of the Whitaker lab, and contributors to the `scona` project are dedicated to a ***harassment-free experience for everyone***, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, race, age or religion. **We do not tolerate harassment by and/or of members of our community in any form**.
6 |
7 | *We are particularly motivated to support new and/or anxious collaborators, people who are looking to learn and develop their skills, and anyone who has experienced discrimination in the past.*
8 |
9 | ## Our expectations
10 |
11 | To make clear what is expected, we ask all members of the community to conform to the following Code of Conduct.
12 |
13 | * All communication - online and in person - should be appropriate for a professional audience including people of many different backgrounds. Sexual language and imagery is not appropriate at any time.
14 |
15 | * Be kind to others. Do not insult or put down other contributors.
16 |
17 | * Behave professionally. Remember that harassment and sexist, racist, or exclusionary jokes are not appropriate.
18 |
19 | Examples of behavior that contributes to creating a positive environment include:
20 |
21 | * Using welcoming and inclusive language
22 | * Being respectful of differing viewpoints and experiences
23 | * Gracefully accepting constructive criticism
24 | * Focusing on what is best for the community
25 | * Showing empathy towards other community members
26 |
27 | Harassment includes offensive verbal comments related to gender, sexual orientation, disability, physical appearance, body size, race, religion, sexual images in public spaces, deliberate intimidation, stalking, following, harassing photography or recording, sustained disruption of discussions, inappropriate physical contact, and unwelcome sexual attention.
28 |
29 | Examples of unacceptable behavior by community members include:
30 |
31 | * The use of sexualized language or imagery and unwelcome sexual attention or advances
32 | * Trolling, insulting/derogatory comments, and personal or political attacks
33 | * Public or private harassment
34 | * Publishing others' private information, such as a physical or electronic address, without explicit permission
35 | * Other conduct which could reasonably be considered inappropriate in a professional setting
36 |
37 | ***Participants asked to stop any harassing behavior are expected to comply immediately.***
38 |
39 | ## Our Responsibilities
40 |
41 | Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
42 |
43 | Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
44 |
45 | ## Enforcement
46 |
47 | Members of the community who violate these rules - no matter how much they have contributed to the Whitaker lab, or how specialised their skill set - will be approached by Kirstie Whitaker. If inappropriate behaviour persists after a discussion with Kirstie, the contributor will be asked to discontinue their participation in Whitaker lab projects.
48 |
49 | **To report an issue** please contact [Kirstie Whitaker](https://github.com/KirstieJane) (email: [kwhitaker@turing.ac.uk](mailto:kwhitaker@turing.ac.uk)) or [Isla Staden](https://github.com/Islast) (email: [islastaden@gmail.com](mailto:islastaden@gmail.com)). All communication will be treated as confidential.
50 |
51 | ## Attribution
52 |
53 | This Code of Conduct was built on top of the [Mozilla Science Lab's][mozilla-science-home] [Code of Conduct][mozilla-science-coc] which is in the public domain under a [CC0 license][cc0-link]. It also incorporates part of the [Contributor Covenant][contributor-covenant-home], version 1.4, available at [http://contributor-covenant.org/version/1/4][contributor-covenant-version] and used under a [CC-BY 4.0][ccby-link] license.
54 |
55 | The `scona` Code of Conduct was developed by [Kirstie Whitaker][kirstie-github] and is provided under a [CC-BY 4.0][ccby-link] license, which means that anyone can remix and reuse the content provided here, without seeking additional permission, so long as it is attributed back to the [scona][scn-repo] project. Please remember to also attribute the [Contributor Covenant][contributor-covenant-home].
56 |
57 |
58 | [contributor-covenant-home]: http://contributor-covenant.org
59 | [contributor-covenant-version]: http://contributor-covenant.org/version/1/4
60 | [ccby-link]: https://creativecommons.org/licenses/by/4.0
61 | [cc0-link]: https://creativecommons.org/publicdomain/zero/1.0
62 | [kirstie-github]: https://github.com/kirstiejane
63 | [scn-repo]: https://github.com/WhitakerLab/scona
64 | [mozilla-science-home]: https://science.mozilla.org/
65 | [mozilla-science-coc]: https://github.com/mozillascience/code_of_conduct
66 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to the scona project
2 |
3 | Welcome to the `scona` GitHub repository, and thank you for thinking about contributing! :smiley::heart::smiley:
4 |
5 | The point of this file is to make it suuuuuper easy for you to get involved.
6 | So if you have any questions that aren't covered here please let us know!
7 | Check out the [Share your thoughts](#share-your-thoughts) section below for more details.
8 |
9 | Before you start you'll need to set up a free [GitHub][link_github] account and sign in.
10 | Here are some [instructions][link_signupinstructions].
11 |
12 | Scroll down or jump to one of the following sections:
13 |
14 | * [Share your thoughts](#share-your-thoughts)
15 | * [A description of the different labels](#labels)
16 | * [Make a change](#make-a-change)
17 | * [Recognising contributions](#recognising-contributions)
18 | * [Get in touch](#how-to-get-in-touch)
19 |
20 | Our detailed development guide can be found at [DEVELOPMENT_GUIDE.md](DEVELOPMENT_GUIDE.md).
21 | Once you've read the contributing guidelines below, and if you think you need the additional information, check out those instructions too.
22 |
23 | ## Share your thoughts
24 |
25 | Although GitHub calls them **issues**, we'd like you to think of them as **conversation starters**.
26 | They're our way of communicating across all the members of the team.
27 |
28 | (If you're here you ***already are*** a member of the `scona` team.)
29 |
30 | Your thoughts can be [questions][link_question], [bugs][link_bug], [requests][link_request], or a myriad of other suggestions.
31 | In the next section we'll talk through some of the labels on each issue to help you select the ones you'd most like to help with.
32 |
33 | GitHub has a nice set of help pages if you're looking for more information about [discussing projects in issues][link_discussingissues].
34 |
35 | ### Labels
36 |
37 | You can find all currently open conversations under the [issues tab][link_issues].
38 |
39 | The current list of labels are [here][link_labels] and include:
40 |
41 | * [question][link_question] These issues are questions and represent a great place to start. Whoever opened the issue wants to hear from you!
42 |
43 | To reply, read the question and then respond in a variety of different ways:
44 |
45 | * If you want to just agree with everything you can [react to the post][link_react] with one of :+1: :smile: :heart: :tada: :rocket:
46 | * Alternatively you could write a comment to:
47 | * express your emotions more dramatically (check out this [cheat sheet][link_emojis] for emojis you might need)
48 | * provide a more nuanced description of your answer (using your words)
49 | * ask for a clarification
50 | * ask a follow up question
51 |
52 |
53 |
54 | * [no code][link_nocode] These issues don't require any coding knowledge.
55 |
56 | If you're looking to contribute but aren't very confident in your coding skills these issues are a great place to start.
57 |
58 | All issues with the no code label are suggesting documentation tasks, or asking for feedback or suggestions.
59 |
60 |
61 |
62 | * [good first bug][link_goodfirstbug] These issues contain a task that anyone with any level of experience can help with.
63 |
64 | A major goal of the project is to have as many people as possible complete their very first [pull request][link_pullrequest] on one of these issues.
65 | They will always have explicit information on who to contact to help you through the process.
66 |
67 | Remember: **There are no stupid questions!**
68 |
69 | We can not encourage you enough to submit even the tiniest change to the project repository.
70 | Let's go from :confused: & :anguished: to :smiley: & :tada: together!
71 |
72 |
73 |
74 | * [help wanted][link_helpwanted] These issues contain a task that a member of the team has determined we need additional help with.
75 |
76 | If you have particular skills then consider reading through these issues as they are a great place to offer your expertise.
77 |
78 | If you aren't sure what to offer, you could also recommend issues to your friends/colleagues who may be able to help.
79 |
80 |
81 |
82 | * [bug][link_bug] These issues point to problems in the project.
83 |
84 | If you find a bug, please give as much detail as possible in your issue.
85 |
86 | If you experience the same bug as one already listed, please add any additional information that you have as a comment.
87 |
88 |
89 |
90 | * [request][link_request] These issues are asking for features (or anything else) to be added to the project.
91 |
92 | If you have a good idea and would like to see it implemented in the scona project please open a new issue and add in as much detail as possible.
93 |
94 | Please try to make sure that your feature is distinct from any others that have already been requested or implemented.
95 | If you find one that's similar but there are subtle differences please reference the other request in your issue.
96 |
97 |
98 | ## Make a change
99 |
100 | Once you've identified one of the issues above that you feel you can contribute to, you're ready to make a change to the project repository! :tada::smiley:
101 |
102 | 1. First, describe what you're planning to do as a comment on the issue (this might mean opening a new issue).
103 |
104 | Check in with one of the scona development team to ensure you aren't overlapping with work that's currently underway and that everyone is on the same page with the goal of the work you're going to carry out.
105 |
106 | [This blog][link_pushpullblog] is a nice explanation of why putting this work in up front is so useful to everyone involved.
107 |
108 | 2. [Fork][link_fork] the [`scona` repository][link_brainnetworksrepo] to your profile.
109 |
110 | You can now do whatever you want with this copy of the project.
111 | You won't mess up anyone else's work so you're super safe.
112 |
113 | Make sure to [keep your fork up to date][link_updateupstreamwiki] with the master repository.
114 |
115 | 3. Make the changes you've discussed.
116 |
117 | Try to keep the changes focused rather than changing lots of things at once.
118 | If you feel tempted to branch out then please *literally* branch out: create [separate branches][link_branches] for different updates to make the next step much easier!
119 |
120 | We have a [development guide](DEVELOPMENT_GUIDE.md) that covers:
121 | * [Installing](DEVELOPMENT_GUIDE.md#installing-in-editable-mode)
122 | * [Linting](DEVELOPMENT_GUIDE.md#linting)
123 | * [Docstrings](DEVELOPMENT_GUIDE.md#writing-docstrings)
124 | * [Building Sphinx docs](DEVELOPMENT_GUIDE.md#building-sphinx-docs)
125 | * [Tutorials](DEVELOPMENT_GUIDE.md#tutorials)
126 | * [Testing](DEVELOPMENT_GUIDE.md#testing)
127 | * [A worked example](DEVELOPMENT_GUIDE.md#worked-example)
128 |
129 | It's in a separate file (called [DEVELOPMENT_GUIDE.md](DEVELOPMENT_GUIDE.md)) because we don't want you to feel overwhelmed when you contribute for the first time to `scona`.
130 | Everyone has different comfort levels with things like linting, testing and writing documentation.
131 | All are really important, but we don't need you to submit a perfect pull request!
132 | Pick the parts that are useful from that guide, or just do your best and we'll help out once you've shared your changes.
133 |
134 | 4. Submit a [pull request][link_pullrequest].
135 |
136 | A member of the executive team will review your changes, have a bit of discussion and hopefully merge them in!
137 | N.B. you don't have to be ready to merge to make a pull request!
138 | We encourage you to submit a pull request as early as you want to.
139 | They help us to keep track of progress and help you to get earlier feedback.
140 |
141 | **Success!!** :balloon::balloon::balloon: Well done! And thank you :smiley::tada::sparkles:
142 |
143 |
144 | ## Recognising contributions
145 |
146 | If you're logged into GitHub you can see everyone who has contributed to the repository via our [live contributors page][link_contributorslive].
147 | (You might have to add `WhitakerLab/scona` as the repository name before you click to sign in via GitHub.)
148 |
149 | These pages are powered by the [Let's all build a hat rack][link_hatrackhome] project, and we love them.
150 |
151 | Quoting from their [website][link_hatrackhome]:
152 |
153 | > Open Source project contribution focuses a lot on the code committed to projects, but there is so much more that goes on behind the scenes that is just as valuable to FOSS than the code itself.
154 | >
155 | > LABHR seeks to find those that provide these non-code contributions, and thank them.
156 | >
157 | > LABHR started as an [idea by Leslie Hawthorn][link_hatrackidea].
158 | > She advocates openly thanking people on social media, and writing recommendations.
159 | >
160 | > This idea was extended by Katie McLaughlin with her work on [automating this process on GitHub][link_hatrackcontributions].
161 |
162 | ## How to get in touch
163 |
164 | If you have a question or a comment we'd love for you to [open an issue][link_issues] because that will be our fastest way of communicating, getting the answers to you and (if necessary) making a change.
165 |
166 | If you'd prefer email, you can contact [Isla](https://github.com/Islast) at [islastaden@gmail.com](mailto:islastaden@gmail.com).
167 | If she doesn't reply to your email after a couple of days please feel free to ping her again.
168 |
169 | ## Thank you!
170 |
171 | You are awesome. :heart_eyes::sparkles::sunny:
172 |
173 | And if you've found typos in this (or any other) page, you could consider submitting your very first pull request to fix them via the [typos and broken links][link_fixingtyposissue] issue!
174 |
175 | [link_github]: https://github.com/
176 | [link_brainnetworksrepo]: https://github.com/WhitakerLab/scona
177 | [link_signupinstructions]: https://help.github.com/articles/signing-up-for-a-new-github-account
178 | [link_react]: https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comments
179 | [link_issues]: https://github.com/WhitakerLab/scona/issues
180 | [link_labels]: https://github.com/WhitakerLab/scona/labels
181 | [link_discussingissues]: https://help.github.com/en/categories/collaborating-with-issues-and-pull-requests
182 | [link_bug]: https://github.com/WhitakerLab/scona/labels/bug
183 | [link_goodfirstbug]: https://github.com/WhitakerLab/scona/labels/good-first-bug
184 | [link_helpwanted]: https://github.com/WhitakerLab/scona/labels/help-wanted
185 | [link_nocode]: https://github.com/WhitakerLab/scona/labels/no-code
186 | [link_question]: https://github.com/WhitakerLab/scona/labels/question
187 | [link_request]: https://github.com/WhitakerLab/scona/labels/request
188 |
189 | [link_emojis]: http://www.emoji-cheat-sheet.com/
190 | [link_pullrequest]: https://help.github.com/en/articles/proposing-changes-to-your-work-with-pull-requests
191 | [link_fork]: https://help.github.com/articles/fork-a-repo/
192 | [link_pushpullblog]: https://www.igvita.com/2011/12/19/dont-push-your-pull-requests/
193 | [link_branches]: https://help.github.com/articles/creating-and-deleting-branches-within-your-repository/
194 | [link_updateupstreamwiki]: https://github.com/KirstieJane/STEMMRoleModels/wiki/Syncing-your-fork-to-the-original-repository-via-the-browser
195 | [link_contributorslive]: https://labhr.github.io/hatrack/#repo=WhitakerLab/scona
196 | [link_hatrackhome]: https://labhr.github.io/
197 | [link_hatrackidea]: http://hawthornlandings.org/2015/02/13/a-place-to-hang-your-hat/
198 | [link_hatrackcontributions]: http://opensource.com/life/15/10/octohat-github-non-code-contribution-tracker
199 | [link_fixingtyposissue]: https://github.com/WhitakerLab/scona/issues/4
200 |
--------------------------------------------------------------------------------
/DEVELOPMENT_GUIDE.md:
--------------------------------------------------------------------------------
1 | # Development Guide
2 |
3 | The [contributing guidelines](CONTRIBUTING.md) deal with getting involved, asking questions, making pull requests, et cetera.
4 | This development guide deals with the specifics of contributing code to the `scona` codebase, and ends with a worked example to guide you through the process of writing docstrings and tests for new sections of code.
5 |
6 | * [Installing](#installing-in-editable-mode)
7 | * [Linting](#linting)
8 | * [Docstrings](#writing-docstrings)
9 | * [Building Sphinx docs](#building-sphinx-docs)
10 | * [Tutorials](#tutorials)
11 | * [Testing](#testing)
12 | * [A worked example](#worked-example)
13 |
14 |
15 | ### Installing in editable mode
16 |
17 | The command `pip install -e git+https://github.com/WhitakerLab/scona.git#egg=scona` should install scona in editable mode.
18 | This means that the python install of `scona` will be kept up to date with any changes you make, including switching branches in git.
19 |
20 | Kirstie has had some difficulty with using this installation step with `jupyter lab` on windows (it works fine in a notebook server or ipython terminal).
21 | The work around was to run `python setup.py develop` from the `scona` root directory.
22 |
23 | Please open an issue if you're having any similar challenges with getting started!
24 | We really want you to be able to contribute and installing the package is a necessary first step :sparkles:
25 |
26 | ### Linting
27 |
28 | scona uses the [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/).
29 | You can use [flake8](http://flake8.pycqa.org/en/latest/) to lint code.
30 | We're quite a young project (at time of writing in January 2019) and so we aren't going to be super hardcore about your linting!
31 | Linting should make your life easier, but if you're not sure how to get started, or if this is a barrier to you contributing to `scona` then don't worry about it or [get in touch](CONTRIBUTING.md#how-to-get-in-touch) and we'll be happy to help you.
32 | Feel free also to correct unlinted code in scona when you come across it!:sparkles:
33 |
34 | ### Writing docstrings
35 |
36 | We at scona love love LOVE documentation 😍 💖 😘 so any contributions that make using the various functions, classes and wrappers easier are ALWAYS welcome.
37 |
38 | `scona` uses the `sphinx` extension [`napoleon`](http://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html) to generate documentation from numpy-style docstrings. See the [numpydoc guide](https://numpydoc.readthedocs.io/en/latest/) for details on syntax.
39 | For an example of how docstrings are written in scona, checkout the [docstrings section](#step-1-docstrings) in our [code example](#worked-example) below.
40 |
41 | `sphinx` can automatically create cross-reference links to other packages. If set up correctly, ``:class:`package-name.special-class` `` renders as `package-name.special-class` with a link to the `special-class` documentation in `package-name`'s online documentation. If the package is scona itself, the package name can be omitted. For example,
42 | ``:class:`networkx.Graph` `` becomes [`networkx.Graph`](https://networkx.github.io/documentation/stable/reference/classes/graph.html#networkx.Graph), and ``:func:`create_corrmat` `` becomes [`create_corrmat`](https://whitakerlab.github.io/scona/scona.html#scona.make_corr_matrices.create_corrmat).
43 |
44 | Cross-referencing is currently set up for the Python standard library, networkx, pandas, numpy and python-louvain. It is possible to set this up for other python packages by adding
45 | ```python
46 | 'package-name': ('https://package.documentation.url/', None)
47 | ```
48 | to the `intersphinx_mapping` dictionary in [docs/source/conf.py](docs/source/conf.py) (the handy dandy configuration file for sphinx).
49 |
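For instance, to add scipy as a cross-reference target, the mapping could be extended like this (a hypothetical sketch: the scipy entry is not currently in scona's `conf.py`, and you should double-check the documentation URL before using it):

```python
# docs/source/conf.py -- illustrative only; the scipy entry below is a
# hypothetical addition alongside the existing intersphinx targets.
intersphinx_mapping = {
    'python': ('https://docs.python.org/3/', None),
    'networkx': ('https://networkx.github.io/documentation/stable/', None),
    'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
}
```
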
50 | ### Building Sphinx docs
51 |
52 | When [docstrings](https://en.wikipedia.org/wiki/Docstring#Python) are updated, `sphinx` can automatically update the docs (and ultimately our website). Unfortunately this is [not yet an automated process](https://github.com/WhitakerLab/scona/issues/79). For the time being somebody needs to build those pages. If you're comfortable doing this you can follow the instructions below, but if it's going to be a barrier to you submitting a pull request then please just prepare the docstrings and the maintainers of scona will build the html files for you 😃.
53 | You might also use these instructions to build documentation locally while you're still writing, for example to check rendering.
54 |
55 | You will need `sphinx` (`pip install sphinx`) and `make` (depends on your distribution) installed.
56 | In a terminal, navigate to the docs folder and run `make html`. You should be able to view the new documentation in your browser at `file:///local/path/to/scona/docs/build/html/scona.html#module-scona`
57 |
58 | ### Tutorials
59 |
60 | You may also want to show off the functionality of some new (or old) code. Please feel free to add a tutorial to the tutorials folder. You may find it helpful to use the `NSPN_WhitakerVertes_PNAS2016` data as an example dataset, as demonstrated in [tutorials/tutorial.ipynb](tutorials/tutorial.ipynb).
61 |
62 | ### Testing
63 |
64 | Tests don't need to be exhaustive or prove the validity of a function. The aim is only to alert us when something has gone wrong. Testing is something most people do naturally whenever they write code. If you think about the sorts of things you would try running in a command line or jupyter notebook to test out a function you have just defined, these are the sorts of things that can go in unit tests.
65 |
66 | scona uses pytest to run our test suite. pytest runs test discovery on all modules with names ending `_test.py`, so if you make a new test module, make sure the filename conforms to this format. (Within a module, pytest collects functions and methods whose names start with `test_`.)
67 | Use [`py.test`](https://docs.pytest.org/en/latest) to run tests, or `py.test --cov=scona` if you also want a test coverage report.
68 |
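For instance, a minimal test module (hypothetical, not part of the scona suite) would look like this:

```python
# square_test.py -- pytest collects this module because its name ends in
# `_test.py`, and runs every function whose name starts with `test_`.
import networkx as nx


def test_cycle_graph_has_as_many_edges_as_nodes():
    G = nx.cycle_graph(4)
    assert G.number_of_edges() == G.number_of_nodes()
```
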
69 | For an example of how tests are written in scona, checkout the [testing section](#step-2-testing) in our [code example](#worked-example) below.
70 |
71 | ### Random seeds
72 |
73 | Sometimes you want a random process to choose the same pseudo-random numbers each time so that the process returns the same result each time. This is particularly useful for testing and reproducibility. To do this we set a [random seed](https://www.tutorialspoint.com/python/number_seed.htm).
74 |
75 | There is currently no way to seed the random graph generators in scona except by setting the global seed. For more discussion on this subject see [issue #77](https://github.com/WhitakerLab/scona/issues/77). To set the global random seed put the following lines near the top of your test.
76 |
77 | ```python
78 | import random
79 | random.seed(42)
80 | ```
81 | where 42 is your integer of choice; see [the `random.seed` documentation](https://docs.python.org/3.7/library/random.html#random.seed).
82 |
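Note that the `random` module and numpy keep separate generator states, so if your code draws from numpy's generator (as the worked example below does) you will want to seed that as well:

```python
import numpy as np

# seed numpy's global generator; random.seed does not affect it
np.random.seed(42)
```
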
83 |
84 | ### Worked Example
85 |
86 | A lot of the developer guidelines above are a little hard to apply in the abstract, so this section is going to apply them to a sample piece of code. We'll start with a working function and show you how to [add a docstring](#writing-docstrings) and [add some tests](#testing).
87 |
88 | We'll start with a new function to calculate the proportion of interhemispheric edges *leaving* each module of a graph. This is a somewhat silly idea given that we have no guarantee that a module is entirely within one hemisphere, but it is only intended for the purpose of demonstration.
89 |
90 | ```python
91 | def calc_leaving_module_interhem(G, M):
92 |     # Assign a binary variable "interhem" to each edge in G
93 |     # Assign a "hemisphere" label to each node in G
94 |     # N.B. this function relies on G having the nodal attribute "centroids" defined
95 |     assign_interhem(G)
96 | 
97 |     leaving_interhem = {}
98 |     # Loop over the modules in M
99 |     for module, nodeset in M.items():
100 |         # Calculate the proportion of edges from a node inside the module to a node outside the module that are interhemispheric
101 |         # N.B. interhem is a 0/1 variable indicating whether an edge is interhemispheric, so we can average the interhem values of the leaving edges
102 |         leaving_interhem[module] = np.mean([G[node1][node2]['interhem'] for node1 in nodeset for node2 in nx.all_neighbors(G, node1) if node2 not in nodeset])
103 | return leaving_interhem
104 | ```
105 |
106 | Now suppose we decide to add this back into the scona source code.
107 |
108 | #### step 1: docstrings
109 |
110 | The key things to have in the docstring are a short description at the top,
111 | an explanation of the function parameters, and a description of what the
112 | function returns. If, say, the function returns nothing, or has no parameters, you can leave those out. For `calc_leaving_module_interhem` we might write something like this:
113 |
114 | ```python
115 | def calc_leaving_module_interhem(G, M):
116 | '''
117 | Calculate the proportion of edges leaving each module that are
118 | interhemispheric
119 |
120 | Parameters
121 | ----------
122 | G : :class:`networkx.Graph`
123 | M : dict
124 | M maps module names to vertex sets. M represents a nodal
125 | partition of G
126 |
127 | Returns
128 | -------
129 | dict
130 | a dictionary mapping a module name to the proportion of interhemispheric
131 | leaving edges
132 |
133 | See Also
134 | --------
135 | :func:`assign_interhem`
136 | '''
137 | ```
138 |
139 | Let's say we add this to [`scona.graph_measures`](scona/graph_measures.py). If you followed the [instructions to build sphinx documentation locally](#building-sphinx-docs) you would be able to view this function in your browser at `file:///local/path/to/scona/docs/build/html/scona.html#module-scona.graph_measures.calc_leaving_module_interhem`
140 |
141 | #### step 2: Testing
142 |
143 | Now we need to write some tests for this function in [tests/graph_measures_test.py](tests/graph_measures_test.py).
144 | Tests don't need to be exhaustive or prove the validity of a function. The aim is simply to alert us when something has gone wrong. Testing is something most people do naturally when they write code. If you think about the sorts of sanity checks you would try running in a command line or jupyter notebook to make sure everything is working properly when you have just defined a function, these are the sorts of things that should go in unit tests.
145 |
146 | Examples of good tests for `calc_leaving_module_interhem` might be:
147 | * checking that `calc_leaving_module_interhem` raises an error when run on a graph where the nodal attribute "centroids" is not defined.
148 | * checking that `calc_leaving_module_interhem(G, M)` has the same dictionary keys as M for some partition M.
149 | * given a partition M with only two modules, check that the values of `calc_leaving_module_interhem(G, M)` are equal, as they are evaluating the same set of edges. (There is no third module to connect to, so the set of edges leaving A is actually the set of edges from A to B, which is the same as the set of edges from B to A; these are precisely the edges leaving B.)
150 | * given a partition M with only one module, check that the values of `calc_leaving_module_interhem(G, M)` are 0, as there are no leaving edges.
151 |
152 | * checking specific expected values on a small example graph:
153 | ```
154 | G : 0-----1
155 | | |
156 | | |
157 | 2-----3
158 | ```
159 | We can define a simple square graph G with nodes 0 and 2 in the right hemisphere and nodes 1 and 3 in the left hemisphere. Using this simple graph let's calculate the expected result for a couple of partitions.
160 | * If M = `{0: {0,2}, 1: {1,3}}` then `calc_leaving_module_interhem(G, M)` is `{0:1.0, 1:1.0}` since all leaving edges are interhemispheric
161 | * If M = `{0: {0,1}, 1: {2,3}}` then `calc_leaving_module_interhem(G, M)` is `{0:0.0, 1:0.0}` since no leaving edges are interhemispheric
162 | * If M = `{0: {0}, 1: {1,2,3}}` then `calc_leaving_module_interhem(G, M)` is `{0:0.5, 1:0.5}`
163 | These are all sanity checks we can use to check our function is working as expected.
164 |
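If you want to convince yourself of that last case without touching scona at all, a few lines of plain Python reproduce the arithmetic (the hemisphere labels below are read off the x coordinates of the centroids we assign in the test case that follows):

```python
import numpy as np

# hemisphere of each node: +1 = right, -1 = left
hemi = {0: 1, 1: -1, 2: 1, 3: -1}
edges = [(0, 2), (2, 3), (1, 3), (0, 1)]
M = {0: {0}, 1: {1, 2, 3}}

for module, nodeset in M.items():
    # edges with exactly one endpoint inside the module are "leaving" edges
    leaving = [(u, v) for (u, v) in edges if (u in nodeset) != (v in nodeset)]
    print(module, np.mean([int(hemi[u] != hemi[v]) for u, v in leaving]))
    # prints 0.5 for both modules
```
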
165 |
166 | Let's set up a `LeavingModuleInterhem` test case (assuming the test module imports `unittest`, `networkx as nx`, `numpy as np` and `scona as scn`). We use the `setUpClass` method to define some variables which we will use repeatedly throughout.
167 |
168 | ```python
169 | class LeavingModuleInterhem(unittest.TestCase):
170 |     @classmethod
171 |     def setUpClass(cls):
172 |         cls.G_no_centroids = nx.erdos_renyi_graph(20, 0.2)
173 |         cls.G_random_graph = cls.G_no_centroids.copy()
174 |         # assign random centroids to G_random_graph
175 |         scn.assign_node_centroids(
176 |             cls.G_random_graph,
177 |             [tuple(np.subtract((.5, .5, .5), np.random.rand(3))) for x in range(20)])
178 |         # Define a trivial partition of G_random_graph with one module
179 |         cls.M_one_big_module = {0: set(cls.G_random_graph.nodes)}
180 |         # Create a second partition of G_random_graph with two modules
181 |         nodeset1 = set(np.random.choice(list(cls.G_random_graph.nodes), size=10, replace=False))
182 |         nodeset2 = set(cls.G_random_graph.nodes).difference(nodeset1)
183 |         cls.M_two_random_modules = {0: nodeset1, 1: nodeset2}
184 |         # Create the square graph G_square_graph
185 |         # G_square_graph : 0-----1
186 |         #                  |     |
187 |         #                  |     |
188 |         #                  2-----3
189 |         cls.G_square_graph = nx.Graph()
190 |         cls.G_square_graph.add_nodes_from([0, 1, 2, 3])
191 |         cls.G_square_graph.add_edges_from([(0, 2), (2, 3), (1, 3), (0, 1)])
192 |         scn.assign_node_centroids(
193 |             cls.G_square_graph, [(1, 0, 0), (-1, 0, 0), (1, 0, 0), (-1, 0, 0)])
194 | ```
195 | Now that we have defined the setup for testing, we can move on to the testing methods. These should be short methods making one or two assertions about the behaviour of our new function. We prefix each name with `test_` so that pytest collects it, and try to make the names descriptive so that if a test fails we can tell at a glance which behaviour it was checking.
196 |
197 | ```python
198 |     def test_centroids_must_be_defined(self):
199 |         # This function should fail on a graph where no centroids are defined
200 |         with self.assertRaises(Exception):
201 |             scn.calc_leaving_module_interhem(
202 |                 self.G_no_centroids, self.M_one_big_module)
203 | 
204 |     def test_result_keys_are_modules(self):
205 |         # check that `calc_leaving_module_interhem(G, M)` has the same
206 |         # dictionary keys as M
207 |         result = scn.calc_leaving_module_interhem(
208 |             self.G_random_graph, self.M_one_big_module)
209 |         assert result.keys() == self.M_one_big_module.keys()
210 | 
211 |     def test_trivial_partition_values_equal_zero(self):
212 |         # check that the values of `calc_leaving_module_interhem(G, M)` are 0,
213 |         # as there are no leaving edges
214 |         result = scn.calc_leaving_module_interhem(
215 |             self.G_random_graph, self.M_one_big_module)
216 |         for x in result.values():
217 |             assert x == 0
218 | 
219 |     def test_partition_size_two_modules_have_equal_values(self):
220 |         # check that the values of `calc_leaving_module_interhem(G, M)` are equal,
221 |         # as they are evaluating the same edges.
222 |         L2 = scn.calc_leaving_module_interhem(
223 |             self.G_random_graph, self.M_two_random_modules)
224 |         assert L2[0] == L2[1]
225 | 
226 |     def test_G2_modules_are_hemispheres_values_are_1(self):
227 |         # the leaving interhem values should be one for each module, since
228 |         # all leaving edges are interhemispheric
229 |         result = scn.calc_leaving_module_interhem(
230 |             self.G_square_graph, {0: {0, 2}, 1: {1, 3}})
231 |         assert result == {0: 1.0, 1: 1.0}
232 | 
233 |     def test_G2_modules_are_split_across_hemispheres_values_0(self):
234 |         # the leaving interhem values should be zero for each module, since
235 |         # none of the leaving edges are interhemispheric
236 |         result = scn.calc_leaving_module_interhem(
237 |             self.G_square_graph, {0: {0, 1}, 1: {2, 3}})
238 |         assert result == {0: 0.0, 1: 0.0}
239 | 
240 |     def test_G2_module_M5(self):
241 |         result = scn.calc_leaving_module_interhem(
242 |             self.G_square_graph, {0: {0}, 1: {1, 2, 3}})
243 |         assert result == {0: 0.5, 1: 0.5}
244 | ```
245 |
246 | And now you're ready to roll! :tada:
247 |
248 | Thank you for reading this far through scona's contributing guidelines :sparkles::hibiscus::tada:.
249 | As always, if you have any questions, see any typos, or have suggestions or corrections for these guidelines don't hesitate to [let us know](CONTRIBUTING.md#how-to-get-in-touch):heart_eyes:.
250 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 Kirstie Whitaker
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 |
6 | - [ ] I'm ready to merge
7 |
8 | * **What's the context for this pull request?**
9 | _(this is a good place to reference any issues that this PR addresses)_
10 |
11 | * **What's new?**
12 |
13 | * **What should a reviewer feedback on?**
14 |
15 | * **Does anything need to be updated after merge?**
16 | _(e.g the wiki or the WhitakerLab website)_
17 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # scona
4 |
5 | [](https://gitter.im/WhitakerLab/scona?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [](https://travis-ci.org/WhitakerLab/scona)
6 | [](https://github.com/WhitakerLab/scona/blob/master/LICENSE)
7 | [](https://mybinder.org/v2/gh/WhitakerLab/scona/master)
8 |
9 | Welcome to the `scona` GitHub repository! :sparkles:
10 |
11 | * [Get started](#get-started)
12 | * [What are we doing?](#what-are-we-doing)
13 | * [Why do we need another package?](#why-do-we-need-another-package)
14 | * [Who is scona built for?](#who-is-scona-built-for)
15 | * [Get involved](#get-involved)
16 |
17 | ## Get Started
18 |
19 | If you don't want to bother reading the whole of this page, here are three ways to get your hands dirty and explore `scona`:
20 |
21 | * Install `scona` as a python package with pip
22 |
23 | `pip install -e git+https://github.com/WhitakerLab/scona.git#egg=scona`
24 |
25 | * Check out our [tutorial](tutorials/tutorial.ipynb) for examples of basic functionality.
26 | Or run it interactively [in Binder](https://mybinder.org/v2/gh/whitakerlab/scona/master?filepath=tutorials%2Ftutorial.ipynb).
27 |
28 | * Read the docs: https://whitakerlab.github.io/scona
29 |
30 | ## What are we doing?
31 |
32 | `scona` is a toolkit to perform **s**tructural **co**variance brain **n**etwork **a**nalyses using python.
33 |
34 | `scona` takes regional cortical thickness data obtained from structural MRI and generates a matrix of correlations between regions over a cohort of subjects.
35 | The correlation matrix is used alongside the [networkx package](https://networkx.github.io/) to generate a variety of networks and network measures.
36 |
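To give a flavour of the workflow, here is a minimal sketch. It is illustrative only: `create_corrmat` is part of scona, but the exact call shown is an assumption, so check [the docs](https://whitakerlab.github.io/scona) before copying.

```python
# A minimal sketch, not a drop-in recipe: the create_corrmat call below
# is an assumption -- check the scona docs for the exact signature.
import numpy as np
import pandas as pd
import scona as scn

# stand-in "regional cortical thickness" data: 50 subjects x 4 regions
names = ['region_a', 'region_b', 'region_c', 'region_d']
df = pd.DataFrame(np.random.rand(50, 4), columns=names)

# correlation matrix between regions, computed across subjects
M = scn.create_corrmat(df, names)
```
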
37 | The `scona` codebase was first developed by Dr Kirstie Whitaker for the Neuroscience in Psychiatry Network publication "Adolescence is associated with genomically patterned consolidation of the hubs of the human brain connectome", published in PNAS in 2016 ([Whitaker*, Vertes* et al, 2016](http://dx.doi.org/10.1073/pnas.1601745113)).
38 | In 2017, Isla Staden was awarded a [Mozilla Science Lab mini-grant](https://whitakerlab.github.io/resources/Mozilla-Science-Mini-Grant-June2017) to develop Kirstie's [original code](https://github.com/KirstieJane/NSPN_WhitakerVertes_PNAS2016) into a
39 | documented, tested python package that is easy to use and re-use.
40 |
41 | ### Why do we need another package?
42 |
43 | There are lots of tools to analyse brain networks available.
44 | Some require you to link together lots of different packages, particularly when you think about visualising your results.
45 | Others will do the full analyses but require you to work with very specifically formatted datasets.
46 | Our goal with `scona` is to balance these two by providing a package that can take in data pre-processed in multiple different ways and run your analysis all the way to the end.
47 |
48 | The code is modular, extendable and well documented.
49 | You can run a standard analysis from beginning to end (including exporting "ready for publication" figures) with just a few clicks of a button.
50 | We welcome any contributions to extend this "standard analysis" into a suite of possible investigations of structural covariance brain network data.
51 |
52 | ### Who is scona built for?
53 |
54 | `scona` ties together the excellent graph theoretical analyses provided by [`NetworkX`](https://networkx.github.io) along with some neuroscience know-how.
55 | For example, we've incorporated methods to include anatomical locations of your regions of interest, double edge swapping to preserve degree distribution in random graphs, and some standard reporting mechanisms that would be expected in a neuroimaging journal article.
56 |
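As a taste of that, here is how degree-preserving randomisation looks with networkx directly (a sketch of the underlying operation, not scona's own wrapper):

```python
# Degree-preserving randomisation via double edge swaps, sketched with
# networkx directly rather than scona's random-graph helpers.
import networkx as nx

G = nx.erdos_renyi_graph(20, 0.2, seed=42)
R = G.copy()
nx.double_edge_swap(R, nswap=100, max_tries=1000, seed=42)
assert dict(G.degree()) == dict(R.degree())  # every node keeps its degree
```
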
57 | Our target audience are researchers who have structural brain imaging data and would like to run quite standard structural covariance network analyses.
58 | We don't want experts in neuroimaging to have to also become expert in building reproducible pipelines and well tested software.
59 | (Although we'd love to answer questions if you are interested in improving your development skills!)
60 | `scona` is available to help researchers get started (and publish) their analyses quickly and reliably.
61 |
62 | We would also like to encourage network ninjas to incorporate their methods and ideas into the package either as code or by submitting an issue and a recommendation.
63 |
64 | ## Get involved
65 |
66 | `scona` is openly developed and welcomes contributors.
67 |
68 | If you're thinking about contributing (:green_heart: you are loved), our [roadmap](https://github.com/WhitakerLab/scona/issues/12) and our [contributing guidelines](CONTRIBUTING.md) are a good place to start.
69 | You don't need advanced skills or knowledge to help out.
70 | Newbies to Python, neuroscience, network analyses, git and GitHub are all welcome.
71 |
72 | The only rule you absolutely have to follow when you're contributing to `scona` is to act in accordance with our [code of conduct](CODE_OF_CONDUCT.md).
73 | We value the participation of every member of our community and want to ensure that every contributor has an enjoyable and fulfilling experience.
74 |
75 | Our detailed development guide can be found at [DEVELOPMENT_GUIDE.md](DEVELOPMENT_GUIDE.md).
76 | Once you've read the [contributing guidelines](CONTRIBUTING.md), and if you think you need the additional information on linting, testing, and writing and building the documentation, please check out those instructions too.
77 |
78 | If you have questions or want to get in touch you can join our [gitter lobby](https://gitter.im/WhitakerLab/scona), tweet [@Whitaker_Lab](https://twitter.com/whitaker_lab) or email Isla at [islastaden@gmail.com](mailto:islastaden@gmail.com).
79 |
80 |
81 | ## Other Stuff
82 |
83 | To view our (successful) Mozilla Mini-Grant application, head [here](https://github.com/WhitakerLab/WhitakerLabProjectManagement/blob/master/FUNDING_APPLICATIONS/MozillaScienceLabMiniGrant_June2017.md).
84 |
85 | In October 2017 scona ran a MozFest [session](https://github.com/MozillaFoundation/mozfest-program-2017/issues/724).
86 |
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
1 | # Makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line.
5 | SPHINXOPTS =
6 | SPHINXBUILD = sphinx-build
7 | PAPER =
8 | BUILDDIR = build
9 |
10 | # Internal variables.
11 | PAPEROPT_a4 = -D latex_paper_size=a4
12 | PAPEROPT_letter = -D latex_paper_size=letter
13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
14 | # the i18n builder cannot share the environment and doctrees with the others
15 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source
16 |
17 | .PHONY: help
18 | help:
19 | @echo "Please use \`make ' where is one of"
20 | @echo " html to make standalone HTML files"
21 | @echo " dirhtml to make HTML files named index.html in directories"
22 | @echo " singlehtml to make a single large HTML file"
23 | @echo " pickle to make pickle files"
24 | @echo " json to make JSON files"
25 | @echo " htmlhelp to make HTML files and a HTML help project"
26 | @echo " qthelp to make HTML files and a qthelp project"
27 | @echo " applehelp to make an Apple Help Book"
28 | @echo " devhelp to make HTML files and a Devhelp project"
29 | @echo " epub to make an epub"
30 | @echo " epub3 to make an epub3"
31 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
32 | @echo " latexpdf to make LaTeX files and run them through pdflatex"
33 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
34 | @echo " text to make text files"
35 | @echo " man to make manual pages"
36 | @echo " texinfo to make Texinfo files"
37 | @echo " info to make Texinfo files and run them through makeinfo"
38 | @echo " gettext to make PO message catalogs"
39 | @echo " changes to make an overview of all changed/added/deprecated items"
40 | @echo " xml to make Docutils-native XML files"
41 | @echo " pseudoxml to make pseudoxml-XML files for display purposes"
42 | @echo " linkcheck to check all external links for integrity"
43 | @echo " doctest to run all doctests embedded in the documentation (if enabled)"
44 | @echo " coverage to run coverage check of the documentation (if enabled)"
45 | @echo " dummy to check syntax errors of document sources"
46 |
47 | .PHONY: clean
48 | clean:
49 | rm -rf $(BUILDDIR)/*
50 |
51 | .PHONY: html
52 | html:
53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
54 | @echo
55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
56 |
57 | .PHONY: dirhtml
58 | dirhtml:
59 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
60 | @echo
61 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
62 |
63 | .PHONY: singlehtml
64 | singlehtml:
65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
66 | @echo
67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
68 |
69 | .PHONY: pickle
70 | pickle:
71 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
72 | @echo
73 | @echo "Build finished; now you can process the pickle files."
74 |
75 | .PHONY: json
76 | json:
77 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
78 | @echo
79 | @echo "Build finished; now you can process the JSON files."
80 |
81 | .PHONY: htmlhelp
82 | htmlhelp:
83 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
84 | @echo
85 | @echo "Build finished; now you can run HTML Help Workshop with the" \
86 | ".hhp project file in $(BUILDDIR)/htmlhelp."
87 |
88 | .PHONY: qthelp
89 | qthelp:
90 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
91 | @echo
92 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \
93 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
94 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/scona.qhcp"
95 | @echo "To view the help file:"
96 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/scona.qhc"
97 |
98 | .PHONY: applehelp
99 | applehelp:
100 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp
101 | @echo
102 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp."
103 | @echo "N.B. You won't be able to view it unless you put it in" \
104 | "~/Library/Documentation/Help or install it in your application" \
105 | "bundle."
106 |
107 | .PHONY: devhelp
108 | devhelp:
109 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
110 | @echo
111 | @echo "Build finished."
112 | @echo "To view the help file:"
113 | @echo "# mkdir -p $$HOME/.local/share/devhelp/scona"
114 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/scona"
115 | @echo "# devhelp"
116 |
117 | .PHONY: epub
118 | epub:
119 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
120 | @echo
121 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
122 |
123 | .PHONY: epub3
124 | epub3:
125 | $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3
126 | @echo
127 | @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3."
128 |
129 | .PHONY: latex
130 | latex:
131 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
132 | @echo
133 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
134 | @echo "Run \`make' in that directory to run these through (pdf)latex" \
135 | "(use \`make latexpdf' here to do that automatically)."
136 |
137 | .PHONY: latexpdf
138 | latexpdf:
139 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
140 | @echo "Running LaTeX files through pdflatex..."
141 | $(MAKE) -C $(BUILDDIR)/latex all-pdf
142 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
143 |
144 | .PHONY: latexpdfja
145 | latexpdfja:
146 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
147 | @echo "Running LaTeX files through platex and dvipdfmx..."
148 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
149 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
150 |
151 | .PHONY: text
152 | text:
153 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
154 | @echo
155 | @echo "Build finished. The text files are in $(BUILDDIR)/text."
156 |
157 | .PHONY: man
158 | man:
159 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
160 | @echo
161 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
162 |
163 | .PHONY: texinfo
164 | texinfo:
165 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
166 | @echo
167 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
168 | @echo "Run \`make' in that directory to run these through makeinfo" \
169 | "(use \`make info' here to do that automatically)."
170 |
171 | .PHONY: info
172 | info:
173 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
174 | @echo "Running Texinfo files through makeinfo..."
175 | make -C $(BUILDDIR)/texinfo info
176 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
177 |
178 | .PHONY: gettext
179 | gettext:
180 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
181 | @echo
182 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
183 |
184 | .PHONY: changes
185 | changes:
186 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
187 | @echo
188 | @echo "The overview file is in $(BUILDDIR)/changes."
189 |
190 | .PHONY: linkcheck
191 | linkcheck:
192 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
193 | @echo
194 | @echo "Link check complete; look for any errors in the above output " \
195 | "or in $(BUILDDIR)/linkcheck/output.txt."
196 |
197 | .PHONY: doctest
198 | doctest:
199 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
200 | @echo "Testing of doctests in the sources finished, look at the " \
201 | "results in $(BUILDDIR)/doctest/output.txt."
202 |
203 | .PHONY: coverage
204 | coverage:
205 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage
206 | @echo "Testing of coverage in the sources finished, look at the " \
207 | "results in $(BUILDDIR)/coverage/python.txt."
208 |
209 | .PHONY: xml
210 | xml:
211 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
212 | @echo
213 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml."
214 |
215 | .PHONY: pseudoxml
216 | pseudoxml:
217 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
218 | @echo
219 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
220 |
221 | .PHONY: dummy
222 | dummy:
223 | $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy
224 | @echo
225 | @echo "Build finished. Dummy builder generates no files."
226 |
--------------------------------------------------------------------------------
/docs/source/conf.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | # -*- coding: utf-8 -*-
3 | #
4 | # scona documentation build configuration file, created by
5 | # sphinx-quickstart on Mon Feb 5 16:59:27 2018.
6 | #
7 | # This file is execfile()d with the current directory set to its
8 | # containing dir.
9 | #
10 | # Note that not all possible configuration values are present in this
11 | # autogenerated file.
12 | #
13 | # All configuration values have a default; values that are commented out
14 | # serve to show the default.
15 |
16 | # If extensions (or modules to document with autodoc) are in another directory,
17 | # add these directories to sys.path here. If the directory is relative to the
18 | # documentation root, use os.path.abspath to make it absolute, like shown here.
19 | #
20 | import os
21 | import sys
22 | # Use an absolute path so autodoc can find the package from any build directory.
23 | sys.path.insert(0, os.path.abspath('../'))
24 |
25 | # -- General configuration ------------------------------------------------
26 |
27 | # If your documentation needs a minimal Sphinx version, state it here.
28 | #
29 | # needs_sphinx = '1.0'
30 |
31 | # Add any Sphinx extension module names here, as strings. They can be
32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 | # ones.
34 | extensions = [
35 | 'sphinx.ext.autodoc',
36 | 'sphinx.ext.doctest',
37 | 'sphinx.ext.intersphinx',
38 | 'sphinx.ext.coverage',
39 | 'sphinx.ext.ifconfig',
40 | 'sphinx.ext.viewcode',
41 | 'sphinx.ext.imgmath',
42 | 'sphinx.ext.napoleon',
43 | ]
44 |
45 | # Add any paths that contain templates here, relative to this directory.
46 | templates_path = ['_templates']
47 |
48 | # The suffix(es) of source filenames.
49 | # You can specify multiple suffix as a list of string:
50 | #
51 | # source_suffix = ['.rst', '.md']
52 | source_suffix = '.rst'
53 |
54 | # The encoding of source files.
55 | #
56 | # source_encoding = 'utf-8-sig'
57 |
58 | # The master toctree document.
59 | master_doc = 'index'
60 |
61 | # General information about the project.
62 | project = 'scona'
63 | copyright = '2018, Dr Kirstie Whitaker, Isla Staden'
64 | author = 'Dr Kirstie Whitaker, Isla Staden'
65 |
66 | # The version info for the project you're documenting, acts as replacement for
67 | # |version| and |release|, also used in various other places throughout the
68 | # built documents.
69 | #
70 | # The short X.Y version.
71 | version = '0.1dev'
72 | # The full version, including alpha/beta/rc tags.
73 | release = '0.1dev'
74 |
75 | # The language for content autogenerated by Sphinx. Refer to documentation
76 | # for a list of supported languages.
77 | #
78 | # This is also used if you do content translation via gettext catalogs.
79 | # Usually you set "language" from the command line for these cases.
80 | language = None
81 |
82 | # There are two options for replacing |today|: either, you set today to some
83 | # non-false value, then it is used:
84 | #
85 | # today = ''
86 | #
87 | # Else, today_fmt is used as the format for a strftime call.
88 | #
89 | # today_fmt = '%B %d, %Y'
90 |
91 | # List of patterns, relative to source directory, that match files and
92 | # directories to ignore when looking for source files.
93 | # These patterns also affect html_static_path and html_extra_path
94 | exclude_patterns = []
95 |
96 | # The reST default role (used for this markup: `text`) to use for all
97 | # documents.
98 | #
99 | # default_role = None
100 |
101 | # If true, '()' will be appended to :func: etc. cross-reference text.
102 | #
103 | # add_function_parentheses = True
104 |
105 | # If true, the current module name will be prepended to all description
106 | # unit titles (such as .. function::).
107 | #
108 | # add_module_names = True
109 |
110 | # If true, sectionauthor and moduleauthor directives will be shown in the
111 | # output. They are ignored by default.
112 | #
113 | # show_authors = False
114 |
115 | # The name of the Pygments (syntax highlighting) style to use.
116 | pygments_style = 'sphinx'
117 |
118 | # A list of ignored prefixes for module index sorting.
119 | # modindex_common_prefix = []
120 |
121 | # If true, keep warnings as "system message" paragraphs in the built documents.
122 | # keep_warnings = False
123 |
124 | # If true, `todo` and `todoList` produce output, else they produce nothing.
125 | todo_include_todos = False
126 |
127 |
128 | # -- Options for HTML output ----------------------------------------------
129 |
130 | # The theme to use for HTML and HTML Help pages. See the documentation for
131 | # a list of builtin themes.
132 | #
133 | html_theme = 'sphinx_rtd_theme'
134 |
135 | # Theme options are theme-specific and customize the look and feel of a theme
136 | # further. For a list of options available for each theme, see the
137 | # documentation.
138 | #
139 | # html_theme_options = {}
140 |
141 | # Add any paths that contain custom themes here, relative to this directory.
142 | # html_theme_path = []
143 |
144 | # The name for this set of Sphinx documents.
145 | # " v documentation" by default.
146 | #
147 | # html_title = 'scona v0.1dev'
148 |
149 | # A shorter title for the navigation bar. Default is the same as html_title.
150 | #
151 | # html_short_title = None
152 |
153 | # The name of an image file (relative to this directory) to place at the top
154 | # of the sidebar.
155 | #
156 | # html_logo = None
157 |
158 | # The name of an image file (relative to this directory) to use as a favicon of
159 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or
160 | # 32x32 pixels large.
161 | #
162 | # html_favicon = None
163 |
164 | # Add any paths that contain custom static files (such as style sheets) here,
165 | # relative to this directory. They are copied after the builtin static files,
166 | # so a file named "default.css" will overwrite the builtin "default.css".
167 | html_static_path = ['_static']
168 |
169 | # Add any extra paths that contain custom files (such as robots.txt or
170 | # .htaccess) here, relative to this directory. These files are copied
171 | # directly to the root of the documentation.
172 | #
173 | # html_extra_path = []
174 |
175 | # If not None, a 'Last updated on:' timestamp is inserted at every page
176 | # bottom, using the given strftime format.
177 | # The empty string is equivalent to '%b %d, %Y'.
178 | #
179 | # html_last_updated_fmt = None
180 |
181 | # If true, SmartyPants will be used to convert quotes and dashes to
182 | # typographically correct entities.
183 | #
184 | # html_use_smartypants = True
185 |
186 | # Custom sidebar templates, maps document names to template names.
187 | #
188 | # html_sidebars = {}
189 |
190 | # Additional templates that should be rendered to pages, maps page names to
191 | # template names.
192 | #
193 | # html_additional_pages = {}
194 |
195 | # If false, no module index is generated.
196 | #
197 | # html_domain_indices = True
198 |
199 | # If false, no index is generated.
200 | #
201 | # html_use_index = True
202 |
203 | # If true, the index is split into individual pages for each letter.
204 | #
205 | # html_split_index = False
206 |
207 | # If true, links to the reST sources are added to the pages.
208 | #
209 | # html_show_sourcelink = True
210 |
211 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
212 | #
213 | # html_show_sphinx = True
214 |
215 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
216 | #
217 | # html_show_copyright = True
218 |
219 | # If true, an OpenSearch description file will be output, and all pages will
220 | # contain a <link> tag referring to it. The value of this option must be the
221 | # base URL from which the finished HTML is served.
222 | #
223 | # html_use_opensearch = ''
224 |
225 | # This is the file name suffix for HTML files (e.g. ".xhtml").
226 | # html_file_suffix = None
227 |
228 | # Language to be used for generating the HTML full-text search index.
229 | # Sphinx supports the following languages:
230 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
231 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'
232 | #
233 | # html_search_language = 'en'
234 |
235 | # A dictionary with options for the search language support, empty by default.
236 | # 'ja' uses this config value.
237 | # 'zh' user can custom change `jieba` dictionary path.
238 | #
239 | # html_search_options = {'type': 'default'}
240 |
241 | # The name of a javascript file (relative to the configuration directory) that
242 | # implements a search results scorer. If empty, the default will be used.
243 | #
244 | # html_search_scorer = 'scorer.js'
245 |
246 | # Output file base name for HTML help builder.
247 | htmlhelp_basename = 'sconadoc'
248 |
249 | # -- Options for LaTeX output ---------------------------------------------
250 |
251 | latex_elements = {
252 | # The paper size ('letterpaper' or 'a4paper').
253 | #
254 | # 'papersize': 'letterpaper',
255 |
256 | # The font size ('10pt', '11pt' or '12pt').
257 | #
258 | # 'pointsize': '10pt',
259 |
260 | # Additional stuff for the LaTeX preamble.
261 | #
262 | # 'preamble': '',
263 |
264 | # Latex figure (float) alignment
265 | #
266 | # 'figure_align': 'htbp',
267 | }
268 |
269 | # Grouping the document tree into LaTeX files. List of tuples
270 | # (source start file, target name, title,
271 | # author, documentclass [howto, manual, or own class]).
272 | latex_documents = [
273 | (master_doc,
274 | 'scona.tex',
275 | 'scona Documentation',
276 | 'Dr Kirstie Whitaker, Isla Staden',
277 | 'manual'),
278 | ]
279 |
280 | # The name of an image file (relative to this directory) to place at the top of
281 | # the title page.
282 | #
283 | # latex_logo = None
284 |
285 | # For "manual" documents, if this is true, then toplevel headings are parts,
286 | # not chapters.
287 | #
288 | # latex_use_parts = False
289 |
290 | # If true, show page references after internal links.
291 | #
292 | # latex_show_pagerefs = False
293 |
294 | # If true, show URL addresses after external links.
295 | #
296 | # latex_show_urls = False
297 |
298 | # Documents to append as an appendix to all manuals.
299 | #
300 | # latex_appendices = []
301 |
302 | # If false, will not define \strong, \code, \titleref, \crossref ... but only
303 | # \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added
304 | # packages.
305 | #
306 | # latex_keep_old_macro_names = True
307 |
308 | # If false, no module index is generated.
309 | #
310 | # latex_domain_indices = True
311 |
312 |
313 | # -- Options for manual page output ---------------------------------------
314 |
315 | # One entry per manual page. List of tuples
316 | # (source start file, name, description, authors, manual section).
317 | man_pages = [
318 | (master_doc,
319 |      'scona',
320 | 'scona Documentation',
321 | [author],
322 | 1)
323 | ]
324 |
325 | # If true, show URL addresses after external links.
326 | #
327 | # man_show_urls = False
328 |
329 |
330 | # -- Options for Texinfo output -------------------------------------------
331 |
332 | # Grouping the document tree into Texinfo files. List of tuples
333 | # (source start file, target name, title, author,
334 | # dir menu entry, description, category)
335 | texinfo_documents = [
336 | (master_doc,
337 | 'scona',
338 | 'scona Documentation',
339 | author,
340 | 'scona',
341 |      'Software to analyse structural covariance brain networks in Python.',
342 | 'Miscellaneous'),
343 | ]
344 |
345 | # Documents to append as an appendix to all manuals.
346 | #
347 | # texinfo_appendices = []
348 |
349 | # If false, no module index is generated.
350 | #
351 | # texinfo_domain_indices = True
352 |
353 | # How to display URL addresses: 'footnote', 'no', or 'inline'.
354 | #
355 | # texinfo_show_urls = 'footnote'
356 |
357 | # If true, do not generate a @detailmenu in the "Top" node's menu.
358 | #
359 | # texinfo_no_detailmenu = False
360 |
361 |
362 | # Example configuration for intersphinx: refer to the Python standard library.
363 | intersphinx_mapping = {
364 | 'python': ('https://docs.python.org/', None),
365 | 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
366 | 'networkx': ('https://networkx.github.io/documentation/stable/', None),
367 | 'pandas': ('http://pandas.pydata.org/pandas-docs/stable/', None),
368 | 'python-louvain': ('https://python-louvain.readthedocs.io/en/latest/',
369 | None)}
370 |
--------------------------------------------------------------------------------
/docs/source/create-network.rst:
--------------------------------------------------------------------------------
1 | Creating a Network
2 | ==================
3 |
4 | Weighted and Binary Graphs
5 | --------------------------
6 |
7 | Minimum Spanning Tree
8 | ---------------------
9 | - Why a connected graph? Many network measures are only well defined on a connected graph, so scona includes the minimum spanning tree before thresholding to guarantee that the network stays connected.
10 |
11 | Thresholding
12 | ------------
13 |
14 |
15 | The BrainNetwork Class
16 | ----------------------
17 |
18 | This is a very lightweight subclass of the :class:`networkx.Graph` class. This means that any methods you can use on a :class:`networkx.Graph` object can also be used on a `BrainNetwork` object, although the reverse is not true. We have **added** various methods that allow us to keep track of measures that have already been calculated. This is particularly useful later on when one is dealing with 1000 random graphs (or more!) and saves a lot of time.
19 |
20 | All :class:`scona.BrainNetwork` **methods** have a corresponding ``scona`` **function**. So while the :class:`scona.BrainNetwork` methods can only be applied to :class:`scona.BrainNetwork` objects, you can find the equivalent function in ``scona`` which can be used on a regular :class:`networkx.Graph` object.
21 |
22 | For example, if ``G`` is a :class:`scona.BrainNetwork` object, you can threshold it by typing ``G.threshold(10)``. If ``nxG`` is a :class:`networkx.Graph` you can use ``scona.threshold_graph(nxG, 10)`` to achieve the same result.
23 |
24 | A :class:`BrainNetwork` can be initialised from a :class:`networkx.Graph` or from a correlation matrix represented as a :class:`pandas.DataFrame` or :class:`numpy.ndarray`.
25 |
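Putting these together, a minimal sketch (assuming ``corrmat`` is a correlation matrix held in a :class:`pandas.DataFrame`)::

    import networkx as nx
    import scona as scn

    # Initialise a BrainNetwork directly from the correlation matrix
    G = scn.BrainNetwork(network=corrmat)
    H = G.threshold(10)                # BrainNetwork method

    # The equivalent scona function applied to a plain networkx graph
    nxG = nx.from_pandas_adjacency(corrmat)
    H2 = scn.threshold_graph(nxG, 10)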
26 |
27 | Resources
28 | ---------
--------------------------------------------------------------------------------
/docs/source/global-analysis.rst:
--------------------------------------------------------------------------------
1 | Network Analysis at the Global Level
2 | ====================================
3 |
4 | Random Graphs
5 | -------------
6 |
7 |
8 |
9 | The GraphBundle Class
10 | ---------------------
11 | A :class:`GraphBundle` is a dictionary storing graphs under user-chosen names. This makes it easy to handle a collection of networks together, for example a real network alongside a large number of random graphs, as sketched below.
12 |
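A minimal sketch, assuming ``H`` is a thresholded :class:`BrainNetwork` and using the bundle methods as they appear in the scona tutorials::

    import scona as scn

    # Store the real network under a name of your choice
    bundle = scn.GraphBundle([H], ['real'])

    # Add 100 random graphs, generated by edge swapping, for comparison
    bundle.create_random_graphs('real', 100)

    # Calculate and report global measures across every graph in the bundle
    results = bundle.report_global_measures()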
13 |
14 | Across Bundle Analyses
15 | ----------------------
16 |
17 | Resources
18 | ---------
--------------------------------------------------------------------------------
/docs/source/index.rst:
--------------------------------------------------------------------------------
1 | .. scona documentation master file, created by
2 | sphinx-quickstart on Mon Feb 5 16:59:27 2018.
3 | You can adapt this file completely to your liking, but it should at least
4 | contain the root `toctree` directive.
5 |
6 | scona : Structural Covariance Network Analysis
7 | ==============================================
8 |
9 | .. toctree::
10 | :maxdepth: 2
11 | :caption: Contents:
12 |
13 | introduction
14 | sc-matrices
15 | create-network
16 | local-analysis
17 | global-analysis
18 | plotting
19 |
20 | Indices and tables
21 | ==================
22 |
23 | * :ref:`genindex`
24 | * :ref:`modindex`
25 | * :ref:`search`
26 |
--------------------------------------------------------------------------------
/docs/source/introduction.rst:
--------------------------------------------------------------------------------
1 | Introduction
2 | ============
3 |
4 | What is scona
5 | -------------
6 |
7 | scona is a toolkit to analyse structural covariance brain networks using python.
8 |
9 | scona takes regional cortical thickness data from structural MRI and generates a matrix of correlations between brain regions over a cohort of subjects. This correlation matrix is used to generate a variety of networks and network measures.
10 |
11 | The logic behind structural covariance networks
12 | -----------------------------------------------
13 |
14 | Installing scona
15 | ----------------
16 |
17 | You can install scona directly from the GitHub repository::
18 |
19 | pip install git+https://github.com/WhitakerLab/scona.git
20 |
21 | If you want to edit the scona source code, we recommend passing the ``-e`` flag to ``pip`` so that the package is installed in editable (development) mode.
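For example, one possible workflow::

    git clone https://github.com/WhitakerLab/scona.git
    cd scona
    pip install -e .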
22 |
23 | Getting Started
24 | ---------------
25 |
26 | We have automatically generated `docstring `_ documentation; here's how to navigate it.
27 |
28 | See all docs organised in alphabetical order:
29 | * :ref:`genindex`
30 |
31 | See the structure of the package:
32 | * :ref:`modindex`
33 |
34 | See the submodules page:
35 | * :ref:`ref-subpackages-label`
36 |
37 | | You can also type any function name into the **search bar** to search the documentation directly.
38 | | Alongside this documentation, scona has some Jupyter notebook `tutorials `_.
39 |
40 | Finding Help
41 | ------------
42 | If you have questions or want to get in touch, you can join our `gitter lobby `_, tweet `@Whitaker_Lab `_ or email Isla at islastaden@gmail.com.
43 |
44 |
45 |
--------------------------------------------------------------------------------
/docs/source/local-analysis.rst:
--------------------------------------------------------------------------------
1 | Network Analysis at the Nodal Level
2 | ===================================
3 |
4 |
5 |
6 | Resources
7 | ---------
--------------------------------------------------------------------------------
/docs/source/plotting.rst:
--------------------------------------------------------------------------------
1 | Plotting Brain Networks
2 | =======================
3 |
4 | With nilearn
5 | ------------
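
A minimal sketch using nilearn's general-purpose connectome plot, assuming ``M`` is a correlation matrix as a :class:`numpy.ndarray` and ``centroids`` is an array of MNI coordinates with one (x, y, z) row per node::

    from nilearn import plotting

    # Show only the strongest 2% of edges to keep the plot readable
    plotting.plot_connectome(M, centroids, edge_threshold='98%')
    plotting.show()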
--------------------------------------------------------------------------------
/docs/source/sc-matrices.rst:
--------------------------------------------------------------------------------
1 | Structural Covariance Matrices
2 | ==============================
3 |
4 | scona needs two key inputs to create a correlation matrix: a table of regional measures, with subjects as rows and brain regions (plus any covariates) as columns, and a list of the brain regions you wish to correlate. See the sketch below.
5 |
6 |
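A minimal sketch (the file name and the ``age`` covariate are hypothetical)::

    import pandas as pd
    import scona as scn

    # Regional measures: subjects as rows; brain regions and
    # covariates such as age as columns
    df = pd.read_csv('regional_measures.csv')
    names = ['lh_bankssts_part1', 'lh_bankssts_part2']

    # Correct for age, then correlate the regions over subjects
    M = scn.corrmat_from_regionalmeasures(df, names, covars=['age'])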
7 |
8 | Resources
9 | ---------
10 |
11 |
--------------------------------------------------------------------------------
/docs/source/scona.datasets.NSPN_WhitakerVertes_PNAS2016.rst:
--------------------------------------------------------------------------------
1 | scona\.datasets\.NSPN\_WhitakerVertes\_PNAS2016 package
2 | =======================================================================
3 |
4 | Module contents
5 | ---------------
6 |
7 | .. automodule:: scona.datasets.NSPN_WhitakerVertes_PNAS2016
8 | :members:
9 | :undoc-members:
10 | :show-inheritance:
11 |
--------------------------------------------------------------------------------
/docs/source/scona.datasets.rst:
--------------------------------------------------------------------------------
1 | scona\.datasets package
2 | =======================================
3 |
4 | Subpackages
5 | -----------
6 |
7 | .. toctree::
8 |
9 | scona.datasets.NSPN_WhitakerVertes_PNAS2016
10 |
11 | Module contents
12 | ---------------
13 |
14 | .. automodule:: scona.datasets
15 | :members:
16 | :undoc-members:
17 | :show-inheritance:
18 |
--------------------------------------------------------------------------------
/docs/source/scona.rst:
--------------------------------------------------------------------------------
1 | scona package
2 | =============================
3 |
4 | .. _ref-subpackages-label:
5 |
6 | Subpackages
7 | -----------
8 |
9 | .. toctree::
10 |
11 | scona.scripts
12 | scona.wrappers
13 | scona.datasets
14 |
15 | Module contents
16 | ---------------
17 |
18 | .. automodule:: scona
19 | :members:
20 | :undoc-members:
21 | :show-inheritance:
22 |
23 | Submodules
24 | ----------
25 |
26 | scona.make_corr_matrices module
27 | -----------------------------------------------
28 |
29 | .. automodule:: scona.make_corr_matrices
30 | :members:
31 | :undoc-members:
32 | :show-inheritance:
33 |
34 | scona.make_graphs module
35 | ----------------------------------------
36 |
37 | .. automodule:: scona.make_graphs
38 | :members:
39 | :undoc-members:
40 | :show-inheritance:
41 |
42 | scona.graph_measures module
43 | -------------------------------------------
44 |
45 | .. automodule:: scona.graph_measures
46 | :members:
47 | :undoc-members:
48 | :show-inheritance:
49 |
50 | scona.classes module
51 | ------------------------------------
52 |
53 | .. automodule:: scona.classes
54 | :members:
55 | :undoc-members:
56 | :show-inheritance:
57 |
58 | scona.stats_functions module
59 | --------------------------------------------
60 |
61 | .. automodule:: scona.stats_functions
62 | :members:
63 | :undoc-members:
64 | :show-inheritance:
65 |
66 | scona.visualisations module
67 | --------------------------------------------
68 |
69 | .. automodule:: scona.visualisations
70 | :members:
71 | :undoc-members:
72 | :show-inheritance:
73 |
--------------------------------------------------------------------------------
/docs/source/scona.scripts.rst:
--------------------------------------------------------------------------------
1 | scona.scripts package
2 | =====================================
3 |
4 | Submodules
5 | ----------
6 |
7 |
8 | scona.scripts.make_figures module
9 | -------------------------------------------------
10 |
11 | .. automodule:: scona.scripts.make_figures
12 | :members:
13 | :undoc-members:
14 | :show-inheritance:
15 |
16 | scona.scripts.useful_functions module
17 | -----------------------------------------------------
18 |
19 | .. automodule:: scona.scripts.useful_functions
20 | :members:
21 | :undoc-members:
22 | :show-inheritance:
23 |
24 | scona.scripts.visualisation_commands module
25 | -----------------------------------------------------------
26 |
27 | .. automodule:: scona.scripts.visualisation_commands
28 | :members:
29 | :undoc-members:
30 | :show-inheritance:
31 |
32 |
33 | Module contents
34 | ---------------
35 |
36 | .. automodule:: scona.scripts
37 | :members:
38 | :undoc-members:
39 | :show-inheritance:
40 |
--------------------------------------------------------------------------------
/docs/source/scona.wrappers.rst:
--------------------------------------------------------------------------------
1 | scona.wrappers package
2 | ======================================
3 |
4 | Submodules
5 | ----------
6 |
7 | scona.wrappers.corrmat_from_regionalmeasures module
8 | -------------------------------------------------------------------
9 |
10 | .. automodule:: scona.wrappers.corrmat_from_regionalmeasures
11 | :members:
12 | :undoc-members:
13 | :show-inheritance:
14 |
15 | scona.wrappers.network_analysis_from_corrmat module
16 | -------------------------------------------------------------------
17 |
18 | .. automodule:: scona.wrappers.network_analysis_from_corrmat
19 | :members:
20 | :undoc-members:
21 | :show-inheritance:
22 |
23 |
24 | Module contents
25 | ---------------
26 |
27 | .. automodule:: scona.wrappers
28 | :members:
29 | :undoc-members:
30 | :show-inheritance:
31 |
--------------------------------------------------------------------------------
/push.sh:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 | # Push HTML files to gh-pages automatically.
3 |
4 | # Fill this out with the correct org/repo
5 | ORG="WhitakerLab"
6 | REPO="scona"
7 | # This should match the email address of one of your users.
8 | EMAIL="kw401@cam.ac.uk"
9 |
10 | set -e
11 |
12 | # Clone the gh-pages branch outside of the repo and cd into it.
13 | cd ..
14 | git clone -b gh-pages "https://$GH_TOKEN@github.com/$ORG/$REPO.git" gh-pages
15 | cd gh-pages
16 |
17 | # Update git configuration so I can push.
18 | if [ "$1" != "dry" ]; then
19 | # Update git config.
20 | git config user.name "Travis Builder"
21 | git config user.email "$EMAIL"
22 | fi
23 |
24 | # Copy in the HTML. You may need to change this to match your documentation path.
25 | cp -R ../$REPO/docs/build/html/* ./
26 |
27 | # Add and commit changes.
28 | git add -A .
29 | git commit -m "[ci skip] Autodoc commit for $COMMIT."
30 | if [ "$1" != "dry" ]; then
31 | # -q is very important, otherwise you leak your GH_TOKEN
32 | git push -q origin gh-pages
33 | fi
34 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # Note that this requirements file lists the requirements needed to run the testing suite.
2 | # Package dependencies should be listed in setup.py instead.
3 | pytest
4 | pandas
5 | python-louvain==0.11
6 | numpy
7 | scipy
8 | seaborn==0.9.0
9 | scikit-learn
10 | networkx>=2.2
11 | forceatlas2
12 | nilearn==0.5.2
13 |
--------------------------------------------------------------------------------
/scona/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | scona
3 | =====
4 | scona is a Python package for the analysis
5 | of structural covariance brain networks.
6 |
7 | Website (including documentation)::
8 | http://whitakerlab.github.io/scona
9 | Source::
10 | https://github.com/WhitakerLab/scona
11 | Bug reports::
12 | https://github.com/WhitakerLab/scona/issues
13 |
14 | Simple example
15 | --------------
16 | FILL
17 |
18 | Bugs
19 | ----
20 | Please report any bugs that you find in an issue on `our GitHub repository `_.
21 | Or, even better, fork the repository and create a pull request.
22 |
23 | License
24 | -------
25 | `scona is licensed under the MIT License `_.
26 | """
27 |
28 | # Release data
29 | __author__ = "Kirstie Whitaker and Isla Staden"
30 | __license__ = "MIT"
31 |
32 | __date__ = ""
33 | __version__ = "0.1dev"
34 |
35 | __bibtex__ = """ FILL
36 | """
37 |
38 | from scona.make_corr_matrices import *
39 | from scona.make_graphs import *
40 | from scona.graph_measures import *
41 | from scona.classes import *
42 |
43 | from scona.wrappers import *
44 |
45 | from scona.visualisations_helpers import *
46 |
47 | import scona.datasets
48 |
49 |
50 |
--------------------------------------------------------------------------------
/scona/bits_and_bobs/Corr_pathology_SummaryFig_Cost_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WhitakerLab/scona/bb29a4247f4d53e0a1e56459b3a908f2048b54c7/scona/bits_and_bobs/Corr_pathology_SummaryFig_Cost_10.png
--------------------------------------------------------------------------------
/scona/datasets/NSPN_WhitakerVertes_PNAS2016/500.centroids.txt:
--------------------------------------------------------------------------------
1 | -56.403550 -40.152663 1.708876
2 | -53.140506 -49.843038 8.264557
3 | -5.001684 20.645903 25.733446
4 | -33.265925 20.200202 45.347826
5 | -31.958115 2.146597 51.269110
6 | -38.795007 12.584757 33.278581
7 | -39.715079 11.341351 48.846438
8 | -8.609127 -73.360119 17.095238
9 | -5.304200 -87.102157 19.323496
10 | -24.010774 -5.861410 -32.826641
11 | -30.237677 -46.493585 -17.452397
12 | -34.771765 -9.299608 -35.172549
13 | -33.515847 -72.220765 -14.257923
14 | -37.632472 -38.758481 -22.906300
15 | -38.896698 -60.874682 -16.663844
16 | -43.393728 -58.809524 40.471545
17 | -35.980519 -83.125541 18.926407
18 | -44.904486 -56.280753 17.439942
19 | -31.993691 -75.483701 33.056782
20 | -43.132353 -66.558824 15.906250
21 | -37.122661 -69.533264 43.258836
22 | -43.266380 -75.049409 23.400644
23 | -45.069149 -64.283245 32.022606
24 | -43.614049 -6.016575 -40.149171
25 | -50.245499 -60.608838 -7.837971
26 | -48.242567 -19.479656 -32.479656
27 | -52.185499 -51.758004 -16.904896
28 | -52.818271 -31.309430 -26.385069
29 | -52.835546 -41.770174 -22.660878
30 | -8.616947 -48.171793 7.731863
31 | -5.075020 -42.784517 28.464485
32 | -24.909233 -92.300469 14.186228
33 | -42.947170 -72.409434 -6.833962
34 | -14.744709 -99.826720 8.843915
35 | -45.166234 -77.340260 4.583117
36 | -16.533141 -100.145533 -6.046110
37 | -34.835691 -84.610446 -14.468988
38 | -28.701852 -89.411111 2.435185
39 | -24.468116 -93.362319 -13.927536
40 | -35.435277 -89.190612 -3.732575
41 | -17.753509 47.755755 -18.101067
42 | -20.811106 15.327714 -20.021815
43 | -28.966843 31.229088 -15.813112
44 | -28.864099 25.384448 -12.546512
45 | -23.078374 -62.599419 -8.500726
46 | -6.559578 -88.407240 -8.045249
47 | -16.899535 -52.515349 -6.002791
48 | -6.980070 -76.649472 -1.662368
49 | -14.722494 -63.339853 0.894866
50 | -15.834179 -74.541455 -9.203046
51 | -6.025210 17.265756 -16.858718
52 | -6.501101 52.613353 -11.097579
53 | -5.719202 39.965588 -19.798348
54 | -59.379435 -14.499638 -22.706010
55 | -56.848990 -58.883231 2.381914
56 | -52.194580 -0.915219 -29.740097
57 | -60.825629 -42.141930 -8.054339
58 | -60.018519 -27.635031 -13.298611
59 | -25.991189 -25.186784 -25.332159
60 | -22.076053 -38.247688 -12.107914
61 | -7.362816 -38.570397 61.859206
62 | -8.721267 -21.255204 49.808145
63 | -7.506276 -26.002510 60.179079
64 | -49.740465 21.610233 15.596279
65 | -44.486409 11.434907 5.653076
66 | -45.811818 13.245455 17.796364
67 | -41.442783 40.407672 -12.177503
68 | -48.102240 30.818890 5.227848
69 | -40.487207 31.110874 -0.502132
70 | -9.986425 -89.402715 3.926094
71 | -13.643931 -74.058960 7.273988
72 | -16.090447 -36.385163 70.946138
73 | -51.747771 -13.374522 13.364331
74 | -29.604043 -33.269917 62.903686
75 | -60.994012 -13.468777 23.994867
76 | -40.488584 -29.291096 58.126712
77 | -53.220698 -15.220698 34.495012
78 | -44.948367 -27.507903 49.681770
79 | -49.124402 -21.491627 44.209330
80 | -6.448488 -26.226764 38.546473
81 | -4.381127 -7.346745 35.440380
82 | -37.874144 -17.550514 53.143836
83 | -49.770687 0.859046 7.718093
84 | -13.373494 -26.420269 69.112686
85 | -54.641864 3.804590 21.876912
86 | -24.045574 -17.930762 65.358457
87 | -32.737288 -15.538136 59.027119
88 | -53.932496 -2.569074 28.427786
89 | -42.151947 -6.437797 47.716999
90 | -47.849213 -7.086993 39.928749
91 | -14.240201 -60.530653 12.161809
92 | -9.713115 -48.762295 58.942623
93 | -11.605708 -66.186047 23.613108
94 | -9.260824 -44.096093 43.357973
95 | -7.174011 -69.684746 35.981921
96 | -7.305000 -61.735000 48.466250
97 | -7.411572 -56.623362 28.838428
98 | -5.020690 35.550805 2.473103
99 | -40.059727 33.650171 16.156997
100 | -39.194444 47.543478 5.071256
101 | -31.628272 29.739529 36.488220
102 | -21.916205 57.989612 -10.367036
103 | -42.453770 27.739687 29.635846
104 | -32.064103 53.733728 -2.766272
105 | -25.071520 40.946360 29.699872
106 | -23.693239 56.430878 9.910192
107 | -37.375483 41.638996 23.134170
108 | -26.224037 50.222870 20.186698
109 | -11.062982 61.803342 6.299486
110 | -11.686777 -8.424793 64.785124
111 | -9.020717 52.775299 15.416733
112 | -17.927432 1.240032 62.199362
113 | -10.349686 54.690566 27.231447
114 | -7.206540 -2.830465 52.236661
115 | -7.866716 41.423417 31.085420
116 | -10.006845 10.758385 61.433949
117 | -12.798887 41.986778 41.756437
118 | -8.050800 16.079022 48.005644
119 | -17.722302 30.842446 48.028777
120 | -17.761706 21.067726 55.610368
121 | -6.939499 28.797636 44.500000
122 | -16.828947 -49.727632 66.319737
123 | -12.001488 -89.209821 25.053571
124 | -31.851301 -47.447336 59.381660
125 | -34.423810 -45.066667 46.744444
126 | -29.741438 -54.797945 46.405822
127 | -18.845291 -83.868834 31.849776
128 | -20.723512 -60.528759 61.520686
129 | -23.650064 -73.056194 29.860792
130 | -21.033210 -66.468635 43.566421
131 | -12.295620 -70.559611 52.287105
132 | -49.500000 -32.086745 8.040936
133 | -43.696244 8.443114 -23.509526
134 | -61.949202 -38.301161 11.640784
135 | -52.287141 3.186017 -16.006242
136 | -59.114051 -24.142336 -1.903285
137 | -55.627500 -12.855000 -2.430833
138 | -51.982718 -7.207384 -8.805970
139 | -44.148208 -47.573290 41.342020
140 | -46.732528 -28.504745 18.622088
141 | -54.163802 -47.446916 39.001011
142 | -58.911636 -29.412948 25.093613
143 | -57.393069 -49.401980 24.446535
144 | -54.588803 -32.859073 37.251931
145 | -49.357225 -38.912139 32.553757
146 | -8.118848 63.548619 -9.218487
147 | -31.541839 10.662289 -35.594371
148 | -44.533505 -22.900344 6.773196
149 | -35.686777 -20.587603 11.657025
150 | -34.236728 7.124383 -12.183642
151 | -37.400137 -8.593707 4.436389
152 | -35.450030 4.279830 -0.592368
153 | 53.969182 -39.123273 1.497343
154 | 54.990074 -42.904467 8.951613
155 | 5.639305 21.084500 26.725736
156 | 37.199108 10.573551 33.786033
157 | 32.879760 5.008016 52.605210
158 | 38.175589 21.138116 45.692719
159 | 36.441860 12.222222 48.854005
160 | 7.970109 -78.013587 26.547554
161 | 6.201097 -89.290676 13.001828
162 | 8.630194 -72.195291 16.803324
163 | 24.687620 -4.962155 -33.415010
164 | 32.028812 -69.722689 -12.972389
165 | 34.950083 -7.781198 -36.410150
166 | 37.491313 -59.459459 -16.182432
167 | 36.034682 -34.844756 -22.613543
168 | 33.583537 -48.043195 -16.660147
169 | 49.547278 -52.679083 13.404011
170 | 37.366412 -80.319520 21.611778
171 | 51.260490 -52.230769 26.508741
172 | 35.461940 -73.577685 35.111575
173 | 43.066667 -65.541270 12.949206
174 | 42.330661 -50.821643 40.643287
175 | 42.866485 -72.440054 20.835150
176 | 50.199747 -56.465234 40.015171
177 | 38.150633 -64.760759 44.783544
178 | 46.550943 -64.835220 30.294340
179 | 43.380094 -4.293253 -40.017368
180 | 51.656716 -58.795115 -8.910448
181 | 52.719643 -19.405357 -32.359524
182 | 53.740681 -48.775527 -18.389789
183 | 51.821429 -32.635870 -24.873447
184 | 10.075728 -47.138511 6.864078
185 | 5.081715 -43.030744 27.894822
186 | 18.645885 -99.162095 -7.394015
187 | 44.513465 -70.021544 -2.035907
188 | 17.105691 -99.073171 10.070461
189 | 24.048611 -90.626736 16.782986
190 | 43.316492 -78.804576 -11.404194
191 | 32.220800 -84.385600 8.259200
192 | 28.676205 -86.707831 -12.200301
193 | 29.431464 -94.003115 -5.362928
194 | 41.602711 -84.248175 -1.237748
195 | 22.097129 15.253110 -19.587081
196 | 18.684153 45.825683 -18.556831
197 | 34.726609 27.001327 -12.522230
198 | 19.724658 30.732877 -19.559589
199 | 18.333333 -50.895480 -4.911488
200 | 8.198175 -89.140808 -8.573664
201 | 20.625610 -58.531707 -6.080488
202 | 7.687927 -77.239180 -3.471526
203 | 12.885463 -63.722467 2.183921
204 | 17.115450 -73.040747 -8.471986
205 | 7.618033 52.545902 -8.671311
206 | 6.636640 19.291498 -18.221154
207 | 6.335362 40.741717 -18.018256
208 | 60.749491 -34.048201 -8.065173
209 | 51.631319 2.998767 -32.906289
210 | 59.994048 -10.801339 -22.330357
211 | 56.269417 -57.781553 3.134709
212 | 62.074727 -43.288833 -7.718724
213 | 59.991545 -21.624904 -15.326672
214 | 23.854350 -37.517107 -11.568915
215 | 27.447798 -24.861338 -24.204731
216 | 11.545455 -33.684025 49.448367
217 | 5.356571 -16.772436 61.135417
218 | 6.497475 -29.478114 59.215488
219 | 45.542199 12.248934 5.779199
220 | 50.729825 20.261988 18.121637
221 | 46.692771 12.043373 16.936145
222 | 42.385469 41.077648 -11.351636
223 | 48.344595 35.371622 -0.694595
224 | 43.058896 25.395092 5.938650
225 | 49.411126 29.426052 7.560380
226 | 14.565878 -70.587838 8.201014
227 | 12.274854 -91.432749 3.473684
228 | 9.324074 -78.765432 7.675926
229 | 13.789222 -34.572455 71.667066
230 | 50.836154 -10.313846 13.850769
231 | 28.370800 -33.301275 63.250290
232 | 61.877751 -11.132029 22.532192
233 | 39.056309 -32.346194 56.622523
234 | 54.620429 -14.836066 36.203026
235 | 43.566168 -23.448792 53.867664
236 | 48.387306 -19.411917 45.558290
237 | 5.613982 -10.595137 38.677812
238 | 5.707363 -24.730648 36.269352
239 | 50.415483 1.973011 7.995028
240 | 13.824742 -24.344964 68.442506
241 | 54.264137 5.644953 24.204724
242 | 22.698219 -15.158388 65.774133
243 | 55.756061 -2.121970 27.641667
244 | 33.546008 -20.560456 58.860837
245 | 48.733232 -3.493140 43.394055
246 | 39.911111 -11.508333 46.213889
247 | 33.180982 -9.761759 53.350716
248 | 14.098616 -58.053633 12.916955
249 | 7.879498 -58.918828 24.867782
250 | 9.761111 -47.658889 59.438889
251 | 17.346741 -68.079057 26.266297
252 | 6.423292 -59.699888 49.988802
253 | 7.562500 -66.638221 37.237981
254 | 9.153191 -45.782979 40.912766
255 | 5.983949 34.532588 2.542802
256 | 28.995132 56.136300 -10.249652
257 | 22.805582 58.195380 -1.627526
258 | 31.818499 29.862129 37.897033
259 | 38.776316 49.490132 5.110746
260 | 43.917790 26.858491 30.252022
261 | 23.690265 56.926254 12.712881
262 | 43.268354 33.058228 22.368354
263 | 24.458836 47.983535 26.632272
264 | 38.413146 43.921127 19.272300
265 | 30.409341 40.024725 24.248626
266 | 9.729638 6.951357 45.687783
267 | 10.931872 61.390760 5.931872
268 | 12.366579 -3.265092 65.643045
269 | 10.089983 52.879457 14.414261
270 | 19.312956 3.410584 60.360401
271 | 10.210453 54.908711 26.160279
272 | 9.686775 8.294664 60.026295
273 | 11.492403 46.987569 37.341160
274 | 19.625523 17.776151 54.751046
275 | 9.105031 36.256840 30.567520
276 | 9.200519 24.388709 53.685918
277 | 16.002727 36.524545 45.570909
278 | 9.662184 30.314873 45.207278
279 | 23.584713 -81.495541 24.360510
280 | 14.988916 -47.232759 68.312808
281 | 15.733461 -85.226270 34.587728
282 | 22.034121 -75.393701 40.053806
283 | 23.849231 -65.156923 38.592308
284 | 29.590909 -46.771562 61.291375
285 | 13.058750 -68.865000 53.702500
286 | 33.208925 -44.119675 46.361055
287 | 18.519651 -59.153930 61.758734
288 | 30.873267 -53.879208 51.443564
289 | 45.127378 10.190275 -23.683404
290 | 60.395197 -31.749636 9.372635
291 | 53.179974 1.882853 -14.544503
292 | 56.760262 -7.716835 -6.308150
293 | 59.154909 -22.562182 3.008727
294 | 51.438479 -17.538404 -3.435496
295 | 42.910714 -34.254870 40.887987
296 | 45.033989 -24.933810 18.411449
297 | 52.152080 -41.395983 41.955524
298 | 55.925602 -25.443107 26.921225
299 | 60.142169 -41.424096 23.042169
300 | 58.938005 -35.886792 36.867026
301 | 53.339002 -30.725624 31.862812
302 | 9.833805 62.818697 -10.737488
303 | 32.329699 9.840602 -32.327820
304 | 45.549628 -20.647643 7.012407
305 | 36.196011 9.173379 -11.673955
306 | 36.408831 -19.093213 10.738348
307 | 38.165125 -6.045575 4.285337
308 | 36.767277 5.573538 -1.363260
309 |
--------------------------------------------------------------------------------
/scona/datasets/NSPN_WhitakerVertes_PNAS2016/500.names.txt:
--------------------------------------------------------------------------------
1 | lh_bankssts_part1
2 | lh_bankssts_part2
3 | lh_caudalanteriorcingulate_part1
4 | lh_caudalmiddlefrontal_part1
5 | lh_caudalmiddlefrontal_part2
6 | lh_caudalmiddlefrontal_part3
7 | lh_caudalmiddlefrontal_part4
8 | lh_cuneus_part1
9 | lh_cuneus_part2
10 | lh_entorhinal_part1
11 | lh_fusiform_part1
12 | lh_fusiform_part2
13 | lh_fusiform_part3
14 | lh_fusiform_part4
15 | lh_fusiform_part5
16 | lh_inferiorparietal_part1
17 | lh_inferiorparietal_part2
18 | lh_inferiorparietal_part3
19 | lh_inferiorparietal_part4
20 | lh_inferiorparietal_part5
21 | lh_inferiorparietal_part6
22 | lh_inferiorparietal_part7
23 | lh_inferiorparietal_part8
24 | lh_inferiortemporal_part1
25 | lh_inferiortemporal_part2
26 | lh_inferiortemporal_part3
27 | lh_inferiortemporal_part4
28 | lh_inferiortemporal_part5
29 | lh_inferiortemporal_part6
30 | lh_isthmuscingulate_part1
31 | lh_isthmuscingulate_part2
32 | lh_lateraloccipital_part1
33 | lh_lateraloccipital_part2
34 | lh_lateraloccipital_part3
35 | lh_lateraloccipital_part4
36 | lh_lateraloccipital_part5
37 | lh_lateraloccipital_part6
38 | lh_lateraloccipital_part7
39 | lh_lateraloccipital_part8
40 | lh_lateraloccipital_part9
41 | lh_lateralorbitofrontal_part1
42 | lh_lateralorbitofrontal_part2
43 | lh_lateralorbitofrontal_part3
44 | lh_lateralorbitofrontal_part4
45 | lh_lingual_part1
46 | lh_lingual_part2
47 | lh_lingual_part3
48 | lh_lingual_part4
49 | lh_lingual_part5
50 | lh_lingual_part6
51 | lh_medialorbitofrontal_part1
52 | lh_medialorbitofrontal_part2
53 | lh_medialorbitofrontal_part3
54 | lh_middletemporal_part1
55 | lh_middletemporal_part2
56 | lh_middletemporal_part3
57 | lh_middletemporal_part4
58 | lh_middletemporal_part5
59 | lh_parahippocampal_part1
60 | lh_parahippocampal_part2
61 | lh_paracentral_part1
62 | lh_paracentral_part2
63 | lh_paracentral_part3
64 | lh_parsopercularis_part1
65 | lh_parsopercularis_part2
66 | lh_parsopercularis_part3
67 | lh_parsorbitalis_part1
68 | lh_parstriangularis_part1
69 | lh_parstriangularis_part2
70 | lh_pericalcarine_part1
71 | lh_pericalcarine_part2
72 | lh_postcentral_part1
73 | lh_postcentral_part2
74 | lh_postcentral_part3
75 | lh_postcentral_part4
76 | lh_postcentral_part5
77 | lh_postcentral_part6
78 | lh_postcentral_part7
79 | lh_postcentral_part8
80 | lh_posteriorcingulate_part1
81 | lh_posteriorcingulate_part2
82 | lh_precentral_part1
83 | lh_precentral_part2
84 | lh_precentral_part3
85 | lh_precentral_part4
86 | lh_precentral_part5
87 | lh_precentral_part6
88 | lh_precentral_part7
89 | lh_precentral_part8
90 | lh_precentral_part9
91 | lh_precuneus_part1
92 | lh_precuneus_part2
93 | lh_precuneus_part3
94 | lh_precuneus_part4
95 | lh_precuneus_part5
96 | lh_precuneus_part6
97 | lh_precuneus_part7
98 | lh_rostralanteriorcingulate_part1
99 | lh_rostralmiddlefrontal_part1
100 | lh_rostralmiddlefrontal_part2
101 | lh_rostralmiddlefrontal_part3
102 | lh_rostralmiddlefrontal_part4
103 | lh_rostralmiddlefrontal_part5
104 | lh_rostralmiddlefrontal_part6
105 | lh_rostralmiddlefrontal_part7
106 | lh_rostralmiddlefrontal_part8
107 | lh_rostralmiddlefrontal_part9
108 | lh_rostralmiddlefrontal_part10
109 | lh_superiorfrontal_part1
110 | lh_superiorfrontal_part2
111 | lh_superiorfrontal_part3
112 | lh_superiorfrontal_part4
113 | lh_superiorfrontal_part5
114 | lh_superiorfrontal_part6
115 | lh_superiorfrontal_part7
116 | lh_superiorfrontal_part8
117 | lh_superiorfrontal_part9
118 | lh_superiorfrontal_part10
119 | lh_superiorfrontal_part11
120 | lh_superiorfrontal_part12
121 | lh_superiorfrontal_part13
122 | lh_superiorparietal_part1
123 | lh_superiorparietal_part2
124 | lh_superiorparietal_part3
125 | lh_superiorparietal_part4
126 | lh_superiorparietal_part5
127 | lh_superiorparietal_part6
128 | lh_superiorparietal_part7
129 | lh_superiorparietal_part8
130 | lh_superiorparietal_part9
131 | lh_superiorparietal_part10
132 | lh_superiortemporal_part1
133 | lh_superiortemporal_part2
134 | lh_superiortemporal_part3
135 | lh_superiortemporal_part4
136 | lh_superiortemporal_part5
137 | lh_superiortemporal_part6
138 | lh_superiortemporal_part7
139 | lh_supramarginal_part1
140 | lh_supramarginal_part2
141 | lh_supramarginal_part3
142 | lh_supramarginal_part4
143 | lh_supramarginal_part5
144 | lh_supramarginal_part6
145 | lh_supramarginal_part7
146 | lh_frontalpole_part1
147 | lh_temporalpole_part1
148 | lh_transversetemporal_part1
149 | lh_insula_part1
150 | lh_insula_part2
151 | lh_insula_part3
152 | lh_insula_part4
153 | rh_bankssts_part1
154 | rh_bankssts_part2
155 | rh_caudalanteriorcingulate_part1
156 | rh_caudalmiddlefrontal_part1
157 | rh_caudalmiddlefrontal_part2
158 | rh_caudalmiddlefrontal_part3
159 | rh_caudalmiddlefrontal_part4
160 | rh_cuneus_part1
161 | rh_cuneus_part2
162 | rh_cuneus_part3
163 | rh_entorhinal_part1
164 | rh_fusiform_part1
165 | rh_fusiform_part2
166 | rh_fusiform_part3
167 | rh_fusiform_part4
168 | rh_fusiform_part5
169 | rh_inferiorparietal_part1
170 | rh_inferiorparietal_part2
171 | rh_inferiorparietal_part3
172 | rh_inferiorparietal_part4
173 | rh_inferiorparietal_part5
174 | rh_inferiorparietal_part6
175 | rh_inferiorparietal_part7
176 | rh_inferiorparietal_part8
177 | rh_inferiorparietal_part9
178 | rh_inferiorparietal_part10
179 | rh_inferiortemporal_part1
180 | rh_inferiortemporal_part2
181 | rh_inferiortemporal_part3
182 | rh_inferiortemporal_part4
183 | rh_inferiortemporal_part5
184 | rh_isthmuscingulate_part1
185 | rh_isthmuscingulate_part2
186 | rh_lateraloccipital_part1
187 | rh_lateraloccipital_part2
188 | rh_lateraloccipital_part3
189 | rh_lateraloccipital_part4
190 | rh_lateraloccipital_part5
191 | rh_lateraloccipital_part6
192 | rh_lateraloccipital_part7
193 | rh_lateraloccipital_part8
194 | rh_lateraloccipital_part9
195 | rh_lateralorbitofrontal_part1
196 | rh_lateralorbitofrontal_part2
197 | rh_lateralorbitofrontal_part3
198 | rh_lateralorbitofrontal_part4
199 | rh_lingual_part1
200 | rh_lingual_part2
201 | rh_lingual_part3
202 | rh_lingual_part4
203 | rh_lingual_part5
204 | rh_lingual_part6
205 | rh_medialorbitofrontal_part1
206 | rh_medialorbitofrontal_part2
207 | rh_medialorbitofrontal_part3
208 | rh_middletemporal_part1
209 | rh_middletemporal_part2
210 | rh_middletemporal_part3
211 | rh_middletemporal_part4
212 | rh_middletemporal_part5
213 | rh_middletemporal_part6
214 | rh_parahippocampal_part1
215 | rh_parahippocampal_part2
216 | rh_paracentral_part1
217 | rh_paracentral_part2
218 | rh_paracentral_part3
219 | rh_parsopercularis_part1
220 | rh_parsopercularis_part2
221 | rh_parsopercularis_part3
222 | rh_parsorbitalis_part1
223 | rh_parstriangularis_part1
224 | rh_parstriangularis_part2
225 | rh_parstriangularis_part3
226 | rh_pericalcarine_part1
227 | rh_pericalcarine_part2
228 | rh_pericalcarine_part3
229 | rh_postcentral_part1
230 | rh_postcentral_part2
231 | rh_postcentral_part3
232 | rh_postcentral_part4
233 | rh_postcentral_part5
234 | rh_postcentral_part6
235 | rh_postcentral_part7
236 | rh_postcentral_part8
237 | rh_posteriorcingulate_part1
238 | rh_posteriorcingulate_part2
239 | rh_precentral_part1
240 | rh_precentral_part2
241 | rh_precentral_part3
242 | rh_precentral_part4
243 | rh_precentral_part5
244 | rh_precentral_part6
245 | rh_precentral_part7
246 | rh_precentral_part8
247 | rh_precentral_part9
248 | rh_precuneus_part1
249 | rh_precuneus_part2
250 | rh_precuneus_part3
251 | rh_precuneus_part4
252 | rh_precuneus_part5
253 | rh_precuneus_part6
254 | rh_precuneus_part7
255 | rh_rostralanteriorcingulate_part1
256 | rh_rostralmiddlefrontal_part1
257 | rh_rostralmiddlefrontal_part2
258 | rh_rostralmiddlefrontal_part3
259 | rh_rostralmiddlefrontal_part4
260 | rh_rostralmiddlefrontal_part5
261 | rh_rostralmiddlefrontal_part6
262 | rh_rostralmiddlefrontal_part7
263 | rh_rostralmiddlefrontal_part8
264 | rh_rostralmiddlefrontal_part9
265 | rh_rostralmiddlefrontal_part10
266 | rh_superiorfrontal_part1
267 | rh_superiorfrontal_part2
268 | rh_superiorfrontal_part3
269 | rh_superiorfrontal_part4
270 | rh_superiorfrontal_part5
271 | rh_superiorfrontal_part6
272 | rh_superiorfrontal_part7
273 | rh_superiorfrontal_part8
274 | rh_superiorfrontal_part9
275 | rh_superiorfrontal_part10
276 | rh_superiorfrontal_part11
277 | rh_superiorfrontal_part12
278 | rh_superiorfrontal_part13
279 | rh_superiorparietal_part1
280 | rh_superiorparietal_part2
281 | rh_superiorparietal_part3
282 | rh_superiorparietal_part4
283 | rh_superiorparietal_part5
284 | rh_superiorparietal_part6
285 | rh_superiorparietal_part7
286 | rh_superiorparietal_part8
287 | rh_superiorparietal_part9
288 | rh_superiorparietal_part10
289 | rh_superiortemporal_part1
290 | rh_superiortemporal_part2
291 | rh_superiortemporal_part3
292 | rh_superiortemporal_part4
293 | rh_superiortemporal_part5
294 | rh_superiortemporal_part6
295 | rh_supramarginal_part1
296 | rh_supramarginal_part2
297 | rh_supramarginal_part3
298 | rh_supramarginal_part4
299 | rh_supramarginal_part5
300 | rh_supramarginal_part6
301 | rh_supramarginal_part7
302 | rh_frontalpole_part1
303 | rh_temporalpole_part1
304 | rh_transversetemporal_part1
305 | rh_insula_part1
306 | rh_insula_part2
307 | rh_insula_part3
308 | rh_insula_part4
309 |
--------------------------------------------------------------------------------
/scona/datasets/NSPN_WhitakerVertes_PNAS2016/__init__.py:
--------------------------------------------------------------------------------
1 | #! /usr/bin/env python
2 |
3 | from os.path import dirname, abspath
4 | from scona.scripts.useful_functions import read_in_data
5 |
6 | """NSPN Whitaker, Vertes et al, 2016"""
7 |
8 | TITLE = """Cortical Thickness Measures"""
9 | SOURCE = """
10 | Adolescent consolidation of human connectome hubs
11 | Kirstie J. Whitaker, Petra E. Vértes, Rafael Romero-Garcia, František Váša, Michael Moutoussis, Gita Prabhu, Nikolaus Weiskopf, Martina F. Callaghan, Konrad Wagstyl, Timothy Rittman, Roger Tait, Cinly Ooi, John Suckling, Becky Inkster, Peter Fonagy, Raymond J. Dolan, Peter B. Jones, Ian M. Goodyer, the NSPN Consortium, Edward T. Bullmore
12 | Proceedings of the National Academy of Sciences Aug 2016, 113 (32) 9105-9110; DOI: 10.1073/pnas.1601745113
13 | """
14 | DESCRSHORT = """Cortical thickness data"""
15 | DESCRLONG = """ """
16 |
17 | filepath = dirname(abspath(__file__))
18 | centroids_file = filepath + "/500.centroids.txt"
19 | names_file = filepath + "/500.names.txt"
20 | regionalmeasures_file = filepath + "/PARC_500aparc_thickness_behavmerge.csv"
21 | covars_file = None
22 |
23 |
24 | def _data():
25 | return (regionalmeasures_file,
26 | names_file,
27 | covars_file,
28 | centroids_file)
29 |
30 |
31 | def _centroids():
32 | return centroids_file
33 |
34 |
35 | def _regionalmeasures():
36 | return regionalmeasures_file
37 |
38 |
39 | def _names():
40 | return names_file
41 |
42 |
43 | def _covars():
44 | return covars_file
45 |
46 |
47 | def import_data():
48 | return read_in_data(
49 | regionalmeasures_file,
50 | names_file,
51 | covars_file=covars_file,
52 | centroids_file=centroids_file,
53 | data_as_df=True)
54 |
--------------------------------------------------------------------------------
/scona/datasets/__init__.py:
--------------------------------------------------------------------------------
1 | """
2 | Datasets module
3 | """
4 | from . import NSPN_WhitakerVertes_PNAS2016
5 |
--------------------------------------------------------------------------------
/scona/make_corr_matrices.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | Tools to create a correlation matrix from regional measures
4 | """
5 | # Essential package imports
6 | import os
7 | import numpy as np
8 | from scona.stats_functions import residuals
9 |
10 |
11 | def get_non_numeric_cols(df):
12 | '''
13 |     Returns the columns of df whose dtype is not numeric (i.e. not a subtype
14 |     of :class:`numpy.number`)
15 |
16 | Parameters
17 | ----------
18 | df : :class:`pandas.DataFrame`
19 |
20 | Returns
21 | -------
22 |     array
23 |         the non-numeric columns of df
24 | '''
25 | numeric = np.fromiter((np.issubdtype(y, np.number) for y in df.dtypes),
26 | bool)
27 | non_numeric_cols = np.array(df.columns)[~numeric]
28 | return non_numeric_cols
29 |
30 |
31 | def create_residuals_df(df, names, covars=[]):
32 | '''
33 |     Calculate residuals of columns specified by names, correcting for the
34 |     columns in covars.
35 |
36 | Parameters
37 | ----------
38 | df : :class:`pandas.DataFrame`
39 | A pandas data frame with subjects as rows and columns including
40 | brain regions and covariates. Should be numeric for the columns in
41 | `names` and `covars`.
42 | names : list
43 | A list of the brain regions you wish to correlate.
44 |     covars : list, optional
45 | A list of covariates to correct for before correlating
46 | the regional measures. Each element should correspond to a
47 | column heading in `df`.
48 | Default is an empty list.
49 |
50 | Returns
51 | -------
52 | :class:`pandas.DataFrame`
53 | Residuals of columns `names` of `df`, correcting for `covars`
54 |
55 | Raises
56 | ------
57 | TypeError
58 | if there are non numeric entries in the columns specified by `names` or
59 | `covars`
60 | '''
61 | # Raise TypeError if any of the relevant columns are nonnumeric
62 | non_numeric_cols = get_non_numeric_cols(df[names+covars])
63 | if non_numeric_cols.size > 0:
64 | raise TypeError('DataFrame columns {} are non numeric'
65 | .format(', '.join(non_numeric_cols)))
66 |
67 | # Make a new data frame that will contain
68 | # the residuals for each column after correcting for
69 | # the covariates in covars
70 | df_res = df[names].copy()
71 |
72 | if len(covars) > 1:
73 | x = np.vstack([df[covars]])
74 | elif len(covars) == 1:
75 | x = df[covars]
76 | else:
77 | x = np.ones_like(df.iloc[:, 0])
78 |
79 | # Calculate the residuals
80 | for name in names:
81 | df_res.loc[:, name] = residuals(x.T, df.loc[:, name])
82 |
83 | # Return the residuals data frame
84 | return df_res
85 |
86 |
87 | def create_corrmat(df_res, names=None, method='pearson'):
88 | '''
89 | Correlate over the rows of `df_res`
90 |
91 | Parameters
92 | ----------
93 | df_res : :class:`pandas.DataFrame`
94 | `df_res` contains structural data about regions of the brain with
95 | subjects as rows after correction for any covariates of no interest.
96 | names : list, optional
97 | The brain regions you wish to correlate over. These will become nodes
98 | in your graph. If `names` is None then all columns are included.
99 | Default is `None`.
100 |     method : string, optional
101 |         The method of correlation passed to :func:`pandas.DataFrame.corr`.
102 |         Default is Pearson correlation (``'pearson'``).
103 |
104 | Returns
105 | -------
106 | :class:`pandas.DataFrame`
107 | A correlation matrix.
108 |
109 | Raises
110 | ------
111 | TypeError
112 | If there are non numeric entries in the columns in `df_res` specified
113 | by `names`.
114 | '''
115 | if names is None:
116 | names = df_res.columns
117 |
118 | # Raise TypeError if any of the relevant columns are nonnumeric
119 | non_numeric_cols = get_non_numeric_cols(df_res)
120 | if non_numeric_cols.size > 0:
121 | raise TypeError('DataFrame columns {} are non numeric'
122 | .format(', '.join(non_numeric_cols)))
123 |
124 | return df_res.loc[:, names].astype(float).corr(method=method)
125 |
126 |
127 | def corrmat_from_regionalmeasures(
128 | regional_measures,
129 | names,
130 | covars=None,
131 | method='pearson'):
132 | '''
133 | Calculate the correlation of `names` columns over the rows of
134 | `regional_measures` after correcting for covariance with the columns in
135 | `covars`
136 |
137 | Parameters
138 | ----------
139 | regional_measures : :class:`pandas.DataFrame`
140 | a pandas data frame with subjects as rows, and columns including
141 | brain regions and covariates. Should be numeric for the columns in
142 | names and covars_list
143 | names : list
144 | a list of the brain regions you wish to correlate
145 |     covars : list, optional
146 |         a list of covariates (as df column headings)
147 |         to correct for before correlating the regions. Default is None.
148 |     method : string, optional
149 |         the method of correlation passed to :func:`pandas.DataFrame.corr`
150 |
151 | Returns
152 | -------
153 |     :class:`numpy.ndarray`
154 |         A correlation matrix over the columns `names`
155 | '''
156 |     # Correct for your covariates (covars=None means no covariates)
157 |     df_res = create_residuals_df(regional_measures, names, covars or [])
158 |
159 | # Make your correlation matrix
160 | M = create_corrmat(df_res, names=names, method=method).values
161 |
162 | return M
163 |
164 |
165 | def save_mat(M, name):
166 | '''
167 | Save matrix M as a text file
168 |
169 | Parameters
170 | ----------
171 | M : array
172 |     name : str
173 |         path of the output file; its directory is created if needed
174 | '''
175 | # Check to see if the output directory
176 | # exists, and make it if it does not
177 | dirname = os.path.dirname(name)
178 |
179 |     if dirname and not os.path.isdir(dirname):
180 | os.makedirs(dirname)
181 |
182 | # Save the matrix as a text file
183 | np.savetxt(name,
184 | M,
185 | fmt='%.5f',
186 | delimiter='\t',
187 | newline='\n')
188 |
--------------------------------------------------------------------------------
/scona/make_figures.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import seaborn as sns
4 | import matplotlib.pylab as plt
5 | import numpy as np
6 | import networkx as nx
7 | import pandas as pd
8 | import matplotlib as mpl
9 | import matplotlib.patches as mpatches
10 | import nilearn as nl
11 |
12 |
13 | def view_corr_mat(corr_mat_file, output_name, cmap_name='RdBu_r', cost=None, bin=False):
14 |
15 | # Read in the data
16 | M = np.loadtxt(corr_mat_file)
17 |
18 | # If cost is given then roughly threshold at that cost.
19 | # NOTE - this is not actually the EXACT network that you're analysing
20 | # because it doesn't include the minimum spanning tree. But it will give
21 | # you a good sense of the network structure.
22 | # #GoodEnough ;)
23 |
24 | if cost:
25 | thr = np.percentile(M.reshape(-1), 100-cost)
26 |         M[M<thr] = 0
27 |
28 |     # If a binary matrix is requested, set all surviving edges to 1
29 |     if bin:
30 |         M[M>0] = 1
31 |
32 |     # Set sensible colorbar limits and ticks: 0 to 1 for a binary
33 |     # matrix, -1 to 1 for raw correlation values
34 |     if bin:
35 |         vmin, vmax = 0, 1
36 |         ticks_dict = {'locations': [0, 1], 'labels': ['0', '1']}
37 |     else:
38 |         vmin, vmax = -1, 1
39 |         ticks_dict = {'locations': [-1, 0, 1], 'labels': ['-1', '0', '1']}
40 |
41 | # Create an axis
42 | fig, ax = plt.subplots(figsize=(6,5))
43 | ax.axis('off')
44 |
45 | # Show the network measures
46 | mat_ax = ax.imshow(M,
47 | interpolation='none',
48 | cmap=cmap_name,
49 | vmin=vmin,
50 | vmax=vmax)
51 |
52 | # Put a box around your data
53 | ax.add_patch(
54 | mpatches.Rectangle(
55 | (ax.get_xlim()[0], ax.get_ylim()[1]),
56 | ax.get_xlim()[1],
57 | ax.get_ylim()[0],
58 | fill=False, # remove background
59 | color='k',
60 | linewidth=1) )
61 |
62 | # Add colorbar, make sure to specify tick locations to match desired ticklabels
63 | cbar = fig.colorbar(mat_ax, ticks=ticks_dict['locations'])
64 | cbar.ax.set_yticklabels(ticks_dict['labels']) # vertically oriented colorbar
65 |
66 | plt.tight_layout()
67 |
68 | # Save the picture
69 | fig.savefig(output_name, bbox_inches=0, dpi=100)
70 |
71 | plt.close(fig)
72 |
73 |
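A quick sketch of calling view_corr_mat above (the file names are just examples; any square matrix saved with np.savetxt will do):

    import numpy as np
    from scona.make_figures import view_corr_mat

    # Write a small symmetric correlation matrix to disk, then visualise it
    np.savetxt('corrmat_file.txt', np.corrcoef(np.random.rand(6, 50)), fmt='%.5f')
    view_corr_mat('corrmat_file.txt', 'corrmat_picture.png', cost=10)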
74 | def get_anatomical_layouts(G):
75 | '''
76 | This code takes in a BrainNetwork that has x, y, z coordinates and
77 | integer node labels (0 to n-1) for n nodes and returns three dictionaries
78 | containing appropriate pairs of coordinates for sagittal, coronal and
79 | axial slices.
80 | '''
81 |
82 | axial_dict = {}
83 | sagittal_dict = {}
84 | coronal_dict = {}
85 |
86 |     for node in G.nodes:
87 |         # Read the x, y, z coordinates straight off the nodal attributes
88 |         x = G.nodes[node]['x']
89 |         y = G.nodes[node]['y']
90 |         z = G.nodes[node]['z']
91 |         axial_dict[node] = np.array([x, y])
92 |         coronal_dict[node] = np.array([x, z])
93 |         sagittal_dict[node] = np.array([y, z])
94 |
95 | return axial_dict, sagittal_dict, coronal_dict
96 |
97 | def axial_layout(x, y, z):
98 |     return np.array([x, y])
99 |
100 |
101 | def sagittal_layout(x, y, z):
102 |     return np.array([x, z])
103 |
104 |
105 | def coronal_layout(x, y, z):
106 |     return np.array([y, z])
107 |
108 |
109 | def anatomical_layout(x, y, z, orientation='sagittal'):
110 | if orientation == 'sagittal':
111 | return sagittal_layout(x, y, z)
112 | if orientation == 'axial':
113 | return axial_layout(x, y, z)
114 | if orientation == 'coronal':
115 | return coronal_layout(x, y, z)
116 | else:
117 | raise ValueError("{} is not recognised as an anatomical layout. orientation values should be one of 'sagittal', 'axial' or 'coronal'.".format(orientation))
118 |
119 |
120 | def plot_anatomical_network(G, measure='module', orientation='sagittal', cmap_name='jet_r', continuous=False, vmax=None, vmin=None, sns_palette=None, edge_list=None, edge_color='k', edge_width=0.2, node_list=None, rc_node_list=[], node_shape='o', rc_node_shape='s', node_size=500, node_size_list=None, figure=None, ax=None):
121 | '''
122 | Plots each node in the graph in one of three orientations
123 | (sagittal, axial or coronal).
124 | The nodes are sorted according to the measure given
125 | (default value: module) and then plotted in that order.
126 | '''
127 |
128 | if edge_list is None:
129 | edge_list = list(G.edges())
130 |
131 | if node_list is None:
132 | node_list = G.nodes()
133 | node_list = sorted(node_list)
134 |
135 | # Put the measures you care about together
136 | # in a data frame
137 | fields = ['degree','module','x','y','z']
138 | if measure not in fields:
139 | fields.append(measure)
140 |
141 |     # Gather these attributes into a data frame, one row per node,
142 |     # and add a node index which relates to the node names in the graph
143 |     df = pd.DataFrame({f: [G.nodes[n][f] for n in sorted(G.nodes())]
144 |                        for f in fields})
145 |     df['node'] = range(len(df['degree']))
143 |
144 | # Then use these node values to get the appropriate positions for each node
145 |     pos = {node: anatomical_layout(G.nodes[node]['x'],
146 |                                    G.nodes[node]['y'],
147 |                                    G.nodes[node]['z'],
148 |                                    orientation=orientation)
149 |            for node in G.nodes}
148 |
149 | # Create a colors_list for the nodes
150 | colors_list = setup_color_list(df,
151 | cmap_name=cmap_name,
152 | sns_palette=sns_palette,
153 | measure=measure,
154 | vmin=vmin,
155 | vmax=vmax,
156 | continuous=continuous)
157 |
158 | # If the node size list is none then
159 | # it'll just be the same size for each node
160 | if node_size_list is None:
161 | node_size_list = [ node_size ] * len(df['degree'])
162 |
163 | # If you have no rich club nodes then all the nodes will
164 | # have the same shape
165 | node_shape_list = [ node_shape ] * len(df['degree'])
166 | # If you have set rich nodes then you'll need to replace
167 | # those indices with the rc_node_shape
168 | for i in rc_node_list:
169 |         node_shape_list[i] = rc_node_shape
170 |
171 | # We're going to figure out the best way to plot these nodes
172 | # so that they're sensibly on top of each other
173 | sort_dict = {}
174 | sort_dict['axial'] = 'z'
175 | sort_dict['coronal'] = 'y'
176 | sort_dict['sagittal'] = 'x'
177 |
178 | node_order = np.argsort(df[sort_dict[orientation]]).values
179 |
180 | # Now remove all the nodes that are not in the node_list
181 | node_order = [ x for x in node_order if x in node_list ]
182 |
183 | # If you've given this code an axis and figure then use those
184 | # otherwise just create your own
185 | if not ax:
186 | # Create a figure
187 | fig_size_dict = {}
188 | fig_size_dict['axial'] = (9,12)
189 | fig_size_dict['sagittal'] = (12,8)
190 | fig_size_dict['coronal'] = (9,8)
191 |
192 | fig, ax = plt.subplots(figsize=fig_size_dict[orientation])
193 |
194 | # Set the seaborn context and style
195 | sns.set(style="white")
196 | sns.set_context("poster", font_scale=2)
197 |
198 | else:
199 | fig = figure
200 |
201 | # Start by drawing in the edges:
202 | nx.draw_networkx_edges(G,
203 | pos=pos,
204 | edgelist=edge_list,
205 | width=edge_width,
206 | edge_color=edge_color,
207 | ax=ax)
208 |
209 | # And then loop through each node and add it in order
210 | for node in node_order:
211 | nx.draw_networkx_nodes(G,
212 | pos=pos,
213 | node_color=colors_list[node],
214 | node_shape=node_shape_list[node],
215 | node_size=node_size_list[node],
216 | nodelist=[node],
218 | ax=ax)
219 |
220 | axis_limits_dict = {}
221 | axis_limits_dict['axial'] = [ -70, 70, -105, 70]
222 | axis_limits_dict['coronal'] = [ -70, 70, -45, 75 ]
223 | axis_limits_dict['sagittal'] = [ -105, 70, -45, 75 ]
224 |
225 | ax.set_xlim(axis_limits_dict[orientation][0],axis_limits_dict[orientation][1])
226 | ax.set_ylim(axis_limits_dict[orientation][2],axis_limits_dict[orientation][3])
227 | ax.axis('off')
228 |
229 | return ax
--------------------------------------------------------------------------------
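As a rough sketch of the inputs plot_anatomical_network expects, here is a toy graph carrying the required nodal attributes (all coordinate and module values invented):

    import networkx as nx
    from scona.make_figures import plot_anatomical_network

    # Toy graph: integer node labels 0..3 with degree/module/x/y/z attributes
    G = nx.complete_graph(4)
    for i in G.nodes:
        G.nodes[i].update({'degree': G.degree(i), 'module': i % 2,
                           'x': 10.0 * i - 15, 'y': 5.0 * i, 'z': 2.0 * i})

    ax = plot_anatomical_network(G, measure='module', orientation='axial')
    ax.figure.savefig('toy_network.png')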
/scona/make_graphs.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # A Global import to make code python 2 and 3 compatible
4 | from __future__ import print_function
5 | import numpy as np
6 | import networkx as nx
7 | import pandas as pd
8 |
9 |
10 | # ==================== Anatomical Functions =================
11 |
12 |
13 | def anatomical_node_attributes():
14 | '''
15 | default anatomical nodal attributes for scona
16 |
17 | Returns
18 | -------
19 | list
20 | nodal attributes considered "anatomical" by
21 | scona
22 | '''
23 | return ["name", "name_34", "name_68", "hemi", "centroids", "x", "y", "z"]
24 |
25 |
26 | def anatomical_graph_attributes():
27 | '''
28 | default anatomical graph attributes for scona
29 |
30 | Returns
31 | -------
32 | list
33 | graph attributes considered "anatomical" by
34 | scona
35 | '''
36 | return ['parcellation', 'centroids']
37 |
38 |
39 | def assign_node_names(G, parcellation):
40 | """
41 | Modify nodal attribute "name" for nodes of G, inplace.
42 |
43 | Parameters
44 | ----------
45 | G : :class:`networkx.Graph`
46 | parcellation : list
47 | ``parcellation[i]`` is the name of node ``i`` in ``G``
48 |
49 | Returns
50 | -------
51 | :class:`networkx.Graph`
52 | graph with nodal attributes modified
53 | """
54 | # Assign anatomical names to the nodes
55 | for i, node in enumerate(G.nodes()):
56 | G.nodes[i]['name'] = parcellation[i]
57 | #
58 | G.graph['parcellation'] = True
59 | return G
60 |
61 |
62 | def assign_node_centroids(G, centroids):
63 | """
64 | Modify the nodal attributes "centroids", "x", "y", and "z" of nodes
65 | of G, inplace. "centroids" will be set according to the scheme
66 | ``G[i]["centroids"] = centroids[i]``
67 | for nodes ``i`` in ``G``. "x", "y" and "z" are assigned as the first,
68 | second and third coordinate of ``centroids[i]`` respectively.
69 |
70 | Parameters
71 | ----------
72 | G : :class:`networkx.Graph`
73 | centroids : list
74 | ``centroids[i]`` is a tuple representing the cartesian coordinates of
75 | node ``i`` in ``G``
76 |
77 | Returns
78 | -------
79 | :class:`networkx.Graph`
80 | graph with nodal attributes modified
81 | """
82 | # Assign cartesian coordinates to the nodes
83 | for i, node in enumerate(G.nodes()):
84 | G.nodes[i]['x'] = centroids[i][0]
85 | G.nodes[i]['y'] = centroids[i][1]
86 | G.nodes[i]['z'] = centroids[i][2]
87 | G.nodes[i]['centroids'] = centroids[i]
88 | #
89 | G.graph['centroids'] = True
90 | return G
91 |
92 |
93 | def copy_anatomical_data(R, G,
94 | nodal_keys=anatomical_node_attributes(),
95 | graph_keys=anatomical_graph_attributes()):
96 | '''
97 | Copies nodal and graph attributes data from ``G`` to ``R`` for keys
98 | included in ``nodal_keys`` and ``graph_keys`` respectively.
99 |
100 | ``R`` and ``G`` are both graphs. If they have non equal vertex sets
101 | node data will only be copied from ``G`` to ``R`` for nodes in the
102 | intersection.
103 |
104 | Parameters
105 | ----------
106 | R : :class:`networkx.Graph`
107 | G : :class:`networkx.Graph`
108 | ``G`` has anatomical data to copy to ``R``
109 | nodal_keys : list, optional
110 | The list of keys to treat as anatomical nodal data.
111 | Default is set to ``anatomical_node_attributes()``, e.g
112 | ``["name", "name_34", "name_68", "hemi", "centroids", "x", "y", "z"]``
113 | graph_keys : list, optional
114 |         The list of keys to treat as anatomical graph data.
115 | Set to ``anatomical_graph_attributes()`` by default. E.g
116 | ``["centroids", "parcellation"]``
117 |
118 | Returns
119 | -------
120 | :class:`networkx.Graph`
121 | graph ``R`` with the anatomical data of graph ``G``
122 |
123 | See Also
124 | --------
125 |     :func:`anatomical_copy`
126 | '''
127 | for key in [x for x in R._node.keys() if x in G._node.keys()]:
128 | R._node[key].update({k: v
129 | for k, v in G._node[key].items()
130 | if k in nodal_keys})
131 |
132 | R.graph.update({key: G.graph.get(key)
133 | for key in graph_keys})
134 | return R
135 |
136 |
137 | def anatomical_copy(G,
138 | nodal_keys=anatomical_node_attributes(),
139 | graph_keys=anatomical_graph_attributes()):
140 | '''
141 | Create a new graph from ``G`` preserving:
142 | * nodes
143 | * edges
144 | * any nodal attributes specified in nodal_keys
145 | * any graph attributes specified in graph_keys
146 |
147 | Parameters
148 | ----------
149 | G : :class:`networkx.Graph`
150 | ``G`` has anatomical data to copy to ``R``
151 | nodal_keys : list, optional
152 | The list of keys to treat as anatomical nodal data.
153 | Default is set to ``anatomical_node_attributes()``, e.g
154 | ``["name", "name_34", "name_68", "hemi", "centroids", "x", "y", "z"]``
155 | graph_keys : list, optional
156 |         The list of keys to treat as anatomical graph data.
157 | Set to ``anatomical_graph_attributes()`` by default. E.g
158 | ``["centroids", "parcellation"]``
159 |
160 | Returns
161 | -------
162 | :class:`networkx.Graph`
163 | A new graph with the same nodes and edges as ``G`` and identical
164 | anatomical data.
165 |
166 | See Also
167 | --------
168 | :func:`BrainNetwork.anatomical_copy`
169 | :func:`copy_anatomical_data`
170 | '''
171 | # Create new empty graph
172 | R = type(G)()
173 | # Copy nodes
174 | R.add_nodes_from(G.nodes())
175 | # Copy anatomical data
176 | copy_anatomical_data(R, G, nodal_keys=nodal_keys, graph_keys=graph_keys)
177 | # Preserve edges and edge weights
178 | R.add_edges_from(G.edges)
179 | nx.set_edge_attributes(R,
180 | name="weight",
181 | values=nx.get_edge_attributes(G, name="weight"))
182 | return R
183 |
184 |
185 | def is_nodal_match(G, H, keys=None):
186 | '''
187 | Check that G and H have equal vertex sets.
188 | If keys is passed, also check that the nodal dictionaries of G and H (e.g
189 | the nodal attributes) are equal over the attributes in keys.
190 |
191 | Parameters
192 | ----------
193 | G : :class:`networkx.Graph`
194 | H : :class:`networkx.Graph`
195 | keys : list, optional
196 | a list of attributes on which the nodal dictionaries of G and H should
197 | agree.
198 |
199 | Returns
200 | -------
201 | bool
202 | ``True`` if `G` and `H` have equal vertex sets, or if `keys` is
203 | specified: ``True`` if vertex sets are equal AND the graphs'
204 | nodal dictionaries agree on all attributes in `keys`.
205 | ``False`` otherwise
206 | '''
207 | if set(G.nodes()) != set(H.nodes()):
208 | return False
209 | elif keys is None:
210 | return True
211 | elif False in [(H._node.get(i).get(att) == G._node.get(i).get(att))
212 | for att in keys
213 | for i in G.nodes()]:
214 | return False
215 | else:
216 | return True
217 |
218 |
219 | def is_anatomical_match(G,
220 | H,
221 | nodal_keys=anatomical_node_attributes(),
222 | graph_keys=anatomical_graph_attributes()):
223 | '''
224 | Check that G and H have identical anatomical data (including vertex sets).
225 |
226 | Parameters
227 | ----------
228 | G : :class:`networkx.Graph`
229 | H : :class:`networkx.Graph`
230 | nodal_keys : list, optional
231 | The list of keys to treat as anatomical nodal data.
232 | Default is set to ``anatomical_node_attributes()``, e.g
233 | ``["name", "name_34", "name_68", "hemi", "centroids", "x", "y", "z"]``
234 | graph_keys : list, optional
235 | The list of keys to treat as anatomical graph data.
236 | Set to ``anatomical_graph_attributes()`` by default. E.g
237 | ``["centroids", "parcellation"]``
238 |
239 | Returns
240 | -------
241 | bool
242 | ``True`` if `G` and `H` have the same anatomical data;
243 | ``False`` otherwise
244 | '''
245 | # check nodes match
246 | if not is_nodal_match(G, H, keys=nodal_keys):
247 | return False
248 | # check graph attributes match
249 | elif ({k: v for k, v in G.graph.items() if k in graph_keys}
250 | != {k: v for k, v in H.graph.items() if k in graph_keys}):
251 | return False
252 | else:
253 | return True
254 |
255 |
256 | # ================= Graph construction =====================
257 |
258 |
259 | def weighted_graph_from_matrix(M, create_using=None):
260 | '''
261 |     Create a weighted graph from a correlation matrix.
262 |
263 | Parameters
264 | ----------
265 | M : :class:`numpy.array`
266 |         a correlation matrix
267 | create_using : :class:`networkx.Graph`, optional
268 | Use specified graph for result. The default is Graph()
269 |
270 | Returns
271 | -------
272 | :class:`networkx.Graph`
273 | A weighted graph with edge weights equivalent to matrix entries
274 | '''
275 | # Make a copy of the matrix
276 | thr_M = np.copy(M)
277 |
278 | # Set all diagonal values to 0
279 | thr_M[np.diag_indices_from(thr_M)] = 0
280 |
281 | # Read this full matrix into a graph G
282 | G = nx.from_numpy_matrix(thr_M, create_using=create_using)
283 |
284 | return G
285 |
286 |
287 | def weighted_graph_from_df(df, create_using=None):
288 | '''
289 | Create a weighted graph from a correlation matrix.
290 |
291 | Parameters
292 | ----------
293 | df : :class:`pandas.DataFrame`
294 | a correlation matrix
295 | create_using : :class:`networkx.Graph`
296 | Use specified graph for result. The default is Graph()
297 |
298 | Returns
299 | -------
300 | :class:`networkx.Graph`
301 | A weighted graph with edge weights equivalent to DataFrame entries
302 | '''
303 | return weighted_graph_from_matrix(df.values, create_using=create_using)
304 |
305 |
306 | def scale_weights(G, scalar=-1, name='weight'):
307 | '''
308 | Multiply edge weights of `G` by `scalar`.
309 |
310 | Parameters
311 | ----------
312 | G : :class:`networkx.Graph`
313 | scalar : float, optional
314 | scalar value to multiply edge weights by. Default is -1
315 | name : str, optional
316 | string that indexes edge weights. Default is "weight"
317 |
318 | Returns
319 | -------
320 | :class:`networkx.Graph`
321 |
322 | '''
323 | edges = nx.get_edge_attributes(G, name=name)
324 | new_edges = {key: value*scalar for key, value in edges.items()}
325 | nx.set_edge_attributes(G, name=name, values=new_edges)
326 | return G
327 |
328 |
329 | def threshold_graph(G, cost, mst=True):
330 | '''
331 | Create a binary graph by thresholding weighted graph G.
332 |
333 | First creates the minimum spanning tree for the graph, and then adds
334 | in edges according to their connection strength up to cost.
335 |
336 | Parameters
337 | ----------
338 | G : :class:`networkx.Graph`
339 | A complete weighted graph
340 | cost : float
341 | A number between 0 and 100. The resulting graph will have the
342 | ``cost*n/100`` highest weighted edges from G, where ``n`` is the number
343 | of edges in G.
344 | mst : bool, optional
345 | If ``False``, skip creation of minimum spanning tree. This may cause
346 | output graph to be disconnected
347 |
348 | Returns
349 | -------
350 | :class:`networkx.Graph`
351 | A binary graph
352 |
353 | Raises
354 | ------
355 | Exception
356 | If it is impossible to create a minimum_spanning_tree at the given cost
357 |
358 | See Also
359 | --------
360 | :func:`scona.BrainNetwork.threshold`
361 | '''
362 | # Weights scaled by -1 as minimum_spanning_tree minimises weight
363 | H = scale_weights(anatomical_copy(G))
364 | # Make a list of all the sorted edges in the full matrix
365 | H_edges_sorted = sorted(H.edges(data=True),
366 | key=lambda edge_info: edge_info[2]['weight'])
367 | # Create an empty graph with the same nodes as H
368 | germ = anatomical_copy(G)
369 | germ.remove_edges_from(G.edges)
370 |
371 | if mst:
372 | # Calculate minimum spanning tree
373 | germ.add_edges_from(nx.minimum_spanning_edges(H))
374 |
375 | # Make a list of the germ graph's edges
376 | germ_edges = germ.edges(data=True)
377 |
378 | # Create a list of sorted edges that are *not* in the germ
379 | # (because you don't want to add them in twice!)
380 | H_edges_sorted_not_germ = [edge for edge in H_edges_sorted
381 | if edge not in germ_edges]
382 | # Calculate how many edges need to be added to reach the specified cost
383 | # and round to the nearest integer.
384 | n_edges = (cost/100.0) * len(H)*(len(H)-1)*0.5
385 |     n_edges = int(np.around(n_edges))
386 | n_edges = n_edges - len(germ_edges)
387 |
388 | # If your cost is so small that your minimum spanning tree already covers
389 | # it then you can't do any better than the MST and you'll just have to
390 | # return it with an accompanying error message
391 | # A tree has n-1 edges and a complete graph has n(n − 1)/2 edges, so we
392 | # need cost/100 > 2/n, where n is the number of vertices
393 | if n_edges < 0:
394 |         raise Exception('Unable to calculate matrix at this cost - '
395 |                         'the minimum spanning tree is already larger. '
396 |                         'cost must be >= {}'.format(200/len(H)))
397 | # Otherwise, add in the appropriate number of edges (n_edges)
398 | # from your sorted list (H_edges_sorted_not_germ)
399 | else:
400 | germ.add_edges_from(H_edges_sorted_not_germ[:n_edges])
401 | # binarise edge weights
402 | nx.set_edge_attributes(germ, name='weight', values=1)
403 | # And return the updated germ as your graph
404 | return germ
405 |
406 |
407 | def graph_at_cost(M, cost, mst=True):
408 | '''
409 | Create a binary graph by thresholding weighted matrix M.
410 |
411 | First creates the minimum spanning tree for the graph, and then adds
412 | in edges according to their connection strength up to cost.
413 |
414 | Parameters
415 | ----------
416 | M : :class:`numpy.array` or :class:`pandas.DataFrame`
417 | A correlation matrix.
418 | cost : float
419 | A number between 0 and 100. The resulting graph will have the
420 | ``cost*n/100`` highest weighted edges available, where ``n`` is the
421 | number of edges in G
422 | mst : bool, optional
423 | If ``False``, skip creation of minimum spanning tree. This may cause
424 | output graph to be disconnected
425 |
426 | Returns
427 | -------
428 | :class:`networkx.Graph`
429 | A binary graph
430 |
431 | Raises
432 | ------
433 | Exception
434 | if M is not a :class:`numpy.array` or :class:`pandas.DataFrame`
435 | '''
436 | # If dataframe, convert to array
437 | if isinstance(M, pd.DataFrame):
438 | array = M.values
439 | elif isinstance(M, np.ndarray):
440 | array = M
441 | else:
442 | raise TypeError(
443 | "M should be a numpy array or pandas dataframe")
444 |
445 | # Read this array into a graph G
446 | G = weighted_graph_from_matrix(array)
447 | return threshold_graph(G, cost, mst=mst)
448 |
449 |
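For illustration, a minimal sketch of thresholding a synthetic matrix at 30% cost. (With 10 nodes the cost must exceed 200/n = 20, otherwise the minimum spanning tree alone is already bigger than the requested edge budget.)

    import numpy as np
    from scona.make_graphs import graph_at_cost

    # Symmetric random "correlation" matrix over 10 nodes
    rng = np.random.RandomState(0)
    A = rng.rand(10, 10)
    M = (A + A.T) / 2

    G = graph_at_cost(M, 30)
    print(G.number_of_edges())  # 14: the 9-edge MST plus the 5 strongest other edges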
450 | # ===================== Random Graphs ======================
451 |
452 |
453 | def random_graph(G, Q=10, seed=None):
454 | '''
455 | Return a connected random graph that preserves degree distribution
456 | by swapping pairs of edges, using :func:`networkx.double_edge_swap`.
457 |
458 | Parameters
459 | ----------
460 | G : :class:`networkx.Graph`
461 | Q : int, optional
462 | constant that specifies how many swaps to conduct for each edge in G
463 | seed : int, random_state or None (default)
464 | Indicator of random state to pass to networkx
465 |
466 | Returns
467 | -------
468 | :class:`networkx.Graph`
469 |
470 | Raises
471 | ------
472 | Exception
473 | if it is not possible in 15 attempts to create a connected random
474 | graph.
475 | '''
476 | R = anatomical_copy(G)
477 |
478 | # Calculate the number of edges and set a constant
479 | # as suggested in the nx documentation
480 | E = R.number_of_edges()
481 |
482 | # Start the counter for randomisation attempts and set connected to False
483 | attempt = 0
484 | connected = False
485 |
486 | # Keep making random graphs until they are connected
487 | while not connected and attempt < 15:
488 | # Now swap some edges in order to preserve the degree distribution
489 | nx.double_edge_swap(R, Q*E, max_tries=Q*E*10, seed=seed)
490 |
491 | # Check that this graph is connected! If not, start again
492 | connected = nx.is_connected(R)
493 | if not connected:
494 | attempt += 1
495 |
496 | if attempt == 15:
497 | raise Exception("** Failed to randomise graph in first 15 tries -\
498 | Attempt aborted. Network is likely too sparse **")
499 | return R
500 |
501 |
502 | def get_random_graphs(G, n=10, Q=10, seed=None):
503 | '''
504 | Create n random graphs through edgeswapping.
505 |
506 | Parameters
507 | ----------
508 | G : :class:`networkx.Graph`
509 | n : int, optional
510 | Q : int, optional
511 | constant to specify how many swaps to conduct for each edge in G
512 | seed : int, random_state or None (default)
513 | Indicator of random state to pass to networkx
514 |
515 | Returns
516 | -------
517 | list of :class:`networkx.Graph`
518 | A list of length n of randomisations of G created using
519 | :func:`scona.make_graphs.random_graph`
520 | '''
521 | graph_list = []
522 |
523 | print(' Creating {} random graphs - may take a little while'
524 | .format(n))
525 |
526 |     for i in range(n):
527 |         graph_list.append(random_graph(G, Q=Q, seed=seed))
529 |
530 | return graph_list
531 |
--------------------------------------------------------------------------------
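Putting the two halves of this module together, a short sketch of building a small null distribution (synthetic input again; the randomisation may take a moment):

    import numpy as np
    from scona.make_graphs import graph_at_cost, get_random_graphs

    rng = np.random.RandomState(1)
    A = rng.rand(15, 15)
    M = (A + A.T) / 2

    G = graph_at_cost(M, 30)
    null_models = get_random_graphs(G, n=5)

    # Degree-preserving rewiring keeps the number of edges fixed
    print([R.number_of_edges() == G.number_of_edges() for R in null_models])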
/scona/nilearn_plotting.py:
--------------------------------------------------------------------------------
1 | from nilearn import plotting
2 | import networkx as nx
3 | import numpy as np
4 |
5 |
6 | def graph_to_nilearn_array(
7 | G,
8 | node_colour_att=None,
9 | node_size_att=None,
10 | edge_attribute="weight"):
11 | """
12 | Derive from G the necessary inputs for the `nilearn` graph plotting
13 |     functions.
14 |
15 |     Parameters
16 |     ----------
14 | G : :class:`networkx.Graph`
15 | G should have nodal locations in MNI space indexed by nodal
16 | attribute "centroids"
17 | node_colour_att : str, optional
18 | index a nodal attribute to scale node colour by
19 | node_size_att : str, optional
20 | index a nodal attribute to scale node size by
21 | edge_attribute : str, optional
22 | index an edge attribute to scale edge colour by
23 | """
24 | node_order = sorted(list(G.nodes()))
25 | adjacency_matrix = nx.to_numpy_matrix(
26 | G,
27 | nodelist=node_order,
28 | weight=edge_attribute)
29 | node_coords = np.array([G._node[node]["centroids"] for node in node_order])
30 | if node_colour_att is not None:
31 | node_colour_att = [G._node[node][node_colour_att] for node
32 | in node_order]
33 | if node_size_att is not None:
34 | node_size_att = [G._node[node][node_size_att] for node in node_order]
35 | return adjacency_matrix, node_coords, node_colour_att, node_size_att
36 |
37 |
38 | def plot_connectome_with_nilearn(
39 | G,
40 | node_colour_att=None,
41 | node_colour="auto",
42 | node_size_att=None,
43 | node_size=50,
44 | edge_attribute="weight",
45 | edge_cmap="Spectral_r",
46 | edge_vmin=None,
47 | edge_vmax=None,
48 | output_file=None,
49 | display_mode='ortho',
50 | figure=None,
51 | axes=None,
52 | title=None,
53 | annotate=True,
54 | black_bg=False,
55 | alpha=0.7,
56 | edge_kwargs=None,
57 | node_kwargs=None,
58 | colorbar=False):
59 | """
60 | Plot a BrainNetwork using :func:`nilearn.plotting.plot_connectome()` tool.
61 |
62 | Parameters
63 | ----------
64 | G : :class:`networkx.Graph`
65 | G should have nodal locations in MNI space indexed by nodal
66 | attribute "centroids"
67 | node_colour_att : str, optional
68 | index a nodal attribute to scale node colour by
69 | node_size_att : str, optional
70 | index a nodal attribute to scale node size by
71 | edge_attribute : str, optional
72 | index an edge attribute to scale edge colour by
73 |
74 | other parameters are passed to :func:`nilearn.plotting.plot_connectome()`
75 | """
76 |     adjacency_matrix, node_coords, colour_list, size_list = graph_to_nilearn_array(G, node_colour_att=node_colour_att, node_size_att=node_size_att, edge_attribute=edge_attribute)
77 |
78 | if node_colour_att is not None:
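79 |         # NB: when node_colour_att is set, node_colour is assumed to be a
80 |         # callable mapping attribute values to colours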
79 | node_colour = [node_colour(x) for x in colour_list]
80 | if node_size_att is not None:
81 | node_size = [x*node_size for x in size_list]
82 |
83 | plotting.plot_connectome(
84 | adjacency_matrix, node_coords, node_color=node_colour,
85 |         node_size=node_size, edge_cmap=edge_cmap, edge_vmin=edge_vmin,
86 | edge_vmax=edge_vmax, edge_threshold=0.01, output_file=output_file,
87 | display_mode=display_mode, figure=figure, axes=axes, title=title,
88 | annotate=annotate, black_bg=black_bg, alpha=alpha,
89 | edge_kwargs=edge_kwargs, node_kwargs=node_kwargs, colorbar=colorbar)
90 |
91 |
92 | def view_connectome_with_nilearn(
93 | G,
94 | edge_attribute="weight",
95 | edge_cmap="Spectral_r",
96 | symmetric_cmap=True,
97 | edgewidth=6.0,
98 | node_size=3.0,
99 | node_colour_att=None,
100 | node_colour='black'):
101 | # node_colour_list=None):
102 | """
103 | Plot a BrainNetwork using :func:`nilearn.plotting.view_connectome()` tool.
104 |
105 | Parameters
106 | ----------
107 | G : :class:`networkx.Graph`
108 | G should have nodal locations in MNI space indexed by nodal
109 | attribute "centroids"
110 | node_colour_att : str, optional
111 | index a nodal attribute to scale node colour by
112 | edge_attribute : str, optional
113 | index an edge attribute to scale edge colour by
114 |
115 | other parameters are passed to :func:`nilearn.plotting.view_connectome()`
116 | """
117 | adjacency_matrix, node_coords, colour_list, z = graph_to_nilearn_array(
118 | G,
119 | edge_attribute=edge_attribute,
120 |         node_colour_att=node_colour_att)
121 | return plotting.view_connectome(
122 | adjacency_matrix,
123 | node_coords,
124 | threshold=None,
125 | cmap=edge_cmap,
126 | symmetric_cmap=symmetric_cmap,
127 | linewidth=edgewidth,
128 | marker_size=node_size)
129 | # if colour_list is None:
130 | # colours = [node_colour for i in range(len(node_coords))]
131 | # else:
132 | # colours = np.array([node_colour(x) for x in colour_list])
133 | #
134 | # connectome_info = plotting.html_connectome._get_markers(node_coords,
135 | # colours)
136 | # connectome_info.update(plotting.html_connectome._get_connectome(
137 | # adjacency_matrix, node_coords, threshold=None, cmap=edge_cmap,
138 | # symmetric_cmap=symmetric_cmap))
139 | # connectome_info["line_width"] = edgewidth
140 | # connectome_info["marker_size"] = node_size
141 | # return plotting.html_connectome._make_connectome_html(connectome_info)
142 |
143 |
144 | def view_markers_with_nilearn(
145 | G,
146 | node_size=5.,
147 | node_colouring='black'):
148 | """
149 | Plot nodes of a BrainNetwork using
150 | :func:`nilearn.plotting.view_connectome()` tool.
151 |
152 | Parameters
153 | ----------
154 | G : :class:`networkx.Graph`
155 | G should have nodal locations in MNI space indexed by nodal
156 | attribute "centroids"
157 | node_colouring : str or list of str, default 'black'
158 | node_colouring will determine the colour given to each node.
159 |         If a single string is given, it will be interpreted as a colour,
160 |         and all nodes will be rendered in this colour. If a list of
161 |         colours is given, it should contain one colour per node.
162 | """
163 | a, node_coords, colour_list, z = graph_to_nilearn_array(
164 |         G)
165 | if isinstance(node_colouring, str):
166 | colours = [node_colouring for i in range(len(node_coords))]
167 | elif colour_list is not None:
168 | colours = np.array([node_colouring(x) for x in colour_list])
169 | return plotting.view_markers(
170 | node_coords, colors=colours, marker_size=node_size)
171 |
--------------------------------------------------------------------------------
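For orientation, a sketch of feeding a toy graph to these wrappers. The three MNI centroids are made up, and the call assumes a nilearn release contemporary with this module (the older view_connectome signature used above):

    import networkx as nx
    from scona.nilearn_plotting import view_connectome_with_nilearn

    # Toy three-node graph with "centroids" in MNI space
    G = nx.complete_graph(3)
    coords = [(0.0, -52.0, 26.0), (-46.0, -68.0, 32.0), (46.0, -68.0, 32.0)]
    for i, xyz in enumerate(coords):
        G.nodes[i]['centroids'] = xyz
    nx.set_edge_attributes(G, 1.0, name='weight')

    view = view_connectome_with_nilearn(G)  # interactive HTML view
    # view.open_in_browser()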
/scona/permute_groups.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | """
3 | Given a set of participants split into groups group_1, group_2, ... of sizes
4 | x_1, x_2,... randomly redivide the set in to sizes of x_1, x_2, ... to build
5 | a null distribution for the network structure.
6 | """
7 | # Essential package imports
8 | import os
9 | import numpy as np
10 | from scona.make_corr_matrices import corrmat_from_regionalmeasures
11 |
12 | def corr_matrices_from_divided_data(
13 | regional_measures,
14 | names,
15 |         group_index,
16 | covars=None,
17 | method='pearson'):
18 | '''
19 | Calculate the correlation of `names` columns over the rows of
20 | `regional_measures` after correcting for covariance with the columns in
21 | `covars`
22 |
23 | Parameters
24 | ----------
25 | regional_measures : :class:`pandas.DataFrame`
26 | a pandas data frame with subjects as rows, and columns including
27 |         brain regions, covariates and a column describing group membership.
28 |         The columns named in `names` and `covars` should be numeric
29 | names : list
30 | a list of the brain regions you wish to correlate
31 |     group_index : str
32 |         the label of the column describing group membership
33 | covars: list
34 | covars is a list of covariates (as df column headings)
35 | to correct for before correlating the regions.
36 |     method : string
37 | the method of correlation passed to :func:`pandas.DataFrame.corr`
38 |
39 | Returns
40 | -------
41 |     dict
42 |         a dictionary mapping each group key to the correlation matrix calculated over that group's rows
43 | '''
44 | corr_dict = {}
45 |     for key, group in regional_measures.groupby(group_index):
46 | corr_dict[key] = corrmat_from_regionalmeasures(
47 | group, names, covars=covars, method=method)
48 | return corr_dict
49 |
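For example (synthetic data; the column names are invented):

    import numpy as np
    import pandas as pd
    from scona.permute_groups import corr_matrices_from_divided_data

    rng = np.random.RandomState(0)
    df = pd.DataFrame(rng.rand(30, 3), columns=['region_a', 'region_b', 'age'])
    df['group'] = ['patient'] * 15 + ['control'] * 15

    mats = corr_matrices_from_divided_data(
        df, ['region_a', 'region_b'], 'group', covars=['age'])
    print(sorted(mats))  # ['control', 'patient'] - one matrix per group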
50 | def shuffle_groups(regional_measures, group_index, seed=None):
51 |     '''
52 |     Return a copy of `regional_measures` with the values of the
53 |     `group_index` column randomly permuted. Permuting the group labels
54 |     preserves the original group sizes x_1, x_2, ...
55 |     (A minimal sketch completing this previously empty stub.)
56 |     '''
57 |     df = regional_measures.copy()
58 |     rng = np.random.RandomState(seed)
59 |     df[group_index] = rng.permutation(df[group_index].values)
60 |     return df
--------------------------------------------------------------------------------
/scona/scripts/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WhitakerLab/scona/bb29a4247f4d53e0a1e56459b3a908f2048b54c7/scona/scripts/__init__.py
--------------------------------------------------------------------------------
/scona/scripts/useful_functions.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import pandas as pd
4 | import numpy as np
5 | import os
6 |
7 |
8 | def read_in_data(
9 | data,
10 | names_file,
11 | covars_file=None,
12 | centroids_file=None,
13 | data_as_df=True):
14 | '''
15 | Read in data from file paths
16 |
17 | Parameters
18 | ----------
19 | data : str
20 | path to a csv file.
21 | Read in as a :class:`pandas.DataFrame` unless ``data_as_df=False``
22 | names_file : str
23 | path to a text file containing names of brain regions. Read in as list.
24 | covars_file : str, optional
25 | a text file containing a list of covariates to correct for. Read in as
26 | list.
27 | centroids_file : str, optional
28 | a text file containing cartesian coordinates of
29 | brain regions. Should be aligned with names_file so that the ith
30 | line of centroids_file is the coordinates of the brain region named
31 | in the ith line of names_file. Read in as list.
32 | data_as_df : bool, optional
33 |         If False, the data is imported with :func:`numpy.loadtxt` and
34 |         returned as a :class:`numpy.ndarray`
35 |
36 | Returns
37 | -------
38 | :class:`pandas.DataFrame`, list, list or None, list or None
39 | `data, names, covars, centroids`
40 | '''
41 | # Load names
42 | with open(names_file) as f:
43 | names = [line.strip() for line in f]
44 |
45 | # Load covariates
46 | if covars_file is not None:
47 | with open(covars_file) as f:
48 | covars_list = [line.strip() for line in f]
49 | else:
50 | covars_list = []
51 |
52 | if centroids_file is not None:
53 | centroids = list(np.loadtxt(centroids_file))
54 | else:
55 | centroids = None
56 |
57 | # Load data
58 | if data_as_df:
59 | df = pd.read_csv(data)
60 | else:
61 | df = np.loadtxt(data)
62 |
63 | return df, names, covars_list, centroids
64 |
65 |
66 | def write_out_measures(df, output_dir, name, first_columns=[]):
67 | '''
68 | Write out a DataFrame as a csv
69 |
70 | Parameters
71 | ----------
72 | df : :class:`pandas.DataFrame`
73 | A dataframe of measures to write out
74 | output_dir, name : str
75 | The output and filename to write out to. Creates output_dir if it does
76 | not exist
77 | first_columns : list, optional
78 | There may be columns you want to be saved on the left hand side of the
79 | csv for readability. Columns will go left to right in the order
80 | specified by first_columns, followed by any columns not in
81 | first_columns.
82 | '''
83 | # Make the output directory if it doesn't exist already
84 | if not os.path.isdir(output_dir):
85 | os.makedirs(output_dir)
86 |
87 | output_f_name = os.path.join(output_dir, name)
88 |
89 |     # Write the data frame out, with the first_columns columns first
90 |     new_col_list = first_columns + [col_name
91 |                                     for col_name in df.columns
92 |                                     if col_name not in first_columns]
94 |
95 | df.to_csv(output_f_name, columns=new_col_list)
96 |
--------------------------------------------------------------------------------
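A small sketch of the column reordering write_out_measures performs (invented measures):

    import pandas as pd
    from scona.scripts.useful_functions import write_out_measures

    df = pd.DataFrame({'name': ['lh_a', 'lh_b'],
                       'closeness': [0.21, 0.40],
                       'degree': [5, 3]})

    # 'degree' is written first, followed by the remaining columns
    write_out_measures(df, 'results', 'nodal_measures.csv',
                       first_columns=['degree'])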
/scona/scripts/visualisation_commands.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | import seaborn as sns
4 | import matplotlib.pylab as plt
5 | import numpy as np
6 | import networkx as nx
7 | import pandas as pd
8 | import matplotlib as mpl
9 | import os
10 | import sys
11 | import matplotlib.image as mpimg
12 | import matplotlib.gridspec as gridspec
13 | import itertools as it
14 | import matplotlib.patches as mpatches
15 |
16 |
17 | def rescale(fname, suff='png'):
18 | '''
19 | Journals generally like to make life easier for reviewers
20 | by sending them a manuscript that is not going to crash
21 | their computers with its size, so we're going to create
22 | a smaller version of the input figure (fname) that is
23 | 8 inches wide at 200 dpi. It will be saved out in whatever
24 | format specified by the suff parameter, and the name
25 | will be the same as the original but with _LowRes appended
26 | '''
27 |
28 | from PIL import Image
29 | import numpy as np
30 |
31 | # Open the file and figure out what size it is
32 | img = Image.open(fname+'.'+suff)
33 | size = img.size
34 |
35 | # Calculate the scale factor that sets the width
36 | # of the figure to 1600 pixels
37 | scale_factor = 1600.0/size[0]
38 |
39 | # Apply this scale factor to the width and height
40 | # to get the new size
41 |     new_size = (int(size[0]*scale_factor), int(size[1]*scale_factor))
42 |
43 | # Resize the image
44 | small_img = img.resize(new_size, Image.ANTIALIAS)
45 |
46 | # Define the output name
47 | new_name = ''.join([os.path.splitext(fname)[0],
48 | '_LowRes.',
49 | suff])
50 |
51 | # Save the image
52 | small_img.save(new_name, optimize=True, quality=95)
53 |
54 | # And you're done!
55 |
56 |
57 | def view_corr_mat(corr_mat,
58 | output_name,
59 | cmap_name='RdBu_r',
60 | cost=None,
61 | bin=False):
62 | '''
63 | This is a visualisation tool for correlation matrices
64 |
65 | Parameters
66 | ----------
67 | corr_mat : :class:`str` or :class:`pandas.DataFrame` or :class:`numpy.array`
68 | corr_mat could be:
69 |
70 |         - a string - a path to the file containing the matrix. Note that loading the corr_mat from file is only possible if all values are floats (or integers); do not include column or row labels in the file.
71 |
72 | - or a pandas.DataFrame object to represent a correlation matrix;
73 |
74 | - or a numpy.array representing a correlation matrix;
75 | output_name : :class:`str`
76 |         the name of the file you want to save the visualisation
77 |         of the correlation matrix to.
78 | cmap_name : string or Colormap, optional
79 | A Colormap instance or registered colormap name.
80 | The colormap maps scalar data to colors. It is ignored for RGB(A) data.
81 | Defaults to 'RdBu_r'.
82 |
83 | Returns
84 | -------
85 |     None. The visualisation of the correlation matrix is saved to file.
86 |
87 | '''
88 |
89 | # If cost is given then roughly threshold at that cost.
90 | # NOTE - this is not actually the EXACT network that you're analysing
91 | # because it doesn't include the minimum spanning tree. But it will give
92 | # you a good sense of the network structure.
93 | # #GoodEnough ;)
94 |
95 | if isinstance(corr_mat, str):
96 | M = np.loadtxt(corr_mat) # Read in the data
97 | elif isinstance(corr_mat, pd.DataFrame):
98 | M = corr_mat.to_numpy() # Convert the DataFrame to a NumPy array
99 | elif isinstance(corr_mat, np.ndarray):
100 | M = corr_mat # support numpy array as input to the function
101 | else:
102 | raise TypeError("corr_mat argument must be 1) a path to the file containing the matrix or 2) pandas.DataFrame object or 3) numpy.array")
103 |
104 | if M.shape[0] != M.shape[1]:
105 | raise ValueError("The correlation matrix must be n x n, where n is the number of nodes")
106 |
107 | if cost:
108 | thr = np.percentile(M.reshape(-1), 100-cost)
109 |         M[M<thr] = 0
110 |
111 |     # If bin is True then binarise the matrix
112 |     if bin:
113 |         M[M>0] = 1
114 |
115 |     # Set up a sensible colorbar
116 |     if bin:
117 |         vmin, vmax = 0, 1
118 |         ticks_dict = {'locations': [0, 1], 'labels': ['0', '1']}
119 |     else:
120 |         vmin, vmax = -1, 1
121 |         ticks_dict = {'locations': [-1, 0, 1],
122 |                       'labels': ['-1', '0', '1']}
123 |
124 | # Create an axis
125 | fig, ax = plt.subplots(figsize=(6,5))
126 | ax.axis('off')
127 |
128 | # Show the network measures
129 | mat_ax = ax.imshow(M,
130 | interpolation='none',
131 | cmap=cmap_name,
132 | vmin=vmin,
133 | vmax=vmax)
134 |
135 | # Put a box around your data
136 | ax.add_patch(
137 | mpatches.Rectangle(
138 | (ax.get_xlim()[0], ax.get_ylim()[1]),
139 | ax.get_xlim()[1],
140 | ax.get_ylim()[0],
141 | fill=False, # remove background
142 | color='k',
143 | linewidth=1) )
144 |
145 | # Add colorbar, make sure to specify tick locations to match desired tick labels
146 | cbar = fig.colorbar(mat_ax, ticks=ticks_dict['locations'])
147 | cbar.ax.set_yticklabels(ticks_dict['labels']) # vertically oriented colorbar
148 |
149 | plt.tight_layout()
150 |
151 | # Save the picture
152 | fig.savefig(output_name, bbox_inches=0, dpi=100)
153 |
154 | plt.close(fig)
155 |
--------------------------------------------------------------------------------
/scona/stats_functions.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # Essential package imports
4 | import numpy as np
5 |
6 |
7 | def residuals(x, y):
8 | '''
9 | Return residuals of least squares solution to y = AB where A =[[x 1]].
10 | Uses :func:`numpy.linalg.lstsq` to find B
11 |
12 | Parameters
13 | ----------
14 | x: array
15 | y: array
16 |
17 | Returns
18 | -------
19 | array
20 | residuals of least squares solution to y = [[x 1]]B
21 | '''
22 | if len(x.shape) == 1:
23 | x = x[np.newaxis, :]
24 | A = np.vstack([x, np.ones(x.shape[-1])]).T
25 | # get the least squares solution to AB = y
26 | B = np.linalg.lstsq(A, y, rcond=-1)[0]
27 | # calculate and return the residuals
28 | m, c = B[:-1], B[-1]
29 | pre = np.sum(m * x.T, axis=1) + c
30 | res = y - pre
31 | return res
32 |
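A quick check of the behaviour of residuals above: if y is exactly linear in x, the residuals vanish.

    import numpy as np
    from scona.stats_functions import residuals

    x = np.arange(10, dtype=float)
    y = 3 * x + 2
    print(np.allclose(residuals(x, y), 0))  # True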
33 |
34 | def partial_r(x, y, covars):
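35 |     '''
36 |     Return the partial correlation of x and y controlling for covars:
37 |     the Pearson correlation of the residuals of x and y after each is
38 |     regressed on covars (using :func:`residuals` above).
39 |     '''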
35 | from scipy.stats import pearsonr
36 |
37 | res_i = residuals(covars, x)
38 | res_j = residuals(covars, y)
39 | part_r = pearsonr(res_i, res_j)[0]
40 | return part_r
41 |
42 |
43 | def variance_partition(x1, x2, y):
44 | '''
45 | Describe the independent and shared explanatory
46 | variance of two (possibly correlated) variables on
47 | the dependent variable (y)
48 | '''
49 | from statsmodels.formula.api import ols
50 | from scipy.stats import pearsonr
51 | import pandas as pd
52 |
53 | # Set up the data frame
54 | df = pd.DataFrame({'Y': y,
55 | 'X1': x1,
56 | 'X2': x2})
57 |
58 | # Get the overall r squared value for the
59 | # multiple regression
60 | Rsq = ols('Y ~ X1 + X2', data=df).fit().rsquared
61 |
62 | # Next calculate the residuals of X1 after correlating
63 | # with X2 (so units will be in those of X1) and vice versa
64 | df['res_X1givenX2'] = residuals(df['X2'], df['X1'])
65 | df['res_X2givenX1'] = residuals(df['X1'], df['X2'])
66 |
67 |     # Now calculate the Pearson correlations of
68 | # the residuals against the dependent variable to give
69 | # the fraction of variance that each explains independently
70 | # (a and c), along with the fraction of variance
71 | # that is shared across both explanatory variables (b).
72 | # d is the fraction of variance that is not explained
73 | # by the model.
74 | a = (pearsonr(df['res_X1givenX2'], df['Y'])[0])**2
75 | c = (pearsonr(df['res_X2givenX1'], df['Y'])[0])**2
76 | b = Rsq - a - c
77 | d = 1.0 - Rsq
78 |
79 | # Return these four fractions
80 | return a, b, c, d
81 |
--------------------------------------------------------------------------------
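As a sanity check of variance_partition, the four returned fractions sum to one by construction (requires statsmodels; synthetic data):

    import numpy as np
    from scona.stats_functions import variance_partition

    rng = np.random.RandomState(0)
    x1 = rng.randn(200)
    x2 = 0.5 * x1 + rng.randn(200)   # x2 shares variance with x1
    y = x1 + x2 + rng.randn(200)

    a, b, c, d = variance_partition(x1, x2, y)
    print(np.isclose(a + b + c + d, 1.0))  # True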
/scona/visualisations_helpers.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | import os
3 |
4 | import numpy as np
5 | import pandas as pd
6 | import networkx as nx
7 |
8 | import matplotlib.pyplot as plt
9 | import matplotlib as mpl
10 | import seaborn as sns
11 |
12 |
13 | def create_df_sns_barplot(bundleGraphs, original_network):
14 | """
15 |     To plot a barplot (with error bars) with seaborn, you need to pass
16 |     a "data" argument - the dataset for plotting.
17 |
18 | This function restructures a DataFrame obtained from
19 | `Graph.Bundle.report_global_measures` into an acceptable DataFrame for
20 | seaborn.barplot.
21 |
22 | Parameters
23 | ----------
24 | bundleGraphs : :class:`GraphBundle`
25 | a python dictionary with BrainNetwork objects as values
26 | (:class:`str`: :class:`BrainNetwork` pairs), contains real Graph and
27 | random graphs
28 |
29 | original_network: str, required
30 | This is the name of the initial Graph in GraphBundle. It should index
31 | the particular network in `brain_bundle` that you want the figure
32 | to highlight. A distribution of all the other networks in
33 | `brain_bundle` will be rendered for comparison.
34 |
35 | Returns
36 | -------
37 | :class:`pandas.DataFrame`
38 | Restructured DataFrame suitable for seaborn.barplot
39 | """
40 |
41 | # calculate network measures for each graph in brain_bundle
42 | # if each graph in GraphBundle has already calculated global measures,
43 | # this step will be skipped
44 | bundleGraphs_measures = bundleGraphs.report_global_measures()
45 |
46 | # set abbreviations for measures
47 | abbreviation = {'assortativity': 'a',
48 | 'average_clustering': 'C',
49 | 'average_shortest_path_length': 'L',
50 | 'efficiency': 'E',
51 | 'modularity': 'M'}
52 |
53 | # set columns for our new DataFrame
54 | new_columns = ["measure", "value", "TypeNetwork"]
55 |
56 | # get the number of columns from the old DataFrame
57 | no_columns_old = len(bundleGraphs_measures.columns)
58 |
59 | # get the number of rows from the old DataFrame
60 | no_rows_old = len(bundleGraphs_measures.index)
61 |
62 | # set number of rows (indexes) in new DataFrame
63 | total_rows = no_columns_old * no_rows_old
64 |
65 | # set index for our new DataFrame
66 | index = [i for i in range(1, total_rows + 1)]
67 |
68 |     # Build an array containing all the data; it is later used to create the new DataFrame
69 |
70 | # store values of *Real Graph* in data_array - used to create new DataFrame
71 | data_array = list()
72 |
73 | for measure in bundleGraphs_measures.columns:
74 | # check that the param - original_network - is correct,
75 | # otherwise pass an error
76 | try:
77 | # for original_network get value of each measure
78 | value = bundleGraphs_measures.loc[original_network, measure]
79 | except KeyError:
80 | raise KeyError(
81 | "The name of the initial Graph you passed to the function - \"{}\"" # noqa
82 |                 " does not exist in GraphBundle. Please provide the name of "
83 |                 "the initial Graph (one of the keys in GraphBundle)".format(original_network))  # noqa
84 |
85 | # get the abbreviation for measure and use this abbreviation
86 | measure_short = abbreviation[measure]
87 |
88 | type_network = "Observed network"
89 |
90 | # create a temporary array to store measure - value of Real Network
91 | tmp = [measure_short, value, type_network]
92 |
93 | # add the record (measure - value - Real Graph) to the data_array
94 | data_array.append(tmp)
95 |
96 | # now store the measure and measure values of *Random Graphs* in data_array
97 |
98 | # delete Real Graph from old DataFrame -
99 | random_df = bundleGraphs_measures.drop(original_network)
100 |
101 | # for each measure in measures
102 | for measure in random_df.columns:
103 |
104 | # for each graph in Random Graphs
105 | for rand_graph in random_df.index:
106 | # get the value of a measure for a random Graph
107 | value = random_df[measure][rand_graph]
108 |
109 | # get the abbreviation for measure and use this abbreviation
110 | measure_short = abbreviation[measure]
111 |
112 | type_network = "Random network"
113 |
114 | # create temporary array to store measure - value of Random Network
115 | tmp = [measure_short, value, type_network]
116 |
117 | # add record (measure - value - Random Graph) to the global array
118 | data_array.append(tmp)
119 |
120 | # finally create a new DataFrame
121 | NewDataFrame = pd.DataFrame(data=data_array, index=index,
122 | columns=new_columns)
123 |
124 | # include the small world coefficient into new DataFrame
125 |
126 | # check that random graphs exist in GraphBundle
127 | if len(bundleGraphs) > 1:
128 | # get the small_world values for Real Graph
129 | small_world = bundleGraphs.report_small_world(original_network)
130 |
131 | # delete the comparison of the graph labelled original_network with itself # noqa
132 | del small_world[original_network]
133 |
134 | # create list of dictionaries to later append to the new DataFrame
135 | df_small_world = []
136 | for i in list(small_world.values()):
137 | tmp = {'measure': 'sigma',
138 | 'value': i,
139 | 'TypeNetwork': 'Observed network'}
140 |
141 | df_small_world.append(tmp)
142 |
143 | # add small_world values of *original_network* to new DataFrame
144 | NewDataFrame = NewDataFrame.append(df_small_world, ignore_index=True)
145 |
146 | # bar for small_world measure of random graphs should be set exactly to 1 # noqa
147 |
148 | # set constant value of small_world measure for random bar
149 | rand_small_world = {'measure': 'sigma',
150 | 'value': 1,
151 | 'TypeNetwork': 'Random network'}
152 |
153 | # add constant value of small_world measure for random bar to new DataFrame # noqa
154 | NewDataFrame = NewDataFrame.append(rand_small_world,
155 | ignore_index=True)
156 |
157 | return NewDataFrame
158 |
159 |
160 | def save_fig(figure, path_name):
161 | """
162 | Helper function to save figure at the specified location - path_name
163 |
164 | Parameters
165 | ----------
166 | figure : :class:`matplotlib.figure.Figure`
167 | Figure to save.
168 |
169 | path_name : str
170 | Location where to save the figure.
171 |
172 | Returns
173 | -------
174 | Saves figure to the location - path_name
175 | """
176 |
177 | # If path_name ends with "/" - do not save, e.g. "/home/user/dir1/dir2/"
178 | if os.path.basename(path_name):
179 |
180 | # get the directory path (exclude file_name in the end of path_name)
181 | # For example: "/home/dir1/myfile.png" -> "/home/dir1"
182 | dir_path = os.path.dirname(path_name)
183 |
184 | # Create the output directory (dir_path) if it does not already exist
185 | # or if the path_name is not a directory
186 | if not os.path.exists(dir_path) and os.path.dirname(path_name):
187 | warnings.warn('The path "{}" does not exist.\n'
188 | "We will create this directory for you "
189 | "and store the figure there.\n"
190 | "This warning is just to make sure that you aren't "
191 | "surprised by a new directory "
192 | "appearing!".format(dir_path))
193 |
194 | # Make the directory
195 | dir_create = os.path.dirname(path_name)
196 | os.makedirs(dir_create)
197 | else:
198 | warnings.warn('The file name you gave us "{}" ends with \"/\". '
199 | "That is a directory rather than a file name."
200 | "Please run the command again with the name of the file,"
201 | "e.g. '/home/dir1/myfile.png' or to save the file in the "
202 | "current directory you can just pass 'myfile.png'".format(path_name)) # noqa
203 | return
204 |
205 | # save the figure to the file
206 | figure.savefig(path_name, bbox_inches=0, dpi=100)
207 |
208 |
209 | def setup_color_list(df, cmap_name='tab10', sns_palette=None, measure='module',
210 | continuous=False, vmin=None, vmax=None):
211 | """
212 | Use a colormap to set color for each value in the DataFrame[column].
213 | Convert data values (floats) from the interval [vmin,vmax] to the
214 | RGBA colors that the respective Colormap represents.
215 |
216 | Parameters
217 | ----------
218 | df : :class:`pandas.DataFrame`
219 | The DataFrame that contains the required column (measure parameter).
220 |
221 | measure : str
222 | The name of the column in the df (pandas.DataFrame) that contains
223 | values intended to be mapped to colors.
224 |
225 | cmap_name : str or Colormap instance, (optional, default="tab10")
226 | The colormap used to map normalized data values to RGBA colors.
227 |
228 | sns_palette: seaborn palette, (optional, default=None)
229 | Discrete color palette only for discrete data. List of colors defining
230 | a color palette (list of RGB tuples from seaborn color palettes).
231 |
232 |     continuous : bool, optional (default=False)
233 | if *True* return the list of colors for continuous data.
234 |
235 | vmin : scalar or None, optional
236 | The minimum value used in colormapping *data*. If *None* the minimum
237 | value in *data* is used.
238 |
239 | vmax : scalar or None, optional
240 | The maximum value used in colormapping *data*. If *None* the maximum
241 | value in *data* is used.
242 |
243 | Returns
244 | -------
245 | list
246 | a list of colors for each value in the DataFrame[measure]
247 | """
248 |
249 | # Store pair (value, color) as a (key,value) in a dict
250 | colors_dict = {}
251 |
252 | # If vmin or vmax not passed, calculate the min and max
253 | # values of the column (measure)
254 | if vmin is None:
255 | vmin = min(df[measure].values)
256 | if vmax is None:
257 | vmax = max(df[measure].values)
258 |
259 | # The number of different colors needed
260 | num_color = len(set(df[measure]))
261 |
262 | # Continuous colorbar for continuous data
263 | if continuous:
264 | # Normalize data into the [0.0, 1.0] interval
265 | norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
266 |
267 | # Use of data normalization before returning RGBA colors from colormap
268 | scalarMap = mpl.cm.ScalarMappable(norm=norm, cmap=cmap_name)
269 |
270 | # Map normalized data values to RGBA colors
271 | colors_list = [ scalarMap.to_rgba(x) for x in df[measure] ]
272 |
273 | # For discrete data
274 | else:
275 | # Option 1: If you've passed a matplotlib color map
276 | try:
277 | cmap = mpl.cm.get_cmap(cmap_name)
278 | except ValueError:
279 | warnings.warn("ValueError: Colormap {} is not recognized. ".format(cmap_name) +
280 | "Default colormap tab10 will be used.")
281 | cmap = mpl.cm.get_cmap("tab10")
282 |
283 | for i, value in enumerate(sorted(set(df[measure]))):
284 | colors_dict[value] = cmap((i+0.5)/num_color)
285 |
286 | # Option 2: If you've passed a sns_color_palette - use color_palette
287 | if sns_palette:
288 | color_palette = sns.color_palette(sns_palette, num_color)
289 |
290 | for i, value in enumerate(sorted(set(df[measure]))):
291 | colors_dict[value] = color_palette[i]
292 |
293 | # Make a list of colors for each node in df based on the measure
294 | colors_list = [ colors_dict[value] for value in df[measure].values ]
295 |
296 | return colors_list
297 |
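For instance, with a discrete 'module' column (toy data):

    import pandas as pd
    from scona.visualisations_helpers import setup_color_list

    df = pd.DataFrame({'module': [0, 0, 1, 1]})
    colors = setup_color_list(df, cmap_name='tab10', measure='module')

    # Four RGBA tuples; rows in the same module share a colour
    print(len(colors), colors[0] == colors[1], colors[0] == colors[2])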
298 |
299 | def add_colorbar(fig, grid, cb_name, cmap_name, vmin=0, vmax=1):
300 | """
301 | Add a colorbar to the figure in the location defined by grid.
302 |
303 | Parameters
304 | ----------
305 | fig : :class:`matplotlib.figure.Figure`
306 | Figure to which colorbar will be added.
307 |
308 | grid : str
309 | Grid spec location to add colormap.
310 |
311 | cb_name: str, (optional, default=None)
312 | The label for the colorbar.
313 | Name of data values this colorbar represents.
314 |
315 | cmap_name : str or Colormap instance
316 | Name of the colormap
317 |
318 | vmin : scalar or None, (optional, default=0)
319 | Minimum value for the colormap
320 |
321 | vmax : scalar or None, (optional, default=1)
322 | Maximum value for the colormap
323 |
324 | Returns
325 | -------
326 | `matplotlib.figure.Figure` object
327 | figure with recently added colorbar
328 | """
329 |
330 | # add ax axes to the figure
331 | ax_cbar = plt.Subplot(fig, grid)
332 | fig.add_subplot(ax_cbar)
333 |
334 | # normalise the colorbar to have the correct upper and lower limits
335 | norm = mpl.colors.Normalize(vmin=vmin, vmax=vmax)
336 |
337 | # set ticks (min, median, max) for colorbar
338 | ticks = [vmin, (vmin + vmax)/2, vmax]
339 |
340 | # put a colorbar in a specified axes,
341 | # and make colorbar for a given colormap
342 | cb = mpl.colorbar.ColorbarBase(ax_cbar, cmap=cmap_name,
343 | norm=norm,
344 | ticks=ticks,
345 | format='%.2f',
346 | orientation='horizontal')
347 |
348 | # set the name of the colorbar
349 | if cb_name:
350 | cb.set_label(cb_name, size=20)
351 |
352 | # adjust the fontsize of ticks to look better
353 | ax_cbar.tick_params(labelsize=20)
354 |
355 | return fig
356 |
357 |
358 | def axial_layout(x, y, z):
359 | """
360 |     Axial (horizontal) plane, the plane that is horizontal and parallel
361 |     to the ground. It contains the lateral and the medial axes of the
362 |     brain.
363 |
364 | Parameters
365 | ----------
366 | x, y, z : float
367 | Node Coordinates
368 |
369 | Returns
370 | -------
371 | numpy array
372 | The node coordinates excluding z-axis. `array([x, y])`
373 |
374 | """
375 |
376 | return np.array([x, y])
377 |
378 |
379 | def sagittal_layout(x, y, z):
380 | """
381 | Sagittal plane, a vertical plane that passes from between the cerebral
382 | hemispheres, dividing the brain into left and right halves.
383 |
384 | Parameters
385 | ----------
386 | x, y, z : float
387 | Node Coordinates
388 |
389 | Returns
390 | -------
391 | numpy array
392 | The node coordinates excluding y-axis. `array([x, z])`
393 |
394 | """
395 |
396 | return np.array([x, z])
397 |
398 |
399 | def coronal_layout(x, y, z):
400 | """
401 | Coronal (frontal) plane, a vertical plane that passes through both ears,
402 | and contains the lateral and dorsoventral axes.
403 |
404 | Parameters
405 | ----------
406 | x, y, z : float
407 | Node Coordinates
408 |
409 | Returns
410 | -------
411 | numpy array
412 |         The node coordinates excluding x-axis. `array([y, z])`
413 |
414 | """
415 |
416 | return np.array([y, z])
417 |
418 |
419 | def anatomical_layout(x, y, z, orientation="sagittal"):
420 | """
421 | This function extracts the required coordinates of a node based on the
422 | given anatomical layout.
423 |
424 | Parameters
425 | ----------
426 | x, y, z : float
427 | Node Coordinates
428 |
429 |     orientation: str, (optional, default="sagittal")
430 | The name of the plane: `sagittal` or `axial` or `coronal`.
431 |
432 | Returns
433 | -------
434 | numpy array
435 | The node coordinates for the given anatomical layout.
436 | """
437 |
438 | if orientation == "sagittal":
439 | return sagittal_layout(x, y, z)
440 | if orientation == "axial":
441 | return axial_layout(x, y, z)
442 | if orientation == "coronal":
443 | return coronal_layout(x, y, z)
444 | else:
445 | raise ValueError(
446 | "{} is not recognised as an anatomical layout. orientation values "
447 | "should be one of 'sagittal', 'axial' or "
448 | "'coronal'.".format(orientation))
449 |
450 |
451 | def graph_to_nilearn_array(
452 | G,
453 | edge_attribute="weight"):
454 | """
455 | Derive from G (BrainNetwork Graph) the necessary inputs for the `nilearn`
456 | graph plotting functions.
457 |
458 | G : :class:`networkx.Graph`
459 | G should have nodal locations in MNI space indexed by nodal
460 | attribute "centroids"
461 |
462 | edge_attribute : string or None optional (default = 'weight')
463 | The edge attribute that holds the numerical value used for the edge
464 | weight. If an edge does not have that attribute, then the value 1 is
465 | used instead.
466 |
467 | Returns
468 | -------
469 | (adjacency_matrix, node_coords)
470 | adjacency_matrix - represents the link strengths of the graph;
471 | node_coords - 3d coordinates of the graph nodes in world space;
472 | """
473 |
474 | # make ordered nodes to produce ordered rows and columns
475 | # in adjacency matrix
476 | node_order = sorted(list(G.nodes()))
477 |
478 | # returns the graph adjacency matrix as a NumPy array
479 | adjacency_matrix = nx.convert_matrix.to_numpy_array(G, nodelist=node_order,
480 | weight=edge_attribute)
481 |
482 | # store nodes coordinates in NumPy array if nodal coordinates exist
483 | try:
484 |         node_coords = np.array([G.nodes[node]["centroids"] for node in node_order])  # noqa
485 |     except KeyError:
486 |         raise TypeError("There are no centroids (node coordinates) in the "
487 |                         "Graph. Please initialise BrainNetwork "
488 |                         "with the centroids.")
489 |
490 | return adjacency_matrix, node_coords
491 |
--------------------------------------------------------------------------------
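
A minimal sketch of how `graph_to_nilearn_array` can feed nilearn's glass-brain plotting. The toy graph and centroids below are invented for illustration; `plot_connectome` is nilearn's standard connectome plotter:

    import networkx as nx
    from nilearn.plotting import plot_connectome
    from scona.nilearn_plotting import graph_to_nilearn_array

    # a toy three-node graph with MNI-style centroids
    G = nx.complete_graph(3)
    nx.set_node_attributes(
        G, {0: (-40, 20, 30), 1: (40, 20, 30), 2: (0, -60, 20)},
        name="centroids")

    # derive the adjacency matrix and coordinates nilearn expects
    adjacency_matrix, node_coords = graph_to_nilearn_array(G)

    # draw the network on a glass brain
    display = plot_connectome(adjacency_matrix, node_coords)
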
/scona/wrappers/__init__.py:
--------------------------------------------------------------------------------
1 | from scona.wrappers.corrmat_from_regionalmeasures import *
2 | from scona.wrappers.network_analysis_from_corrmat import *
--------------------------------------------------------------------------------
/scona/wrappers/corrmat_from_regionalmeasures.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # ============================================================================
4 | # Created by Kirstie Whitaker
5 | # at Hot Numbers coffee shop on Trumpington Road in Cambridge, September 2016
6 | # Contact: kw401@cam.ac.uk
7 | # ============================================================================
8 |
9 | # ============================================================================
10 | # IMPORTS
11 | # ============================================================================
12 | import argparse
13 | import textwrap
14 |
15 | import scona.make_corr_matrices as mcm
16 | from scona.scripts.useful_functions import read_in_data
17 |
18 |
19 | def setup_argparser():
20 | # Build a basic parser.
    help_text = ('Generate a structural correlation matrix from an input '
22 |                  'csv file,\na list of region names and (optional) '
23 |                  'covariates.')
24 |
25 | sign_off = 'Author: Kirstie Whitaker '
26 |
27 | parser = argparse.ArgumentParser(
28 | description=help_text,
29 | epilog=sign_off,
30 | formatter_class=argparse.RawTextHelpFormatter)
31 |
32 | # Now add the arguments
33 | parser.add_argument(
34 | dest='regional_measures_file',
35 | type=str,
36 | metavar='regional_measures_file',
37 | help=textwrap.dedent(
38 | ('CSV file that contains regional values for each participant.\
39 | \n') +
40 | ('Column labels should be the region names or covariate \
41 | variable\n') +
42 | ('names. All participants in the file will be included in the\n') +
43 | ('correlation matrix.')))
44 |
45 | parser.add_argument(
46 | dest='names_file',
47 | type=str,
48 | metavar='names_file',
49 | help=textwrap.dedent(('Text file that contains the names of each \
50 | region to be included\n') + ('in the correlation matrix. One region name \
51 | on each line.')))
52 |
53 | parser.add_argument(
54 | dest='output_name',
55 | type=str,
56 | metavar='output_name',
57 | help=textwrap.dedent(
58 | ('File name of the output correlation matrix.\n') +
59 | ('If the output directory does not yet exist it will be \
60 | created.')))
61 |
62 | parser.add_argument(
63 | '--covars_file',
64 | type=str,
65 | metavar='covars_file',
66 | help=textwrap.dedent(
67 | ('Text file that contains the names of variables that \
68 | should be\n') +
69 | ('covaried for each regional measure before the creation \
70 | of the\n') +
71 | ('correlation matrix. One variable name on each line.\n') +
72 | (' Default: None')),
73 | default=None)
74 |
75 | parser.add_argument(
76 | '--method',
77 | type=str,
78 | metavar='method',
79 | help=textwrap.dedent(
80 | ('Flag submitted to pandas.DataFrame.corr().\n') +
81 | ('options are "pearson", "spearman", "kendall"')),
82 | default='pearson')
83 |
84 | arguments = parser.parse_args()
85 |
86 | return arguments, parser
87 |
88 |
89 | def corrmat_from_regionalmeasures(regional_measures_file,
90 | names_file,
91 | output_name,
92 | covars_file=None,
93 | method='pearson'):
94 |     '''
95 |     Read in the regional measures, names and covariates files, compute the
96 |     correlation matrix and write it to output_name.
97 |
98 |     Parameters:
99 |     * regional_measures_file : a csv with brain regions and covariates as
100 |         columns and subjects as rows; the first row is the column headings.
101 |     * names_file : a text file containing names of brain regions, one per
102 |         line. These names select the columns to correlate over.
103 |     * covars_file : a text file containing a list of covariates to account
104 |         for, one per line. These names select the columns to covary for.
105 |     * output_name : file name to save the output correlation matrix to.
106 |     * method : correlation method passed to pandas.DataFrame.corr();
107 |         one of "pearson", "spearman" or "kendall". Default: "pearson".
108 |     '''
109 | # Read in the data
110 | df, names, covars_list, *a = read_in_data(
111 | regional_measures_file,
112 | names_file,
113 | covars_file=covars_file)
114 |
115 | M = mcm.corrmat_from_regionalmeasures(
116 | df, names, covars=covars_list, method=method)
117 |
118 | # Save the matrix
119 | mcm.save_mat(M, output_name)
120 |
121 |
122 | if __name__ == "__main__":
123 |
124 | # Read in the command line arguments
125 | arg, parser = setup_argparser()
126 |
127 | # Now run the main function :)
128 | corrmat_from_regionalmeasures(
129 | arg.regional_measures_file,
130 | arg.names_file,
131 | arg.output_name,
132 | covars_file=arg.covars_file,
133 | method=arg.method)
134 |
135 | # ============================================================================
136 | # Wooo! All done :)
137 | # ============================================================================
138 |
--------------------------------------------------------------------------------
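
As a usage sketch, the wrapper above can be invoked from the command line as follows; the file names are illustrative placeholders, not files shipped with scona:

    python scona/wrappers/corrmat_from_regionalmeasures.py \
        regional_measures.csv names.txt corrmat.txt \
        --covars_file covars.txt --method spearman
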
/scona/wrappers/network_analysis_from_corrmat.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | # =============================================================================
4 | # Created by Kirstie Whitaker
5 | # at Neurohackweek 2016 in Seattle, September 2016
6 | # Contact: kw401@cam.ac.uk
7 | # =============================================================================
8 |
9 | # =============================================================================
10 | # IMPORTS
11 | # =============================================================================
12 | import os
13 | import argparse
14 | import textwrap
15 |
16 | import scona as scn
17 | from scona.scripts.useful_functions import read_in_data, \
18 | write_out_measures
19 |
20 | # =============================================================================
21 | # FUNCTIONS
22 | # =============================================================================
23 |
24 |
25 | def setup_argparser():
26 | '''
27 |     Read in arguments from the command line.
28 |     Also allows you to change some settings.
29 | '''
30 | # Build a basic parser.
31 |     help_text = ('Generate a graph at a fixed cost from a non-thresholded\n'
32 |                  'matrix and return global and nodal measures.')
33 |
34 | sign_off = 'Author: Kirstie Whitaker '
35 |
36 | parser = argparse.ArgumentParser(
37 | description=help_text,
38 | epilog=sign_off,
39 | formatter_class=argparse.RawTextHelpFormatter)
40 |
41 | # Now add the arguments
42 | parser.add_argument(
43 | dest='corr_mat_file',
44 | type=str,
45 | metavar='corr_mat_file',
46 | help=textwrap.dedent(('Text file (tab or space delimited) that \
47 | contains the unthresholded\n') + ('matrix with no column or row labels.')))
48 |
49 | parser.add_argument(
50 | dest='names_file',
51 | type=str,
52 | metavar='names_file',
53 | help=textwrap.dedent(('Text file that contains the names of each \
54 | region, in the same\n') + ('order as the correlation matrix. One region \
55 | name on each line.')))
56 |
57 | parser.add_argument(
58 | dest='centroids_file',
59 | type=str,
60 | metavar='centroids_file',
61 | help=textwrap.dedent(('Text file that contains the x, y, z \
62 | coordinates of each region,\n') + ('in the same order as the correlation \
63 | matrix. One set of three\n') + ('coordinates, tab or space delimited, on each \
64 | line.')))
65 |
66 | parser.add_argument(
67 | dest='output_dir',
68 | type=str,
69 | metavar='output_dir',
70 | help=textwrap.dedent(('Location in which to save global and nodal \
71 | measures.')))
72 |
73 | parser.add_argument(
74 | '-c', '--cost',
75 | type=float,
76 | metavar='cost',
77 | help=textwrap.dedent(('Cost at which to threshold the matrix.\n') +
78 | (' Default: 10.0')),
79 | default=10.0)
80 |
81 | parser.add_argument(
82 | '-n', '--n_rand',
83 | type=int,
84 | metavar='n_rand',
85 | help=textwrap.dedent(('Number of random graphs to generate to compare \
86 | with real network.\n') + (' Default: 1000')),
87 | default=1000)
88 |
89 | parser.add_argument(
90 | '-s', '--seed', '--random_seed',
91 | type=int,
92 | metavar='seed',
93 | help=textwrap.dedent(('Set a random seed to pass to the random graph \
94 | creator.\n') + (' Default: None')),
95 | default=None)
96 |
97 | arguments = parser.parse_args()
98 |
99 | return arguments, parser
100 |
101 |
102 | def network_analysis_from_corrmat(corr_mat_file,
103 | names_file,
104 | centroids_file,
105 | output_dir,
106 | cost=10,
107 | n_rand=1000,
108 | edge_swap_seed=None):
109 | '''
110 | This is the big function!
111 | It reads in the correlation matrix, thresholds it at the given cost
112 | (incorporating a minimum spanning tree), creates a networkx graph,
113 | calculates global and nodal measures (including random comparisons
114 | for the global measures) and writes them out to csv files.
115 | '''
116 | # Read in the data
117 | M, names, a, centroids = read_in_data(
118 | corr_mat_file,
119 | names_file,
120 | centroids_file=centroids_file,
121 | data_as_df=False)
122 |
123 |     corrmat = os.path.splitext(os.path.basename(corr_mat_file))[0]
124 |
125 | # Initialise graph
126 | B = scn.BrainNetwork(
127 | network=M,
128 | parcellation=names,
129 | centroids=centroids)
130 | # Threshold graph
131 | G = B.threshold(cost)
132 | # Calculate the modules
133 | G.partition()
134 | # Calculate distance and hemispheric attributes
135 | G.calculate_spatial_measures()
136 |     # Get the nodal measures
137 |     # (note that this takes a little while because the participation
138 |     # coefficient is slow to calculate)
139 | G.calculate_nodal_measures()
140 | nodal_df = G.report_nodal_measures()
141 | nodal_name = 'NodalMeasures_{}_cost{:03.0f}.csv'.format(corrmat, cost)
142 |     # TODO: it may be wise to remove certain columns here (e.g. centroids)
143 | # Save your nodal measures
144 | write_out_measures(
145 | nodal_df, output_dir, nodal_name, first_columns=['name'])
146 |
147 | # Create setup for comparing real_graph against random graphs
148 | # name your graph G after the corrmat it was created from
149 | bundle = scn.GraphBundle([G], [corrmat])
150 | # Get the global measures
151 | # (note that this takes a bit of time because you're generating random
152 | # graphs)
153 | bundle.create_random_graphs(corrmat, n_rand, seed=edge_swap_seed)
154 | # Add the small world coefficient to global measures
155 | small_world = bundle.report_small_world(corrmat)
156 | for gname, G in bundle.items():
157 | G.graph['global_measures'].update(
158 | {"sw coeff against " + corrmat: small_world[gname]})
159 | global_df = bundle.report_global_measures()
160 | global_name = 'GlobalMeasures_{}_cost{:03.0f}.csv'.format(corrmat, cost)
161 | # Write out the global measures
162 | write_out_measures(
163 | global_df, output_dir, global_name, first_columns=[corrmat])
164 |
165 | # Get the rich club coefficients
166 | rc_df = bundle.report_rich_club()
167 | rc_name = 'rich_club_{}_cost{:03.0f}.csv'.format(corrmat, cost)
168 | # Write out the rich club coefficients
169 | write_out_measures(
170 | rc_df, output_dir, rc_name, first_columns=['degree', corrmat])
171 |
172 |
173 | if __name__ == "__main__":
174 |
175 | # Read in the command line arguments
176 | arg, parser = setup_argparser()
177 |
178 | # Now run the main function :)
179 | network_analysis_from_corrmat(
180 | arg.corr_mat_file,
181 | arg.names_file,
182 | arg.centroids_file,
183 | arg.output_dir,
184 | cost=arg.cost,
185 | n_rand=arg.n_rand,
186 | edge_swap_seed=arg.seed)
187 |
188 | # =============================================================================
189 | # Wooo! All done :)
190 | # =============================================================================
191 |
--------------------------------------------------------------------------------
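
A command-line sketch of the wrapper above, again with placeholder file names; it writes the NodalMeasures_*, GlobalMeasures_* and rich_club_* csv files into the given output directory:

    python scona/wrappers/network_analysis_from_corrmat.py \
        corrmat.txt names.txt centroids.txt network-analysis \
        --cost 10 --n_rand 1000 --seed 42
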
/setup.py:
--------------------------------------------------------------------------------
1 | from setuptools import setup, find_packages
2 | PACKAGES = find_packages()
3 |
4 | install_requires = [
5 | "pandas",
6 | "python-louvain==0.11",
7 | "numpy",
8 | "scikit-learn",
9 | "scipy",
10 | "networkx>=2.2",
11 | "seaborn",
12 | "forceatlas2",
13 | "nilearn==0.5.2"]
14 |
15 |
16 | if __name__ == '__main__':
17 | setup(
18 | name='scona',
19 | version='0.1dev',
20 | packages=PACKAGES,
21 | package_data={'': ['*.txt', '*.csv']},
22 | license='MIT license',
23 | install_requires=install_requires,
24 |         tests_require=['pytest'],
25 |         test_suite='tests',
26 | )
27 |
--------------------------------------------------------------------------------
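
Given this setup.py, a development install from a clone of the repository would typically look like:

    pip install -e .
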
/tests/.fixture_hash:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WhitakerLab/scona/bb29a4247f4d53e0a1e56459b3a908f2048b54c7/tests/.fixture_hash
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/WhitakerLab/scona/bb29a4247f4d53e0a1e56459b3a908f2048b54c7/tests/__init__.py
--------------------------------------------------------------------------------
/tests/graph_measures_test.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | import networkx as nx
3 | import numpy as np
4 | import scona.make_graphs as mkg
5 | import scona.graph_measures as gm
6 |
7 |
8 | class Partitioning(unittest.TestCase):
9 | @classmethod
10 | def setUpClass(self):
11 | self.karate = nx.karate_club_graph()
12 |
13 | self.totalpart = {x: [x] for x in list(self.karate.nodes)}
14 |         self.trivialpart = {0: [x for x in list(self.karate.nodes)]}
15 | self.nonpart = {0: [0]}
16 | n, m = gm.calc_nodal_partition(self.karate)
17 | self.bestpart_n = n
18 | self.bestpart_m = m
19 |
20 | self.nonbinary = nx.karate_club_graph()
21 | nx.set_edge_attributes(self.nonbinary, '0.5', name='weight')
22 |
23 | def throw_out_nonbinary_graph(self):
24 | with self.assertRaises(Exception):
25 | gm.calc_nodal_partition(self.nonbinary)
26 |
27 | def check_n_m_consistency(self):
28 | return
29 |
30 | def test_total_partition_pc(self):
31 | pc = gm.participation_coefficient(self.karate, self.totalpart)
32 | for x in pc.values():
33 | assert x == 1
34 |
35 | def test_total_partition_zs(self):
36 | zs = gm.z_score(self.karate, self.totalpart)
37 | for x in zs.values():
38 | assert x == 0
39 |
40 | def test_trivial_partition_pc(self):
41 |         pc = gm.participation_coefficient(self.karate, self.trivialpart)
42 | for x in pc.values():
43 | assert x == 0
44 |
45 | def test_trivial_partition_zs(self):
46 |         zs = gm.z_score(self.karate, self.trivialpart)
47 | karate_degrees = list(dict(self.karate.degree()).values())
48 | karate_degree = np.mean(karate_degrees)
49 | karate_std = np.std(karate_degrees)
50 | for node, score in zs.items():
51 | assert score == (self.karate.degree(node)
52 | - karate_degree)/karate_std
53 |
54 | def test_non_partition_pc(self):
55 | pc = gm.participation_coefficient(self.karate, self.nonpart)
56 | assert pc == {0: 1}
57 |
58 |
59 | def shortest_path_test():
60 | G = nx.complete_graph(6)
61 | sp = gm.shortest_path(G)
62 | assert sp == {x: 1 for x in G.nodes}
63 |
64 |
65 | class AnatomicalMeasures(unittest.TestCase):
66 | @classmethod
67 | def setUpClass(self):
68 | self.no_centroids = nx.karate_club_graph()
69 | self.identical_centroids = nx.karate_club_graph()
70 | mkg.assign_node_centroids(
71 | self.identical_centroids,
72 | [(-1, 0, 0) for x in self.no_centroids.nodes])
73 | self.opposing_centroids = nx.complete_graph(6)
74 | mkg.assign_node_centroids(
75 | self.opposing_centroids,
76 |             [((-1)**i, 0, 0) for i in self.opposing_centroids.nodes])
77 |
78 | def test_no_centroids_assign_distance(self):
79 | with self.assertRaises(Exception):
80 | gm.assign_nodal_distance(self.no_centroids)
81 |
82 | def test_no_centroids_assign_interhem(self):
83 | with self.assertRaises(Exception):
84 | gm.assign_interhem(self.no_centroids)
85 |
86 | def test_identical_centroids_assign_distance(self):
87 | gm.assign_nodal_distance(self.identical_centroids)
88 | assert (nx.get_edge_attributes(self.identical_centroids, 'euclidean')
89 | == {edge: 0 for edge in self.no_centroids.edges})
90 | assert (nx.get_node_attributes(self.identical_centroids, 'average_dist')
91 | == {node: 0 for node in self.no_centroids.nodes})
92 | assert (nx.get_node_attributes(self.identical_centroids, 'total_dist')
93 | == {node: 0 for node in self.no_centroids.nodes})
94 |
95 | def test_identical_centroids_assign_interhem(self):
96 | gm.assign_interhem(self.identical_centroids)
97 | assert (nx.get_edge_attributes(self.identical_centroids, 'interhem')
98 | == {edge: 0 for edge in self.no_centroids.edges})
99 | assert (nx.get_node_attributes(self.identical_centroids, 'interhem')
100 | == {node: 0 for node in self.no_centroids.nodes})
101 | assert (nx.get_node_attributes(self.identical_centroids, 'interhem_proportion')
102 | == {node: 0 for node in self.no_centroids.nodes})
103 |
104 | def test_opposing_centroids_assign_distance(self):
105 | gm.assign_nodal_distance(self.opposing_centroids)
106 | assert (nx.get_edge_attributes(self.opposing_centroids, 'euclidean')
107 | == {edge: (1+(-1)**(sum(edge)+1))
108 | for edge in self.opposing_centroids.edges})
109 | assert (nx.get_node_attributes(self.opposing_centroids, 'average_dist')
110 | == {node: 1.2 for node in self.opposing_centroids.nodes})
111 | assert (nx.get_node_attributes(self.opposing_centroids, 'total_dist')
112 | == {node: 6 for node in self.opposing_centroids.nodes})
113 |
114 | def test_opposing_centroids_assign_interhem(self):
115 | gm.assign_interhem(self.opposing_centroids)
116 | assert (nx.get_edge_attributes(self.opposing_centroids, 'interhem')
117 | == {edge: (1+(-1)**(sum(edge)+1))//2
118 | for edge in self.opposing_centroids.edges})
119 | assert (nx.get_node_attributes(self.opposing_centroids, 'interhem')
120 | == {node: 3 for node in self.opposing_centroids.nodes})
121 | assert (nx.get_node_attributes(self.opposing_centroids, 'interhem_proportion')
122 | == {node: 0.6 for node in self.opposing_centroids.nodes})
123 |
124 |
125 | # omit testing of calc_modularity or rich_club since these
126 | # are relabeled networkx measures
127 |
128 |
129 | class SmallWorlds(unittest.TestCase):
130 | @classmethod
131 | def setUpClass(self):
132 | self.watts_strogatz_2 = nx.watts_strogatz_graph(6, 4, 0)
133 | self.watts_strogatz_1 = nx.watts_strogatz_graph(6, 2, 0)
134 | self.watts_strogatz_random_2 = nx.watts_strogatz_graph(6, 4, 0.5)
135 |
136 | def test_watts_strogatz_1_no_small_world(self):
137 | assert (gm.small_world_coefficient(
138 | self.watts_strogatz_1, self.watts_strogatz_2)
139 | == 0)
140 | assert (gm.small_world_coefficient(
141 | self.watts_strogatz_1, self.watts_strogatz_random_2)
142 | == 0)
143 |
144 | def test_randomising_watts_strogatz_increases_small_worldness(self):
145 | assert (gm.small_world_coefficient(
146 | self.watts_strogatz_random_2, self.watts_strogatz_2)
147 | > 1)
148 |
149 |
150 | class GlobalMeasuresMethod(unittest.TestCase):
151 | @classmethod
152 | def setUpClass(self):
153 | self.karate = nx.karate_club_graph()
154 | self.measures_no_part = gm.calculate_global_measures(
155 | self.karate,
156 | partition=None)
157 | self.totalpart = {x: x for x in list(self.karate.nodes)}
158 | self.measures_part = gm.calculate_global_measures(
159 | self.karate,
160 | partition=self.totalpart)
161 | self.extra_measure = {'hat': 'cap'}
162 |
163 | def test_average_clustering(self):
164 | assert 'average_clustering' in self.measures_part
165 | assert 'average_clustering' in self.measures_no_part
166 |
167 | def test_average_shortest_path_length(self):
168 | assert 'average_shortest_path_length' in self.measures_part
169 | assert 'average_shortest_path_length' in self.measures_no_part
170 |
171 | def test_assortativity(self):
172 | assert 'assortativity' in self.measures_part
173 | assert 'assortativity' in self.measures_no_part
174 |
175 | def test_modularity(self):
176 | print(self.measures_no_part)
177 | print(self.measures_part)
178 | assert 'modularity' in self.measures_part
179 | assert 'modularity' not in self.measures_no_part
180 |
181 | def test_efficiency(self):
182 | assert 'efficiency' in self.measures_part
183 | assert 'efficiency' in self.measures_no_part
184 |
185 | def test_from_existing(self):
186 | assert (gm.calculate_global_measures(
187 | self.karate,
188 | partition=self.totalpart,
189 | existing_global_measures=self.measures_no_part)
190 | == self.measures_part)
191 | measures_with_extra = self.measures_no_part.copy()
192 | measures_with_extra.update(self.extra_measure)
193 | new_measures = self.measures_part.copy()
194 | new_measures.update(self.extra_measure)
195 | assert (gm.calculate_global_measures(
196 | self.karate,
197 | partition=self.totalpart,
198 | existing_global_measures=measures_with_extra)
199 | == new_measures)
200 |
--------------------------------------------------------------------------------
/tests/make_corr_matrices_test.py:
--------------------------------------------------------------------------------
1 | from scona.make_corr_matrices import create_residuals_df, \
2 | get_non_numeric_cols, create_corrmat
3 | from scona.stats_functions import residuals
4 | import pytest
5 | import pandas as pd
6 | import numpy as np
7 |
8 |
9 | @pytest.fixture
10 | def subject_array():
11 | return np.array([np.arange(x, x+4) for x in [7, 4, 1]])
12 |
13 |
14 | @pytest.fixture
15 | def subject_data(subject_array):
16 | columns = ['noggin_left', 'noggin_right', 'day of week', 'haircut']
17 | data = subject_array
18 | return pd.DataFrame(data, columns=columns)
19 |
20 |
21 | @pytest.fixture
22 | def subject_residuals():
23 | columns = ['noggin_left', 'noggin_right', 'day of week', 'haircut']
24 | data = np.array([[1, 0, -1], [0, -5, 5], [1, 2, 3], [300, 200, 100]]).T
25 | return pd.DataFrame(data, columns=columns)
26 |
27 |
28 | def test_non_numeric_cols(subject_residuals):
29 | df = subject_residuals
30 | assert get_non_numeric_cols(df).size == 0
31 | df['hats'] = 'stetson'
32 | assert get_non_numeric_cols(df) == np.array(['hats'])
33 |
34 |
35 | def test_create_residuals_df_covars_plural(subject_array, subject_data):
36 | names, covars = ['noggin_left', 'noggin_right'], ['day of week', 'haircut']
37 | array_resids = [residuals(subject_array[:, 2:].T,
38 | subject_array[:, i]) for i in [0, 1]]
39 | np.testing.assert_almost_equal(
40 | np.array(create_residuals_df(subject_data, names, covars)[names]),
41 | np.array(array_resids).T)
42 |
43 |
44 | def test_create_residuals_df_covars_singular(subject_array, subject_data):
45 | names, covars = ['noggin_left', 'noggin_right'], ['day of week']
46 | array_resids = [residuals(subject_array[:, 2:3].T,
47 | subject_array[:, i]) for i in [0, 1]]
48 | np.testing.assert_almost_equal(
49 | np.array(create_residuals_df(subject_data, names, covars)[names]),
50 | np.array(array_resids).T)
51 |
52 |
53 | def test_create_residuals_df_covars_none(subject_array, subject_data):
54 | names, covars = ['noggin_left', 'noggin_right'], []
55 | array_resids = [residuals(subject_array[:, 2:2].T, subject_array[:, i])
56 | for i in [0, 1]]
57 | np.testing.assert_almost_equal(
58 | np.array(create_residuals_df(subject_data, names, covars)[names]),
59 | np.array(array_resids).T)
60 |
61 |
62 | def test_create_corrmat_pearson(subject_residuals):
63 | df_res = subject_residuals
64 | names = ['noggin_left', 'noggin_right']
65 | np.testing.assert_almost_equal(
66 | np.array(create_corrmat(df_res, names)),
67 | np.array([[1, -0.5], [-0.5, 1]]))
68 |
69 |
70 | def test_create_corrmat_spearman(subject_residuals):
71 | df_res = subject_residuals
72 | names = ['noggin_left', 'noggin_right']
73 | np.testing.assert_almost_equal(
74 | np.array(create_corrmat(df_res, names, method='spearman')),
75 | np.array([[1, -0.5], [-0.5, 1]]))
76 |
--------------------------------------------------------------------------------
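
The functions exercised above chain together to build a covariate-corrected correlation matrix. A minimal sketch, with an invented DataFrame and column names:

    import numpy as np
    import pandas as pd
    from scona.make_corr_matrices import create_residuals_df, create_corrmat

    # toy data: two regional measures and one covariate for ten subjects
    df = pd.DataFrame(np.random.rand(10, 3),
                      columns=['noggin_left', 'noggin_right', 'age'])

    # regress the covariate out of each regional measure...
    df_res = create_residuals_df(df, ['noggin_left', 'noggin_right'], ['age'])

    # ...then correlate the corrected measures across subjects
    M = create_corrmat(df_res, ['noggin_left', 'noggin_right'],
                       method='pearson')
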
/tests/make_graphs_test.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import unittest
3 | import pandas as pd
4 | import networkx as nx
5 | import numpy as np
6 | import scona.make_graphs as mkg
7 |
8 |
9 | def symmetric_matrix_1():
10 | return np.array([[1+x, 2+x, 1+x] for x in [1, 4, 1]])
11 |
12 |
13 | def symmetric_df_1():
14 | return pd.DataFrame(symmetric_matrix_1(), index=['a', 'b', 'c'])
15 |
16 |
17 | def simple_weighted_graph():
18 | G = nx.Graph()
19 | nx.add_path(G, [1, 2], weight=2)
20 | nx.add_path(G, [1, 0], weight=3)
21 | nx.add_path(G, [2, 0], weight=5)
22 | return G
23 |
24 |
25 | def simple_anatomical_graph():
26 | G = simple_weighted_graph()
27 | mkg.assign_node_centroids(G,
28 | [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
29 | mkg.assign_node_names(G, ['a', 'b', 'c'])
30 | return G
31 |
32 |
33 | def em():
34 | return nx.algorithms.isomorphism.numerical_edge_match('weight', 1)
35 |
36 |
37 | def nm(exclude=[]):
38 | nm = ["name", "name_34", "name_68", "hemi",
39 | "centroids", "x", "y", "z",
40 | "hats", "socks"]
41 | return nx.algorithms.isomorphism.categorical_node_match(
42 | [x for x in nm if x not in exclude],
43 | [None for i in range(len([x for x in nm if x not in exclude]))])
44 |
45 |
46 | class AnatCopying(unittest.TestCase):
47 | # Test anatomical copying methods from make_graphs
48 | @classmethod
49 | def setUpClass(cls):
50 | cls.G = simple_weighted_graph()
51 |
52 | cls.Gname = simple_weighted_graph()
53 | nx.set_node_attributes(cls.Gname,
54 | {0: 'a', 1: 'b', 2: 'c'},
55 | name='name')
56 |
57 | cls.Gcentroids = simple_weighted_graph()
58 | nx.set_node_attributes(cls.Gcentroids,
59 | {0: (1, 0, 0), 1: (0, 1, 0), 2: (0, 0, 1)},
60 | name='centroids')
61 | nx.set_node_attributes(cls.Gcentroids,
62 | {0: 1, 1: 0, 2: 0},
63 | name='x')
64 | nx.set_node_attributes(cls.Gcentroids,
65 | {0: 0, 1: 1, 2: 0},
66 | name='y')
67 | nx.set_node_attributes(cls.Gcentroids,
68 | {0: 0, 1: 0, 2: 1},
69 | name='z')
70 |
71 | cls.H = simple_anatomical_graph()
72 |
73 | cls.J = nx.Graph()
74 | cls.J.add_nodes_from(cls.H.nodes)
75 | mkg.copy_anatomical_data(cls.J, cls.H)
76 | cls.K = simple_anatomical_graph()
77 | cls.K.remove_edges_from(cls.H.edges)
78 | cls.L = mkg.anatomical_copy(cls.H)
79 | cls.R = mkg.anatomical_copy(cls.H)
80 | nx.set_node_attributes(cls.R, 'stetson', name='hats')
81 |
82 | def test_assign_name_to_nodes(self):
83 | G1 = simple_weighted_graph()
84 | mkg.assign_node_names(G1,
85 | ['a', 'b', 'c'])
86 | assert nx.is_isomorphic(G1, self.Gname,
87 | edge_match=em(),
88 | node_match=nm(
89 | exclude=['centroids', 'y', 'x', 'z']))
90 |
91 | def test_assign_name_to_nodes_308_style(self):
92 | return
93 |
94 | def test_assign_node_centroids(self):
95 | G1 = simple_weighted_graph()
96 | mkg.assign_node_centroids(G1,
97 | [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
98 |
99 | assert nx.is_isomorphic(
100 | G1,
101 | self.Gcentroids,
102 | edge_match=em(),
103 | node_match=nm(exclude=['name', "name_34", "name_68", "hemi"]))
104 |
105 | def test_copy_anatomical_data(self):
106 |
107 | G2 = nx.Graph()
108 | mkg.copy_anatomical_data(G2, self.H)
109 | assert nx.is_isomorphic(G2, nx.Graph(),
110 | edge_match=em(),
111 | node_match=nm())
112 |
113 | assert nx.is_isomorphic(self.J, self.K,
114 | edge_match=em(),
115 | node_match=nm())
116 |
117 | G4 = simple_weighted_graph()
118 | mkg.copy_anatomical_data(G4, self.H)
119 | assert nx.is_isomorphic(G4, self.H,
120 | edge_match=em(),
121 | node_match=nm())
122 |
123 | def no_data_does_nothing(self):
124 | mkg.copy_anatomical_data(self.H, self.G)
125 | assert nx.is_isomorphic(self.H, simple_anatomical_graph(),
126 | edge_match=em(),
127 | node_match=nm())
128 |
129 | def copy_different_anatomical_keys(self):
130 | G5 = simple_weighted_graph()
131 | G6 = simple_weighted_graph()
132 | nx.set_node_attributes(G6, 'bowler', name='hats')
133 | mkg.copy_anatomical_data(G5, G6)
134 | assert nx.is_isomorphic(G5, simple_weighted_graph(),
135 | edge_match=em(),
136 | node_match=nm())
137 |
138 | mkg.copy_anatomical_data(
139 | G5, G6, nodal_keys=['hats', 'socks'])
140 | assert nx.is_isomorphic(G5, G6,
141 | edge_match=em(),
142 | node_match=nm())
143 |
144 | def test_anatomical_copy(self):
145 | assert nx.is_isomorphic(self.H, self.L,
146 | edge_match=em(),
147 | node_match=nm())
148 |
149 | def test_anatomical_copy_hats(self):
150 | # check hats not copied to P
151 | P = mkg.anatomical_copy(self.R)
152 | assert nx.is_isomorphic(self.H, P,
153 | edge_match=em(),
154 | node_match=nm())
155 | # check otherwise the same
156 | assert nx.is_isomorphic(self.R, P,
157 | edge_match=em(),
158 | node_match=nm(exclude=['hats']))
159 | # check hats copied if specified as an additional key
160 | new_keys = mkg.anatomical_node_attributes()
161 | new_keys.append('hats')
162 | Q = mkg.anatomical_copy(self.R, nodal_keys=new_keys)
163 | assert nx.is_isomorphic(self.R, Q,
164 | edge_match=em(),
165 | node_match=nm())
166 |
167 | def test_matchers(self):
168 | N = nx.Graph()
169 | N.add_nodes_from({0, 1, 2})
170 | assert mkg.is_nodal_match(self.G, N)
171 | assert mkg.is_nodal_match(self.H, N)
172 |
173 | def test_different_vertex_sets(self):
174 | M = nx.Graph()
175 | S = nx.karate_club_graph()
176 | assert not mkg.is_nodal_match(self.G, M)
177 | assert not mkg.is_nodal_match(self.G, S)
178 | assert not mkg.is_nodal_match(S, M)
179 |
180 | def test_key_matchings(self):
181 | assert not mkg.is_nodal_match(
182 | simple_weighted_graph(),
183 | simple_anatomical_graph(),
184 | keys=['x'])
185 | assert mkg.is_nodal_match(
186 | self.R,
187 | simple_anatomical_graph(),
188 | keys=['x'])
189 | assert not mkg.is_nodal_match(
190 | self.R,
191 | simple_anatomical_graph(),
192 | keys=['x', 'hats'])
193 |
194 | def check_anatomical_matches(self):
195 | assert mkg.is_anatomical_match(self.L, self.H)
196 | assert not mkg.is_anatomical_match(self.G, self.H)
197 |
198 | def check_anat_matches_with_hats(self):
199 | assert mkg.is_anatomical_match(self.H, self.R)
200 |
201 |
202 | def test_weighted_graph_from_matrix_isomorphic():
203 | G1 = mkg.weighted_graph_from_matrix(symmetric_matrix_1())
204 | G2 = simple_weighted_graph()
205 | assert nx.is_isomorphic(G1, G2, edge_match=em())
206 |
207 |
208 | def test_weighted_graph_from_df_isomorphic():
209 | G1 = mkg.weighted_graph_from_df(symmetric_df_1())
210 | G2 = simple_weighted_graph()
211 | assert nx.is_isomorphic(G1, G2, edge_match=em())
212 |
213 |
214 | def test_scale_weights():
215 | G1 = mkg.scale_weights(simple_weighted_graph())
216 | G2 = mkg.weighted_graph_from_matrix(symmetric_matrix_1()*-1)
217 | assert nx.is_isomorphic(G1, G2, edge_match=em())
218 |
219 |
220 | def test_threshold_graph_mst_too_large_exception():
221 | with pytest.raises(Exception):
222 | mkg.threshold_graph(simple_weighted_graph(), 30)
223 |
224 |
225 | def test_threshold_graph_mst_false():
226 | G1 = mkg.threshold_graph(simple_anatomical_graph(), 30, mst=False)
227 | G2 = nx.Graph()
228 | G2.add_nodes_from(G1.nodes)
229 |     G2.add_nodes_from(simple_anatomical_graph().nodes(data=True))
230 | G2.add_edge(0, 2)
231 | assert nx.is_isomorphic(G1, G2, edge_match=em(), node_match=nm())
232 |
233 |
234 | def test_threshold_graph_mst_true():
235 | G1 = mkg.threshold_graph(simple_anatomical_graph(), 70, mst=True)
236 | G2 = nx.Graph()
237 | G2.add_nodes_from(G1.nodes)
238 |     G2.add_nodes_from(simple_anatomical_graph().nodes(data=True))
239 | G2.add_edge(0, 2)
240 | G2.add_edge(0, 1)
241 | assert nx.is_isomorphic(G1, G2, edge_match=em(), node_match=nm())
242 |
243 |
244 | def test_graph_at_cost_df():
245 | G1 = mkg.graph_at_cost(symmetric_df_1(), 70)
246 | G2 = mkg.threshold_graph(simple_weighted_graph(), 70)
247 | assert nx.is_isomorphic(G1, G2, edge_match=em())
248 |
249 |
250 | def test_graph_at_cost_array():
251 | G1 = mkg.graph_at_cost(symmetric_matrix_1(), 70)
252 | G2 = mkg.threshold_graph(simple_weighted_graph(), 70)
253 | assert nx.is_isomorphic(G1, G2, edge_match=em())
254 |
255 | # Test random graph generation
256 | # This is non-deterministic, so we will check that the random graphs have
257 | # the properties we would like them to have. Namely, they:
258 | # * should be connected binary graphs
259 | # * should not be equal to the original graphs
260 | # * should have the same number of edges as the original graphs
261 | # * should have the same degree distribution as the original graphs
262 | # * generation should fail on particular (e.g. very small) graphs
263 |
264 |
265 | class RandomGraphs(unittest.TestCase):
266 | @classmethod
267 | def setUpClass(cls):
268 | cls.karate_club_graph = nx.karate_club_graph()
269 | cls.karate_club_graph_rand = mkg.random_graph(cls.karate_club_graph)
270 | cls.karate_club_graph_list = mkg.get_random_graphs(cls.karate_club_graph, n=5)
271 | cls.lattice = nx.grid_graph(dim=[5, 5, 5], periodic=True)
272 | cls.lattice_rand = mkg.random_graph(cls.lattice)
273 | cls.lattice_list = mkg.get_random_graphs(cls.lattice, n=5)
274 | G = nx.Graph()
275 | nx.add_path(G, [0, 1, 2])
276 | cls.short_path = G
277 |
278 | def test_random_graph_type(self):
279 | self.assertTrue(isinstance(self.karate_club_graph_rand, nx.Graph))
280 | self.assertTrue(isinstance(self.lattice_rand, nx.Graph))
281 |
282 | def test_random_graph_connected(self):
283 | self.assertTrue(nx.is_connected(self.karate_club_graph_rand))
284 | self.assertTrue(nx.is_connected(self.lattice_rand))
285 |
286 | def test_random_graph_edges(self):
287 | self.assertEqual(self.karate_club_graph_rand.size(), self.karate_club_graph.size())
288 | self.assertEqual(self.lattice_rand.size(), self.lattice.size())
289 |
290 | def test_random_graph_degree_distribution(self):
291 | self.assertEqual(dict(self.karate_club_graph_rand.degree()),
292 | dict(self.karate_club_graph.degree()))
293 | self.assertEqual(dict(self.lattice_rand.degree()),
294 | dict(self.lattice.degree()))
295 |
296 | def test_random_graph_makes_changes(self):
297 | self.assertNotEqual(self.karate_club_graph, self.karate_club_graph_rand)
298 | self.assertNotEqual(self.lattice, self.lattice_rand)
299 |
300 | def test_random_graph_fail_short_path(self):
301 | with self.assertRaises(Exception):
302 | mkg.random_graph(self.short_path)
303 |
304 | def test_get_random_graphs(self):
305 | from itertools import combinations
306 | self.assertEqual(len(self.karate_club_graph_list), 5)
307 | self.assertEqual(len(self.lattice_list), 5)
308 | k = self.karate_club_graph_list
309 | k.append(self.karate_club_graph)
310 | for a, b in combinations(k, 2):
311 | self.assertNotEqual(a, b)
312 | L = self.lattice_list
313 | L.append(self.lattice)
314 | for a, b in combinations(L, 2):
315 | self.assertNotEqual(a, b)
316 |
--------------------------------------------------------------------------------
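
A usage sketch of the thresholding and random-graph functions tested above; the random symmetric matrix simply stands in for a real correlation matrix:

    import numpy as np
    import scona.make_graphs as mkg

    # a random symmetric matrix with an empty diagonal
    rng = np.random.RandomState(42)
    A = rng.rand(20, 20)
    M = (A + A.T) / 2
    np.fill_diagonal(M, 0)

    # binary graph keeping the strongest 20% of possible edges
    # (a minimum spanning tree is included by default)
    G = mkg.graph_at_cost(M, 20)

    # five degree-preserving random graphs for comparison
    rand_list = mkg.get_random_graphs(G, n=5)
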
/tests/regression_test.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | from tests.write_fixtures import generate_fixture_hashes, unpickle_hash
3 |
4 |
5 | class FixturesTest(unittest.TestCase):
6 |
7 | # ------------------- setup and teardown ---------------------------
8 | @classmethod
9 | def setUpClass(cls):
10 | cls.hash_dict_original = unpickle_hash()
11 | print('\nin set up - this takes about 80 secs')
12 | cls.hash_dict_new = generate_fixture_hashes()
13 | # define dictionary keys for individual files for checking
14 | folder = 'temporary_test_fixtures'
15 | cls.corrmat = folder + '/corrmat_file.txt'
16 | cls.gm = (folder +
17 | '/network-analysis/GlobalMeasures_corrmat_file_cost010.csv')
18 | cls.lm = (folder +
19 | '/network-analysis/NodalMeasures_corrmat_file_cost010.csv')
20 | cls.rich = (folder +
21 | '/network-analysis/rich_club_corrmat_file_cost010.csv')
22 |
23 | # --------------------------- Tests --------------------------------
24 |     # Each of these tests checks that our newly generated version of
25 | # file_x matches the fixture version
26 |
27 | def test_corrmat_matches_fixture(self):
28 | # test new correlation matrix against fixture
29 | print('\ntesting new correlation matrix against fixture')
30 | self.assertEqual(self.hash_dict_new[self.corrmat],
31 | self.hash_dict_original[self.corrmat])
32 |
33 | def test_lm_against_fixture(self):
34 | # test new local measures against fixture
35 | print('\ntesting new nodal measures against fixture')
36 | self.assertEqual(self.hash_dict_new[self.lm],
37 | self.hash_dict_original[self.lm])
38 |
39 | def test_gm_against_fixture(self):
40 | # test new global measures against fixture
41 | print('\ntesting new global measures against fixture')
42 | self.assertEqual(self.hash_dict_new[self.gm],
43 | self.hash_dict_original[self.gm])
44 |
45 | def test_rich_against_fixture(self):
46 | # test rich club against fixture
47 | print('\ntesting rich club against fixture')
48 | self.assertEqual(self.hash_dict_new[self.rich],
49 | self.hash_dict_original[self.rich])
50 |
51 |
52 | if __name__ == '__main__':
53 | unittest.main()
54 |
--------------------------------------------------------------------------------
/tests/write_fixtures.py:
--------------------------------------------------------------------------------
1 | # -------------------------- Write fixtures ---------------------------
2 | # To regression test our wrappers we need examples. This script
3 | # generates files. We save these files once, and regression_test.py
4 | # re-generates these files and tests that they are identical to the
5 | # presaved examples (fixtures). If they are not identical it raises
6 | # an error.
7 | #
8 | # The point of this is to check that the functionality of these wrappers
9 | # stays the same throughout the changes we make to scona.
10 | #
11 | # Currently the functionality of write_fixtures is to generate corrmat
12 | # and network_analysis data via the functions
13 | # corrmat_from_regionalmeasures and network_analysis_from_corrmat.
14 | # ---------------------------------------------------------------------
15 | import os
16 | import scona as scn
17 | import scona.datasets as datasets
18 |
19 |
20 | def recreate_correlation_matrix_fixture(folder):
21 | # generate a correlation matrix in the given folder using
22 | # the Whitaker_Vertes dataset
23 | regionalmeasures, names, covars, centroids = (
24 | datasets.NSPN_WhitakerVertes_PNAS2016._data())
25 | corrmat_path = os.path.join(folder, 'corrmat_file.txt')
26 | scn.wrappers.corrmat_from_regionalmeasures(
27 | regionalmeasures,
28 | names,
29 | corrmat_path)
30 |
31 |
32 | def recreate_network_analysis_fixture(folder, corrmat_path):
33 |     # generate network analysis in the given folder using the
34 |     # data in example_data and the correlation matrix given
35 |     # by corrmat_path
36 | regionalmeasures, names, covars, centroids = (
37 | datasets.NSPN_WhitakerVertes_PNAS2016._data())
38 | # It is necessary to specify a random seed because
39 | # network_analysis_from_corrmat generates random graphs to
40 | # calculate global measures
41 | scn.wrappers.network_analysis_from_corrmat(
42 | corrmat_path,
43 | names,
44 | centroids,
45 | os.path.join(os.getcwd(), folder, 'network-analysis'),
46 | cost=10,
47 | n_rand=10, # this is not a reasonable
48 | # value for n, we generate only 10 random
49 | # graphs to save time
50 | edge_swap_seed=2984
51 | )
52 |
53 |
54 | def write_fixtures(folder='temporary_test_fixtures'):
55 |     # Run the functions corrmat_from_regionalmeasures and
56 |     # network_analysis_from_corrmat to save the corrmat in the given folder
57 |     # ----------------------------------------------------------------
58 | # if the folder does not exist, create it
59 | if not os.path.isdir(os.path.join(os.getcwd(), folder)):
60 | os.makedirs(os.path.join(os.getcwd(), folder))
61 | # generate and save the correlation matrix
62 | print("generating new correlation matrix")
63 | recreate_correlation_matrix_fixture(folder)
64 | # generate and save the network analysis
65 | print("generating new network analysis")
66 | corrmat_path = os.path.join(folder, 'corrmat_file.txt')
67 | recreate_network_analysis_fixture(folder, corrmat_path)
68 |
69 |
70 | def delete_fixtures(folder):
71 | import shutil
72 | print('\ndeleting temporary files')
73 |     shutil.rmtree(os.path.join(os.getcwd(), folder))
74 |
75 |
76 | def hash_folder(folder='temporary_test_fixtures'):
77 | hashes = {}
78 | for path, directories, files in os.walk(folder):
79 | for file in sorted(files):
80 | hashes[os.path.join(path, file)] = hash_file(
81 | os.path.join(path, file))
82 | for dir in sorted(directories):
83 | hashes.update(hash_folder(os.path.join(path, dir)))
84 | break
85 | return hashes
86 |
87 |
88 | def hash_file(filename):
89 | import hashlib
90 | m = hashlib.sha256()
91 | with open(filename, 'rb') as f:
92 | while True:
93 | b = f.read(2**10)
94 | if not b:
95 | break
96 | m.update(b)
97 | return m.hexdigest()
98 |
99 |
100 | def generate_fixture_hashes(folder='temporary_test_fixtures'):
101 | # generate the fixtures
102 | write_fixtures(folder=folder)
103 | # calculate the hash
104 | hash_dict = hash_folder(folder=folder)
105 | # delete the new files
106 |     delete_fixtures(folder)
107 | # return hash
108 | return hash_dict
109 |
110 |
111 | def pickle_hash(hash_dict):
112 | import pickle
113 | with open("tests/.fixture_hash", 'wb') as f:
114 | pickle.dump(hash_dict, f)
115 |
116 |
117 | def unpickle_hash():
118 | import pickle
119 | # import fixture relevant to the current python, networkx versions
120 | print('loading test fixtures')
121 | with open("tests/.fixture_hash", "rb") as f:
122 | pickle_file = pickle.load(f)
123 | return pickle_file
124 |
125 |
126 | if __name__ == '__main__':
127 | if (input("Are you sure you want to update scona's test fixtures? (y/n)")
128 | == 'y'):
129 | hash_dict = generate_fixture_hashes()
130 | pickle_hash(hash_dict)
131 |
--------------------------------------------------------------------------------
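
In practice the fixtures are refreshed and checked from the repository root; write_fixtures.py asks for confirmation before overwriting the saved hashes:

    # deliberately update the saved fixture hashes
    python tests/write_fixtures.py

    # re-generate the outputs and compare them to the saved hashes
    python -m pytest tests/regression_test.py
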
/tutorials/prepare_data.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Getting your data ready"
8 | ]
9 | },
10 | {
11 | "cell_type": "markdown",
12 | "metadata": {},
13 | "source": [
14 | "# scona\n",
15 | "\n",
16 | "scona is a tool to perform network analysis over correlation networks of brain regions. \n",
17 | "This tutorial will go through the basic functionality of scona, taking us from our inputs (a matrix of structural regional measures over subjects) to a report of local network measures for each brain region, and network level comparisons to a cohort of random graphs of the same degree. "
18 | ]
19 | },
20 | {
21 | "cell_type": "code",
22 | "execution_count": 1,
23 | "metadata": {},
24 | "outputs": [],
25 | "source": [
26 | "import numpy as np\n",
27 | "import networkx as nx\n",
28 | "import scona as scn\n",
29 | "import scona.datasets as datasets"
30 | ]
31 | },
32 | {
33 | "cell_type": "markdown",
34 | "metadata": {},
35 | "source": [
36 | "### Importing data\n",
37 | "\n",
38 | "A scona analysis starts with four inputs.\n",
39 | "* __regional_measures__\n",
40 | " A pandas DataFrame with subjects as rows. The columns should include structural measures for each brain region, as well as any subject-wise covariates. \n",
41 | "* __names__\n",
42 |     "    A list of names of the brain regions. This will be used to specify which columns of the __regional_measures__ matrix you want to correlate over.\n",
43 | "* __covars__ _(optional)_ \n",
44 |     "    A list of your covariates. This will be used to specify which columns of __regional_measures__ you wish to correct for. \n",
45 | "* __centroids__\n",
46 |     "    A list of tuples representing the Cartesian coordinates of brain regions. This list should be in the same order as the list of brain regions to accurately assign coordinates to regions. The coordinates are expected to obey the convention that the x=0 plane separates the left and right hemispheres of the brain. "
47 | ]
48 | },
49 | {
50 | "cell_type": "code",
51 | "execution_count": 2,
52 | "metadata": {
53 | "scrolled": true
54 | },
55 | "outputs": [
56 | {
57 | "data": {
58 | "text/html": [
59 | "