├── basic_configuration.ipynb ├── basic_data_input.ipynb ├── basic_data_input_bids.ipynb ├── basic_data_output.ipynb ├── basic_error_and_crashes.ipynb ├── basic_function_nodes.ipynb ├── basic_graph_visualization.ipynb ├── basic_import_workflows.ipynb ├── basic_interfaces.ipynb ├── basic_iteration.ipynb ├── basic_joinnodes.ipynb ├── basic_mapnodes.ipynb ├── basic_model_specification.ipynb ├── basic_nodes.ipynb ├── basic_plugins.ipynb ├── basic_workflow.ipynb ├── example_1stlevel.ipynb ├── example_2ndlevel.ipynb ├── example_metaflow.ipynb ├── example_normalize.ipynb ├── example_preprocessing.ipynb ├── introduction_dataset.ipynb ├── introduction_docker.ipynb ├── introduction_jupyter-notebook.ipynb ├── introduction_nipype.ipynb ├── introduction_python.ipynb ├── resources_help.ipynb ├── resources_installation.ipynb ├── resources_python_cheat_sheet.ipynb ├── resources_resources.ipynb ├── y_index_with_advanced_and_developer_section.ipynb ├── z_advanced_caching.ipynb ├── z_advanced_commandline.ipynb ├── z_advanced_databases.ipynb ├── z_advanced_debug.ipynb ├── z_advanced_export_workflow.ipynb ├── z_advanced_resources_and_profiling.ipynb ├── z_development_github.ipynb ├── z_development_interface.ipynb └── z_development_report_issue.ipynb /basic_configuration.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# Execution Configuration Options\n", 11 | "\n", 12 | "Nipype gives you a lot of freedom in how to create workflows, but their execution relies on many default parameters, all of which you are of course free to change.\n", 13 | "\n", 14 | "Nipype looks for the configuration options in the local folder under the name ``nipype.cfg`` and in ``~/.nipype/nipype.cfg`` (in this order). The options can be divided into **Logging** and **Execution** options. 
A few of the possible options are the following:\n", 15 | "\n", 16 | "### Logging\n", 17 | "\n", 18 | "- **workflow_level**: How detailed the logs regarding the workflow should be\n", 19 | "- **log_to_file**: Indicates whether logging should also send the output to a file\n", 20 | "\n", 21 | "### Execution\n", 22 | "\n", 23 | "- **stop_on_first_crash**: Should the workflow stop upon the first node crashing or try to execute as many nodes as possible?\n", 24 | "- **remove_unnecessary_outputs**: This will remove any interface outputs not needed by the workflow. If the required outputs from a node change, rerunning the workflow will rerun the node. Outputs of leaf nodes (nodes whose outputs are not connected to any other nodes) will never be deleted, independent of this parameter.\n", 25 | "- **use_relative_paths**: Should the paths stored in results (and used to look for inputs) be relative or absolute? Relative paths allow moving the whole working directory around but may cause problems with symlinks. \n", 26 | "- **job_finished_timeout**: When batch jobs are submitted through SGE/PBS/Condor, they could be killed externally. Nipype checks to see if a results file exists to determine if the node has completed. This timeout determines for how long this check is done after a job finish is detected. (float in seconds; default value: 5)\n", 27 | "- **poll_sleep_duration**: This controls how long the job submission loop will sleep between submitting all pending jobs and checking for job completion. To be nice to cluster schedulers, the default is set to 2 seconds.\n", 28 | "\n", 29 | "\n", 30 | "For the full list, see [Configuration File](http://nipype.readthedocs.io/en/latest/users/config_file.html)." 31 | ] 32 | }, 33 | { 34 | "cell_type": "markdown", 35 | "metadata": { 36 | "deletable": true, 37 | "editable": true 38 | }, 39 | "source": [ 40 | "# Global, workflow & node level\n", 41 | "\n", 42 | "The configuration options can be changed globally (i.e. 
for all workflows), for just a workflow, or for just a node. The implementations look as follows:" 43 | ] 44 | }, 45 | { 46 | "cell_type": "markdown", 47 | "metadata": { 48 | "deletable": true, 49 | "editable": true 50 | }, 51 | "source": [ 52 | "### At the global level:" 53 | ] 54 | }, 55 | { 56 | "cell_type": "code", 57 | "execution_count": null, 58 | "metadata": { 59 | "collapsed": false, 60 | "deletable": true, 61 | "editable": true 62 | }, 63 | "outputs": [], 64 | "source": [ 65 | "from nipype import config, logging\n", 66 | "\n", 67 | "config_dict={'execution': {'remove_unnecessary_outputs': 'true',\n", 68 | " 'keep_inputs': 'false',\n", 69 | " 'poll_sleep_duration': '60',\n", 70 | " 'stop_on_first_rerun': 'false',\n", 71 | " 'hash_method': 'timestamp',\n", 72 | " 'local_hash_check': 'true',\n", 73 | " 'create_report': 'true',\n", 74 | " 'crashdump_dir': '/home/user/crash_folder',\n", 75 | " 'use_relative_paths': 'false',\n", 76 | " 'job_finished_timeout': '5'},\n", 77 | " 'logging': {'workflow_level': 'INFO',\n", 78 | " 'filemanip_level': 'INFO',\n", 79 | " 'interface_level': 'INFO',\n", 80 | " 'log_directory': '/home/user/log_folder',\n", 81 | " 'log_to_file': 'true'}}\n", 82 | "config.update_config(config_dict)\n", 83 | "logging.update_logging(config)" 84 | ] 85 | }, 86 | { 87 | "cell_type": "markdown", 88 | "metadata": { 89 | "deletable": true, 90 | "editable": true 91 | }, 92 | "source": [ 93 | "### At the workflow level:" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "metadata": { 100 | "collapsed": true, 101 | "deletable": true, 102 | "editable": true 103 | }, 104 | "outputs": [], 105 | "source": [ 106 | "# Change execution parameters\n", 107 | "wf.config['execution']['stop_on_first_crash'] = 'true'\n", 108 | "\n", 109 | "# Change logging parameters\n", 110 | "wf.config['logging'] = {'workflow_level' : 'DEBUG',\n", 111 | " 'filemanip_level' : 'DEBUG',\n", 112 | " 'interface_level' : 'DEBUG',\n", 113 | " 
'log_to_file' : 'True',\n", 114 | " 'log_directory' : '/home/user/log_folder'}" 115 | ] 116 | }, 117 | { 118 | "cell_type": "markdown", 119 | "metadata": { 120 | "deletable": true, 121 | "editable": true 122 | }, 123 | "source": [ 124 | "### At the node level:" 125 | ] 126 | }, 127 | { 128 | "cell_type": "code", 129 | "execution_count": null, 130 | "metadata": { 131 | "collapsed": true, 132 | "deletable": true, 133 | "editable": true 134 | }, 135 | "outputs": [], 136 | "source": [ 137 | "bet.config = {'execution': {'keep_unnecessary_outputs': 'false'}}" 138 | ] 139 | } 140 | ], 141 | "metadata": { 142 | "anaconda-cloud": {}, 143 | "kernelspec": { 144 | "display_name": "Python [default]", 145 | "language": "python", 146 | "name": "python2" 147 | }, 148 | "language_info": { 149 | "codemirror_mode": { 150 | "name": "ipython", 151 | "version": 2 152 | }, 153 | "file_extension": ".py", 154 | "mimetype": "text/x-python", 155 | "name": "python", 156 | "nbconvert_exporter": "python", 157 | "pygments_lexer": "ipython2", 158 | "version": "2.7.13" 159 | } 160 | }, 161 | "nbformat": 4, 162 | "nbformat_minor": 0 163 | } 164 | -------------------------------------------------------------------------------- /basic_data_input.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# Data Input\n", 11 | "\n", 12 | "To do any computation, you need to have data. Getting the data in the framework of a workflow is therefore the first step of every analysis. Nipype provides many different modules to grab or select the data:\n", 13 | "\n", 14 | " DataFinder\n", 15 | " DataGrabber\n", 16 | " FreeSurferSource\n", 17 | " JSONFileGrabber\n", 18 | " S3DataGrabber\n", 19 | " SSHDataGrabber\n", 20 | " SelectFiles\n", 21 | " XNATSource\n", 22 | "\n", 23 | "This tutorial will only cover some of them. 
For the rest, see the section [``interfaces.io``](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html) on the official homepage." 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": { 29 | "deletable": true, 30 | "editable": true 31 | }, 32 | "source": [ 33 | "# Dataset structure\n", 34 | "\n", 35 | "To be able to import data, you first need to be aware of the structure of your dataset. The dataset for this tutorial follows the BIDS standard and looks as follows:\n", 36 | "\n", 37 | " ds102\n", 38 | " ├── CHANGES\n", 39 | " ├── dataset_description.json\n", 40 | " ├── participants.tsv\n", 41 | " ├── README\n", 42 | " ├── sub-01\n", 43 | " │   ├── anat\n", 44 | " │   │   └── sub-01_T1w.nii.gz\n", 45 | " │   └── func\n", 46 | " │   ├── sub-01_task-flanker_run-1_bold.nii.gz\n", 47 | " │   ├── sub-01_task-flanker_run-1_events.tsv\n", 48 | " │   ├── sub-01_task-flanker_run-2_bold.nii.gz\n", 49 | " │   └── sub-01_task-flanker_run-2_events.tsv\n", 50 | " ├── sub-02\n", 51 | " │   ├── anat\n", 52 | " │   │   └── sub-02_T1w.nii.gz\n", 53 | " │   └── func\n", 54 | " │   ├── sub-02_task-flanker_run-1_bold.nii.gz\n", 55 | " │   ├── sub-02_task-flanker_run-1_events.tsv\n", 56 | " │   ├── sub-02_task-flanker_run-2_bold.nii.gz\n", 57 | " │   └── sub-02_task-flanker_run-2_events.tsv\n", 58 | " ├── sub-03\n", 59 | " │   ├── anat\n", 60 | " │   │   └── sub-03_T1w.nii.gz\n", 61 | " │   └── func\n", 62 | " │   ├── sub-03_task-flanker_run-1_bold.nii.gz\n", 63 | " │   ├── sub-03_task-flanker_run-1_events.tsv\n", 64 | " │   ├── sub-03_task-flanker_run-2_bold.nii.gz\n", 65 | " │   └── sub-03_task-flanker_run-2_events.tsv\n", 66 | " ├── ...\n", 67 | " .\n", 68 | " └── task-flanker_bold.json" 69 | ] 70 | }, 71 | { 72 | "cell_type": "markdown", 73 | "metadata": { 74 | "deletable": true, 75 | "editable": true 76 | }, 77 | "source": [ 78 | "# DataGrabber\n", 79 | "\n", 80 | "``DataGrabber`` is a generic data grabber module that 
wraps around ``glob`` to select your neuroimaging data in an intelligent way. As an example, let's assume we want to grab the anatomical and functional images of a certain subject.\n", 81 | "\n", 82 | "First, we need to create the ``DataGrabber`` node. This node needs to have some input fields for all dynamic parameters (e.g. subject identifier, task identifier), as well as the two desired output fields ``anat`` and ``func``." 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": { 89 | "collapsed": false, 90 | "deletable": true, 91 | "editable": true 92 | }, 93 | "outputs": [], 94 | "source": [ 95 | "from nipype import DataGrabber, Node\n", 96 | "\n", 97 | "# Create DataGrabber node\n", 98 | "dg = Node(DataGrabber(infields=['subject_id', 'task_id'],\n", 99 | " outfields=['anat', 'func']),\n", 100 | " name='datagrabber')\n", 101 | "\n", 102 | "# Location of the dataset folder\n", 103 | "dg.inputs.base_directory = '/data/ds102'\n", 104 | "\n", 105 | "# Necessary default parameters\n", 106 | "dg.inputs.template = '*'\n", 107 | "dg.inputs.sort_filelist = True" 108 | ] 109 | }, 110 | { 111 | "cell_type": "markdown", 112 | "metadata": { 113 | "deletable": true, 114 | "editable": true 115 | }, 116 | "source": [ 117 | "Second, we know that the two files we desire are at the following locations:\n", 118 | "\n", 119 | " anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz\n", 120 | " func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz\n", 121 | "\n", 122 | "We see that the two paths differ between subjects and runs in only two dynamic parameters:\n", 123 | "\n", 124 | " subject_id: in this case 'sub-01'\n", 125 | " task_id: in this case 1\n", 126 | "\n", 127 | "This means that we can rewrite the paths as follows:\n", 128 | "\n", 129 | " anat = /data/ds102/[subject_id]/anat/[subject_id]_T1w.nii.gz\n", 130 | " func = /data/ds102/[subject_id]/func/[subject_id]_task-flanker_run-[task_id]_bold.nii.gz\n", 131 | "\n", 132 | 
"Therefore, we need the parameter ``subject_id`` for the anatomical image and the parameters ``subject_id`` and ``task_id`` for the functional image. In the context of DataGrabber, this is specified as follows:" 133 | ] 134 | }, 135 | { 136 | "cell_type": "code", 137 | "execution_count": null, 138 | "metadata": { 139 | "collapsed": true, 140 | "deletable": true, 141 | "editable": true 142 | }, 143 | "outputs": [], 144 | "source": [ 145 | "dg.inputs.template_args = {'anat': [['subject_id']],\n", 146 | " 'func': [['subject_id', 'task_id']]}" 147 | ] 148 | }, 149 | { 150 | "cell_type": "markdown", 151 | "metadata": { 152 | "deletable": true, 153 | "editable": true 154 | }, 155 | "source": [ 156 | "Now comes the most important part of DataGrabber. We need to specify the template structure to find the specific data. This can be done as follows." 157 | ] 158 | }, 159 | { 160 | "cell_type": "code", 161 | "execution_count": null, 162 | "metadata": { 163 | "collapsed": true, 164 | "deletable": true, 165 | "editable": true 166 | }, 167 | "outputs": [], 168 | "source": [ 169 | "dg.inputs.field_template = {'anat': '%s/anat/*_T1w.nii.gz',\n", 170 | " 'func': '%s/func/*run-%d_bold.nii.gz'}" 171 | ] 172 | }, 173 | { 174 | "cell_type": "markdown", 175 | "metadata": { 176 | "deletable": true, 177 | "editable": true 178 | }, 179 | "source": [ 180 | "You'll notice that we use ``%s``, ``%d`` and ``*`` as placeholders in the data paths. ``%s`` is a placeholder for a string and is filled out by ``subject_id``. ``%d`` is a placeholder for an integer number and is filled out by ``task_id``. ``*`` is used as a wild card, i.e. a placeholder for any possible string combination. This is everything needed to set up the ``DataGrabber`` node." 181 | ] 182 | }, 183 | { 184 | "cell_type": "markdown", 185 | "metadata": { 186 | "deletable": true, 187 | "editable": true 188 | }, 189 | "source": [ 190 | "Now it is up to you how you want to feed the dynamic parameters into the node. 
You can either do this by using another node (e.g. ``IdentityInterface``) and feeding ``subject_id`` and ``task_id`` as connections to the ``DataGrabber`` node, or by specifying them directly as node inputs." 191 | ] 192 | }, 193 | { 194 | "cell_type": "code", 195 | "execution_count": null, 196 | "metadata": { 197 | "collapsed": false, 198 | "deletable": true, 199 | "editable": true 200 | }, 201 | "outputs": [], 202 | "source": [ 203 | "# Using the IdentityInterface\n", 204 | "from nipype import IdentityInterface\n", 205 | "infosource = Node(IdentityInterface(fields=['subject_id', 'task_id']),\n", 206 | " name=\"infosource\")\n", 207 | "infosource.inputs.task_id = 1\n", 208 | "subject_list = ['sub-01',\n", 209 | " 'sub-02',\n", 210 | " 'sub-03',\n", 211 | " 'sub-04',\n", 212 | " 'sub-05']\n", 213 | "infosource.iterables = [('subject_id', subject_list)]" 214 | ] 215 | }, 216 | { 217 | "cell_type": "markdown", 218 | "metadata": { 219 | "deletable": true, 220 | "editable": true 221 | }, 222 | "source": [ 223 | "Now you only have to connect ``infosource`` with your ``DataGrabber`` and run the workflow to iterate over all five subjects." 
224 | ] 225 | }, 226 | { 227 | "cell_type": "markdown", 228 | "metadata": { 229 | "deletable": true, 230 | "editable": true 231 | }, 232 | "source": [ 233 | "If you specify the inputs to the ``DataGrabber`` node directly, you can do this as follows:" 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": null, 239 | "metadata": { 240 | "collapsed": true, 241 | "deletable": true, 242 | "editable": true 243 | }, 244 | "outputs": [], 245 | "source": [ 246 | "# Specifying the input fields of DataGrabber directly\n", 247 | "dg.inputs.subject_id = 'sub-01'\n", 248 | "dg.inputs.task_id = 1" 249 | ] 250 | }, 251 | { 252 | "cell_type": "markdown", 253 | "metadata": { 254 | "deletable": true, 255 | "editable": true 256 | }, 257 | "source": [ 258 | "Now let's run the ``DataGrabber`` node and look at the output:" 259 | ] 260 | }, 261 | { 262 | "cell_type": "code", 263 | "execution_count": null, 264 | "metadata": { 265 | "collapsed": false, 266 | "deletable": true, 267 | "editable": true 268 | }, 269 | "outputs": [ 270 | { 271 | "name": "stdout", 272 | "output_type": "stream", 273 | "text": [ 274 | "170301-21:53:31,59 workflow INFO:\n", 275 | "\t Executing node datagrabber in dir: /tmp/tmp6AloiV/datagrabber\n", 276 | "170301-21:53:31,84 workflow INFO:\n", 277 | "\t Runtime memory and threads stats unavailable\n", 278 | "\n", 279 | "anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz\n", 280 | "func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz\n", 281 | "\n" 282 | ] 283 | } 284 | ], 285 | "source": [ 286 | "print dg.run().outputs" 287 | ] 288 | }, 289 | { 290 | "cell_type": "markdown", 291 | "metadata": { 292 | "deletable": true, 293 | "editable": true 294 | }, 295 | "source": [ 296 | "# SelectFiles\n", 297 | "\n", 298 | "`SelectFiles` is a more flexible alternative to `DataGrabber`. It uses the {}-based string formatting syntax to plug values into string templates and collect the data. 
These templates can also be combined with glob wild cards. The field names in the formatting template (i.e. the terms in braces) will become input fields on the interface, and the keys in the templates dictionary will form the output fields.\n", 299 | "\n", 300 | "Let's focus again on the data we want to import:\n", 301 | "\n", 302 | " anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz\n", 303 | " func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz\n", 304 | " \n", 305 | "Now, we can replace those paths with the corresponding {}-based template strings.\n", 306 | "\n", 307 | " anat = /data/ds102/{subject_id}/anat/{subject_id}_T1w.nii.gz\n", 308 | " func = /data/ds102/{subject_id}/func/{subject_id}_task-flanker_run-{task_id}_bold.nii.gz\n", 309 | "\n", 310 | "What would this look like as a `SelectFiles` node?" 311 | ] 312 | }, 313 | { 314 | "cell_type": "code", 315 | "execution_count": null, 316 | "metadata": { 317 | "collapsed": false, 318 | "deletable": true, 319 | "editable": true 320 | }, 321 | "outputs": [], 322 | "source": [ 323 | "from nipype import SelectFiles, Node\n", 324 | "\n", 325 | "# String template with {}-based strings\n", 326 | "templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',\n", 327 | " 'func': '{subject_id}/func/{subject_id}_task-flanker_run-{task_id}_bold.nii.gz'}\n", 328 | "\n", 329 | "# Create SelectFiles node\n", 330 | "sf = Node(SelectFiles(templates),\n", 331 | " name='selectfiles')\n", 332 | "\n", 333 | "# Location of the dataset folder\n", 334 | "sf.inputs.base_directory = '/data/ds102'\n", 335 | "\n", 336 | "# Feed {}-based placeholder strings with values\n", 337 | "sf.inputs.subject_id = 'sub-01'\n", 338 | "sf.inputs.task_id = '1'" 339 | ] 340 | }, 341 | { 342 | "cell_type": "markdown", 343 | "metadata": { 344 | "deletable": true, 345 | "editable": true 346 | }, 347 | "source": [ 348 | "Let's check if we get what we wanted." 
349 | ] 350 | }, 351 | { 352 | "cell_type": "code", 353 | "execution_count": null, 354 | "metadata": { 355 | "collapsed": false, 356 | "deletable": true, 357 | "editable": true 358 | }, 359 | "outputs": [ 360 | { 361 | "name": "stdout", 362 | "output_type": "stream", 363 | "text": [ 364 | "170301-21:53:57,750 workflow INFO:\n", 365 | "\t Executing node selectfiles in dir: /tmp/tmpejvdlC/selectfiles\n", 366 | "170301-21:53:57,763 workflow INFO:\n", 367 | "\t Runtime memory and threads stats unavailable\n", 368 | "\n", 369 | "anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz\n", 370 | "func = /data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz\n", 371 | "\n" 372 | ] 373 | } 374 | ], 375 | "source": [ 376 | "print sf.run().outputs" 377 | ] 378 | }, 379 | { 380 | "cell_type": "markdown", 381 | "metadata": { 382 | "deletable": true, 383 | "editable": true 384 | }, 385 | "source": [ 386 | "Perfect! But why is `SelectFiles` more flexible than `DataGrabber`? First, you perhaps noticed that with the {}-based string, we can reuse the same input (e.g. `subject_id`) multiple times in the same string, without feeding it multiple times into the template.\n", 387 | "\n", 388 | "Additionally, you can also select multiple files without the need for an iterable node. For example, let's assume we want to select both functional images (`'run-1'` and `'run-2'`) at once. 
We can do this by using the following file template:\n", 389 | "\n", 390 | " {subject_id}_task-flanker_run-[1,2]_bold.nii.gz\n", 391 | "\n", 392 | "Let's see how this works:" 393 | ] 394 | }, 395 | { 396 | "cell_type": "code", 397 | "execution_count": null, 398 | "metadata": { 399 | "collapsed": false, 400 | "deletable": true, 401 | "editable": true 402 | }, 403 | "outputs": [ 404 | { 405 | "name": "stdout", 406 | "output_type": "stream", 407 | "text": [ 408 | "170301-21:54:03,222 workflow INFO:\n", 409 | "\t Executing node selectfiles in dir: /tmp/tmpjgAYwb/selectfiles\n", 410 | "170301-21:54:03,259 workflow INFO:\n", 411 | "\t Runtime memory and threads stats unavailable\n", 412 | "\n", 413 | "anat = /data/ds102/sub-01/anat/sub-01_T1w.nii.gz\n", 414 | "func = ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz']\n", 415 | "\n" 416 | ] 417 | } 418 | ], 419 | "source": [ 420 | "from nipype import SelectFiles, Node\n", 421 | "from os.path import abspath as opap\n", 422 | "\n", 423 | "# String template with {}-based strings\n", 424 | "templates = {'anat': '{subject_id}/anat/{subject_id}_T1w.nii.gz',\n", 425 | " 'func': '{subject_id}/func/{subject_id}_task-flanker_run-[1,2]_bold.nii.gz'}\n", 426 | "\n", 427 | "# Create SelectFiles node\n", 428 | "sf = Node(SelectFiles(templates),\n", 429 | " name='selectfiles')\n", 430 | "\n", 431 | "# Location of the dataset folder\n", 432 | "sf.inputs.base_directory = '/data/ds102'\n", 433 | "\n", 434 | "# Feed {}-based placeholder strings with values\n", 435 | "sf.inputs.subject_id = 'sub-01'\n", 436 | "\n", 437 | "# Print SelectFiles output\n", 438 | "print sf.run().outputs" 439 | ] 440 | }, 441 | { 442 | "cell_type": "markdown", 443 | "metadata": { 444 | "deletable": true, 445 | "editable": true 446 | }, 447 | "source": [ 448 | "As you can see, now `func` contains two file paths, one for the first and one for the second run. 
As a side note, you could have also gotten the same thing with the wild card `*`:\n", 449 | "\n", 450 | " {subject_id}_task-flanker_run-*_bold.nii.gz" 451 | ] 452 | }, 453 | { 454 | "cell_type": "markdown", 455 | "metadata": { 456 | "deletable": true, 457 | "editable": true 458 | }, 459 | "source": [ 460 | "## FreeSurferSource\n", 461 | "\n", 462 | "***Note: FreeSurfer and the recon-all output are not included in this tutorial.***\n", 463 | "\n", 464 | "`FreeSurferSource` is a specific case of a file grabber that facilitates the import of outputs from the FreeSurfer recon-all algorithm. This of course requires that you've already run `recon-all` on your subject.\n", 465 | "\n", 466 | "Before you can run `FreeSurferSource`, you first have to specify the path to the FreeSurfer output folder, i.e. you have to specify the SUBJECTS_DIR variable. This can be done as follows:" 467 | ] 468 | }, 469 | { 470 | "cell_type": "code", 471 | "execution_count": null, 472 | "metadata": { 473 | "collapsed": true, 474 | "deletable": true, 475 | "editable": true 476 | }, 477 | "outputs": [], 478 | "source": [ 479 | "from nipype.interfaces.freesurfer import FSCommand\n", 480 | "from os.path import abspath as opap\n", 481 | "\n", 482 | "# Path to your freesurfer output folder\n", 483 | "fs_dir = opap('/data/ds102/freesurfer')\n", 484 | "\n", 485 | "# Set SUBJECTS_DIR\n", 486 | "FSCommand.set_default_subjects_dir(fs_dir)" 487 | ] 488 | }, 489 | { 490 | "cell_type": "markdown", 491 | "metadata": { 492 | "deletable": true, 493 | "editable": true 494 | }, 495 | "source": [ 496 | "To create the `FreeSurferSource` node, do as follows:" 497 | ] 498 | }, 499 | { 500 | "cell_type": "code", 501 | "execution_count": null, 502 | "metadata": { 503 | "collapsed": false, 504 | "deletable": true, 505 | "editable": true 506 | }, 507 | "outputs": [], 508 | "source": [ 509 | "from nipype import Node\n", 510 | "from nipype.interfaces.io import FreeSurferSource\n", 511 | "\n", 512 | "# Create 
FreeSurferSource node\n", 513 | "fssource = Node(FreeSurferSource(subjects_dir=fs_dir),\n", 514 | " name='fssource')" 515 | ] 516 | }, 517 | { 518 | "cell_type": "markdown", 519 | "metadata": { 520 | "deletable": true, 521 | "editable": true 522 | }, 523 | "source": [ 524 | "Let's now run it for a specific subject." 525 | ] 526 | }, 527 | { 528 | "cell_type": "code", 529 | "execution_count": null, 530 | "metadata": { 531 | "collapsed": false, 532 | "deletable": true, 533 | "editable": true 534 | }, 535 | "outputs": [ 536 | { 537 | "name": "stdout", 538 | "output_type": "stream", 539 | "text": [ 540 | "170302-17:50:07,668 workflow INFO:\n", 541 | "\t Executing node fssource in dir: /tmp/tmpI0UTIX/fssource\n" 542 | ] 543 | } 544 | ], 545 | "source": [ 546 | "fssource.inputs.subject_id = 'sub001'\n", 547 | "result = fssource.run()" 548 | ] 549 | }, 550 | { 551 | "cell_type": "markdown", 552 | "metadata": { 553 | "deletable": true, 554 | "editable": true 555 | }, 556 | "source": [ 557 | "Did it work? 
Let's try to access multiple FreeSurfer outputs:" 558 | ] 559 | }, 560 | { 561 | "cell_type": "code", 562 | "execution_count": null, 563 | "metadata": { 564 | "collapsed": false, 565 | "deletable": true, 566 | "editable": true 567 | }, 568 | "outputs": [ 569 | { 570 | "name": "stdout", 571 | "output_type": "stream", 572 | "text": [ 573 | "aparc_aseg: [u'/data/ds102/freesurfer/sub001/mri/aparc.a2009s+aseg.mgz', u'/data/ds102/freesurfer/sub001/mri/aparc+aseg.mgz']\n", 574 | "\n", 575 | "brainmask: /data/ds102/freesurfer/sub001/mri/brainmask.mgz\n", 576 | "\n", 577 | "inflated: [u'/data/ds102/freesurfer/sub001/surf/rh.inflated', u'/data/ds102/freesurfer/sub001/surf/lh.inflated']\n", 578 | "\n" 579 | ] 580 | } 581 | ], 582 | "source": [ 583 | "print 'aparc_aseg: %s\\n' % result.outputs.aparc_aseg\n", 584 | "print 'brainmask: %s\\n' % result.outputs.brainmask\n", 585 | "print 'inflated: %s\\n' % result.outputs.inflated" 586 | ] 587 | }, 588 | { 589 | "cell_type": "markdown", 590 | "metadata": { 591 | "deletable": true, 592 | "editable": true 593 | }, 594 | "source": [ 595 | "It seems to be working as it should. But as you can see, the `inflated` output actually contains the file location for both hemispheres. With `FreeSurferSource` we can also restrict the file selection to a single hemisphere. 
To do this, we use the `hemi` input field:" 596 | ] 597 | }, 598 | { 599 | "cell_type": "code", 600 | "execution_count": null, 601 | "metadata": { 602 | "collapsed": false, 603 | "deletable": true, 604 | "editable": true 605 | }, 606 | "outputs": [ 607 | { 608 | "name": "stdout", 609 | "output_type": "stream", 610 | "text": [ 611 | "170302-17:50:13,835 workflow INFO:\n", 612 | "\t Executing node fssource in dir: /tmp/tmpI0UTIX/fssource\n" 613 | ] 614 | } 615 | ], 616 | "source": [ 617 | "fssource.inputs.hemi = 'lh'\n", 618 | "result = fssource.run()" 619 | ] 620 | }, 621 | { 622 | "cell_type": "markdown", 623 | "metadata": { 624 | "deletable": true, 625 | "editable": true 626 | }, 627 | "source": [ 628 | "Let's take another look at the `inflated` output." 629 | ] 630 | }, 631 | { 632 | "cell_type": "code", 633 | "execution_count": null, 634 | "metadata": { 635 | "collapsed": false, 636 | "deletable": true, 637 | "editable": true 638 | }, 639 | "outputs": [ 640 | { 641 | "data": { 642 | "text/plain": [ 643 | "u'/data/ds102/freesurfer/sub001/surf/lh.inflated'" 644 | ] 645 | }, 646 | "execution_count": null, 647 | "metadata": {}, 648 | "output_type": "execute_result" 649 | } 650 | ], 651 | "source": [ 652 | "result.outputs.inflated" 653 | ] 654 | }, 655 | { 656 | "cell_type": "markdown", 657 | "metadata": { 658 | "deletable": true, 659 | "editable": true 660 | }, 661 | "source": [ 662 | "Perfect!" 
663 | ] 664 | } 665 | ], 666 | "metadata": { 667 | "anaconda-cloud": {}, 668 | "kernelspec": { 669 | "display_name": "Python [default]", 670 | "language": "python", 671 | "name": "python2" 672 | }, 673 | "language_info": { 674 | "codemirror_mode": { 675 | "name": "ipython", 676 | "version": 2 677 | }, 678 | "file_extension": ".py", 679 | "mimetype": "text/x-python", 680 | "name": "python", 681 | "nbconvert_exporter": "python", 682 | "pygments_lexer": "ipython2", 683 | "version": "2.7.13" 684 | } 685 | }, 686 | "nbformat": 4, 687 | "nbformat_minor": 0 688 | } 689 | -------------------------------------------------------------------------------- /basic_data_output.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# Data Output\n", 11 | "\n", 12 | "Similarly important to data input is data output. Using a data output module allows you to restructure and rename computed output and to spatially differentiate relevant output files from the temporary intermediate files computed in the working directory. Nipype provides the following modules to handle data stream output:\n", 13 | "\n", 14 | " DataSink\n", 15 | " JSONFileSink\n", 16 | " MySQLSink\n", 17 | " SQLiteSink\n", 18 | " XNATSink\n", 19 | "\n", 20 | "This tutorial covers only `DataSink`. For the rest, see the section [``interfaces.io``](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.interfaces.io.html) on the official homepage." 21 | ] 22 | }, 23 | { 24 | "cell_type": "markdown", 25 | "metadata": { 26 | "deletable": true, 27 | "editable": true 28 | }, 29 | "source": [ 30 | "# Preparation\n", 31 | "\n", 32 | "Before we can use `DataSink`, we first need to run a workflow. For this purpose, let's create a very short preprocessing workflow that realigns and smooths one functional image of one subject." 
33 | ] 34 | }, 35 | { 36 | "cell_type": "markdown", 37 | "metadata": { 38 | "deletable": true, 39 | "editable": true 40 | }, 41 | "source": [ 42 | "First, let's create a `SelectFiles` node to select the functional image of our subject. For an explanation about this step, see the [Data Input](basic_data_input.ipynb) tutorial." 43 | ] 44 | }, 45 | { 46 | "cell_type": "code", 47 | "execution_count": null, 48 | "metadata": { 49 | "collapsed": true, 50 | "deletable": true, 51 | "editable": true 52 | }, 53 | "outputs": [], 54 | "source": [ 55 | "from nipype import SelectFiles, Node\n", 56 | "\n", 57 | "# Create SelectFiles node\n", 58 | "templates={'func': '{subject_id}/func/{subject_id}_task-flanker_run-1_bold.nii.gz'}\n", 59 | "sf = Node(SelectFiles(templates),\n", 60 | " name='selectfiles')\n", 61 | "sf.inputs.base_directory = '/data/ds102'\n", 62 | "sf.inputs.subject_id = 'sub-01'" 63 | ] 64 | }, 65 | { 66 | "cell_type": "markdown", 67 | "metadata": { 68 | "deletable": true, 69 | "editable": true 70 | }, 71 | "source": [ 72 | "Second, let's create the motion correction and smoothing nodes. For an explanation about this step, see the [Nodes](basic_nodes.ipynb) and [Interfaces](basic_interfaces.ipynb) tutorials." 73 | ] 74 | }, 75 | { 76 | "cell_type": "code", 77 | "execution_count": null, 78 | "metadata": { 79 | "collapsed": true, 80 | "deletable": true, 81 | "editable": true 82 | }, 83 | "outputs": [], 84 | "source": [ 85 | "from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth\n", 86 | "\n", 87 | "# Create Motion Correction Node\n", 88 | "mcflirt = Node(MCFLIRT(mean_vol=True,\n", 89 | " save_plots=True),\n", 90 | " name='mcflirt')\n", 91 | "\n", 92 | "# Create Smoothing node\n", 93 | "smooth = Node(IsotropicSmooth(fwhm=4),\n", 94 | " name='smooth')" 95 | ] 96 | }, 97 | { 98 | "cell_type": "markdown", 99 | "metadata": { 100 | "deletable": true, 101 | "editable": true 102 | }, 103 | "source": [ 104 | "Third, let's create the workflow that will contain those three nodes. 
For an explanation about this step, see the [Workflow](basic_workflow.ipynb) tutorial." 105 | ] 106 | }, 107 | { 108 | "cell_type": "code", 109 | "execution_count": null, 110 | "metadata": { 111 | "collapsed": true, 112 | "deletable": true, 113 | "editable": true 114 | }, 115 | "outputs": [], 116 | "source": [ 117 | "from nipype import Workflow\n", 118 | "from os.path import abspath\n", 119 | "\n", 120 | "# Create a preprocessing workflow\n", 121 | "wf = Workflow(name=\"preprocWF\")\n", 122 | "wf.base_dir = 'working_dir'\n", 123 | "\n", 124 | "# Connect the three nodes to each other\n", 125 | "wf.connect([(sf, mcflirt, [(\"func\", \"in_file\")]),\n", 126 | " (mcflirt, smooth, [(\"out_file\", \"in_file\")])])" 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": { 132 | "deletable": true, 133 | "editable": true 134 | }, 135 | "source": [ 136 | "Now that everything is set up, let's run the preprocessing workflow." 137 | ] 138 | }, 139 | { 140 | "cell_type": "code", 141 | "execution_count": null, 142 | "metadata": { 143 | "collapsed": false, 144 | "deletable": true, 145 | "editable": true 146 | }, 147 | "outputs": [], 148 | "source": [ 149 | "wf.run()" 150 | ] 151 | }, 152 | { 153 | "cell_type": "markdown", 154 | "metadata": { 155 | "deletable": true, 156 | "editable": true 157 | }, 158 | "source": [ 159 | "After the execution of the workflow we have all the data hidden in the working directory `'working_dir'`. 
Let's take a closer look at the content of this folder:\n", 160 | "\n", 161 | " working_dir\n", 162 | " └── preprocWF\n", 163 | " ├── d3.js\n", 164 | " ├── graph1.json\n", 165 | " ├── graph.json\n", 166 | " ├── index.html\n", 167 | " ├── mcflirt\n", 168 | " │   ├── _0x6148b774a1205e01fbc692453a68ee85.json\n", 169 | " │   ├── command.txt\n", 170 | " │   ├── _inputs.pklz\n", 171 | " │   ├── _node.pklz\n", 172 | " │   ├── _report\n", 173 | " │   │   └── report.rst\n", 174 | " │   ├── result_mcflirt.pklz\n", 175 | " │   └── sub-01_task-flanker_run-1_bold_mcf.nii.gz\n", 176 | " ├── selectfiles\n", 177 | " │   ├── _0x6a583c5c1c472209ca26f29f15c0bd38.json\n", 178 | " │   ├── _inputs.pklz\n", 179 | " │   ├── _node.pklz\n", 180 | " │   ├── _report\n", 181 | " │   │   └── report.rst\n", 182 | " │   └── result_selectfiles.pklz\n", 183 | " └── smooth\n", 184 | " ├── _0x553087282cd3b58a5c06b5f9699308bf.json\n", 185 | " ├── command.txt\n", 186 | " ├── _inputs.pklz\n", 187 | " ├── _node.pklz\n", 188 | " ├── _report\n", 189 | " │   └── report.rst\n", 190 | " ├── result_smooth.pklz\n", 191 | " └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz" 192 | ] 193 | }, 194 | { 195 | "cell_type": "markdown", 196 | "metadata": { 197 | "deletable": true, 198 | "editable": true 199 | }, 200 | "source": [ 201 | "As we can see, there is way too much content that we might not really care about. To relocate and rename the files that are relevant to you, you can use `DataSink`." 202 | ] 203 | }, 204 | { 205 | "cell_type": "markdown", 206 | "metadata": { 207 | "deletable": true, 208 | "editable": true 209 | }, 210 | "source": [ 211 | "# DataSink\n", 212 | "\n", 213 | "`DataSink` is Nipype's standard output module to restructure your output files. It allows you to relocate and rename files that you deem relevant.\n", 214 | "\n", 215 | "Based on the preprocessing pipeline above, let's say we want to keep the smoothed functional images as well as the motion correction parameters. 
To do this, we first need to create the `DataSink` object." 216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": null, 221 | "metadata": { 222 | "collapsed": false, 223 | "deletable": true, 224 | "editable": true 225 | }, 226 | "outputs": [], 227 | "source": [ 228 | "from nipype.interfaces.io import DataSink\n", 229 | "\n", 230 | "# Create DataSink object\n", 231 | "sinker = Node(DataSink(), name='sinker')\n", 232 | "\n", 233 | "# Name of the output folder\n", 234 | "sinker.inputs.base_directory = 'output'\n", 235 | "\n", 236 | "# Connect DataSink with the relevant nodes\n", 237 | "wf.connect([(smooth, sinker, [('out_file', 'in_file')]),\n", 238 | " (mcflirt, sinker, [('mean_img', 'mean_img'),\n", 239 | " ('par_file', 'par_file')]),\n", 240 | " ])\n", 241 | "wf.run()" 242 | ] 243 | }, 244 | { 245 | "cell_type": "markdown", 246 | "metadata": { 247 | "deletable": true, 248 | "editable": true 249 | }, 250 | "source": [ 251 | "Let's take a look at the `output` folder:\n", 252 | "\n", 253 | " output\n", 254 | " ├── in_file\n", 255 | " │   └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz\n", 256 | " ├── mean_img\n", 257 | " │   └── sub-01_task-flanker_run-1_bold_mcf.nii.gz_mean_reg.nii.gz\n", 258 | " └── par_file\n", 259 | " └── sub-01_task-flanker_run-1_bold_mcf.nii.gz.par" 260 | ] 261 | }, 262 | { 263 | "cell_type": "markdown", 264 | "metadata": { 265 | "deletable": true, 266 | "editable": true 267 | }, 268 | "source": [ 269 | "This looks nice. It is what we asked it to do. But having a specific output folder for each individual output file might be suboptimal. So let's change the code above to save the output in one folder, which we will call `'preproc'`.\n", 270 | "\n", 271 | "For this we can use the same code as above. 
We only have to change the connection part:" 272 | ] 273 | }, 274 | { 275 | "cell_type": "code", 276 | "execution_count": null, 277 | "metadata": { 278 | "collapsed": false, 279 | "deletable": true, 280 | "editable": true 281 | }, 282 | "outputs": [], 283 | "source": [ 284 | "wf.connect([(smooth, sinker, [('out_file', 'preproc.@in_file')]),\n", 285 | " (mcflirt, sinker, [('mean_img', 'preproc.@mean_img'),\n", 286 | " ('par_file', 'preproc.@par_file')]),\n", 287 | " ])\n", 288 | "wf.run()" 289 | ] 290 | }, 291 | { 292 | "cell_type": "markdown", 293 | "metadata": { 294 | "deletable": true, 295 | "editable": true 296 | }, 297 | "source": [ 298 | "Let's take a look at the new output folder structure:\n", 299 | "\n", 300 | " output\n", 301 | " └── preproc\n", 302 | " ├── sub-01_task-flanker_run-1_bold_mcf.nii.gz_mean_reg.nii.gz\n", 303 | " ├── sub-01_task-flanker_run-1_bold_mcf.nii.gz.par\n", 304 | " └── sub-01_task-flanker_run-1_bold_mcf_smooth.nii.gz" 305 | ] 306 | }, 307 | { 308 | "cell_type": "markdown", 309 | "metadata": { 310 | "deletable": true, 311 | "editable": true 312 | }, 313 | "source": [ 314 | "This is already much better. But what if you want to rename the output files to something a bit more readable? 
For this, `DataSink` has the `substitutions` input field.\n", 315 | "\n", 316 | "For example, let's assume we want to get rid of the strings `'task-flanker'` and `'bold_mcf'` and that we want to rename the mean file, as well as adapt the file ending of the motion parameter file:" 317 | ] 318 | }, 319 | { 320 | "cell_type": "code", 321 | "execution_count": null, 322 | "metadata": { 323 | "collapsed": false, 324 | "deletable": true, 325 | "editable": true 326 | }, 327 | "outputs": [], 328 | "source": [ 329 | "# Define substitution strings\n", 330 | "substitutions = [('_task-flanker', ''),\n", 331 | " ('_bold_mcf', ''),\n", 332 | " ('.nii.gz_mean_reg', '_mean'),\n", 333 | " ('.nii.gz.par', '.par')]\n", 334 | "\n", 335 | "# Feed the substitution strings to the DataSink node\n", 336 | "sinker.inputs.substitutions = substitutions\n", 337 | "\n", 338 | "# Run the workflow again with the substitutions in place\n", 339 | "wf.run()" 340 | ] 341 | }, 342 | { 343 | "cell_type": "markdown", 344 | "metadata": { 345 | "deletable": true, 346 | "editable": true 347 | }, 348 | "source": [ 349 | "Now, let's take a final look at the output folder:\n", 350 | "\n", 351 | " output\n", 352 | " └── preproc\n", 353 | " ├── sub-01_run-1_mean.nii.gz\n", 354 | " ├── sub-01_run-1.par\n", 355 | " └── sub-01_run-1_smooth.nii.gz\n", 356 | "\n", 357 | "Cool, much clearer!" 
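Conceptually, `DataSink` applies each `(old, new)` substitution pair, in order, to every output path. A standalone plain-Python sketch of that renaming logic (the `apply_substitutions` helper is hypothetical and not part of Nipype):

```python
# Hypothetical helper mimicking how DataSink's substitutions behave:
# each (old, new) pair is applied, in order, to every output path.
def apply_substitutions(path, substitutions):
    for old, new in substitutions:
        path = path.replace(old, new)
    return path

substitutions = [('_task-flanker', ''),
                 ('_bold_mcf', ''),
                 ('.nii.gz_mean_reg', '_mean'),
                 ('.nii.gz.par', '.par')]

renamed = apply_substitutions(
    'sub-01_task-flanker_run-1_bold_mcf.nii.gz.par', substitutions)
print(renamed)  # sub-01_run-1.par
```

Because the pairs are applied in order, a later pair can match text produced by an earlier one, so the order of the list matters.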
358 | ] 359 | } 360 | ], 361 | "metadata": { 362 | "anaconda-cloud": {}, 363 | "kernelspec": { 364 | "display_name": "Python [default]", 365 | "language": "python", 366 | "name": "python2" 367 | }, 368 | "language_info": { 369 | "codemirror_mode": { 370 | "name": "ipython", 371 | "version": 2 372 | }, 373 | "file_extension": ".py", 374 | "mimetype": "text/x-python", 375 | "name": "python", 376 | "nbconvert_exporter": "python", 377 | "pygments_lexer": "ipython2", 378 | "version": "2.7.13" 379 | } 380 | }, 381 | "nbformat": 4, 382 | "nbformat_minor": 0 383 | } 384 | -------------------------------------------------------------------------------- /basic_function_nodes.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# Function Node\n", 11 | "\n", 12 | "Satra once called the `Function` module the \"do anything you want\" card, which is a perfect description: it allows you to put any code you want into an empty node, which you can then place in your workflow exactly where it needs to be.\n", 13 | "\n", 14 | "You might have already seen the `Function` module in the [example section in the Node tutorial](basic_nodes.ipynb#Example-of-a-simple-node). Let's take a closer look at it again." 
15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": { 21 | "collapsed": true, 22 | "deletable": true, 23 | "editable": true 24 | }, 25 | "outputs": [], 26 | "source": [ 27 | "# Import Node and Function module\n", 28 | "from nipype import Node, Function\n", 29 | "\n", 30 | "# Create a small example function\n", 31 | "def add_two(x_input):\n", 32 | " return x_input + 2\n", 33 | "\n", 34 | "# Create Node\n", 35 | "addtwo = Node(Function(input_names=[\"x_input\"],\n", 36 | " output_names=[\"val_output\"],\n", 37 | " function=add_two),\n", 38 | " name='add_node')" 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": { 44 | "deletable": true, 45 | "editable": true 46 | }, 47 | "source": [ 48 | "# Trap 1\n", 49 | "\n", 50 | "There are only two traps that you should be aware of when you're using the `Function` module. The first one is about naming the input variables. The variable name for the node input has to be exactly the same as the name of the function input parameter, in this case `x_input`.\n", 51 | "\n", 52 | "Otherwise you get the following error:\n", 53 | "\n", 54 | " TypeError: add_two() got an unexpected keyword argument 'x_input'\n", 55 | " Interface Function failed to run." 56 | ] 57 | }, 58 | { 59 | "cell_type": "markdown", 60 | "metadata": { 61 | "deletable": true, 62 | "editable": true 63 | }, 64 | "source": [ 65 | "# Trap 2\n", 66 | "\n", 67 | "If you want to use another module inside a function, you have to import it again inside the function. 
Let's take a look at the following example:" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": null, 73 | "metadata": { 74 | "collapsed": false, 75 | "deletable": true, 76 | "editable": true 77 | }, 78 | "outputs": [ 79 | { 80 | "name": "stdout", 81 | "output_type": "stream", 82 | "text": [ 83 | "170301-21:55:47,598 workflow INFO:\n", 84 | "\t Executing node rndArray_node in dir: /tmp/tmpv4BGTx/rndArray_node\n", 85 | "170301-21:55:47,633 workflow INFO:\n", 86 | "\t Runtime memory and threads stats unavailable\n", 87 | "\n", 88 | "random_array = [[ 0.55392783 0.56238157 0.26244335]\n", 89 | " [ 0.25663815 0.20904142 0.5810782 ]\n", 90 | " [ 0.18068192 0.65697574 0.1218128 ]]\n", 91 | "\n" 92 | ] 93 | } 94 | ], 95 | "source": [ 96 | "from nipype import Node, Function\n", 97 | "\n", 98 | "# Create the Function object\n", 99 | "def get_random_array(array_shape):\n", 100 | "\n", 101 | " # Import random function\n", 102 | " from numpy.random import random\n", 103 | " \n", 104 | " return random(array_shape)\n", 105 | "\n", 106 | "# Create Function Node that executes get_random_array\n", 107 | "rndArray = Node(Function(input_names=[\"array_shape\"],\n", 108 | " output_names=[\"random_array\"],\n", 109 | " function=get_random_array),\n", 110 | " name='rndArray_node')\n", 111 | "\n", 112 | "# Specify the array_shape of the random array\n", 113 | "rndArray.inputs.array_shape = (3, 3)\n", 114 | "\n", 115 | "# Run node\n", 116 | "rndArray.run()\n", 117 | "\n", 118 | "# Print output\n", 119 | "print rndArray.result.outputs" 120 | ] 121 | }, 122 | { 123 | "cell_type": "markdown", 124 | "metadata": { 125 | "deletable": true, 126 | "editable": true 127 | }, 128 | "source": [ 129 | "Now, let's see what happens if we move the import of `random` outside the scope of `get_random_array`:" 130 | ] 131 | }, 132 | { 133 | "cell_type": "code", 134 | "execution_count": null, 135 | "metadata": { 136 | "collapsed": false, 137 | "deletable": true, 138 | "editable": true 
139 | }, 140 | "outputs": [ 141 | { 142 | "name": "stdout", 143 | "output_type": "stream", 144 | "text": [ 145 | "170301-21:55:47,697 workflow INFO:\n", 146 | "\t Executing node rndArray_node in dir: /tmp/tmpFBMKdD/rndArray_node\n" 147 | ] 148 | }, 149 | { 150 | "ename": "NameError", 151 | "evalue": "global name 'random' is not defined\nInterface Function failed to run. ", 152 | "output_type": "error", 153 | "traceback": [ 154 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 155 | "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)", 156 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 21\u001b[0m \u001b[0;31m# Run node\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 22\u001b[0;31m \u001b[0mrndArray\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 23\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 24\u001b[0m \u001b[0;31m# Print output\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 157 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.pyc\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, updatehash)\u001b[0m\n\u001b[1;32m 392\u001b[0m self.inputs.get_traitsfree())\n\u001b[1;32m 393\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 394\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_run_interface\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 395\u001b[0m \u001b[0;32mexcept\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 396\u001b[0m 
\u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mremove\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mhashfile_unfinished\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 158 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.pyc\u001b[0m in \u001b[0;36m_run_interface\u001b[0;34m(self, execute, updatehash)\u001b[0m\n\u001b[1;32m 502\u001b[0m \u001b[0mold_cwd\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgetcwd\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 503\u001b[0m \u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchdir\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moutput_dir\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 504\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_run_command\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mexecute\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 505\u001b[0m \u001b[0mos\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mchdir\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mold_cwd\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 506\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", 159 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/pipeline/engine/nodes.pyc\u001b[0m in \u001b[0;36m_run_command\u001b[0;34m(self, execute, copyfiles)\u001b[0m\n\u001b[1;32m 628\u001b[0m \u001b[0mlogger\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minfo\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'Running: %s'\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0mcmd\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 629\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 
630\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_interface\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrun\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 631\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mException\u001b[0m \u001b[0;32mas\u001b[0m \u001b[0mmsg\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 632\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_result\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mruntime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstderr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmsg\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 160 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/interfaces/base.pyc\u001b[0m in \u001b[0;36mrun\u001b[0;34m(self, **inputs)\u001b[0m\n\u001b[1;32m 1041\u001b[0m version=self.version)\n\u001b[1;32m 1042\u001b[0m \u001b[0;32mtry\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1043\u001b[0;31m \u001b[0mruntime\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_run_wrapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mruntime\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1044\u001b[0m \u001b[0moutputs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0maggregate_outputs\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mruntime\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1045\u001b[0m \u001b[0mruntime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mendTime\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mdt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0misoformat\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdt\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mutcnow\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 161 | 
"\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/interfaces/base.pyc\u001b[0m in \u001b[0;36m_run_wrapper\u001b[0;34m(self, runtime)\u001b[0m\n\u001b[1;32m 998\u001b[0m \u001b[0mruntime\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0menviron\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'DISPLAY'\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m':%d'\u001b[0m \u001b[0;34m%\u001b[0m \u001b[0mvdisp_num\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 999\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1000\u001b[0;31m \u001b[0mruntime\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_run_interface\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mruntime\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1001\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1002\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_redirect_x\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 162 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/interfaces/utility.pyc\u001b[0m in \u001b[0;36m_run_interface\u001b[0;34m(self, runtime)\u001b[0m\n\u001b[1;32m 497\u001b[0m \u001b[0msetattr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mruntime\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'runtime_threads'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_threads\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 498\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 499\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mfunction_handle\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 500\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 501\u001b[0m \u001b[0;32mif\u001b[0m 
\u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_output_names\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;36m1\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 163 | "\u001b[0;32m\u001b[0m in \u001b[0;36mget_random_array\u001b[0;34m(array_shape)\u001b[0m\n", 164 | "\u001b[0;31mNameError\u001b[0m: global name 'random' is not defined\nInterface Function failed to run. " 165 | ] 166 | } 167 | ], 168 | "source": [ 169 | "from nipype import Node, Function\n", 170 | "\n", 171 | "# Import random function\n", 172 | "from numpy.random import random\n", 173 | "\n", 174 | "\n", 175 | "# Create the Function object\n", 176 | "def get_random_array(array_shape):\n", 177 | " \n", 178 | " return random(array_shape)\n", 179 | "\n", 180 | "# Create Function Node that executes get_random_array\n", 181 | "rndArray = Node(Function(input_names=[\"array_shape\"],\n", 182 | " output_names=[\"random_array\"],\n", 183 | " function=get_random_array),\n", 184 | " name='rndArray_node')\n", 185 | "\n", 186 | "# Specify the array_shape of the random array\n", 187 | "rndArray.inputs.array_shape = (3, 3)\n", 188 | "\n", 189 | "# Run node\n", 190 | "rndArray.run()\n", 191 | "\n", 192 | "# Print output\n", 193 | "print rndArray.result.outputs" 194 | ] 195 | }, 196 | { 197 | "cell_type": "markdown", 198 | "metadata": { 199 | "deletable": true, 200 | "editable": true 201 | }, 202 | "source": [ 203 | "As you can see, if we don't import `random` inside the scope of the function, we receive the following error:\n", 204 | "\n", 205 | " NameError: global name 'random' is not defined\n", 206 | " Interface Function failed to run. 
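The reason for this trap: Nipype ships only the function's *source code* to the execution engine, where it runs in a fresh namespace that knows nothing about your script's module-level imports. A standalone plain-Python sketch of that behavior (using `exec` to roughly mimic what Nipype does; this is an illustration, not Nipype's actual serialization code):

```python
# Module-level import: visible in this script, but NOT inside the
# fresh namespace in which the function source is executed below
from math import sqrt

broken_src = "def compute(x):\n    return sqrt(x)\n"

namespace = {}
exec(broken_src, namespace)     # mimics shipping only the source code
failed = False
try:
    namespace['compute'](4)
except NameError:               # same error class as in the traceback above
    failed = True
print('Failed like the Nipype example:', failed)

# Fixed version: the import travels along with the function source
fixed_src = "def compute(x):\n    from math import sqrt\n    return sqrt(x)\n"
namespace = {}
exec(fixed_src, namespace)
print(namespace['compute'](4))  # 2.0
```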
" 207 | ] 208 | } 209 | ], 210 | "metadata": { 211 | "anaconda-cloud": {}, 212 | "kernelspec": { 213 | "display_name": "Python [default]", 214 | "language": "python", 215 | "name": "python2" 216 | }, 217 | "language_info": { 218 | "codemirror_mode": { 219 | "name": "ipython", 220 | "version": 2 221 | }, 222 | "file_extension": ".py", 223 | "mimetype": "text/x-python", 224 | "name": "python", 225 | "nbconvert_exporter": "python", 226 | "pygments_lexer": "ipython2", 227 | "version": "2.7.13" 228 | } 229 | }, 230 | "nbformat": 4, 231 | "nbformat_minor": 0 232 | } 233 | -------------------------------------------------------------------------------- /basic_joinnodes.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "\n", 8 | "\n", 9 | "# JoinNode\n", 10 | "\n", 11 | "A JoinNode has the opposite effect of a [MapNode](basic_mapnodes.ipynb) or [iterables](basic_iteration.ipynb). Where those split the execution workflow up into many different branches, a JoinNode merges them back into one node. For a more detailed explanation, check out [JoinNode, synchronize and itersource](http://nipype.readthedocs.io/en/latest/users/joinnode_and_itersource.html) from the main homepage." 
12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": {}, 17 | "source": [ 18 | "## Simple example\n", 19 | "\n", 20 | "Let's consider the very simple example depicted at the top of this page:" 21 | ] 22 | }, 23 | { 24 | "cell_type": "code", 25 | "execution_count": null, 26 | "metadata": { 27 | "collapsed": false 28 | }, 29 | "outputs": [], 30 | "source": [ 31 | "from nipype import Node, JoinNode, Workflow\n", 32 | "\n", 33 | "# Specify fake input node A\n", 34 | "a = Node(interface=A(), name=\"a\")\n", 35 | "\n", 36 | "# Iterate over fake node B's input 'in_file'\n", 37 | "b = Node(interface=B(), name=\"b\")\n", 38 | "b.iterables = ('in_file', [file1, file2])\n", 39 | "\n", 40 | "# Pass results on to fake node C\n", 41 | "c = Node(interface=C(), name=\"c\")\n", 42 | "\n", 43 | "# Join forked execution workflow in fake node D\n", 44 | "d = JoinNode(interface=D(),\n", 45 | " joinsource=\"b\",\n", 46 | " joinfield=\"in_files\",\n", 47 | " name=\"d\")\n", 48 | "\n", 49 | "# Put everything into a workflow as usual\n", 50 | "workflow = Workflow(name=\"workflow\")\n", 51 | "workflow.connect([(a, b, [('subject', 'subject')]),\n", 52 | " (b, c, [('out_file', 'in_file')]),\n", 53 | " (c, d, [('out_file', 'in_files')]),\n", 54 | " ])" 55 | ] 56 | }, 57 | { 58 | "cell_type": "markdown", 59 | "metadata": {}, 60 | "source": [ 61 | "As you can see, setting up a ``JoinNode`` is rather simple. The only differences to a normal ``Node`` are the ``joinsource`` and the ``joinfield``. ``joinsource`` specifies the node from which the information to join is coming, and ``joinfield`` specifies the input field of the ``JoinNode`` through which that information enters the node." 
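Since `A` through `D` above are placeholder interfaces, here is a standalone plain-Python sketch of the same fork/join dataflow, with simple string operations standing in for the interfaces:

```python
# b.iterables forks the workflow: downstream node c runs once per file
in_files = ['file1.nii', 'file2.nii']
c_outputs = [f.replace('.nii', '_proc.nii') for f in in_files]

# The JoinNode d then receives all branch results as one list input,
# here modeled as a plain function over that list
def d_interface(in_files):
    return sorted(in_files)

joined = d_interface(c_outputs)
print(joined)  # ['file1_proc.nii', 'file2_proc.nii']
```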
62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": {}, 67 | "source": [ 68 | "## More realistic example\n", 69 | "\n", 70 | "Let's consider another example where we have one node that iterates over 3 different numbers and another node that joins those three different numbers (each coming from a separate branch of the workflow) into one list. To make the whole thing a bit more realistic, the second node will use the ``Function`` interface to do something with those numbers, before we spit them out again." 71 | ] 72 | }, 73 | { 74 | "cell_type": "code", 75 | "execution_count": null, 76 | "metadata": { 77 | "collapsed": true 78 | }, 79 | "outputs": [], 80 | "source": [ 81 | "from nipype import JoinNode, Node, Workflow\n", 82 | "from nipype.interfaces.utility import Function, IdentityInterface" 83 | ] 84 | }, 85 | { 86 | "cell_type": "code", 87 | "execution_count": null, 88 | "metadata": { 89 | "collapsed": true 90 | }, 91 | "outputs": [], 92 | "source": [ 93 | "# Create iteration node\n", 94 | "from nipype import IdentityInterface\n", 95 | "iternode = Node(IdentityInterface(fields=['number_id']),\n", 96 | " name=\"iternode\")\n", 97 | "iternode.iterables = [('number_id', [1, 4, 9])]" 98 | ] 99 | }, 100 | { 101 | "cell_type": "code", 102 | "execution_count": null, 103 | "metadata": { 104 | "collapsed": false 105 | }, 106 | "outputs": [], 107 | "source": [ 108 | "# Create join node - compute square root for each element in the joined list\n", 109 | "def compute_sqrt(numbers):\n", 110 | " from math import sqrt\n", 111 | " return [sqrt(e) for e in numbers]\n", 112 | "\n", 113 | "joinnode = JoinNode(Function(input_names=['numbers'],\n", 114 | " output_names=['sqrts'],\n", 115 | " function=compute_sqrt),\n", 116 | " name='joinnode',\n", 117 | " joinsource='iternode',\n", 118 | " joinfield=['numbers'])" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "metadata": { 125 | "collapsed": false 126 | }, 127 | "outputs": [ 
128 | { 129 | "name": "stdout", 130 | "output_type": "stream", 131 | "text": [ 132 | "170306-22:38:22,861 workflow INFO:\n", 133 | "\t Workflow joinflow settings: ['check', 'execution', 'logging']\n", 134 | "170306-22:38:22,871 workflow INFO:\n", 135 | "\t Running serially.\n", 136 | "170306-22:38:22,873 workflow INFO:\n", 137 | "\t Executing node joinnode in dir: /tmp/tmpm8NCMb/joinflow/joinnode\n" 138 | ] 139 | } 140 | ], 141 | "source": [ 142 | "# Create the workflow and run it\n", 143 | "joinflow = Workflow(name='joinflow')\n", 144 | "joinflow.connect(iternode, 'number_id', joinnode, 'numbers')\n", 145 | "res = joinflow.run()" 146 | ] 147 | }, 148 | { 149 | "cell_type": "markdown", 150 | "metadata": {}, 151 | "source": [ 152 | "Now, let's look at the input and output of the joinnode:" 153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "metadata": { 159 | "collapsed": false 160 | }, 161 | "outputs": [ 162 | { 163 | "data": { 164 | "text/plain": [ 165 | "\n", 166 | "sqrts = [1.0, 2.0, 3.0]" 167 | ] 168 | }, 169 | "execution_count": null, 170 | "metadata": {}, 171 | "output_type": "execute_result" 172 | } 173 | ], 174 | "source": [ 175 | "res.nodes()[0].result.outputs" 176 | ] 177 | }, 178 | { 179 | "cell_type": "code", 180 | "execution_count": null, 181 | "metadata": { 182 | "collapsed": false 183 | }, 184 | "outputs": [ 185 | { 186 | "data": { 187 | "text/plain": [ 188 | "\n", 189 | "function_str = \n", 190 | "ignore_exception = \n", 191 | "numbers = \n", 192 | "numbersJ1 = 1\n", 193 | "numbersJ2 = 4\n", 194 | "numbersJ3 = 9" 195 | ] 196 | }, 197 | "execution_count": null, 198 | "metadata": {}, 199 | "output_type": "execute_result" 200 | } 201 | ], 202 | "source": [ 203 | "res.nodes()[0].inputs" 204 | ] 205 | } 206 | ], 207 | "metadata": { 208 | "anaconda-cloud": {}, 209 | "kernelspec": { 210 | "display_name": "Python [conda root]", 211 | "language": "python", 212 | "name": "conda-root-py" 213 | }, 214 | "language_info": { 
215 | "codemirror_mode": { 216 | "name": "ipython", 217 | "version": 2 218 | }, 219 | "file_extension": ".py", 220 | "mimetype": "text/x-python", 221 | "name": "python", 222 | "nbconvert_exporter": "python", 223 | "pygments_lexer": "ipython2", 224 | "version": "2.7.13" 225 | } 226 | }, 227 | "nbformat": 4, 228 | "nbformat_minor": 0 229 | } 230 | -------------------------------------------------------------------------------- /basic_mapnodes.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true, 7 | "deletable": true, 8 | "editable": true 9 | }, 10 | "source": [ 11 | "\n", 12 | "\n", 13 | "# MapNode\n", 14 | "\n", 15 | "If you want to iterate over a list of inputs, but need to feed all iterated outputs afterwards as one input (an array) to the next node, you need to use a **``MapNode``**. A ``MapNode`` is quite similar to a normal ``Node``, but it can take a list of inputs and operate over each input separately, ultimately returning a list of outputs. (The main homepage has a [nice section](http://nipype.readthedocs.io/en/latest/users/mapnode_and_iterables.html) about ``MapNode`` and ``iterables`` if you want to learn more).\n", 16 | "\n", 17 | "Let's demonstrate this with a simple function interface:" 18 | ] 19 | }, 20 | { 21 | "cell_type": "code", 22 | "execution_count": null, 23 | "metadata": { 24 | "collapsed": true, 25 | "deletable": true, 26 | "editable": true 27 | }, 28 | "outputs": [], 29 | "source": [ 30 | "from nipype import Function\n", 31 | "def square_func(x):\n", 32 | " return x ** 2\n", 33 | "square = Function([\"x\"], [\"f_x\"], square_func)" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": { 39 | "deletable": true, 40 | "editable": true 41 | }, 42 | "source": [ 43 | "We see that this function just takes a numeric input and returns its squared value." 
44 | ] 45 | }, 46 | { 47 | "cell_type": "code", 48 | "execution_count": null, 49 | "metadata": { 50 | "collapsed": false, 51 | "deletable": true, 52 | "editable": true 53 | }, 54 | "outputs": [ 55 | { 56 | "data": { 57 | "text/plain": [ 58 | "4" 59 | ] 60 | }, 61 | "execution_count": null, 62 | "metadata": {}, 63 | "output_type": "execute_result" 64 | } 65 | ], 66 | "source": [ 67 | "square.run(x=2).outputs.f_x" 68 | ] 69 | }, 70 | { 71 | "cell_type": "markdown", 72 | "metadata": { 73 | "deletable": true, 74 | "editable": true 75 | }, 76 | "source": [ 77 | "What if we wanted to square a list of numbers? We could set an iterable and just split up the workflow into multiple sub-workflows. But say we were making a simple workflow that squared a list of numbers and then summed them. The sum node would expect a list, but using an iterable would make a bunch of sum nodes, and each would get one number from the list. The solution here is to use a `MapNode`.\n", 78 | "\n", 79 | "The `MapNode` constructor has a field called `iterfield`, which tells it which inputs should expect a list." 
80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": null, 85 | "metadata": { 86 | "collapsed": true, 87 | "deletable": true, 88 | "editable": true 89 | }, 90 | "outputs": [], 91 | "source": [ 92 | "from nipype import MapNode\n", 93 | "square_node = MapNode(square, name=\"square\", iterfield=[\"x\"])" 94 | ] 95 | }, 96 | { 97 | "cell_type": "code", 98 | "execution_count": null, 99 | "metadata": { 100 | "collapsed": false, 101 | "deletable": true, 102 | "editable": true 103 | }, 104 | "outputs": [ 105 | { 106 | "name": "stdout", 107 | "output_type": "stream", 108 | "text": [ 109 | "170301-21:57:47,860 workflow INFO:\n", 110 | "\t Executing node square in dir: /tmp/tmpZKVt49/square\n", 111 | "170301-21:57:47,870 workflow INFO:\n", 112 | "\t Executing node _square0 in dir: /tmp/tmpZKVt49/square/mapflow/_square0\n", 113 | "170301-21:57:47,882 workflow INFO:\n", 114 | "\t Runtime memory and threads stats unavailable\n", 115 | "170301-21:57:47,887 workflow INFO:\n", 116 | "\t Executing node _square1 in dir: /tmp/tmpZKVt49/square/mapflow/_square1\n", 117 | "170301-21:57:47,906 workflow INFO:\n", 118 | "\t Runtime memory and threads stats unavailable\n", 119 | "170301-21:57:47,911 workflow INFO:\n", 120 | "\t Executing node _square2 in dir: /tmp/tmpZKVt49/square/mapflow/_square2\n", 121 | "170301-21:57:47,923 workflow INFO:\n", 122 | "\t Runtime memory and threads stats unavailable\n", 123 | "170301-21:57:47,926 workflow INFO:\n", 124 | "\t Executing node _square3 in dir: /tmp/tmpZKVt49/square/mapflow/_square3\n", 125 | "170301-21:57:47,936 workflow INFO:\n", 126 | "\t Runtime memory and threads stats unavailable\n" 127 | ] 128 | }, 129 | { 130 | "data": { 131 | "text/plain": [ 132 | "[0, 1, 4, 9]" 133 | ] 134 | }, 135 | "execution_count": null, 136 | "metadata": {}, 137 | "output_type": "execute_result" 138 | } 139 | ], 140 | "source": [ 141 | "square_node.inputs.x = range(4)\n", 142 | "square_node.run().outputs.f_x" 143 | ] 144 | }, 145 | { 146 | 
"cell_type": "markdown", 147 | "metadata": { 148 | "deletable": true, 149 | "editable": true 150 | }, 151 | "source": [ 152 | "Because `iterfield` can take a list of names, you can operate over multiple sets of data, as long as they're the same length. The values in each list will be paired; it does not compute a combinatoric product of the lists." 153 | ] 154 | }, 155 | { 156 | "cell_type": "code", 157 | "execution_count": null, 158 | "metadata": { 159 | "collapsed": true, 160 | "deletable": true, 161 | "editable": true 162 | }, 163 | "outputs": [], 164 | "source": [ 165 | "def power_func(x, y):\n", 166 | " return x ** y" 167 | ] 168 | }, 169 | { 170 | "cell_type": "code", 171 | "execution_count": null, 172 | "metadata": { 173 | "collapsed": false, 174 | "deletable": true, 175 | "editable": true 176 | }, 177 | "outputs": [ 178 | { 179 | "name": "stdout", 180 | "output_type": "stream", 181 | "text": [ 182 | "170301-21:57:47,970 workflow INFO:\n", 183 | "\t Executing node power in dir: /tmp/tmpJPLPn8/power\n", 184 | "170301-21:57:47,985 workflow INFO:\n", 185 | "\t Executing node _power0 in dir: /tmp/tmpJPLPn8/power/mapflow/_power0\n", 186 | "170301-21:57:47,999 workflow INFO:\n", 187 | "\t Runtime memory and threads stats unavailable\n", 188 | "170301-21:57:48,2 workflow INFO:\n", 189 | "\t Executing node _power1 in dir: /tmp/tmpJPLPn8/power/mapflow/_power1\n", 190 | "170301-21:57:48,15 workflow INFO:\n", 191 | "\t Runtime memory and threads stats unavailable\n", 192 | "170301-21:57:48,20 workflow INFO:\n", 193 | "\t Executing node _power2 in dir: /tmp/tmpJPLPn8/power/mapflow/_power2\n", 194 | "170301-21:57:48,34 workflow INFO:\n", 195 | "\t Runtime memory and threads stats unavailable\n", 196 | "170301-21:57:48,43 workflow INFO:\n", 197 | "\t Executing node _power3 in dir: /tmp/tmpJPLPn8/power/mapflow/_power3\n", 198 | "170301-21:57:48,59 workflow INFO:\n", 199 | "\t Runtime memory and threads stats unavailable\n", 200 | "[1, 1, 4, 27]\n" 201 | ] 202 | } 203 | ], 
204 | "source": [ 205 | "power = Function([\"x\", \"y\"], [\"f_xy\"], power_func)\n", 206 | "power_node = MapNode(power, name=\"power\", iterfield=[\"x\", \"y\"])\n", 207 | "power_node.inputs.x = range(4)\n", 208 | "power_node.inputs.y = range(4)\n", 209 | "print(power_node.run().outputs.f_xy)" 210 | ] 211 | }, 212 | { 213 | "cell_type": "markdown", 214 | "metadata": { 215 | "deletable": true, 216 | "editable": true 217 | }, 218 | "source": [ 219 | "But not every input needs to be an iterfield." 220 | ] 221 | }, 222 | { 223 | "cell_type": "code", 224 | "execution_count": null, 225 | "metadata": { 226 | "collapsed": false, 227 | "deletable": true, 228 | "editable": true 229 | }, 230 | "outputs": [ 231 | { 232 | "name": "stdout", 233 | "output_type": "stream", 234 | "text": [ 235 | "170301-21:57:48,84 workflow INFO:\n", 236 | "\t Executing node power in dir: /tmp/tmphJmuk0/power\n", 237 | "170301-21:57:48,96 workflow INFO:\n", 238 | "\t Executing node _power0 in dir: /tmp/tmphJmuk0/power/mapflow/_power0\n", 239 | "170301-21:57:48,112 workflow INFO:\n", 240 | "\t Runtime memory and threads stats unavailable\n", 241 | "170301-21:57:48,117 workflow INFO:\n", 242 | "\t Executing node _power1 in dir: /tmp/tmphJmuk0/power/mapflow/_power1\n", 243 | "170301-21:57:48,131 workflow INFO:\n", 244 | "\t Runtime memory and threads stats unavailable\n", 245 | "170301-21:57:48,135 workflow INFO:\n", 246 | "\t Executing node _power2 in dir: /tmp/tmphJmuk0/power/mapflow/_power2\n", 247 | "170301-21:57:48,150 workflow INFO:\n", 248 | "\t Runtime memory and threads stats unavailable\n", 249 | "170301-21:57:48,159 workflow INFO:\n", 250 | "\t Executing node _power3 in dir: /tmp/tmphJmuk0/power/mapflow/_power3\n", 251 | "170301-21:57:48,176 workflow INFO:\n", 252 | "\t Runtime memory and threads stats unavailable\n", 253 | "[0, 1, 8, 27]\n" 254 | ] 255 | } 256 | ], 257 | "source": [ 258 | "power_node = MapNode(power, name=\"power\", iterfield=[\"x\"])\n", 259 | "power_node.inputs.x = 
range(4)\n", 260 | "power_node.inputs.y = 3\n", 261 | "print(power_node.run().outputs.f_xy)" 262 | ] 263 | }, 264 | { 265 | "cell_type": "markdown", 266 | "metadata": { 267 | "collapsed": true, 268 | "deletable": true, 269 | "editable": true 270 | }, 271 | "source": [ 272 | "As in the case of `iterables`, each underlying `MapNode` execution can happen in **parallel**. Hopefully, you see how these tools allow you to write flexible, reusable workflows that will help you process large amounts of data efficiently and reproducibly." 273 | ] 274 | }, 275 | { 276 | "cell_type": "markdown", 277 | "metadata": { 278 | "deletable": true, 279 | "editable": true 280 | }, 281 | "source": [ 282 | "# Why is this important?\n", 283 | "\n", 284 | "Let's assume we have multiple functional images (A) and each of them should be motion corrected (B1, B2, B3, ...). Afterwards, we want to put them all together into a GLM, i.e. the input for the GLM should be an array of [B1, B2, B3, ...]. [Iterables](basic_iteration.ipynb) can't do that. They would split up the pipeline. Therefore, we need **MapNodes**.\n", 285 | "\n", 286 | "\n", 287 | "\n", 288 | "Let's look at a simple example, where we want to motion correct two functional images. 
For this we need two nodes:\n", 289 | " - Gunzip, to unzip the files (plural)\n", 290 | " - Realign, to do the motion correction" 291 | ] 292 | }, 293 | { 294 | "cell_type": "code", 295 | "execution_count": null, 296 | "metadata": { 297 | "collapsed": true, 298 | "deletable": true, 299 | "editable": true 300 | }, 301 | "outputs": [], 302 | "source": [ 303 | "from nipype.algorithms.misc import Gunzip\n", 304 | "from nipype.interfaces.spm import Realign\n", 305 | "from nipype.pipeline.engine import Node, MapNode, Workflow\n", 306 | "\n", 307 | "files = ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz',\n", 308 | " '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz']\n", 309 | "\n", 310 | "realign = Node(Realign(register_to_mean=True),\n", 311 | " name='motion_correction')" 312 | ] 313 | }, 314 | { 315 | "cell_type": "markdown", 316 | "metadata": { 317 | "deletable": true, 318 | "editable": true 319 | }, 320 | "source": [ 321 | "If we try to specify the input for the **Gunzip** node with a simple **Node**, we get the following error:" 322 | ] 323 | }, 324 | { 325 | "cell_type": "code", 326 | "execution_count": null, 327 | "metadata": { 328 | "collapsed": false, 329 | "deletable": true, 330 | "editable": true 331 | }, 332 | "outputs": [ 333 | { 334 | "ename": "TraitError", 335 | "evalue": "The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz'] was specified.", 336 | "output_type": "error", 337 | "traceback": [ 338 | "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", 339 | "\u001b[0;31mTraitError\u001b[0m Traceback (most recent call last)", 340 | "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0mgunzip\u001b[0m \u001b[0;34m=\u001b[0m 
\u001b[0mNode\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mGunzip\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'gunzip'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mgunzip\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0minputs\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0min_file\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mfiles\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", 341 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/nipype/interfaces/traits_extension.pyc\u001b[0m in \u001b[0;36mvalidate\u001b[0;34m(self, object, name, value)\u001b[0m\n\u001b[1;32m 72\u001b[0m \u001b[0mNote\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mThe\u001b[0m \u001b[0;34m'fast validator'\u001b[0m \u001b[0mversion\u001b[0m \u001b[0mperforms\u001b[0m \u001b[0mthis\u001b[0m \u001b[0mcheck\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mC\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 73\u001b[0m \"\"\"\n\u001b[0;32m---> 74\u001b[0;31m \u001b[0mvalidated_value\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0msuper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mBaseFile\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalidate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mobject\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 75\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexists\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 76\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mvalidated_value\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 342 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/traits/trait_types.pyc\u001b[0m in 
\u001b[0;36mvalidate\u001b[0;34m(self, object, name, value)\u001b[0m\n\u001b[1;32m 347\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mvalue\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 348\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 349\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0merror\u001b[0m\u001b[0;34m(\u001b[0m \u001b[0mobject\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalue\u001b[0m \u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 350\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 351\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mcreate_editor\u001b[0m \u001b[0;34m(\u001b[0m \u001b[0mself\u001b[0m \u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 343 | "\u001b[0;32m/opt/conda/envs/python2/lib/python2.7/site-packages/traits/trait_handlers.pyc\u001b[0m in \u001b[0;36merror\u001b[0;34m(self, object, name, value)\u001b[0m\n\u001b[1;32m 170\u001b[0m \"\"\"\n\u001b[1;32m 171\u001b[0m raise TraitError( object, name, self.full_info( object, name, value ),\n\u001b[0;32m--> 172\u001b[0;31m value )\n\u001b[0m\u001b[1;32m 173\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 174\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mfull_info\u001b[0m \u001b[0;34m(\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mobject\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mvalue\u001b[0m \u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", 344 | "\u001b[0;31mTraitError\u001b[0m: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz'] was specified." 
345 | ] 346 | } 347 | ], 348 | "source": [ 349 | "gunzip = Node(Gunzip(), name='gunzip',)\n", 350 | "gunzip.inputs.in_file = files" 351 | ] 352 | }, 353 | { 354 | "cell_type": "markdown", 355 | "metadata": { 356 | "deletable": true, 357 | "editable": true 358 | }, 359 | "source": [ 360 | "```bash\n", 361 | "TraitError: The 'in_file' trait of a GunzipInputSpec instance must be an existing file name, but a value of ['/data/ds102/sub-01/func/sub-01_task-flanker_run-1_bold.nii.gz', '/data/ds102/sub-01/func/sub-01_task-flanker_run-2_bold.nii.gz'] was specified.\n", 362 | "```" 363 | ] 364 | }, 365 | { 366 | "cell_type": "markdown", 367 | "metadata": { 368 | "deletable": true, 369 | "editable": true 370 | }, 371 | "source": [ 372 | "But if we do it with a **MapNode**, it works:" 373 | ] 374 | }, 375 | { 376 | "cell_type": "code", 377 | "execution_count": null, 378 | "metadata": { 379 | "collapsed": true, 380 | "deletable": true, 381 | "editable": true 382 | }, 383 | "outputs": [], 384 | "source": [ 385 | "gunzip = MapNode(Gunzip(), name='gunzip',\n", 386 | " iterfield=['in_file'])\n", 387 | "gunzip.inputs.in_file = files" 388 | ] 389 | }, 390 | { 391 | "cell_type": "markdown", 392 | "metadata": { 393 | "deletable": true, 394 | "editable": true 395 | }, 396 | "source": [ 397 | "Now, we just have to create a workflow, connect the nodes and we can run it:" 398 | ] 399 | }, 400 | { 401 | "cell_type": "code", 402 | "execution_count": null, 403 | "metadata": { 404 | "collapsed": false, 405 | "deletable": true, 406 | "editable": true 407 | }, 408 | "outputs": [], 409 | "source": [ 410 | "mcflow = Workflow(name='realign_with_spm')\n", 411 | "mcflow.connect(gunzip, 'out_file', realign, 'in_files')\n", 412 | "mcflow.base_dir = '/data'\n", 413 | "mcflow.run('MultiProc', plugin_args={'n_procs': 4})" 414 | ] 415 | } 416 | ], 417 | "metadata": { 418 | "anaconda-cloud": {}, 419 | "kernelspec": { 420 | "display_name": "Python [default]", 421 | "language": "python", 422 | "name": 
"python2" 423 | }, 424 | "language_info": { 425 | "codemirror_mode": { 426 | "name": "ipython", 427 | "version": 2 428 | }, 429 | "file_extension": ".py", 430 | "mimetype": "text/x-python", 431 | "name": "python", 432 | "nbconvert_exporter": "python", 433 | "pygments_lexer": "ipython2", 434 | "version": "2.7.13" 435 | } 436 | }, 437 | "nbformat": 4, 438 | "nbformat_minor": 0 439 | } 440 | -------------------------------------------------------------------------------- /basic_model_specification.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# Model Specification for 1st-Level fMRI Analysis\n", 8 | "\n", 9 | "Nipype also provides interfaces to create a first-level model for an fMRI analysis. Such a model is needed to specify the study-specific information, such as the **conditions**, their **onsets** and their **durations**. For more information, make sure to check out [Model Specification](http://nipype.readthedocs.io/en/latest/users/model_specification.html) and [nipype.algorithms.modelgen](http://nipype.readthedocs.io/en/latest/interfaces/generated/nipype.algorithms.modelgen.html)." 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": {}, 15 | "source": [ 16 | "## Simple Example\n", 17 | "\n", 18 | "Let's consider a simple experiment with three different stimuli, such as ``'faces'``, ``'houses'`` and ``'scrambled pix'``. Each of those three conditions has different stimulus onsets, but all of them have a stimulus presentation duration of 3 seconds.\n", 19 | "\n", 20 | "So to summarize:\n", 21 | "\n", 22 | "    conditions = ['faces', 'houses', 'scrambled pix']\n", 23 | "    onsets = [[0, 30, 60, 90],\n", 24 | "              [10, 40, 70, 100],\n", 25 | "              [20, 50, 80, 110]]\n", 26 | "    durations = [[3], [3], [3]]\n", 27 | "    \n", 28 | "The way we would create this model with Nipype is almost as simple as that. 
The only step that is missing is to put this all into a ``Bunch`` object. This can be done as follows:" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": null, 34 | "metadata": { 35 | "collapsed": false 36 | }, 37 | "outputs": [], 38 | "source": [ 39 | "from nipype.interfaces.base import Bunch\n", 40 | "\n", 41 | "conditions = ['faces', 'houses', 'scrambled pix']\n", 42 | "onsets = [[0, 30, 60, 90],\n", 43 | "          [10, 40, 70, 100],\n", 44 | "          [20, 50, 80, 110]]\n", 45 | "durations = [[3], [3], [3]]\n", 46 | "\n", 47 | "subject_info = Bunch(conditions=conditions,\n", 48 | "                     onsets=onsets,\n", 49 | "                     durations=durations)" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "It's also possible to specify additional regressors. For this you need to additionally specify:\n", 57 | "\n", 58 | "- **``regressors``**: list of regressors that you want to include in the model (must correspond to the number of volumes in the functional run)\n", 59 | "- **``regressor_names``**: names of the regressors that you want to include" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": {}, 65 | "source": [ 66 | "## Example based on dataset\n", 67 | "\n", 68 | "Now for a more realistic example, let's look at a TSV file from our tutorial dataset." 
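To make the two regressor fields described above concrete, here is a sketch of what such a ``Bunch`` could carry. ``types.SimpleNamespace`` stands in for nipype's ``Bunch`` so the snippet runs without nipype installed; the regressor names and values are invented for a hypothetical run with five volumes:

```python
from types import SimpleNamespace

# SimpleNamespace stands in for nipype.interfaces.base.Bunch here.
# The regressor names/values are illustrative only: each regressor must
# supply one value per volume of the functional run (here, 5 volumes).
subject_info = SimpleNamespace(
    conditions=['faces', 'houses', 'scrambled pix'],
    onsets=[[0, 30, 60, 90], [10, 40, 70, 100], [20, 50, 80, 110]],
    durations=[[3], [3], [3]],
    regressor_names=['linear_drift', 'mean_csf'],
    regressors=[[0.0, 0.25, 0.5, 0.75, 1.0],
                [0.31, 0.28, 0.30, 0.29, 0.33]],
)

# Sanity checks worth running before handing this to model specification:
assert len(subject_info.onsets) == len(subject_info.conditions)
assert len(subject_info.regressors) == len(subject_info.regressor_names)
assert len({len(r) for r in subject_info.regressors}) == 1  # one value per volume
print(subject_info.regressor_names)
```

With nipype available, the same keyword arguments would go straight into ``Bunch(...)``.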
69 | ] 70 | }, 71 | { 72 | "cell_type": "code", 73 | "execution_count": null, 74 | "metadata": { 75 | "collapsed": false, 76 | "deletable": true, 77 | "editable": true 78 | }, 79 | "outputs": [ 80 | { 81 | "name": "stdout", 82 | "output_type": "stream", 83 | "text": [ 84 | "onset\tduration\ttrial_type\tresponse_time\tcorrectness\tStimVar\tRsponse\tStimulus\tcond\r\n", 85 | "0.0\t2.0\tincongruent_correct\t1.095\tcorrect\t2\t1\tincongruent\tcond003\r\n", 86 | "10.0\t2.0\tincongruent_correct\t0.988\tcorrect\t2\t1\tincongruent\tcond003\r\n", 87 | "20.0\t2.0\tcongruent_correct\t0.591\tcorrect\t1\t1\tcongruent\tcond001\r\n", 88 | "30.0\t2.0\tcongruent_correct\t0.499\tcorrect\t1\t1\tcongruent\tcond001\r\n", 89 | "40.0\t2.0\tincongruent_correct\t0.719\tcorrect\t2\t1\tincongruent\tcond003\r\n", 90 | "52.0\t2.0\tcongruent_correct\t0.544\tcorrect\t1\t1\tcongruent\tcond001\r\n", 91 | "64.0\t2.0\tcongruent_correct\t0.436\tcorrect\t1\t1\tcongruent\tcond001\r\n", 92 | "76.0\t2.0\tincongruent_correct\t0.47\tcorrect\t2\t1\tincongruent\tcond003\r\n", 93 | "88.0\t2.0\tcongruent_correct\t0.409\tcorrect\t1\t1\tcongruent\tcond001\r\n", 94 | "102.0\t2.0\tincongruent_correct\t0.563\tcorrect\t2\t1\tincongruent\tcond003\r\n", 95 | "116.0\t2.0\tcongruent_correct\t0.493\tcorrect\t1\t1\tcongruent\tcond001\r\n", 96 | "130.0\t2.0\tcongruent_correct\t0.398\tcorrect\t1\t1\tcongruent\tcond001\r\n", 97 | "140.0\t2.0\tcongruent_correct\t0.466\tcorrect\t1\t1\tcongruent\tcond001\r\n", 98 | "150.0\t2.0\tincongruent_correct\t0.518\tcorrect\t2\t1\tincongruent\tcond003\r\n", 99 | "164.0\t2.0\tincongruent_correct\t0.56\tcorrect\t2\t1\tincongruent\tcond003\r\n", 100 | "174.0\t2.0\tincongruent_correct\t0.533\tcorrect\t2\t1\tincongruent\tcond003\r\n", 101 | "184.0\t2.0\tcongruent_correct\t0.439\tcorrect\t1\t1\tcongruent\tcond001\r\n", 102 | "196.0\t2.0\tcongruent_correct\t0.458\tcorrect\t1\t1\tcongruent\tcond001\r\n", 103 | "208.0\t2.0\tincongruent_correct\t0.734\tcorrect\t2\t1\tincongruent\tcond003\r\n", 104 
| "220.0\t2.0\tincongruent_correct\t0.479\tcorrect\t2\t1\tincongruent\tcond003\r\n", 105 | "232.0\t2.0\tincongruent_correct\t0.538\tcorrect\t2\t1\tincongruent\tcond003\r\n", 106 | "246.0\t2.0\tcongruent_correct\t0.54\tcorrect\t1\t1\tcongruent\tcond001\r\n", 107 | "260.0\t2.0\tincongruent_correct\t0.622\tcorrect\t2\t1\tincongruent\tcond003\r\n", 108 | "274.0\t2.0\tcongruent_correct\t0.488\tcorrect\t1\t1\tcongruent\tcond001\r\n" 109 | ] 110 | } 111 | ], 112 | "source": [ 113 | "!cat /data/ds102/sub-01/func/sub-01_task-flanker_run-1_events.tsv" 114 | ] 115 | }, 116 | { 117 | "cell_type": "markdown", 118 | "metadata": {}, 119 | "source": [ 120 | "So, the only things we need to specify for our model are the onsets and the stimulus type, i.e. **column 0** and **column 5 or 7**. Those we can get with the command:" 121 | ] 122 | }, 123 | { 124 | "cell_type": "code", 125 | "execution_count": null, 126 | "metadata": { 127 | "collapsed": false 128 | }, 129 | "outputs": [], 130 | "source": [ 131 | "import numpy as np\n", 132 | "filename = '/data/ds102/sub-01/func/sub-01_task-flanker_run-1_events.tsv'\n", 133 | "trialinfo = np.genfromtxt(filename, delimiter='\\t', dtype=None, skip_header=1)\n", 134 | "trialinfo = [[t[0], t[7]] for t in trialinfo]\n", 135 | "trialinfo" 136 | ] 137 | }, 138 | { 139 | "cell_type": "markdown", 140 | "metadata": {}, 141 | "source": [ 142 | "Before we can use the onsets, we first need to split them into the two conditions:" 143 | ] 144 | }, 145 | { 146 | "cell_type": "code", 147 | "execution_count": null, 148 | "metadata": { 149 | "collapsed": true 150 | }, 151 | "outputs": [], 152 | "source": [ 153 | "onset1 = []\n", 154 | "onset2 = []\n", 155 | "\n", 156 | "for t in trialinfo:\n", 157 | "    if 'incongruent' in t[1]:\n", 158 | "        onset2.append(t[0])\n", 159 | "    else:\n", 160 | "        onset1.append(t[0])\n", 161 | "\n", 162 | "print onset1\n", 163 | "print onset2" 164 | ] 165 | }, 166 | { 167 | "cell_type": "markdown", 168 | "metadata": {}, 169 | "source": [ 
170 | "The last thing we now need to do is to put this into a ``Bunch`` object and we're done:" 171 | ] 172 | }, 173 | { 174 | "cell_type": "code", 175 | "execution_count": null, 176 | "metadata": { 177 | "collapsed": true 178 | }, 179 | "outputs": [], 180 | "source": [ 181 | "from nipype.interfaces.base import Bunch\n", 182 | "\n", 183 | "conditions = ['congruent', 'incongruent']\n", 184 | "onsets = [onset1, onset2]\n", 185 | "durations = [[2], [2]]\n", 186 | "\n", 187 | "subject_info = Bunch(conditions=conditions,\n", 188 | "                     onsets=onsets,\n", 189 | "                     durations=durations)" 190 | ] 191 | } 192 | ], 193 | "metadata": { 194 | "kernelspec": { 195 | "display_name": "Python [default]", 196 | "language": "python", 197 | "name": "python2" 198 | }, 199 | "language_info": { 200 | "codemirror_mode": { 201 | "name": "ipython", 202 | "version": 2 203 | }, 204 | "file_extension": ".py", 205 | "mimetype": "text/x-python", 206 | "name": "python", 207 | "nbconvert_exporter": "python", 208 | "pygments_lexer": "ipython2", 209 | "version": "2.7.13" 210 | } 211 | }, 212 | "nbformat": 4, 213 | "nbformat_minor": 2 214 | } 215 | -------------------------------------------------------------------------------- /basic_plugins.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true, 7 | "deletable": true, 8 | "editable": true 9 | }, 10 | "source": [ 11 | "# Execution Plugins\n", 12 | "\n", 13 | "As you learned in the [Workflow](basic_workflow.ipynb) tutorial, a workflow is executed with the ``run`` method. For example:\n", 14 | "\n", 15 | "    workflow.run()\n", 16 | "\n", 17 | "Whenever you execute a workflow like this, it will be executed in serial order. This means that no nodes will be executed in parallel, even if they are completely independent of each other. 
Now, while this might be preferable under certain circumstances, we usually want to execute workflows in parallel. For this, Nipype provides many different plugins." 18 | ] 19 | }, 20 | { 21 | "cell_type": "markdown", 22 | "metadata": { 23 | "deletable": true, 24 | "editable": true 25 | }, 26 | "source": [ 27 | "## Local execution\n", 28 | "\n", 29 | "### ``Linear`` Plugin\n", 30 | "\n", 31 | "If you want to run your workflow in a linear fashion, just use the following code:\n", 32 | "\n", 33 | "    workflow.run(plugin='Linear')" 34 | ] 35 | }, 36 | { 37 | "cell_type": "markdown", 38 | "metadata": { 39 | "deletable": true, 40 | "editable": true 41 | }, 42 | "source": [ 43 | "### ``MultiProc`` Plugin\n", 44 | "\n", 45 | "The easiest way to execute a workflow locally in parallel is the ``MultiProc`` plugin:\n", 46 | "\n", 47 | "    workflow.run(plugin='MultiProc', plugin_args={'n_procs': 4})\n", 48 | "\n", 49 | "The additional plugin argument ``n_procs`` specifies how many cores should be used for the parallel execution. In this case, it's 4.\n", 50 | "\n", 51 | "The `MultiProc` plugin uses the [multiprocessing](http://docs.python.org/library/multiprocessing.html) package in the standard library, and is the only parallel plugin that is guaranteed to work right out of the box." 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "metadata": { 57 | "deletable": true, 58 | "editable": true 59 | }, 60 | "source": [ 61 | "## Cluster execution\n", 62 | "\n", 63 | "There are many different plugins to run Nipype on a cluster, such as: ``PBS``, ``SGE``, ``LSF``, ``Condor`` and ``IPython``. 
Using them is as easy as using ``'MultiProc'``.\n", 64 | "\n", 65 | "    workflow.run('PBS', plugin_args={'qsub_args': '-q many'})\n", 66 | "    workflow.run('SGE', plugin_args={'qsub_args': '-q many'})\n", 67 | "    workflow.run('LSF', plugin_args={'qsub_args': '-q many'})\n", 68 | "    workflow.run('Condor')\n", 69 | "    workflow.run('IPython')\n", 70 | "    \n", 71 | "    workflow.run('PBSGraph', plugin_args={'qsub_args': '-q many'})\n", 72 | "    workflow.run('SGEGraph', plugin_args={'qsub_args': '-q many'})\n", 73 | "    workflow.run('CondorDAGMan')\n", 74 | "\n", 75 | "For a complete list and explanation of all supported plugins, see: http://nipype.readthedocs.io/en/latest/users/plugins.html" 76 | ] 77 | } 78 | ], 79 | "metadata": { 80 | "anaconda-cloud": {}, 81 | "kernelspec": { 82 | "display_name": "Python [default]", 83 | "language": "python", 84 | "name": "python2" 85 | }, 86 | "language_info": { 87 | "codemirror_mode": { 88 | "name": "ipython", 89 | "version": 2 90 | }, 91 | "file_extension": ".py", 92 | "mimetype": "text/x-python", 93 | "name": "python", 94 | "nbconvert_exporter": "python", 95 | "pygments_lexer": "ipython2", 96 | "version": "2.7.13" 97 | } 98 | }, 99 | "nbformat": 4, 100 | "nbformat_minor": 0 101 | } 102 | -------------------------------------------------------------------------------- /introduction_dataset.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true, 7 | "deletable": true, 8 | "editable": true 9 | }, 10 | "source": [ 11 | "
\n", 12 | "BRAIN IMAGING\n", 13 | "DATA STRUCTURE
" 14 | ] 15 | }, 16 | { 17 | "cell_type": "markdown", 18 | "metadata": { 19 | "deletable": true, 20 | "editable": true 21 | }, 22 | "source": [ 23 | "The dataset for this tutorial is structured according to the [Brain Imaging Data Structure (BIDS)](http://bids.neuroimaging.io/). BIDS is a simple and intuitive way to organize and describe your neuroimaging and behavioral data. Neuroimaging experiments result in complicated data that can be arranged in many different ways. So far there is no consensus on how to organize and share data obtained in neuroimaging experiments. BIDS tackles this problem by suggesting a new standard for the arrangement of neuroimaging datasets." 24 | ] 25 | }, 26 | { 27 | "cell_type": "markdown", 28 | "metadata": { 29 | "deletable": true, 30 | "editable": true 31 | }, 32 | "source": [ 33 | "The idea of BIDS is that the file and folder names follow a strict set of rules:\n", 34 | "\n", 35 | "![](../static/images/bids.png)\n" 36 | ] 37 | }, 38 | { 39 | "cell_type": "markdown", 40 | "metadata": { 41 | "deletable": true, 42 | "editable": true 43 | }, 44 | "source": [ 45 | "Using the same structure for all of your studies will allow you to easily reuse all of your scripts between studies. But additionally, it also has the advantage that sharing code with and using scripts from other researchers will be much easier." 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": { 51 | "deletable": true, 52 | "editable": true 53 | }, 54 | "source": [ 55 | "# Tutorial Dataset\n", 56 | "\n", 57 | "The dataset for this tutorial comes from [https://openfmri.org/](https://openfmri.org/), a website dedicated to the free and open sharing of raw magnetic resonance imaging (MRI) datasets. We already downloaded the dataset [ds102](https://openfmri.org/dataset/ds000102/) into the data folder at the current location. 
To reduce the size of the total dataset, only the first five subjects (`sub-01`, `sub-02`, `sub-03`, `sub-04`, and `sub-05`) were kept.\n", 58 | "\n", 59 | "So let's have a look at the tutorial dataset." 60 | ] 61 | }, 62 | { 63 | "cell_type": "code", 64 | "execution_count": null, 65 | "metadata": { 66 | "collapsed": false, 67 | "deletable": true, 68 | "editable": true 69 | }, 70 | "outputs": [ 71 | { 72 | "name": "stdout", 73 | "output_type": "stream", 74 | "text": [ 75 | "/data/ds102\r\n", 76 | "├── CHANGES\r\n", 77 | "├── dataset_description.json\r\n", 78 | "├── participants.tsv\r\n", 79 | "├── README\r\n", 80 | "├── sub-01\r\n", 81 | "│   ├── anat\r\n", 82 | "│   │   └── sub-01_T1w.nii.gz\r\n", 83 | "│   └── func\r\n", 84 | "│   ├── sub-01_task-flanker_run-1_bold.nii.gz\r\n", 85 | "│   ├── sub-01_task-flanker_run-1_events.tsv\r\n", 86 | "│   ├── sub-01_task-flanker_run-2_bold.nii.gz\r\n", 87 | "│   └── sub-01_task-flanker_run-2_events.tsv\r\n", 88 | "├── sub-02\r\n", 89 | "│   ├── anat\r\n", 90 | "│   │   └── sub-02_T1w.nii.gz\r\n", 91 | "│   └── func\r\n", 92 | "│   ├── sub-02_task-flanker_run-1_bold.nii.gz\r\n", 93 | "│   ├── sub-02_task-flanker_run-1_events.tsv\r\n", 94 | "│   ├── sub-02_task-flanker_run-2_bold.nii.gz\r\n", 95 | "│   └── sub-02_task-flanker_run-2_events.tsv\r\n", 96 | "├── sub-03\r\n", 97 | "│   ├── anat\r\n", 98 | "│   │   └── sub-03_T1w.nii.gz\r\n", 99 | "│   └── func\r\n", 100 | "│   ├── sub-03_task-flanker_run-1_bold.nii.gz\r\n", 101 | "│   ├── sub-03_task-flanker_run-1_events.tsv\r\n", 102 | "│   ├── sub-03_task-flanker_run-2_bold.nii.gz\r\n", 103 | "│   └── sub-03_task-flanker_run-2_events.tsv\r\n", 104 | "├── sub-04\r\n", 105 | "│   ├── anat\r\n", 106 | "│   │   └── sub-04_T1w.nii.gz\r\n", 107 | "│   └── func\r\n", 108 | "│   ├── sub-04_task-flanker_run-1_bold.nii.gz\r\n", 109 | "│   ├── sub-04_task-flanker_run-1_events.tsv\r\n", 110 | "│   ├── sub-04_task-flanker_run-2_bold.nii.gz\r\n", 111 | "│   └── 
sub-04_task-flanker_run-2_events.tsv\r\n", 112 | "├── sub-05\r\n", 113 | "│   ├── anat\r\n", 114 | "│   │   └── sub-05_T1w.nii.gz\r\n", 115 | "│   └── func\r\n", 116 | "│       ├── sub-05_task-flanker_run-1_bold.nii.gz\r\n", 117 | "│       ├── sub-05_task-flanker_run-1_events.tsv\r\n", 118 | "│       ├── sub-05_task-flanker_run-2_bold.nii.gz\r\n", 119 | "│       └── sub-05_task-flanker_run-2_events.tsv\r\n", 120 | "├── T1w.json\r\n", 121 | "└── task-flanker_bold.json\r\n", 122 | "\r\n", 123 | "15 directories, 31 files\r\n" 124 | ] 125 | } 126 | ], 127 | "source": [ 128 | "!tree /data/ds102" 129 | ] 130 | }, 131 | { 132 | "cell_type": "markdown", 133 | "metadata": { 134 | "deletable": true, 135 | "editable": true 136 | }, 137 | "source": [ 138 | "As you can see, we have five subjects, each with one anatomical T1w image and with two functional images, `run-1` and `run-2`. " 139 | ] 140 | }, 141 | { 142 | "cell_type": "markdown", 143 | "metadata": { 144 | "deletable": true, 145 | "editable": true 146 | }, 147 | "source": [ 148 | "# Behavioral Task\n", 149 | "\n", 150 | "Subjects from the ds102 dataset did the Flanker task in the scanner. They had to indicate with two buttons the direction of a central arrow in an array of 5 arrows. 
In **congruent** trials the flanking arrows pointed in the same direction as the central arrow (e.g., < < < < <), while in more demanding **incongruent** trials the flanking arrows pointed in the opposite direction (e.g., < < > < <).\n", 151 | "\n", 152 | "For each of the functional images above, we therefore also have a tab-separated values (`tsv`) file, containing information such as stimulus onset, duration, type, etc.\n", 153 | "\n", 154 | "So let's have a look at one of them:" 155 | ] 156 | }, 157 | { 158 | "cell_type": "code", 159 | "execution_count": null, 160 | "metadata": { 161 | "collapsed": false, 162 | "deletable": true, 163 | "editable": true 164 | }, 165 | "outputs": [ 166 | { 167 | "name": "stdout", 168 | "output_type": "stream", 169 | "text": [ 170 | "onset duration trial_type response_time correctness StimVar Rsponse Stimulus cond\r\n", 171 | "0.0 2.0 incongruent_correct 1.095 correct 2 1 incongruent cond003\r\n", 172 | "10.0 2.0 incongruent_correct 0.988 correct 2 1 incongruent cond003\r\n", 173 | "20.0 2.0 congruent_correct 0.591 correct 1 1 congruent cond001\r\n", 174 | "30.0 2.0 congruent_correct 0.499 correct 1 1 congruent cond001\r\n", 175 | "40.0 2.0 incongruent_correct 0.719 correct 2 1 incongruent cond003\r\n", 176 | "52.0 2.0 congruent_correct 0.544 correct 1 1 congruent cond001\r\n", 177 | "64.0 2.0 congruent_correct 0.436 correct 1 1 congruent cond001\r\n", 178 | "76.0 2.0 incongruent_correct 0.47 correct 2 1 incongruent cond003\r\n", 179 | "88.0 2.0 congruent_correct 0.409 correct 1 1 congruent cond001\r\n", 180 | "102.0 2.0 incongruent_correct 0.563 correct 2 1 incongruent cond003\r\n", 181 | "116.0 2.0 congruent_correct 0.493 correct 1 1 congruent cond001\r\n", 182 | "130.0 2.0 congruent_correct 0.398 correct 1 1 congruent cond001\r\n", 183 | "140.0 2.0 congruent_correct 0.466 correct 1 1 congruent cond001\r\n", 184 | "150.0 2.0 incongruent_correct 0.518 correct 2 1 incongruent cond003\r\n", 185 | "164.0 2.0 incongruent_correct 0.56 
correct 2 1 incongruent cond003\r\n", 186 | "174.0 2.0 incongruent_correct 0.533 correct 2 1 incongruent cond003\r\n", 187 | "184.0 2.0 congruent_correct 0.439 correct 1 1 congruent cond001\r\n", 188 | "196.0 2.0 congruent_correct 0.458 correct 1 1 congruent cond001\r\n", 189 | "208.0 2.0 incongruent_correct 0.734 correct 2 1 incongruent cond003\r\n", 190 | "220.0 2.0 incongruent_correct 0.479 correct 2 1 incongruent cond003\r\n", 191 | "232.0 2.0 incongruent_correct 0.538 correct 2 1 incongruent cond003\r\n", 192 | "246.0 2.0 congruent_correct 0.54 correct 1 1 congruent cond001\r\n", 193 | "260.0 2.0 incongruent_correct 0.622 correct 2 1 incongruent cond003\r\n", 194 | "274.0 2.0 congruent_correct 0.488 correct 1 1 congruent cond001\r\n" 195 | ] 196 | } 197 | ], 198 | "source": [ 199 | "!cat /data/ds102/sub-01/func/sub-01_task-flanker_run-1_events.tsv" 200 | ] 201 | } 202 | ], 203 | "metadata": { 204 | "anaconda-cloud": {}, 205 | "kernelspec": { 206 | "display_name": "Python [default]", 207 | "language": "python", 208 | "name": "python2" 209 | }, 210 | "language_info": { 211 | "codemirror_mode": { 212 | "name": "ipython", 213 | "version": 2 214 | }, 215 | "file_extension": ".py", 216 | "mimetype": "text/x-python", 217 | "name": "python", 218 | "nbconvert_exporter": "python", 219 | "pygments_lexer": "ipython2", 220 | "version": "2.7.13" 221 | } 222 | }, 223 | "nbformat": 4, 224 | "nbformat_minor": 0 225 | } 226 | -------------------------------------------------------------------------------- /introduction_docker.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "
\n", 11 | "\n", 12 | "# Docker\n", 13 | "\n", 14 | "[Docker](https://www.docker.com) is an open-source project that automates the deployment of applications inside software containers. Those containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, system tools, software libraries, such as Python, FSL, AFNI, SPM, FreeSurfer, ANTs, etc. This guarantees that it will always run the same, regardless of the environment it is running in.\n", 15 | "\n", 16 | "Important: **You don't need Docker to run Nipype on your system**. For Mac and Linux users, it probably is much simpler to install Nipype directly on your system. For more information on how to do this see the [Installation Section](resources_installation.ipynb) of this tutorial. But for Windows user, or users that don't want to setup all the dependencies themselves, Docker is the way to go." 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "# Docker Image for the interactive Nipype Tutorial\n", 24 | "\n", 25 | "If you want to run this Nipype Tutorial with the example dataset locally on your own system, you need to use the docker image, provided under [miykael/nipype_course](https://github.com/miykael/nipype_course). This docker image sets up a Linux environment on your system, with functioning Python, Nipype, FSL, AFNI, ANTs and SPM12 software package, some example data and all the tutorial notebooks to learn Nipype." 26 | ] 27 | }, 28 | { 29 | "cell_type": "markdown", 30 | "metadata": {}, 31 | "source": [ 32 | "# Install Docker\n", 33 | "\n", 34 | "Before you can do anything, you first need to install [Docker](https://www.docker.com) on your system. The installation process differes per system. 
Luckily, the docker homepage has nice instructions for...\n", 35 | "\n", 36 | " - [Ubuntu](https://docs.docker.com/engine/installation/linux/ubuntu/) or [Debian](https://docs.docker.com/engine/installation/linux/debian/)\n", 37 | " - [Windows 7/8/10](https://docs.docker.com/toolbox/toolbox_install_windows/) or [Windows 10 Pro](https://docs.docker.com/docker-for-windows/install/)\n", 38 | " - [OS X (from El Capitan 10.11 on)](https://docs.docker.com/docker-for-mac/install/) or [OS X (before El Capitan 10.11)](https://docs.docker.com/toolbox/toolbox_install_mac/).\n", 39 | "\n", 40 | "Once Docker is installed, open up the docker terminal and test that it works with the command:\n", 41 | "\n", 42 | " docker run hello-world\n", 43 | "\n", 44 | "**Note:** Mac and Linux users might need to use ``sudo`` to run ``docker`` commands." 45 | ] 46 | }, 47 | { 48 | "cell_type": "markdown", 49 | "metadata": {}, 50 | "source": [ 51 | "# How to run ``miykael/nipype_course``\n", 52 | "\n", 53 | "After installing Docker on your system and making sure that the ``hello-world`` example runs, we are ready to start the Nipype Course image. The exact invocation is a bit different for Windows users, but the general command looks as follows:\n", 54 | "\n", 55 | " docker run -ti --rm -p 8888:8888 -v /home/username/results:/output miykael/nipype_course\n", 56 | "\n", 57 | "But what do those flags mean?\n", 58 | "\n", 59 | "- The ``-ti`` flag tells docker that it should open an interactive container instance.\n", 60 | "- The ``--rm`` flag tells docker that the container should automatically be removed after we close docker.\n", 61 | "- The ``-p`` flag specifies which port we want to make available for docker.\n", 62 | "- The ``-v`` flag tells docker which folder (here: ``/home/username/results``) it should mount to make it accessible inside the container. 
The second part of the ``-v`` flag (here: ``/output``) specifies under which path the mounted folder can be found inside the container. This means that we can use the folder ``/output`` inside the tutorial to save data outside the docker container under ``/home/username/results``. **Important**: To use the ``results`` folder, you first need to create it on your system!\n", 63 | "- The last argument ``miykael/nipype_course`` tells docker that we want to run this docker image.\n", 64 | "\n", 65 | "To run a docker image, docker will look for the specified image on [Docker Hub](https://hub.docker.com/r/miykael/nipype_course/). If the docker image was already downloaded to your system, it will be opened directly. Otherwise, it first needs to download the image, which might take some time. \n", 66 | "\n", 67 | "\n", 68 | "## Run a docker image on Linux or Mac\n", 69 | "\n", 70 | "Running a docker image on Linux or Mac OS is very simple. Make sure that you've created a results folder on your system (e.g. ``mkdir -p /home/username/results``). Then just open a new terminal and use the command from above:\n", 71 | "\n", 72 | " docker run -ti --rm -p 8888:8888 -v /home/username/results:/output miykael/nipype_course\n", 73 | "\n", 74 | "Once the docker image is downloaded, open the URL shown in your browser and you are good to go. The URL will look something like:\n", 75 | "\n", 76 | " http://localhost:8888/?token=0312c1ef3b61d7a44ff5346d3d150c23249a548850e13868\n", 77 | "\n", 78 | "\n", 79 | "## Run a docker image on Windows\n", 80 | "\n", 81 | "Running a docker image on Windows is a bit trickier than on Ubuntu. Assuming you've installed the DockerToolbox, open the Docker Quickstart Terminal (encircled in red).\n", 82 | "\n", 83 | "
\n", 84 | "\n", 85 | "Once the docker terminal is ready (when you see the whale), we can execute the following steps (see also figure):\n", 86 | "\n", 87 | "1. We need to check the IP adress of your docker machine. For this, use the command: \n", 88 | "\n", 89 | " ``docker-machine ip``\n", 90 | "\n", 91 | " In my case, this returned ``192.168.99.100``\n", 92 | "\n", 93 | "2. If you haven't already created a new folder to store your container output into, do so. You can create the folder either in the explorer as usual or do it with the command ``mkdir -p`` in the docker console. For example like this:\n", 94 | "\n", 95 | " ``mkdir -p /c/Users/username/results``\n", 96 | "\n", 97 | " Please replace ``username`` with the name of the current user on your system. **Pay attention** that the folder paths in the docker terminal are not backslash (``\\``) as we usually have in Windows. Also, ``C:\\`` needs to be specified as ``/c/``.\n", 98 | "\n", 99 | "3. Now, we can open run the container with the command from above:\n", 100 | "\n", 101 | " ``docker run -ti --rm -p 8888:8888 -v /c/Users/username/outputs:/output miykael/nipype_course``\n", 102 | "\n", 103 | "4. Once the docker image is downloaded, it will show you an URL that looks something like this:\n", 104 | "\n", 105 | " ``http://localhost:8888/?token=0312c1ef3b61d7a44ff5346d3d150c23249a548850e13868``\n", 106 | " \n", 107 | " This URL will not work on a Windows system. To make it work, you need to replace the string ``localhost`` with the IP address of your docker machine, that we acquired under step 1. Afterwards, your URL should look something like this:\n", 108 | "\n", 109 | " ``http://192.168.99.100:8888/?token=0312c1ef3b61d7a44ff5346d3d150c23249a548850e13868``\n", 110 | "\n", 111 | " Copy this link into your webbrowser and you're good to go!" 
112 | ] 113 | }, 114 | { 115 | "cell_type": "markdown", 116 | "metadata": {}, 117 | "source": [ 118 | "# Docker tips and tricks\n", 119 | "\n", 120 | "\n", 121 | "## Access Docker Container with ``bash`` or ``ipython``\n", 122 | "\n", 123 | "You don't have to open a jupyter notebook when you run ``miykael/nipype_course``. You can also access the docker container directly with ``bash`` or ``ipython`` by adding it to the end of your command, i.e.:\n", 124 | "\n", 125 | " docker run -ti --rm -v /home/username/results:/output miykael/nipype_course bash\n", 126 | "\n", 127 | "This also works with other software commands, such as ``bet`` etc.\n", 128 | "\n", 129 | "\n", 130 | "## Stop Docker Container\n", 131 | "\n", 132 | "To stop a running docker container, either close the docker terminal or select the terminal and use the ``Ctrl-C`` shortcut multiple times.\n", 133 | "\n", 134 | "\n", 135 | "## List all installed docker images\n", 136 | "\n", 137 | "To see a list of all installed docker images use:\n", 138 | "\n", 139 | " docker images\n", 140 | "\n", 141 | "\n", 142 | "## Delete a specific docker image\n", 143 | "\n", 144 | "To delete a specific docker image, first use the ``docker images`` command to list all installed images and then use the ``IMAGE ID`` with the ``rmi`` instruction to delete the image:\n", 145 | "\n", 146 | " docker rmi -f 7d9495d03763\n", 147 | "\n", 148 | "\n", 149 | "## Export and Import a docker image\n", 150 | "\n", 151 | "If you don't want to depend on an internet connection, you can also export an already downloaded docker image and then later import it on another PC. 
To do so, use the following two commands:\n", 152 | "\n", 153 | "\n", 154 | " # Export docker image miykael/nipype_course\n", 155 | " docker save -o nipype_course.tar miykael/nipype_course\n", 156 | "\n", 157 | " # Import docker image on another PC\n", 158 | " docker load --input nipype_course.tar\n", 159 | " \n", 160 | "You might run into administrator privilege issues because you ran your docker command with ``sudo``. This means that other users don't have access rights to ``nipype_course.tar``. To avoid this, just change the rights of ``nipype_course.tar`` with the command:\n", 161 | "\n", 162 | " sudo chmod 777 nipype_course.tar" 163 | ] 164 | } 165 | ], 166 | "metadata": { 167 | "anaconda-cloud": {}, 168 | "kernelspec": { 169 | "display_name": "Python [default]", 170 | "language": "python", 171 | "name": "python2" 172 | }, 173 | "language_info": { 174 | "codemirror_mode": { 175 | "name": "ipython", 176 | "version": 2 177 | }, 178 | "file_extension": ".py", 179 | "mimetype": "text/x-python", 180 | "name": "python", 181 | "nbconvert_exporter": "python", 182 | "pygments_lexer": "ipython2", 183 | "version": "2.7.13" 184 | } 185 | }, 186 | "nbformat": 4, 187 | "nbformat_minor": 0 188 | } 189 | -------------------------------------------------------------------------------- /introduction_jupyter-notebook.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "
\n", 11 | "\n", 12 | "# Jupyter Notebook\n", 13 | "\n", 14 | "This notebook was adapted from https://github.com/oesteban/biss2016 and is originally based on https://github.com/jvns/pandas-cookbook.\n", 15 | "\n", 16 | "[Jupyter Notebook](http://jupyter.org/) started as a web application, based on [IPython](https://ipython.org/) that can run Python code directly in the webbrowser. Now, Jupyter Notebook can handle over 40 programming languages and is *the* interactive, open source web application to run any scientific code." 17 | ] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": { 22 | "deletable": true, 23 | "editable": true 24 | }, 25 | "source": [ 26 | "## How to run a cell\n", 27 | "\n", 28 | "First, we need to explain how to run cells. Try to run the cell below!" 29 | ] 30 | }, 31 | { 32 | "cell_type": "code", 33 | "execution_count": null, 34 | "metadata": { 35 | "collapsed": false, 36 | "deletable": true, 37 | "editable": true 38 | }, 39 | "outputs": [ 40 | { 41 | "name": "stdout", 42 | "output_type": "stream", 43 | "text": [ 44 | "Hi! This is a cell. Click on it and press the ▶ button above to run it\n" 45 | ] 46 | } 47 | ], 48 | "source": [ 49 | "import pandas as pd\n", 50 | "\n", 51 | "print \"Hi! This is a cell. Click on it and press the ▶ button above to run it\"" 52 | ] 53 | }, 54 | { 55 | "cell_type": "markdown", 56 | "metadata": { 57 | "deletable": true, 58 | "editable": true 59 | }, 60 | "source": [ 61 | "You can also run a cell with `Ctrl+Enter` or `Shift+Enter`. Experiment a bit with that." 62 | ] 63 | }, 64 | { 65 | "cell_type": "markdown", 66 | "metadata": { 67 | "deletable": true, 68 | "editable": true 69 | }, 70 | "source": [ 71 | "## Tab Completion" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": { 77 | "deletable": true, 78 | "editable": true 79 | }, 80 | "source": [ 81 | "One of the most useful things about Jupyter Notebook is its tab completion. 
\n", 82 | "\n", 83 | "Try this: click just after `read_csv(` in the cell below and press `Shift+Tab` 4 times, slowly" 84 | ] 85 | }, 86 | { 87 | "cell_type": "code", 88 | "execution_count": null, 89 | "metadata": { 90 | "collapsed": false, 91 | "deletable": true, 92 | "editable": true 93 | }, 94 | "outputs": [], 95 | "source": [ 96 | "pd.read_csv(" 97 | ] 98 | }, 99 | { 100 | "cell_type": "markdown", 101 | "metadata": { 102 | "deletable": true, 103 | "editable": true 104 | }, 105 | "source": [ 106 | "After the first time, you should see this:\n", 107 | "\n", 108 | "![](../static/images/jupyter_tab-once.png)\n", 109 | "\n", 110 | "After the second time:\n", 111 | "![](../static/images/jupyter_tab-twice.png)\n", 112 | "\n", 113 | "After the fourth time, a big help box should pop up at the bottom of the screen, with the full documentation for the `read_csv` function:\n", 114 | "![](../static/images/jupyter_tab-4-times.png)\n", 115 | "\n", 116 | "I find this amazingly useful. I think of this as \"the more confused I am, the more times I should press `Shift+Tab`\".\n", 117 | "\n", 118 | "Okay, let's try tab completion for function names!" 119 | ] 120 | }, 121 | { 122 | "cell_type": "code", 123 | "execution_count": null, 124 | "metadata": { 125 | "collapsed": false, 126 | "deletable": true, 127 | "editable": true 128 | }, 129 | "outputs": [], 130 | "source": [ 131 | "pd.r" 132 | ] 133 | }, 134 | { 135 | "cell_type": "markdown", 136 | "metadata": { 137 | "deletable": true, 138 | "editable": true 139 | }, 140 | "source": [ 141 | "You should see this:\n", 142 | "\n", 143 | "![](../static/images/jupyter_function-completion.png)" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": { 149 | "deletable": true, 150 | "editable": true 151 | }, 152 | "source": [ 153 | "## Get Help\n", 154 | "\n", 155 | "There's an additional way on how you can reach the help box shown above after the fourth `Shift+Tab` press. 
Instead, you can also use `obj?` or `obj??` to get help, or even more help, for an object." 156 | ] 157 | }, 158 | { 159 | "cell_type": "code", 160 | "execution_count": null, 161 | "metadata": { 162 | "collapsed": true, 163 | "deletable": true, 164 | "editable": true 165 | }, 166 | "outputs": [], 167 | "source": [ 168 | "pd.read_csv?" 169 | ] 170 | }, 171 | { 172 | "cell_type": "markdown", 173 | "metadata": { 174 | "deletable": true, 175 | "editable": true 176 | }, 177 | "source": [ 178 | "## Writing code\n", 179 | "\n", 180 | "Writing code in the notebook is pretty normal." 181 | ] 182 | }, 183 | { 184 | "cell_type": "code", 185 | "execution_count": null, 186 | "metadata": { 187 | "collapsed": false, 188 | "deletable": true, 189 | "editable": true 190 | }, 191 | "outputs": [], 192 | "source": [ 193 | "def print_10_nums():\n", 194 | " for i in range(10):\n", 195 | " print(i), " 196 | ] 197 | }, 198 | { 199 | "cell_type": "code", 200 | "execution_count": null, 201 | "metadata": { 202 | "collapsed": false, 203 | "deletable": true, 204 | "editable": true, 205 | "scrolled": true 206 | }, 207 | "outputs": [ 208 | { 209 | "name": "stdout", 210 | "output_type": "stream", 211 | "text": [ 212 | "0 1 2 3 4 5 6 7 8 9\n" 213 | ] 214 | } 215 | ], 216 | "source": [ 217 | "print_10_nums()" 218 | ] 219 | }, 220 | { 221 | "cell_type": "markdown", 222 | "metadata": { 223 | "deletable": true, 224 | "editable": true 225 | }, 226 | "source": [ 227 | "If you messed something up and want to revert to an older version of the code in a cell, use `Ctrl+Z` to undo and `Ctrl+Y` to redo.\n", 228 | "\n", 229 | "For a full list of all keyboard shortcuts, click on the small keyboard icon in the notebook header or click on `Help > Keyboard Shortcuts`." 
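The `print(i),` construction in the cell above relies on Python 2 print-statement semantics (the trailing comma suppresses the newline). A version of the same function that works under both Python 2.7 and Python 3 would be:

```python
# Prints "0 1 2 3 4 5 6 7 8 9" on one line, portable across Python 2.7 and 3.
from __future__ import print_function

def print_10_nums():
    for i in range(10):
        print(i, end=' ')  # keep the numbers on one line
    print()                # final newline

print_10_nums()
```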
230 | ] 231 | }, 232 | { 233 | "cell_type": "markdown", 234 | "metadata": { 235 | "deletable": true, 236 | "editable": true 237 | }, 238 | "source": [ 239 | "## Saving a Notebook\n", 240 | "\n", 241 | "Jupyter Notebooks autosave, so you don't have to worry about losing code too much. At the top of the page you can usually see the current save status:\n", 242 | "\n", 243 | "- Last Checkpoint: 2 minutes ago (unsaved changes)\n", 244 | "- Last Checkpoint: a few seconds ago (autosaved)\n", 245 | "\n", 246 | "If you want to save a notebook on purpose, either click on `File > Save and Checkpoint` or press `Ctrl+S`." 247 | ] 248 | }, 249 | { 250 | "cell_type": "markdown", 251 | "metadata": { 252 | "deletable": true, 253 | "editable": true 254 | }, 255 | "source": [ 256 | "## Magic functions" 257 | ] 258 | }, 259 | { 260 | "cell_type": "markdown", 261 | "metadata": { 262 | "deletable": true, 263 | "editable": true 264 | }, 265 | "source": [ 266 | "IPython has all kinds of magic functions. Magic functions are prefixed by % or %%, and typically take their arguments without parentheses, quotes or even commas for convenience. Line magics take a single % and cell magics are prefixed with two %%.\n", 267 | "\n", 268 | "Some useful magic functions are:\n", 269 | "\n", 270 | "Magic Name | Effect\n", 271 | "---------- | -------------------------------------------------------------\n", 272 | "%env | Get, set, or list environment variables\n", 273 | "%pdb | Control the automatic calling of the pdb interactive debugger\n", 274 | "%pylab | Load numpy and matplotlib to work interactively\n", 275 | "%%debug | Activates debugging mode in cell\n", 276 | "%%html | Render the cell as a block of HTML\n", 277 | "%%latex | Render the cell as a block of LaTeX\n", 278 | "%%sh | Run the cell contents as a shell script\n", 279 | "%%time | Time execution of a Python statement or expression\n", 280 | "\n", 281 | "You can run `%magic` to get a list of magic functions or `%quickref` for a reference sheet." 
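These magics are thin conveniences over plain Python. For instance, what `%time` reports can be approximated with the standard library (the exact numbers will of course differ per machine):

```python
import time
import timeit

# Rough equivalent of `%time result = sum([x for x in range(10**6)])`
start = time.time()
result = sum([x for x in range(10**6)])
print("Wall time: %.1f ms" % ((time.time() - start) * 1000.0))

# `timeit` repeats the statement for a more robust estimate,
# similar to what the `%timeit` magic does.
best = min(timeit.repeat("sum(range(10**4))", number=100, repeat=3))
print("best of 3: %.4f s per 100 loops" % best)
```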
282 | ] 283 | }, 284 | { 285 | "cell_type": "markdown", 286 | "metadata": { 287 | "deletable": true, 288 | "editable": true 289 | }, 290 | "source": [ 291 | "Example 1: Let's see how long a specific command takes with `%time` or `%%time`:" 292 | ] 293 | }, 294 | { 295 | "cell_type": "code", 296 | "execution_count": null, 297 | "metadata": { 298 | "collapsed": false, 299 | "deletable": true, 300 | "editable": true 301 | }, 302 | "outputs": [ 303 | { 304 | "name": "stdout", 305 | "output_type": "stream", 306 | "text": [ 307 | "CPU times: user 72 ms, sys: 24 ms, total: 96 ms\n", 308 | "Wall time: 94.9 ms\n" 309 | ] 310 | } 311 | ], 312 | "source": [ 313 | "%time result = sum([x for x in range(10**6)])" 314 | ] 315 | }, 316 | { 317 | "cell_type": "markdown", 318 | "metadata": { 319 | "deletable": true, 320 | "editable": true 321 | }, 322 | "source": [ 323 | "Example 2: Let's use `%%latex` to render a block of latex" 324 | ] 325 | }, 326 | { 327 | "cell_type": "code", 328 | "execution_count": null, 329 | "metadata": { 330 | "collapsed": false, 331 | "deletable": true, 332 | "editable": true 333 | }, 334 | "outputs": [ 335 | { 336 | "data": { 337 | "text/latex": [ 338 | "$$F(k) = \\int_{-\\infty}^{\\infty} f(x) e^{2\\pi i k} dx$$" 339 | ], 340 | "text/plain": [ 341 | "" 342 | ] 343 | }, 344 | "metadata": {}, 345 | "output_type": "display_data" 346 | } 347 | ], 348 | "source": [ 349 | "%%latex\n", 350 | "$$F(k) = \\int_{-\\infty}^{\\infty} f(x) e^{2\\pi i k} dx$$" 351 | ] 352 | } 353 | ], 354 | "metadata": { 355 | "anaconda-cloud": {}, 356 | "kernelspec": { 357 | "display_name": "Python [default]", 358 | "language": "python", 359 | "name": "python2" 360 | }, 361 | "language_info": { 362 | "codemirror_mode": { 363 | "name": "ipython", 364 | "version": 2 365 | }, 366 | "file_extension": ".py", 367 | "mimetype": "text/x-python", 368 | "name": "python", 369 | "nbconvert_exporter": "python", 370 | "pygments_lexer": "ipython2", 371 | "version": "2.7.13" 372 | } 373 | }, 374 | 
"nbformat": 4, 375 | "nbformat_minor": 0 376 | } 377 | -------------------------------------------------------------------------------- /introduction_nipype.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true, 8 | "slideshow": { 9 | "slide_type": "slide" 10 | } 11 | }, 12 | "source": [ 13 | "" 14 | ] 15 | }, 16 | { 17 | "cell_type": "markdown", 18 | "metadata": { 19 | "deletable": true, 20 | "editable": true, 21 | "slideshow": { 22 | "slide_type": "fragment" 23 | } 24 | }, 25 | "source": [ 26 | "# What is Nipype?\n", 27 | "\n", 28 | "- **[Nipype](http://nipype.readthedocs.io/en/latest/)** is an open-source, community-developed software package written in **Python**.\n", 29 | "- Provides unified way of **interfacing** with heterogeneous neuroimaging software like [SPM](http://www.fil.ion.ucl.ac.uk/spm/), [FSL](http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/), [FreeSurfer](http://surfer.nmr.mgh.harvard.edu/), [AFNI](https://afni.nimh.nih.gov/afni), [ANTS](http://stnava.github.io/ANTs/), [Camino](http://web4.cs.ucl.ac.uk/research/medic/camino/pmwiki/pmwiki.php), [MRtrix](http://www.brain.org.au/software/mrtrix/index.html), [MNE](https://martinos.org/mne/stable/index.html), [Slicer](https://www.slicer.org/) and many more.\n", 30 | "- Allows users to create **flexible, complex workflows** consisting of multiple processing steps using any software package above\n", 31 | "- Efficient and optimized computation through **parallel execution** plugins" 32 | ] 33 | }, 34 | { 35 | "cell_type": "markdown", 36 | "metadata": { 37 | "deletable": true, 38 | "editable": true, 39 | "slideshow": { 40 | "slide_type": "subslide" 41 | } 42 | }, 43 | "source": [ 44 | "# I don't need that, I'm happy with SPM12!\n", 45 | "\n", 46 | "I mean, there's no problem with SPM's batch system...\n", 47 | "\n", 48 | "\n", 49 | "\n", 50 | "ok, ok... 
it gets tiring to have a separate batch script for each subject and MATLAB license issues are sometimes a pain. But hey, the nice-looking GUI makes it so easy to use!" 51 | ] 52 | }, 53 | { 54 | "cell_type": "markdown", 55 | "metadata": { 56 | "deletable": true, 57 | "editable": true, 58 | "slideshow": { 59 | "slide_type": "subslide" 60 | } 61 | }, 62 | "source": [ 63 | "Using SPM12 with Nipype is simpler than any ``matlabbatch`` and it's intuitive to read:\n", 64 | "\n", 65 | "```python\n", 66 | "from nipype.interfaces.spm import Smooth\n", 67 | "smooth = Smooth()\n", 68 | "smooth.inputs.in_files = 'functional.nii'\n", 69 | "smooth.inputs.fwhm = 6\n", 70 | "smooth.run()\n", 71 | "```" 72 | ] 73 | }, 74 | { 75 | "cell_type": "markdown", 76 | "metadata": { 77 | "deletable": true, 78 | "editable": true, 79 | "slideshow": { 80 | "slide_type": "subslide" 81 | } 82 | }, 83 | "source": [ 84 | "# I don't need that, I'm happy with FSL!\n", 85 | "\n", 86 | "The GUI might look a bit old-fashioned, but the command line interface gives me all the flexibility I need!\n", 87 | "\n", 88 | "\n", 89 | "\n", 90 | "I don't care that it might be more difficult to learn than other neuroimaging software packages. At least it doesn't take me 20 clicks to do simple motion correction. And once you figure out the underlying commands, it's rather simple to script." 
91 | ] 92 | }, 93 | { 94 | "cell_type": "markdown", 95 | "metadata": { 96 | "deletable": true, 97 | "editable": true, 98 | "slideshow": { 99 | "slide_type": "subslide" 100 | } 101 | }, 102 | "source": [ 103 | "Nipype makes using FSL even easier:\n", 104 | "\n", 105 | "```python\n", 106 | "from nipype.interfaces.fsl import MCFLIRT\n", 107 | "mcflt = MCFLIRT()\n", 108 | "mcflt.inputs.in_file = 'functional.nii'\n", 109 | "mcflt.run()\n", 110 | "```\n", 111 | "\n", 112 | "And gives you transparency into what's happening under the hood with one additional line:\n", 113 | "\n", 114 | "```python\n", 115 | "In [1]: mcflt.cmdline\n", 116 | "Out[1]: 'mcflirt -in functional.nii -out functional_mcf.nii'\n", 117 | "```" 118 | ] 119 | }, 120 | { 121 | "cell_type": "markdown", 122 | "metadata": { 123 | "deletable": true, 124 | "editable": true, 125 | "slideshow": { 126 | "slide_type": "subslide" 127 | } 128 | }, 129 | "source": [ 130 | "# I don't need that, I'm happy with FreeSurfer!\n", 131 | "\n", 132 | "You and your problems with fMRI data. I'm perfectly happy with FreeSurfer's command line interface. It gives me all I need to do surface-based analyses.\n", 133 | "\n", 134 | "\n", 135 | "\n", 136 | "Of course, you can run your sequential FreeSurfer scripts as you want. But wouldn't it be nice to optimize computation time by using parallel computation?" 137 | ] 138 | }, 139 | { 140 | "cell_type": "markdown", 141 | "metadata": { 142 | "deletable": true, 143 | "editable": true, 144 | "slideshow": { 145 | "slide_type": "subslide" 146 | } 147 | }, 148 | "source": [ 149 | "Let's imagine you want to do smoothing on the surface, with **two different FWHM** values, on **both hemispheres**, for **six subjects**, all in **parallel**. 
With Nipype this is as simple as this:\n", 150 | "\n", 151 | "```python\n", 152 | "from nipype.interfaces.freesurfer import SurfaceSmooth\n", 153 | "smoother = SurfaceSmooth()\n", 154 | "smoother.inputs.in_file = \"{hemi}.func.mgz\"\n", 155 | "smoother.iterables = [(\"hemi\", ['lh', 'rh']),\n", 156 | " (\"fwhm\", [4, 8]),\n", 157 | " (\"subject_id\", ['sub01', 'sub02', 'sub03',\n", 158 | " 'sub04', 'sub05', 'sub06']),\n", 159 | " ]\n", 160 | "smoother.run(mode='parallel')\n", 161 | "```" 162 | ] 163 | }, 164 | { 165 | "cell_type": "markdown", 166 | "metadata": { 167 | "deletable": true, 168 | "editable": true, 169 | "slideshow": { 170 | "slide_type": "subslide" 171 | } 172 | }, 173 | "source": [ 174 | "# But I like my neuroimaging toolbox\n", 175 | "\n", 176 | "- You can keep it! But instead of being stuck in MATLAB with SPM, or having scripting issues with FreeSurfer, ANTs or FSL, ...\n", 177 | "- **Nipype** gives you the possibility to select the algorithms that you prefer from many different software packages.\n", 178 | "- In short, you can have all the advantages without the disadvantage of being stuck with a programming language or software package" 179 | ] 180 | }, 181 | { 182 | "cell_type": "markdown", 183 | "metadata": { 184 | "deletable": true, 185 | "editable": true, 186 | "slideshow": { 187 | "slide_type": "slide" 188 | } 189 | }, 190 | "source": [ 191 | "# A short Example\n", 192 | "\n", 193 | "Let's assume we want to do preprocessing that uses **SPM** for *motion correction*, **FreeSurfer** for *coregistration*, **ANTS** for *normalization* and **FSL** for *smoothing*. Normally this would be a hell of a mess. It would mean switching between multiple scripts in different programming languages with a lot of manual intervention. 
**Nipype comes to the rescue!**\n", 194 | "\n", 195 | "" 196 | ] 197 | }, 198 | { 199 | "cell_type": "markdown", 200 | "metadata": { 201 | "deletable": true, 202 | "editable": true, 203 | "slideshow": { 204 | "slide_type": "subslide" 205 | } 206 | }, 207 | "source": [ 208 | "# Code Example\n", 209 | "\n", 210 | "The code to create a Nipype workflow like the example before would look something like this:\n", 211 | "\n", 212 | "```python\n", 213 | "# Import modules\n", 214 | "import nipype\n", 215 | "from nipype.interfaces.freesurfer import BBRegister\n", 216 | "from nipype.interfaces.ants import WarpTimeSeriesImageMultiTransform\n", 217 | "from nipype.interfaces.fsl import SUSAN\n", 218 | "from nipype.interfaces.spm import Realign\n", 219 | "\n", 220 | "# Motion Correction (SPM)\n", 221 | "realign = Realign(register_to_mean=True)\n", 222 | "\n", 223 | "# Coregistration (FreeSurfer)\n", 224 | "coreg = BBRegister()\n", 225 | "\n", 226 | "# Normalization (ANTS)\n", 227 | "normalize = WarpTimeSeriesImageMultiTransform()\n", 228 | "\n", 229 | "# Smoothing (FSL)\n", 230 | "smooth = SUSAN(fwhm=6.0)\n", 231 | "\n", 232 | "\n", 233 | "```" 234 | ] 235 | }, 236 | { 237 | "cell_type": "markdown", 238 | "metadata": { 239 | "slideshow": { 240 | "slide_type": "subslide" 241 | } 242 | }, 243 | "source": [ 244 | "\n", 245 | "```python\n", 246 | "# Where can the raw data be found?\n", 247 | "grabber = nipype.DataGrabber()\n", 248 | "grabber.inputs.base_directory = '~/experiment_folder/data'\n", 249 | "grabber.inputs.subject_id = ['subject1', 'subject2', 'subject3']\n", 250 | "\n", 251 | "# Where should the output data be stored at?\n", 252 | "sink = nipype.DataSink()\n", 253 | "sink.inputs.base_directory = '~/experiment_folder/output_folder'\n", 254 | "```" 255 | ] 256 | }, 257 | { 258 | "cell_type": "markdown", 259 | "metadata": { 260 | "deletable": true, 261 | "editable": true, 262 | "slideshow": { 263 | "slide_type": "subslide" 264 | } 265 | }, 266 | "source": [ 
"```python\n", 268 | "# Create a workflow to connect all those nodes\n", 269 | "preprocflow = nipype.Workflow()\n", 270 | "\n", 271 | "# Connect the nodes to each other\n", 272 | "preprocflow.connect([(grabber -> realign ),\n", 273 | " (realign -> coreg ),\n", 274 | " (coreg -> normalize),\n", 275 | " (normalize -> smooth ),\n", 276 | " (smooth -> sink )\n", 277 | " ])\n", 278 | "\n", 279 | "# Run the workflow in parallel\n", 280 | "preprocflow.run(mode='parallel')\n", 281 | "```" 282 | ] 283 | }, 284 | { 285 | "cell_type": "markdown", 286 | "metadata": { 287 | "deletable": true, 288 | "editable": true, 289 | "slideshow": { 290 | "slide_type": "skip" 291 | } 292 | }, 293 | "source": [ 294 | "**Important**: This code is a shortened and simplified version of the real Nipype code. But it gives you a good idea of how intuitive it is to use Nipype for your neuroimaging analysis." 295 | ] 296 | }, 297 | { 298 | "cell_type": "markdown", 299 | "metadata": { 300 | "deletable": true, 301 | "editable": true, 302 | "slideshow": { 303 | "slide_type": "slide" 304 | } 305 | }, 306 | "source": [ 307 | "# So again, what is Nipype?\n", 308 | "\n", 309 | "Nipype consists of many parts, but the most important ones are [Interfaces](basic_interfaces.ipynb), the [Workflow Engine](basic_workflow.ipynb) and the [Execution Plugins](basic_plugins.ipynb):\n", 310 | "\n", 311 | "" 312 | ] 313 | }, 314 | { 315 | "cell_type": "markdown", 316 | "metadata": { 317 | "deletable": true, 318 | "editable": true, 319 | "slideshow": { 320 | "slide_type": "skip" 321 | } 322 | }, 323 | "source": [ 324 | "* **Interface**: Wraps a program or function\n", 325 | "\n", 326 | "* **Node/MapNode**: Wraps an `Interface` for use in a Workflow that provides caching and other goodies (e.g., pseudo-sandbox)\n", 327 | "* **Workflow**: A *graph* or *forest of graphs* whose nodes are of type `Node`, `MapNode` or `Workflow` and whose edges represent data flow\n", 328 | "\n", 329 | "* **Plugin**: A component that describes 
how a `Workflow` should be executed" 330 | ] 331 | }, 332 | { 333 | "cell_type": "markdown", 334 | "metadata": { 335 | "deletable": true, 336 | "editable": true, 337 | "slideshow": { 338 | "slide_type": "skip" 339 | } 340 | }, 341 | "source": [ 342 | "# Slideshow Mode" 343 | ] 344 | }, 345 | { 346 | "cell_type": "code", 347 | "execution_count": null, 348 | "metadata": { 349 | "collapsed": false, 350 | "deletable": true, 351 | "editable": true, 352 | "slideshow": { 353 | "slide_type": "skip" 354 | } 355 | }, 356 | "outputs": [], 357 | "source": [ 358 | "!jupyter-nbconvert --to slides introduction_nipype.ipynb --reveal-prefix=reveal.js" 359 | ] 360 | }, 361 | { 362 | "cell_type": "markdown", 363 | "metadata": { 364 | "deletable": true, 365 | "editable": true, 366 | "slideshow": { 367 | "slide_type": "skip" 368 | } 369 | }, 370 | "source": [ 371 | "
" 372 | ] 373 | } 374 | ], 375 | "metadata": { 376 | "anaconda-cloud": {}, 377 | "kernelspec": { 378 | "display_name": "Python [default]", 379 | "language": "python", 380 | "name": "python2" 381 | }, 382 | "language_info": { 383 | "codemirror_mode": { 384 | "name": "ipython", 385 | "version": 2 386 | }, 387 | "file_extension": ".py", 388 | "mimetype": "text/x-python", 389 | "name": "python", 390 | "nbconvert_exporter": "python", 391 | "pygments_lexer": "ipython2", 392 | "version": "2.7.13" 393 | } 394 | }, 395 | "nbformat": 4, 396 | "nbformat_minor": 0 397 | } 398 | -------------------------------------------------------------------------------- /resources_help.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true, 7 | "deletable": true, 8 | "editable": true 9 | }, 10 | "source": [ 11 | "# Where to find help" 12 | ] 13 | }, 14 | { 15 | "cell_type": "markdown", 16 | "metadata": { 17 | "collapsed": true, 18 | "deletable": true, 19 | "editable": true 20 | }, 21 | "source": [ 22 | "## NeuroStars\n", 23 | "\n", 24 | "[NeuroStars.org](https://neurostars.org/) is a platform similar to StackOverflow but dedicated to neuroscience and neuroinformatics. If you have a problem or would like to ask a question about how to do something in Nipype, please submit a question to [NeuroStars.org](https://neurostars.org/) with a nipype tag.\n", 25 | "\n", 26 | "All previous Nipype questions are available here: http://neurostars.org/t/nipype/" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": { 32 | "deletable": true, 33 | "editable": true 34 | }, 35 | "source": [ 36 | "## Gitter\n", 37 | "\n", 38 | "[gitter.im](https://gitter.im/home/explore) operates under the motto 'where developers come to talk'. It is a place where developers exchange thoughts, opinions, ideas, and feedback about a specific piece of software. 
Nipype's gitter channel can be found under https://gitter.im/nipy/nipype. Use it to speak directly with the community." 39 | ] 40 | }, 41 | { 42 | "cell_type": "markdown", 43 | "metadata": { 44 | "deletable": true, 45 | "editable": true 46 | }, 47 | "source": [ 48 | "## Github\n", 49 | "\n", 50 | "[github.com](https://github.com/nipy/nipype) is where the source code of Nipype is stored. Feel free to fork the repo and submit changes if you want. If you find a bug in the scripts or have specific ideas for changes, please open a new [issue](https://github.com/nipy/nipype/issues) and let the community help you." 51 | ] 52 | } 53 | ], 54 | "metadata": { 55 | "anaconda-cloud": {}, 56 | "kernelspec": { 57 | "display_name": "Python [default]", 58 | "language": "python", 59 | "name": "python2" 60 | }, 61 | "language_info": { 62 | "codemirror_mode": { 63 | "name": "ipython", 64 | "version": 2 65 | }, 66 | "file_extension": ".py", 67 | "mimetype": "text/x-python", 68 | "name": "python", 69 | "nbconvert_exporter": "python", 70 | "pygments_lexer": "ipython2", 71 | "version": "2.7.12" 72 | } 73 | }, 74 | "nbformat": 4, 75 | "nbformat_minor": 0 76 | } 77 | -------------------------------------------------------------------------------- /resources_installation.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# 1. Install Nipype\n", 11 | "\n", 12 | "Getting Nipype to run on your system is rather straightforward. 
And there are multiple ways to do the installation:\n", 13 | "\n", 14 | "\n", 15 | "### Using conda\n", 16 | "\n", 17 | "If you have [conda](http://conda.pydata.org/docs/index.html), [miniconda](https://conda.io/miniconda.html) or [anaconda](https://www.continuum.io/why-anaconda) on your system, then installing Nipype is just the following command:\n", 18 | "\n", 19 | " conda config --add channels conda-forge\n", 20 | " conda install nipype\n", 21 | "\n", 22 | "\n", 23 | "### Using ``pip`` or ``easy_install``\n", 24 | "\n", 25 | "Installing Nipype via ``pip`` or ``easy_install`` is as simple as you would imagine.\n", 26 | "\n", 27 | " pip install nipype\n", 28 | " \n", 29 | "or\n", 30 | " \n", 31 | " easy_install nipype\n", 32 | "\n", 33 | "\n", 34 | "### Using Debian or Ubuntu\n", 35 | "\n", 36 | "Installing Nipype on a Debian or Ubuntu system can also be done via ``apt-get``. For this, use the following command:\n", 37 | "\n", 38 | " apt-get install python-nipype\n", 39 | "\n", 40 | "\n", 41 | "### Using Github\n", 42 | "\n", 43 | "To make sure that you really have the newest version of Nipype on your system, you can run the pip command with a flag that points to the github repo:\n", 44 | "\n", 45 | " pip install -e git+https://github.com/nipy/nipype#egg=nipype" 46 | ] 47 | }, 48 | { 49 | "cell_type": "markdown", 50 | "metadata": { 51 | "deletable": true, 52 | "editable": true 53 | }, 54 | "source": [ 55 | "# 2. 
Install Dependencies\n", 56 | "\n", 57 | "For more information about the installation in general and to get a list of recommended software, go to the main page, under: http://nipype.readthedocs.io/en/latest/users/install.html\n", 58 | "\n", 59 | "For a more step-by-step installation guide for additional software dependencies like SPM, FSL, FreeSurfer and ANTs, go to the [Beginner's Guide](http://miykael.github.io/nipype-beginner-s-guide/installation.html).\n" 60 | ] 61 | }, 62 | { 63 | "cell_type": "markdown", 64 | "metadata": { 65 | "deletable": true, 66 | "editable": true 67 | }, 68 | "source": [ 69 | "# 3. Test Nipype" 70 | ] 71 | }, 72 | { 73 | "cell_type": "code", 74 | "execution_count": null, 75 | "metadata": { 76 | "collapsed": true, 77 | "deletable": true, 78 | "editable": true 79 | }, 80 | "outputs": [], 81 | "source": [ 82 | "# Import the nipype module\n", 83 | "import nipype\n", 84 | "\n", 85 | "# Run the test: Increase verbosity parameter for more info\n", 86 | "nipype.test(verbose=0)" 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": { 92 | "deletable": true, 93 | "editable": true 94 | }, 95 | "source": [ 96 | "The test will create a lot of output, but if all goes well you will see an OK at the end:\n", 97 | "\n", 98 | " ----------------------------------------------------------------------\n", 99 | " Ran 2497 tests in 68.486s\n", 100 | "\n", 101 | " OK (SKIP=13)\n", 102 | "\n", 103 | "The number of tests and time will vary depending on which interfaces you have installed on your system.\n", 104 | "\n", 105 | "Don’t worry if some modules are being skipped or some side modules show up as errors or failures during the run. As long as no main modules cause any problems, you’re fine. But if you receive an OK, errors=0 and failures=0, then everything is ready." 
106 | ] 107 | } 108 | ], 109 | "metadata": { 110 | "anaconda-cloud": {}, 111 | "kernelspec": { 112 | "display_name": "Python [default]", 113 | "language": "python", 114 | "name": "python2" 115 | }, 116 | "language_info": { 117 | "codemirror_mode": { 118 | "name": "ipython", 119 | "version": 2 120 | }, 121 | "file_extension": ".py", 122 | "mimetype": "text/x-python", 123 | "name": "python", 124 | "nbconvert_exporter": "python", 125 | "pygments_lexer": "ipython2", 126 | "version": "2.7.12" 127 | } 128 | }, 129 | "nbformat": 4, 130 | "nbformat_minor": 0 131 | } 132 | -------------------------------------------------------------------------------- /resources_python_cheat_sheet.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "deletable": true, 7 | "editable": true 8 | }, 9 | "source": [ 10 | "# Python Cheat Sheet\n", 11 | "\n", 12 | "The following content is taken from http://www.ias.u-psud.fr/pperso/aboucaud/python/cheatsheet.html\n", 13 | "\n", 14 | "This cheat sheet should serve as a short refresher for everybody who hasn't used Python for some time." 
15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": { 20 | "deletable": true, 21 | "editable": true 22 | }, 23 | "source": [ 24 | "## Pure Python" 25 | ] 26 | }, 27 | { 28 | "cell_type": "markdown", 29 | "metadata": { 30 | "deletable": true, 31 | "editable": true 32 | }, 33 | "source": [ 34 | "### Types" 35 | ] 36 | }, 37 | { 38 | "cell_type": "code", 39 | "execution_count": null, 40 | "metadata": { 41 | "collapsed": true, 42 | "deletable": true, 43 | "editable": true 44 | }, 45 | "outputs": [], 46 | "source": [ 47 | "a = 2 # integer\n", 48 | "b = 5.0 # float\n", 49 | "c = 8.3e5 # exponential\n", 50 | "d = 1.5 + 0.5j # complex\n", 51 | "e = 4 > 5 # boolean\n", 52 | "f = 'word' # string" 53 | ] 54 | }, 55 | { 56 | "cell_type": "markdown", 57 | "metadata": { 58 | "deletable": true, 59 | "editable": true 60 | }, 61 | "source": [ 62 | "### Lists" 63 | ] 64 | }, 65 | { 66 | "cell_type": "code", 67 | "execution_count": null, 68 | "metadata": { 69 | "collapsed": true, 70 | "deletable": true, 71 | "editable": true 72 | }, 73 | "outputs": [], 74 | "source": [ 75 | "a = ['red', 'blue', 'green'] # manual initialization\n", 76 | "b = range(5) # initialization through a function\n", 77 | "c = [nu**2 for nu in b] # initialize through list comprehension\n", 78 | "d = [nu**2 for nu in b if nu < 3] # list comprehension with condition\n", 79 | "e = c[0] # access element\n", 80 | "f = c[1: 2] # access a slice of the list\n", 81 | "g = ['re', 'bl'] + ['gr'] # list concatenation\n", 82 | "h = ['re'] * 5 # repeat a list\n", 83 | "['re', 'bl'].index('re') # returns index of 're'\n", 84 | "'re' in ['re', 'bl'] # true if 're' in list\n", 85 | "sorted([3, 2, 1]) # returns sorted list\n", 86 | "z = ['red'] + ['green', 'blue'] # list concatenation" 87 | ] 88 | }, 89 | { 90 | "cell_type": "markdown", 91 | "metadata": { 92 | "deletable": true, 93 | "editable": true 94 | }, 95 | "source": [ 96 | "### Dictionaries" 97 | ] 98 | }, 99 | { 100 | "cell_type": "code", 101 | 
"execution_count": null, 102 | "metadata": { 103 | "collapsed": true, 104 | "deletable": true, 105 | "editable": true 106 | }, 107 | "outputs": [], 108 | "source": [ 109 | "a = {'red': 'rouge', 'blue': 'bleu', 'green': 'vert'} # dictionary\n", 110 | "b = a['red'] # translate item\n", 111 | "c = [value for key, value in a.items()] # loop through contents\n", 112 | "d = a.get('yellow', 'no translation found') # return default" 113 | ] 114 | }, 115 | { 116 | "cell_type": "markdown", 117 | "metadata": { 118 | "deletable": true, 119 | "editable": true 120 | }, 121 | "source": [ 122 | "### Strings" 123 | ] 124 | }, 125 | { 126 | "cell_type": "code", 127 | "execution_count": null, 128 | "metadata": { 129 | "collapsed": true, 130 | "deletable": true, 131 | "editable": true 132 | }, 133 | "outputs": [], 134 | "source": [ 135 | "a = 'red' # assignment\n", 136 | "char = a[2] # access individual characters\n", 137 | "'red ' + 'blue' # string concatenation\n", 138 | "'1, 2, three'.split(',') # split string into list\n", 139 | "'.'.join(['1', '2', 'three']) # concatenate list into string" 140 | ] 141 | }, 142 | { 143 | "cell_type": "markdown", 144 | "metadata": { 145 | "deletable": true, 146 | "editable": true 147 | }, 148 | "source": [ 149 | "### Operators" 150 | ] 151 | }, 152 | { 153 | "cell_type": "code", 154 | "execution_count": null, 155 | "metadata": { 156 | "collapsed": true, 157 | "deletable": true, 158 | "editable": true 159 | }, 160 | "outputs": [], 161 | "source": [ 162 | "a = 2 # assignment\n", 163 | "a += 1 # change and assign (also *=, /=)\n", 164 | "3 + 2 # addition\n", 165 | "3 / 2 # integer division (python2) or float division (python3)\n", 166 | "3 // 2 # integer division\n", 167 | "3 * 2 # multiplication\n", 168 | "3 ** 2 # exponent\n", 169 | "3 % 2 # remainder\n", 170 | "abs(-3) # absolute value\n", 171 | "1 == 1 # equal\n", 172 | "2 > 1 # larger\n", 173 | "2 < 1 # smaller\n", 174 | "1 != 2 # not equal\n", 175 | "1 != 2 and 2 < 3 # logical AND\n", 176 | "1 != 2 or 
2 < 3 # logical OR\n", 177 | "not 1 == 2 # logical NOT\n", 178 | "a in b # test if a is in b\n", 179 | "a is b # test if objects point to the same memory (id)" 180 | ] 181 | }, 182 | { 183 | "cell_type": "markdown", 184 | "metadata": { 185 | "deletable": true, 186 | "editable": true 187 | }, 188 | "source": [ 189 | "### Control Flow" 190 | ] 191 | }, 192 | { 193 | "cell_type": "code", 194 | "execution_count": null, 195 | "metadata": { 196 | "collapsed": true, 197 | "deletable": true, 198 | "editable": true 199 | }, 200 | "outputs": [], 201 | "source": [ 202 | "# if/elif/else\n", 203 | "a, b = 1, 2\n", 204 | "if a + b == 3:\n", 205 | " print 'True'\n", 206 | "elif a + b == 1:\n", 207 | " print 'False'\n", 208 | "else:\n", 209 | " print '?'\n", 210 | "\n", 211 | "# for\n", 212 | "a = ['red', 'blue', 'green']\n", 213 | "for color in a:\n", 214 | " print color\n", 215 | "\n", 216 | "# while\n", 217 | "number = 1\n", 218 | "while number < 10:\n", 219 | " print number\n", 220 | " number += 1\n", 221 | "\n", 222 | "# break\n", 223 | "number = 1\n", 224 | "while True:\n", 225 | " print number\n", 226 | " number += 1\n", 227 | " if number > 10:\n", 228 | " break\n", 229 | "\n", 230 | "# continue\n", 231 | "for i in range(20):\n", 232 | " if i % 2 == 0:\n", 233 | " continue\n", 234 | " print i" 235 | ] 236 | }, 237 | { 238 | "cell_type": "markdown", 239 | "metadata": { 240 | "deletable": true, 241 | "editable": true 242 | }, 243 | "source": [ 244 | "### Functions, Classes, Generators, Decorators" 245 | ] 246 | }, 247 | { 248 | "cell_type": "code", 249 | "execution_count": null, 250 | "metadata": { 251 | "collapsed": true, 252 | "deletable": true, 253 | "editable": true 254 | }, 255 | "outputs": [], 256 | "source": [ 257 | "# Function\n", 258 | "def my_function(a1, a2):\n", 259 | " return a1 + a2\n", 260 | "\n", 261 | "x = my_function(1, 2)\n", 262 | "\n", 263 | "# Class\n", 264 | "class Point(object):\n", 265 | " def __init__(self, x):\n", 266 | " self.x = x\n", 267 | " def 
__call__(self):\n", 268 | " print self.x\n", 269 | "\n", 270 | "x = Point(3)\n", 271 | "\n", 272 | "# Generators\n", 273 | "def firstn(n):\n", 274 | " num = 0\n", 275 | " while num < n:\n", 276 | " yield num\n", 277 | " num += 1\n", 278 | "\n", 279 | "# consume the generator with list comprehension\n", 280 | "x = [i for i in firstn(10)]\n", 281 | "\n", 282 | "# Decorators\n", 283 | "class myDecorator(object):\n", 284 | " def __init__(self, f):\n", 285 | " self.f = f\n", 286 | " def __call__(self):\n", 287 | " print \"call\"\n", 288 | " self.f()\n", 289 | "\n", 290 | "@myDecorator\n", 291 | "def my_funct():\n", 292 | " print 'func'\n", 293 | "\n", 294 | "my_funct()" 295 | ] 296 | }, 297 | { 298 | "cell_type": "markdown", 299 | "metadata": { 300 | "deletable": true, 301 | "editable": true 302 | }, 303 | "source": [ 304 | "## IPython" 305 | ] 306 | }, 307 | { 308 | "cell_type": "markdown", 309 | "metadata": { 310 | "deletable": true, 311 | "editable": true 312 | }, 313 | "source": [ 314 | "### Python console" 315 | ] 316 | }, 317 | { 318 | "cell_type": "code", 319 | "execution_count": null, 320 | "metadata": { 321 | "collapsed": true, 322 | "deletable": true, 323 | "editable": true 324 | }, 325 | "outputs": [], 326 | "source": [ 327 | "? # Information about the object\n", 328 | ". 
# tab completion\n", 329 | "\n", 330 | "# measure runtime of a function:\n", 331 | "%timeit range(1000)\n", 332 | "100000 loops, best of 3: 7.76 us per loop\n", 333 | "\n", 334 | "# run scripts and debug\n", 335 | "%run\n", 336 | "%run -d # run in debug mode\n", 337 | "%run -t # measures execution time\n", 338 | "%run -p # runs a profiler\n", 339 | "%debug # jumps to the debugger after an exception\n", 340 | "\n", 341 | "%pdb # run debugger automatically on exception\n", 342 | "\n", 343 | "# examine history\n", 344 | "%history\n", 345 | "%history ~1/1-5 # lines 1-5 of last session\n", 346 | "\n", 347 | "# run shell commands\n", 348 | "!make # prefix command with \"!\"\n", 349 | "\n", 350 | "# clean namespace\n", 351 | "%reset" 352 | ] 353 | }, 354 | { 355 | "cell_type": "markdown", 356 | "metadata": { 357 | "deletable": true, 358 | "editable": true 359 | }, 360 | "source": [ 361 | "### Debugger commands" 362 | ] 363 | }, 364 | { 365 | "cell_type": "code", 366 | "execution_count": null, 367 | "metadata": { 368 | "collapsed": true, 369 | "deletable": true, 370 | "editable": true 371 | }, 372 | "outputs": [], 373 | "source": [ 374 | "n # execute next line" 375 | ] 376 | }, 377 | { 378 | "cell_type": "markdown", 379 | "metadata": { 380 | "deletable": true, 381 | "editable": true 382 | }, 383 | "source": [ 384 | "## NumPy (import numpy as np)" 385 | ] 386 | }, 387 | { 388 | "cell_type": "markdown", 389 | "metadata": { 390 | "deletable": true, 391 | "editable": true 392 | }, 393 | "source": [ 394 | "### array initialization" 395 | ] 396 | }, 397 | { 398 | "cell_type": "code", 399 | "execution_count": null, 400 | "metadata": { 401 | "collapsed": true, 402 | "deletable": true, 403 | "editable": true 404 | }, 405 | "outputs": [], 406 | "source": [ 407 | "np.array([2, 3, 4]) # direct initialization\n", 408 | "np.empty(20, dtype=np.float32) # single precision array with 20 entries\n", 409 | "np.zeros(200) # initialize 200 zeros\n", 410 | "np.ones((3,3), dtype=np.int32) # 3 x 
3 integer matrix with ones\n", 411 | "np.eye(200) # ones on the diagonal\n", 412 | "np.zeros_like(a) # returns array with zeros and the shape of a\n", 413 | "np.linspace(0., 10., 100) # 100 points from 0 to 10\n", 414 | "np.arange(0, 100, 2) # points from 0 to <100 with step width 2\n", 415 | "np.logspace(-5, 2, 100) # 100 log-spaced points between 1e-5 and 1e2\n", 416 | "np.copy(a) # copy array to new memory" 417 | ] 418 | }, 419 | { 420 | "cell_type": "markdown", 421 | "metadata": { 422 | "deletable": true, 423 | "editable": true 424 | }, 425 | "source": [ 426 | "### reading/ writing files" 427 | ] 428 | }, 429 | { 430 | "cell_type": "code", 431 | "execution_count": null, 432 | "metadata": { 433 | "collapsed": true, 434 | "deletable": true, 435 | "editable": true 436 | }, 437 | "outputs": [], 438 | "source": [ 439 | "np.fromfile(fname/object, dtype=np.float32, count=5) # read binary data from file\n", 440 | "np.loadtxt(fname/object, skiprows=2, delimiter=',') # read ascii data from file" 441 | ] 442 | }, 443 | { 444 | "cell_type": "markdown", 445 | "metadata": { 446 | "deletable": true, 447 | "editable": true 448 | }, 449 | "source": [ 450 | "### array properties and operations" 451 | ] 452 | }, 453 | { 454 | "cell_type": "code", 455 | "execution_count": null, 456 | "metadata": { 457 | "collapsed": true, 458 | "deletable": true, 459 | "editable": true 460 | }, 461 | "outputs": [], 462 | "source": [ 463 | "a.shape # a tuple with the lengths of each axis\n", 464 | "len(a) # length of axis 0\n", 465 | "a.ndim # number of dimensions (axes)\n", 466 | "a.sort(axis=1) # sort array along axis\n", 467 | "a.flatten() # collapse array to one dimension\n", 468 | "a.conj() # return complex conjugate\n", 469 | "a.astype(np.int16) # cast to integer\n", 470 | "np.argmax(a, axis=2) # return index of maximum along a given axis\n", 471 | "np.cumsum(a) # return cumulative sum\n", 472 | "np.any(a) # True if any element is True\n", 473 | "np.all(a) # True if all elements are True\n", 
474 | "np.argsort(a, axis=1) # return sorted index array along axis" 475 | ] 476 | }, 477 | { 478 | "cell_type": "markdown", 479 | "metadata": { 480 | "deletable": true, 481 | "editable": true 482 | }, 483 | "source": [ 484 | "### indexing" 485 | ] 486 | }, 487 | { 488 | "cell_type": "code", 489 | "execution_count": null, 490 | "metadata": { 491 | "collapsed": true, 492 | "deletable": true, 493 | "editable": true 494 | }, 495 | "outputs": [], 496 | "source": [ 497 | "a = np.arange(100) # initialization with 0 - 99\n", 498 | "a[: 3] = 0 # set the first three indices to zero\n", 499 | "a[1: 5] = 1 # set indices 1-4 to 1\n", 500 | "a[start:stop:step] # general form of indexing/slicing\n", 501 | "a[:, None] # transform to column vector\n", 502 | "a[[1, 1, 3, 8]] # return array with values of the indices\n", 503 | "a = a.reshape(10, 10) # transform to 10 x 10 matrix\n", 504 | "a.T # return transposed view\n", 505 | "np.transpose(a, (2, 1, 0)) # transpose array to new axis order\n", 506 | "a[a < 2] # returns array that fulfills element-wise condition" 507 | ] 508 | }, 509 | { 510 | "cell_type": "markdown", 511 | "metadata": { 512 | "deletable": true, 513 | "editable": true 514 | }, 515 | "source": [ 516 | "### boolean arrays" 517 | ] 518 | }, 519 | { 520 | "cell_type": "code", 521 | "execution_count": null, 522 | "metadata": { 523 | "collapsed": true, 524 | "deletable": true, 525 | "editable": true 526 | }, 527 | "outputs": [], 528 | "source": [ 529 | "a < 2 # returns array with boolean values\n", 530 | "np.logical_and(a < 2, b > 10) # element-wise logical and\n", 531 | "np.logical_or(a < 2, b > 10) # element-wise logical or\n", 532 | "~a # invert boolean array\n", 533 | "np.invert(a) # invert boolean array" 534 | ] 535 | }, 536 | { 537 | "cell_type": "markdown", 538 | "metadata": { 539 | "deletable": true, 540 | "editable": true 541 | }, 542 | "source": [ 543 | "### element-wise operations and math functions" 544 | ] 545 | }, 546 | { 547 | "cell_type": "code", 548 | 
"execution_count": null, 549 | "metadata": { 550 | "collapsed": true, 551 | "deletable": true, 552 | "editable": true 553 | }, 554 | "outputs": [], 555 | "source": [ 556 | "a * 5 # multiplication with scalar\n", 557 | "a + 5 # addition with scalar\n", 558 | "a + b # addition with array b\n", 559 | "a / b # division with b (np.NaN for division by zero)\n", 560 | "np.exp(a) # exponential (complex and real)\n", 561 | "np.power(a,b) # a to the power b\n", 562 | "np.sin(a) # sine\n", 563 | "np.cos(a) # cosine\n", 564 | "np.arctan2(y,x) # arctan(y/x)\n", 565 | "np.arcsin(x) # arcsin\n", 566 | "np.radians(a) # degrees to radians\n", 567 | "np.degrees(a) # radians to degrees\n", 568 | "np.var(a) # variance of array\n", 569 | "np.std(a, axis=1) # standard deviation" 570 | ] 571 | }, 572 | { 573 | "cell_type": "markdown", 574 | "metadata": { 575 | "deletable": true, 576 | "editable": true 577 | }, 578 | "source": [ 579 | "### inner / outer products" 580 | ] 581 | }, 582 | { 583 | "cell_type": "code", 584 | "execution_count": null, 585 | "metadata": { 586 | "collapsed": true, 587 | "deletable": true, 588 | "editable": true 589 | }, 590 | "outputs": [], 591 | "source": [ 592 | "np.dot(a, b) # inner matrix product: a_mi b_in\n", 593 | "np.einsum('ijkl,klmn->ijmn', a, b) # einstein summation convention\n", 594 | "np.sum(a, axis=1) # sum over axis 1\n", 595 | "np.abs(a) # return array with absolute values\n", 596 | "a[None, :] + b[:, None] # outer sum\n", 597 | "a[None, :] * b[:, None] # outer product\n", 598 | "np.outer(a, b) # outer product\n", 599 | "np.sum(a * a.T) # matrix norm" 600 | ] 601 | }, 602 | { 603 | "cell_type": "markdown", 604 | "metadata": { 605 | "deletable": true, 606 | "editable": true 607 | }, 608 | "source": [ 609 | "### interpolation, integration" 610 | ] 611 | }, 612 | { 613 | "cell_type": "code", 614 | "execution_count": null, 615 | "metadata": { 616 | "collapsed": true, 617 | "deletable": true, 618 | "editable": true 619 | }, 620 | "outputs": [], 621 | 
"source": [ 622 | "np.trapz(y, x=x, axis=1) # integrate along axis 1\n", 623 | "np.interp(x, xp, yp) # interpolate function xp, yp at points x" 624 | ] 625 | }, 626 | { 627 | "cell_type": "markdown", 628 | "metadata": { 629 | "deletable": true, 630 | "editable": true 631 | }, 632 | "source": [ 633 | "### fft" 634 | ] 635 | }, 636 | { 637 | "cell_type": "code", 638 | "execution_count": null, 639 | "metadata": { 640 | "collapsed": true, 641 | "deletable": true, 642 | "editable": true 643 | }, 644 | "outputs": [], 645 | "source": [ 646 | "np.fft.fft(y) # complex fourier transform of y\n", 647 | "np.fft.fftfreq(len(y)) # fft frequencies for a given length\n", 648 | "np.fft.fftshift(freqs) # shifts zero frequency to the middle\n", 649 | "np.fft.rfft(y) # real fourier transform of y\n", 650 | "np.fft.rfftfreq(len(y)) # real fft frequencies for a given length" 651 | ] 652 | }, 653 | { 654 | "cell_type": "markdown", 655 | "metadata": { 656 | "deletable": true, 657 | "editable": true 658 | }, 659 | "source": [ 660 | "### rounding" 661 | ] 662 | }, 663 | { 664 | "cell_type": "code", 665 | "execution_count": null, 666 | "metadata": { 667 | "collapsed": true, 668 | "deletable": true, 669 | "editable": true 670 | }, 671 | "outputs": [], 672 | "source": [ 673 | "np.ceil(a) # rounds to nearest upper int\n", 674 | "np.floor(a) # rounds to nearest lower int\n", 675 | "np.round(a) # rounds to nearest int" 676 | ] 677 | }, 678 | { 679 | "cell_type": "markdown", 680 | "metadata": { 681 | "deletable": true, 682 | "editable": true 683 | }, 684 | "source": [ 685 | "### random variables" 686 | ] 687 | }, 688 | { 689 | "cell_type": "code", 690 | "execution_count": null, 691 | "metadata": { 692 | "collapsed": true, 693 | "deletable": true, 694 | "editable": true 695 | }, 696 | "outputs": [], 697 | "source": [ 698 | "np.random.normal(loc=0, scale=2, size=100) # 100 normally distributed random numbers\n", 699 | "np.random.seed(23032) # resets the seed value\n", 700 | "np.random.rand(200) # 200 
random numbers in [0, 1)\n", 701 | "np.random.uniform(1, 30, 200) # 200 random numbers in [1, 30)\n", 702 | "np.random.random_integers(1, 15, 300) # 300 random integers between [1, 15]" 703 | ] 704 | }, 705 | { 706 | "cell_type": "markdown", 707 | "metadata": { 708 | "deletable": true, 709 | "editable": true 710 | }, 711 | "source": [ 712 | "## Matplotlib (import matplotlib.pyplot as plt)" 713 | ] 714 | }, 715 | { 716 | "cell_type": "markdown", 717 | "metadata": { 718 | "deletable": true, 719 | "editable": true 720 | }, 721 | "source": [ 722 | "### figures and axes" 723 | ] 724 | }, 725 | { 726 | "cell_type": "code", 727 | "execution_count": null, 728 | "metadata": { 729 | "collapsed": true, 730 | "deletable": true, 731 | "editable": true 732 | }, 733 | "outputs": [], 734 | "source": [ 735 | "fig = plt.figure(figsize=(5, 2), facecolor='black') # initialize figure\n", 736 | "ax = fig.add_subplot(3, 2, 2) # add second subplot in a 3 x 2 grid\n", 737 | "fig, axes = plt.subplots(5, 2, figsize=(5, 5)) # return fig and array of axes in a 5 x 2 grid\n", 738 | "ax = fig.add_axes([left, bottom, width, height]) # manually add axes at a certain position" 739 | ] 740 | }, 741 | { 742 | "cell_type": "markdown", 743 | "metadata": { 744 | "deletable": true, 745 | "editable": true 746 | }, 747 | "source": [ 748 | "### figures and axes properties" 749 | ] 750 | }, 751 | { 752 | "cell_type": "code", 753 | "execution_count": null, 754 | "metadata": { 755 | "collapsed": true, 756 | "deletable": true, 757 | "editable": true 758 | }, 759 | "outputs": [], 760 | "source": [ 761 | "fig.suptitle('title') # big figure title\n", 762 | "fig.subplots_adjust(bottom=0.1,\n", 763 | " right=0.8,\n", 764 | " top=0.9,\n", 765 | " wspace=0.2,\n", 766 | " hspace=0.5) # adjust subplot positions\n", 767 | "fig.tight_layout(pad=0.1,\n", 768 | " h_pad=0.5,\n", 769 | " w_pad=0.5,\n", 770 | " rect=None) # adjust subplots to fit perfectly into fig\n", 771 | "ax.set_xlabel() # set xlabel\n", 772 | 
"ax.set_ylabel() # set ylabel\n", 773 | "ax.set_xlim(1, 2) # sets x limits\n", 774 | "ax.set_ylim(3, 4) # sets y limits\n", 775 | "ax.set_title('blabla') # sets the axis title\n", 776 | "ax.set(xlabel='bla') # set multiple parameters at once\n", 777 | "ax.legend(loc='upper center') # activate legend\n", 778 | "ax.grid(True, which='both') # activate grid\n", 779 | "bbox = ax.get_position() # returns the axes bounding box\n", 780 | "bbox.x0 + bbox.width # bounding box parameters" 781 | ] 782 | }, 783 | { 784 | "cell_type": "markdown", 785 | "metadata": { 786 | "deletable": true, 787 | "editable": true 788 | }, 789 | "source": [ 790 | "### plotting routines" 791 | ] 792 | }, 793 | { 794 | "cell_type": "code", 795 | "execution_count": null, 796 | "metadata": { 797 | "collapsed": true, 798 | "deletable": true, 799 | "editable": true 800 | }, 801 | "outputs": [], 802 | "source": [ 803 | "ax.plot(x,y, '-o', c='red', lw=2, label='bla') # plots a line\n", 804 | "ax.scatter(x,y, s=20, c=color) # scatter plot\n", 805 | "ax.pcolormesh(xx,yy,zz, shading='gouraud') # fast colormesh function\n", 806 | "ax.pcolor(xx,yy,zz, norm=norm) # slower colormesh function\n", 807 | "ax.contour(xx,yy,zz, cmap='jet') # contour line plot\n", 808 | "ax.contourf(xx,yy,zz, vmin=2, vmax=4) # filled contours plot\n", 809 | "n, bins, patch = ax.hist(x, 50) # histogram\n", 810 | "ax.imshow(matrix, origin='lower', extent=(x1, x2, y1, y2)) # show image\n", 811 | "ax.specgram(y, Fs=0.1, noverlap=128, scale='linear') # plot a spectrogram" 812 | ] 813 | } 814 | ], 815 | "metadata": { 816 | "anaconda-cloud": {}, 817 | "kernelspec": { 818 | "display_name": "Python [default]", 819 | "language": "python", 820 | "name": "python2" 821 | }, 822 | "language_info": { 823 | "codemirror_mode": { 824 | "name": "ipython", 825 | "version": 2 826 | }, 827 | "file_extension": ".py", 828 | "mimetype": "text/x-python", 829 | "name": "python", 830 | "nbconvert_exporter": "python", 831 | "pygments_lexer": "ipython2", 832 | 
"version": "2.7.12" 833 | } 834 | }, 835 | "nbformat": 4, 836 | "nbformat_minor": 0 837 | } 838 | -------------------------------------------------------------------------------- /resources_resources.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true, 7 | "deletable": true, 8 | "editable": true 9 | }, 10 | "source": [ 11 | "# Helpful Resources\n", 12 | "\n", 13 | "\n", 14 | "## Learn more about Nipype\n", 15 | "\n", 16 | "- [Nipype homepage](http://nipype.readthedocs.io/en/latest/): This is the best place to learn all you need to know about Nipype. For beginners, I recommend checking out the [Quickstart](http://nipype.readthedocs.io/en/latest/quickstart.html) section.\n", 17 | "- [Beginner's Guide](http://miykael.github.io/nipype-beginner-s-guide/): This beginner's guide is an in-depth, step-by-step tutorial to Nipype.\n", 18 | "- [This course material](https://github.com/miykael/nipype_course): For additional information about the material used in this course.\n", 19 | "\n", 20 | "\n", 21 | "## Neuroimaging\n", 22 | "\n", 23 | "- [Neurostars.org](https://neurostars.org/): If you have any questions about Neuroinformatics, this is the place to go! \n", 24 | "- [Design efficiency in FMRI](http://imaging.mrc-cbu.cam.ac.uk/imaging/DesignEfficiency): A nice and detailed guide on how to design a good fMRI study.\n", 25 | "\n", 26 | "\n", 27 | "## Learn Python\n", 28 | "\n", 29 | "- [A Byte of Python](http://python.swaroopch.com/): A very nice introduction to Python in general.\n", 30 | "- [A Crash Course in Python for Scientists](http://nbviewer.jupyter.org/gist/rpmuller/5920182): a very good introduction to Python and scientific programming (e.g. 
Numpy, Scipy, Matplotlib)\n", 31 | "- [Codecademy - Python](https://www.codecademy.com/learn/python): An interactive online training and introduction to Python.\n", 32 | "- [Learn Python the Hard Way](http://learnpythonthehardway.org/book/index.html): A very good step-by-step introduction to Python.\n", 33 | "- [Python Scientific Lecture Notes](http://www.scipy-lectures.org/): A very good and more detailed introduction to Python and scientific programming.\n", 34 | "- If you're looking for a Python-based IDE like Eclipse or MATLAB, check out [PyCharm](https://www.jetbrains.com/pycharm/) or [Spyder](https://github.com/spyder-ide/spyder/).\n", 35 | "- [Programming with Python](http://swcarpentry.github.io/python-novice-inflammation/): This short introduction by *Software Carpentry* teaches you the basics of scientific programming using very practical examples.\n", 36 | "\n", 37 | "\n", 38 | "## Learn Git\n", 39 | "\n", 40 | "- [Got 15 minutes and want to learn Git?](https://try.github.io/levels/1/challenges/1): GitHub's own Git tutorial. It's fun and very short.\n", 41 | "- [Git Real](http://gitreal.codeschool.com/) on [Code School](https://www.codeschool.com/): An interactive tutorial about Git.\n", 42 | "- [Top 10 Git Tutorials for Beginners](http://sixrevisions.com/resources/git-tutorials-beginners/)\n", 43 | "\n", 44 | "\n", 45 | "## Learn Unix Shell\n", 46 | "\n", 47 | "- [The Unix Shell](http://swcarpentry.github.io/shell-novice/): If you're new to Linux, here's a quick starter guide by Software Carpentry that teaches you the basics."
48 | ] 49 | } 50 | ], 51 | "metadata": { 52 | "anaconda-cloud": {}, 53 | "kernelspec": { 54 | "display_name": "Python [default]", 55 | "language": "python", 56 | "name": "python2" 57 | }, 58 | "language_info": { 59 | "codemirror_mode": { 60 | "name": "ipython", 61 | "version": 2 62 | }, 63 | "file_extension": ".py", 64 | "mimetype": "text/x-python", 65 | "name": "python", 66 | "nbconvert_exporter": "python", 67 | "pygments_lexer": "ipython2", 68 | "version": "2.7.12" 69 | } 70 | }, 71 | "nbformat": 4, 72 | "nbformat_minor": 0 73 | } 74 | -------------------------------------------------------------------------------- /y_index_with_advanced_and_developer_section.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 3, 6 | "metadata": { 7 | "collapsed": false, 8 | "deletable": true, 9 | "editable": true, 10 | "scrolled": false 11 | }, 12 | "outputs": [ 13 | { 14 | "data": { 15 | "text/html": [ 16 | "\n", 17 | "\n", 18 | "\n", 19 | " \n", 20 | "\n", 21 | "
\n", 22 | " \n", 23 | "
\n", 24 | "
\n", 25 | "
\n", 26 | "

Welcome to the Nipype Tutorial! It covers the basic concepts and most common use cases of Nipype and will teach\n", 27 | " you everything so that you can start creating your own workflows in no time. We recommend that you start with\n", 28 | " the introduction section to familiarize yourself with the tools used in this tutorial and then move on to the\n", 29 | " basic concepts section to learn everything you need to know for your everyday life with Nipype. The workflow\n", 30 | " examples section shows you a real example of how you can use Nipype to analyze an actual dataset.\n", 31 | "

\n", 32 | " All of the notebooks used in this tutorial can be found on github.com/miykael/nipype_tutorial.\n", 33 | " But if you want to have the real experience and want to go through the computations by yourself, we highly\n", 34 | " recommend you to do the Nipype Course. This course\n", 35 | " gives you the opportunity to adapt the commands to your liking and discover the flexibility and real power of\n", 36 | " Nipype yourself. For the tutorial, you need to install a Docker image on your system that provides you a \n", 37 | " neuroimaging environment based on a Debian system, with working Python software (including Nipype, dipy, matplotlib,\n", 38 | " nibabel, nipy, numpy, pandas, scipy, seaborn and more), FSL, AFNI, ANTs and SPM12 (no license needed). This\n", 39 | " neuroimaging environment is based on the docker images under github.com/miykael/nipype_env,\n", 40 | " which allow you to run toolboxes like FSL, AFNI and ANTs on any system, including Windows.\n", 41 | "

\n", 42 | " For everything that isn't covered in this tutorial, check out the main homepage.\n", 43 | " And if you haven't had enough and want to learn even more about Nipype and Neuroimaging, make sure to look at\n", 44 | " the detailed beginner's guide.\n", 45 | "

\n", 46 | "
\n", 47 | "\n", 48 | " \n", 50 | "\n", 51 | " \n", 53 | " \n", 54 | "

Introduction

\n", 55 | "
\n", 56 | " Nipype\n", 57 | " Jupyter-Notebook\n", 58 | " Tutorial Dataset\n", 59 | " Docker\n", 60 | " Python\n", 61 | "
\n", 62 | "

This section is meant as a general overview. It should give you a short introduction to the main topics that\n", 63 | " you need to understand to use Nipype and this tutorial.

\n", 64 | "\n", 65 | "

Basic Concepts

\n", 66 | "
\n", 67 | " Interfaces\n", 68 | " Nodes\n", 69 | " Workflow\n", 70 | " Graph Visualization\n", 71 | " Data Input\n", 72 | " Data Output\n", 73 | " Iteration / Iterables\n", 74 | " MapNodes\n", 75 | " Function Nodes\n", 76 | " JoinNodes\n", 77 | " Model Specification\n", 78 | " Import existing Workflows\n", 79 | " Execution Plugins\n", 80 | " Execution Configuration\n", 81 | " Errors & Crashes\n", 82 | "
\n", 83 | "

This section will introduce you to all of the key players in Nipype. Basic concepts that you need to learn to\n", 84 | " fully understand and appreciate Nipype. Once you understand this section, you will know all that you need to know\n", 85 | " to create any kind of Nipype workflow.

\n", 86 | "\n", 87 | "

Workflow Examples

\n", 88 | "
\n", 89 | " Preprocessing\n", 90 | " 1st-level Analysis\n", 91 | " Normalize Data\n", 92 | " 2nd-level Analysis\n", 93 | "
\n", 94 | "

In this section you will find some practical examples that show you how to use Nipype in a \"real world\" scenario.

\n", 95 | "\n", 96 | "

Advanced Concepts

\n", 97 | "
\n", 98 | " Commandline Interface*\n", 99 | " Caching*\n", 100 | " Import from Database*\n", 101 | " Save Workflow to File*\n", 102 | " Resources & Profiling*\n", 103 | " Debug*\n", 104 | "
\n", 105 | "

This section introduces more advanced concepts of Nipype: how to run interfaces directly from the command line, how to cache results, how to import data from databases, how to save workflows to file, how to profile resource usage and how to debug your workflows.

\n", 106 | " \n", 107 | "

For Developers

\n", 108 | "
\n", 109 | " Report Issues*\n", 110 | " Help via Github*\n", 111 | " Create your own Interface*\n", 112 | "
\n", 113 | "

This section is meant for everybody who wants to contribute to Nipype. It shows you how to report issues, how to help via GitHub and how to create your own interfaces.

\n", 114 | " \n", 115 | "

Useful Resources & Links

\n", 116 | "
\n", 117 | " Install Nipype\n", 118 | " Useful Resources & Links\n", 119 | " Where to find Help\n", 120 | " Python Cheat Sheet\n", 121 | " Nipype (homepage)\n", 122 | " Nipype Beginner's Guide\n", 123 | " Github of Nipype Tutorial\n", 124 | " Github of Nipype Course\n", 125 | "
\n", 126 | "

This section will give you helpful links and resources, so that you always know where to go to learn more.

\n", 127 | "\n", 128 | "
\n", 129 | "
\n", 130 | "\n", 131 | "\n", 132 | "\n", 133 | "\n", 134 | "\n", 146 | "\n", 147 | "
\n", 148 | "\n", 149 | "

You want to help with this tutorial?

\n", 150 | "

Find the GitHub repo of this tutorial at https://github.com/miykael/nipype_course.\n", 151 | " Feel free to send a pull request or open an issue with your feedback or ideas.\n", 152 | "

\n", 153 | "To inspect the html code of this page, click:
" 154 | ], 155 | "text/plain": [ 156 | "" 157 | ] 158 | }, 159 | "metadata": {}, 160 | "output_type": "display_data" 161 | } 162 | ], 163 | "source": [ 164 | "%%html\n", 165 | "\n", 166 | "\n", 167 | "\n", 168 | " \n", 169 | "\n", 170 | "
\n", 171 | " \n", 172 | "
\n", 173 | "
\n", 174 | "
\n", 175 | "

Welcome to the Nipype Tutorial! It covers the basic concepts and most common use cases of Nipype and will teach\n", 176 | " you everything so that you can start creating your own workflows in no time. We recommend that you start with\n", 177 | " the introduction section to familiarize yourself with the tools used in this tutorial and then move on to the\n", 178 | " basic concepts section to learn everything you need to know for your everyday life with Nipype. The workflow\n", 179 | " examples section shows you a real example of how you can use Nipype to analyze an actual dataset.\n", 180 | "

\n", 181 | " All of the notebooks used in this tutorial can be found on github.com/miykael/nipype_tutorial.\n", 182 | " But if you want to have the real experience and want to go through the computations by yourself, we highly\n", 183 | " recommend you to do the Nipype Course. This course\n", 184 | " gives you the opportunity to adapt the commands to your liking and discover the flexibility and real power of\n", 185 | " Nipype yourself. For the tutorial, you need to install a Docker image on your system that provides you a \n", 186 | " neuroimaging environment based on a Debian system, with working Python software (including Nipype, dipy, matplotlib,\n", 187 | " nibabel, nipy, numpy, pandas, scipy, seaborn and more), FSL, AFNI, ANTs and SPM12 (no license needed). This\n", 188 | " neuroimaging environment is based on the docker images under github.com/miykael/nipype_env,\n", 189 | " which allow you to run toolboxes like FSL, AFNI and ANTs on any system, including Windows.\n", 190 | "

\n", 191 | " For everything that isn't covered in this tutorial, check out the main homepage.\n", 192 | " And if you haven't had enough and want to learn even more about Nipype and Neuroimaging, make sure to look at\n", 193 | " the detailed beginner's guide.\n", 194 | "

\n", 195 | "
\n", 196 | "\n", 197 | " \n", 199 | "\n", 200 | " \n", 202 | " \n", 203 | "

Introduction

\n", 204 | "
\n", 205 | " Nipype\n", 206 | " Jupyter-Notebook\n", 207 | " Tutorial Dataset\n", 208 | " Docker\n", 209 | " Python\n", 210 | "
\n", 211 | "

This section is meant as a general overview. It should give you a short introduction to the main topics that\n", 212 | " you need to understand to use Nipype and this tutorial.

\n", 213 | "\n", 214 | "

Basic Concepts

\n", 215 | "
\n", 216 | " Interfaces\n", 217 | " Nodes\n", 218 | " Workflow\n", 219 | " Graph Visualization\n", 220 | " Data Input\n", 221 | " Data Output\n", 222 | " Iteration / Iterables\n", 223 | " MapNodes\n", 224 | " Function Nodes\n", 225 | " JoinNodes\n", 226 | " Model Specification\n", 227 | " Import existing Workflows\n", 228 | " Execution Plugins\n", 229 | " Execution Configuration\n", 230 | " Errors & Crashes\n", 231 | "
\n", 232 | "

This section will introduce you to all of the key players in Nipype. Basic concepts that you need to learn to\n", 233 | " fully understand and appreciate Nipype. Once you understand this section, you will know all that you need to know\n", 234 | " to create any kind of Nipype workflow.

\n", 235 | "\n", 236 | "

Workflow Examples

\n", 237 | "
\n", 238 | " Preprocessing\n", 239 | " 1st-level Analysis\n", 240 | " Normalize Data\n", 241 | " 2nd-level Analysis\n", 242 | "
\n", 243 | "

In this section you will find some practical examples that show you how to use Nipype in a \"real world\" scenario.

\n", 244 | "\n", 245 | "

Advanced Concepts

\n", 246 | "
\n", 247 | " Commandline Interface*\n", 248 | " Caching*\n", 249 | " Import from Database*\n", 250 | " Save Workflow to File*\n", 251 | " Resources & Profiling*\n", 252 | " Debug*\n", 253 | "
\n", 254 | "

This section introduces more advanced concepts of Nipype: how to run interfaces directly from the command line, how to cache results, how to import data from databases, how to save workflows to file, how to profile resource usage and how to debug your workflows.

\n", 255 | " \n", 256 | "

For Developers

\n", 257 | "
\n", 258 | " Report Issues*\n", 259 | " Help via Github*\n", 260 | " Create your own Interface*\n", 261 | "
\n", 262 | "

This section is meant for everybody who wants to contribute to Nipype. It shows you how to report issues, how to help via GitHub and how to create your own interfaces.

\n", 263 | " \n", 264 | "

Useful Resources & Links

\n", 265 | "
\n", 266 | " Install Nipype\n", 267 | " Useful Resources & Links\n", 268 | " Where to find Help\n", 269 | " Python Cheat Sheet\n", 270 | " Nipype (homepage)\n", 271 | " Nipype Beginner's Guide\n", 272 | " Github of Nipype Tutorial\n", 273 | " Github of Nipype Course\n", 274 | "
\n", 275 | "

This section will give you helpful links and resources, so that you always know where to go to learn more.

\n", 276 | "\n", 277 | "
\n", 278 | "
\n", 279 | "\n", 280 | "\n", 281 | "\n", 282 | "\n", 283 | "\n", 295 | "\n", 296 | "
\n", 297 | "\n", 298 | "

You want to help with this tutorial?

\n", 299 | "

Find the GitHub repo of this tutorial at https://github.com/miykael/nipype_course.\n", 300 | " Feel free to send a pull request or open an issue with your feedback or ideas.\n", 301 | "

\n", 302 | "To inspect the html code of this page, click:
" 303 | ] 304 | }, 305 | { 306 | "cell_type": "code", 307 | "execution_count": null, 308 | "metadata": { 309 | "collapsed": true 310 | }, 311 | "outputs": [], 312 | "source": [] 313 | } 314 | ], 315 | "metadata": { 316 | "anaconda-cloud": {}, 317 | "kernelspec": { 318 | "display_name": "Python [default]", 319 | "language": "python", 320 | "name": "python2" 321 | }, 322 | "language_info": { 323 | "codemirror_mode": { 324 | "name": "ipython", 325 | "version": 2 326 | }, 327 | "file_extension": ".py", 328 | "mimetype": "text/x-python", 329 | "name": "python", 330 | "nbconvert_exporter": "python", 331 | "pygments_lexer": "ipython2", 332 | "version": "2.7.13" 333 | } 334 | }, 335 | "nbformat": 4, 336 | "nbformat_minor": 0 337 | } 338 | -------------------------------------------------------------------------------- /z_advanced_caching.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "http://nipype.readthedocs.io/en/latest/users/caching_tutorial.html" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "# Nipype caching" 15 | ] 16 | }, 17 | { 18 | "cell_type": "code", 19 | "execution_count": null, 20 | "metadata": { 21 | "collapsed": true 22 | }, 23 | "outputs": [], 24 | "source": [ 25 | "from nipype.caching import Memory\n", 26 | "mem = Memory('.')" 27 | ] 28 | }, 29 | { 30 | "cell_type": "markdown", 31 | "metadata": {}, 32 | "source": [ 33 | "### Create `cacheable` objects" 34 | ] 35 | }, 36 | { 37 | "cell_type": "code", 38 | "execution_count": null, 39 | "metadata": { 40 | "collapsed": true 41 | }, 42 | "outputs": [], 43 | "source": [ 44 | "from nipype.interfaces.spm import Realign\n", 45 | "from nipype.interfaces.fsl import MCFLIRT\n", 46 | "\n", 47 | "spm_realign = mem.cache(Realign)\n", 48 | "fsl_realign = mem.cache(MCFLIRT)" 49 | ] 50 | }, 51 | { 52 | "cell_type": "markdown", 53 | 
"metadata": {}, 54 | "source": [ 55 | "### Execute interfaces" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": { 62 | "collapsed": true 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "spm_results = spm_realign(in_files='ds107.nii', register_to_mean=False)\n", 67 | "fsl_results = fsl_realign(in_file='ds107.nii', ref_vol=0, save_plots=True)" 68 | ] 69 | }, 70 | { 71 | "cell_type": "code", 72 | "execution_count": null, 73 | "metadata": { 74 | "collapsed": true 75 | }, 76 | "outputs": [], 77 | "source": [ 78 | "subplot(211);plot(genfromtxt(fsl_results.outputs.par_file)[:, 3:])\n", 79 | "subplot(212);plot(genfromtxt(spm_results.outputs.realignment_parameters)[:,:3])" 80 | ] 81 | }, 82 | { 83 | "cell_type": "code", 84 | "execution_count": null, 85 | "metadata": { 86 | "collapsed": true 87 | }, 88 | "outputs": [], 89 | "source": [ 90 | "spm_results = spm_realign(in_files='ds107.nii', register_to_mean=False)\n", 91 | "fsl_results = fsl_realign(in_file='ds107.nii', ref_vol=0, save_plots=True)" 92 | ] 93 | }, 94 | { 95 | "cell_type": "markdown", 96 | "metadata": {}, 97 | "source": [ 98 | "### More caching" 99 | ] 100 | }, 101 | { 102 | "cell_type": "code", 103 | "execution_count": null, 104 | "metadata": { 105 | "collapsed": true 106 | }, 107 | "outputs": [], 108 | "source": [ 109 | "from os.path import abspath as opap\n", 110 | "files = [opap('../ds107/sub001/BOLD/task001_run001/bold.nii.gz'),\n", 111 | " opap('../ds107/sub001/BOLD/task001_run002/bold.nii.gz')]\n", 112 | "converter = mem.cache(MRIConvert)\n", 113 | "newfiles = []\n", 114 | "for idx, fname in enumerate(files):\n", 115 | " newfiles.append(converter(in_file=fname,\n", 116 | " out_type='nii').outputs.out_file)" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": null, 122 | "metadata": { 123 | "collapsed": true 124 | }, 125 | "outputs": [], 126 | "source": [ 127 | "os.chdir(tutorial_dir)" 128 | ] 129 | } 130 | ], 131 | "metadata": { 132 
| "anaconda-cloud": {}, 133 | "kernelspec": { 134 | "display_name": "Python [conda root]", 135 | "language": "python", 136 | "name": "conda-root-py" 137 | }, 138 | "language_info": { 139 | "codemirror_mode": { 140 | "name": "ipython", 141 | "version": 2 142 | }, 143 | "file_extension": ".py", 144 | "mimetype": "text/x-python", 145 | "name": "python", 146 | "nbconvert_exporter": "python", 147 | "pygments_lexer": "ipython2", 148 | "version": "2.7.13" 149 | } 150 | }, 151 | "nbformat": 4, 152 | "nbformat_minor": 0 153 | } 154 | -------------------------------------------------------------------------------- /z_advanced_commandline.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "http://nipype.readthedocs.io/en/latest/users/cli.html" 10 | ] 11 | }, 12 | { 13 | "cell_type": "markdown", 14 | "metadata": { 15 | "collapsed": true 16 | }, 17 | "source": [ 18 | "http://nipype.readthedocs.io/en/latest/users/nipypecmd.html" 19 | ] 20 | }, 21 | { 22 | "cell_type": "code", 23 | "execution_count": null, 24 | "metadata": { 25 | "collapsed": true 26 | }, 27 | "outputs": [], 28 | "source": [] 29 | } 30 | ], 31 | "metadata": { 32 | "anaconda-cloud": {}, 33 | "kernelspec": { 34 | "display_name": "Python [default]", 35 | "language": "python", 36 | "name": "python2" 37 | }, 38 | "language_info": { 39 | "codemirror_mode": { 40 | "name": "ipython", 41 | "version": 2 42 | }, 43 | "file_extension": ".py", 44 | "mimetype": "text/x-python", 45 | "name": "python", 46 | "nbconvert_exporter": "python", 47 | "pygments_lexer": "ipython2", 48 | "version": "2.7.13" 49 | } 50 | }, 51 | "nbformat": 4, 52 | "nbformat_minor": 0 53 | } 54 | -------------------------------------------------------------------------------- /z_advanced_databases.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | 
"cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "https://github.com/nipy/nipype/blob/master/examples/fmri_ants_openfmri.py" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": { 14 | "collapsed": true 15 | }, 16 | "outputs": [], 17 | "source": [] 18 | }, 19 | { 20 | "cell_type": "markdown", 21 | "metadata": {}, 22 | "source": [ 23 | "# Step 9: Connecting to Databases" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": null, 29 | "metadata": { 30 | "collapsed": true 31 | }, 32 | "outputs": [], 33 | "source": [ 34 | "from os.path import abspath as opap\n", 35 | "\n", 36 | "from nipype.interfaces.io import XNATSource\n", 37 | "from nipype.pipeline.engine import Node, Workflow\n", 38 | "from nipype.interfaces.fsl import BET\n", 39 | "\n", 40 | "subject_id = 'xnat_S00001'\n", 41 | "\n", 42 | "dg = Node(XNATSource(infields=['subject_id'],\n", 43 | " outfields=['struct'],\n", 44 | " config='/Users/satra/xnat_configs/nitrc_ir_config'),\n", 45 | " name='xnatsource')\n", 46 | "dg.inputs.query_template = ('/projects/fcon_1000/subjects/%s/experiments/xnat_E00001'\n", 47 | " '/scans/%s/resources/NIfTI/files')\n", 48 | "dg.inputs.query_template_args['struct'] = [['subject_id', 'anat_mprage_anonymized']]\n", 49 | "dg.inputs.subject_id = subject_id\n", 50 | "\n", 51 | "bet = Node(BET(), name='skull_stripper')\n", 52 | "\n", 53 | "wf = Workflow(name='testxnat')\n", 54 | "wf.base_dir = opap('xnattest')\n", 55 | "wf.connect(dg, 'struct', bet, 'in_file')" 56 | ] 57 | }, 58 | { 59 | "cell_type": "code", 60 | "execution_count": null, 61 | "metadata": { 62 | "collapsed": true 63 | }, 64 | "outputs": [], 65 | "source": [ 66 | "from nipype.interfaces.io import XNATSink\n", 67 | "\n", 68 | "ds = Node(XNATSink(config='/Users/satra/xnat_configs/central_config'),\n", 69 | " name='xnatsink')\n", 70 | "ds.inputs.project_id = 'NPTEST'\n", 71 | "ds.inputs.subject_id = 'NPTEST_xnat_S00001'\n", 72 | 
"ds.inputs.experiment_id = 'test_xnat'\n", 73 | "ds.inputs.reconstruction_id = 'bet'\n", 74 | "ds.inputs.share = True\n", 75 | "wf.connect(bet, 'out_file', ds, 'brain')" 76 | ] 77 | } 78 | ], 79 | "metadata": { 80 | "anaconda-cloud": {}, 81 | "kernelspec": { 82 | "display_name": "Python [conda root]", 83 | "language": "python", 84 | "name": "conda-root-py" 85 | }, 86 | "language_info": { 87 | "codemirror_mode": { 88 | "name": "ipython", 89 | "version": 2 90 | }, 91 | "file_extension": ".py", 92 | "mimetype": "text/x-python", 93 | "name": "python", 94 | "nbconvert_exporter": "python", 95 | "pygments_lexer": "ipython2", 96 | "version": "2.7.13" 97 | } 98 | }, 99 | "nbformat": 4, 100 | "nbformat_minor": 0 101 | } 102 | -------------------------------------------------------------------------------- /z_advanced_debug.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "http://nipype.readthedocs.io/en/latest/users/debug.html" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": { 14 | "collapsed": true 15 | }, 16 | "outputs": [], 17 | "source": [] 18 | } 19 | ], 20 | "metadata": { 21 | "kernelspec": { 22 | "display_name": "Python [default]", 23 | "language": "python", 24 | "name": "python2" 25 | }, 26 | "language_info": { 27 | "codemirror_mode": { 28 | "name": "ipython", 29 | "version": 2 30 | }, 31 | "file_extension": ".py", 32 | "mimetype": "text/x-python", 33 | "name": "python", 34 | "nbconvert_exporter": "python", 35 | "pygments_lexer": "ipython2", 36 | "version": "2.7.13" 37 | } 38 | }, 39 | "nbformat": 4, 40 | "nbformat_minor": 2 41 | } 42 | -------------------------------------------------------------------------------- /z_advanced_export_workflow.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | 
"metadata": {}, 6 | "source": [ 7 | "http://nipype.readthedocs.io/en/latest/users/saving_workflows.html" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": { 14 | "collapsed": true 15 | }, 16 | "outputs": [], 17 | "source": [] 18 | } 19 | ], 20 | "metadata": { 21 | "kernelspec": { 22 | "display_name": "Python [default]", 23 | "language": "python", 24 | "name": "python2" 25 | }, 26 | "language_info": { 27 | "codemirror_mode": { 28 | "name": "ipython", 29 | "version": 2 30 | }, 31 | "file_extension": ".py", 32 | "mimetype": "text/x-python", 33 | "name": "python", 34 | "nbconvert_exporter": "python", 35 | "pygments_lexer": "ipython2", 36 | "version": "2.7.13" 37 | } 38 | }, 39 | "nbformat": 4, 40 | "nbformat_minor": 2 41 | } 42 | -------------------------------------------------------------------------------- /z_advanced_resources_and_profiling.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "Look into: http://nipype.readthedocs.io/en/latest/users/resource_sched_profiler.html" 8 | ] 9 | }, 10 | { 11 | "cell_type": "code", 12 | "execution_count": null, 13 | "metadata": { 14 | "collapsed": true 15 | }, 16 | "outputs": [], 17 | "source": [] 18 | } 19 | ], 20 | "metadata": { 21 | "anaconda-cloud": {}, 22 | "kernelspec": { 23 | "display_name": "Python [default]", 24 | "language": "python", 25 | "name": "python2" 26 | }, 27 | "language_info": { 28 | "codemirror_mode": { 29 | "name": "ipython", 30 | "version": 2 31 | }, 32 | "file_extension": ".py", 33 | "mimetype": "text/x-python", 34 | "name": "python", 35 | "nbconvert_exporter": "python", 36 | "pygments_lexer": "ipython2", 37 | "version": "2.7.13" 38 | } 39 | }, 40 | "nbformat": 4, 41 | "nbformat_minor": 0 42 | } 43 | -------------------------------------------------------------------------------- /z_development_github.ipynb: 
-------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# Github\n", 10 | "\n", 11 | "step by step guide on how to submit PR's etc." 12 | ] 13 | } 14 | ], 15 | "metadata": { 16 | "anaconda-cloud": {}, 17 | "kernelspec": { 18 | "display_name": "Python [conda root]", 19 | "language": "python", 20 | "name": "conda-root-py" 21 | }, 22 | "language_info": { 23 | "codemirror_mode": { 24 | "name": "ipython", 25 | "version": 2 26 | }, 27 | "file_extension": ".py", 28 | "mimetype": "text/x-python", 29 | "name": "python", 30 | "nbconvert_exporter": "python", 31 | "pygments_lexer": "ipython2", 32 | "version": "2.7.13" 33 | } 34 | }, 35 | "nbformat": 4, 36 | "nbformat_minor": 0 37 | } 38 | -------------------------------------------------------------------------------- /z_development_interface.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "http://nipype.readthedocs.io/en/latest/devel/cmd_interface_devel.html" 8 | ] 9 | }, 10 | { 11 | "cell_type": "markdown", 12 | "metadata": {}, 13 | "source": [ 14 | "http://nipype.readthedocs.io/en/latest/devel/matlab_interface_devel.html" 15 | ] 16 | }, 17 | { 18 | "cell_type": "markdown", 19 | "metadata": {}, 20 | "source": [ 21 | "http://nipype.readthedocs.io/en/latest/devel/python_interface_devel.html" 22 | ] 23 | }, 24 | { 25 | "cell_type": "code", 26 | "execution_count": null, 27 | "metadata": { 28 | "collapsed": true 29 | }, 30 | "outputs": [], 31 | "source": [] 32 | } 33 | ], 34 | "metadata": { 35 | "kernelspec": { 36 | "display_name": "Python [default]", 37 | "language": "python", 38 | "name": "python2" 39 | }, 40 | "language_info": { 41 | "codemirror_mode": { 42 | "name": "ipython", 43 | "version": 2 44 | }, 45 | "file_extension": ".py", 46 | 
"mimetype": "text/x-python", 47 | "name": "python", 48 | "nbconvert_exporter": "python", 49 | "pygments_lexer": "ipython2", 50 | "version": "2.7.13" 51 | } 52 | }, 53 | "nbformat": 4, 54 | "nbformat_minor": 2 55 | } 56 | -------------------------------------------------------------------------------- /z_development_report_issue.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": { 6 | "collapsed": true 7 | }, 8 | "source": [ 9 | "# Report an issue\n", 10 | "\n", 11 | "step by step guide how to open an issue on github..." 12 | ] 13 | } 14 | ], 15 | "metadata": { 16 | "anaconda-cloud": {}, 17 | "kernelspec": { 18 | "display_name": "Python [conda root]", 19 | "language": "python", 20 | "name": "conda-root-py" 21 | }, 22 | "language_info": { 23 | "codemirror_mode": { 24 | "name": "ipython", 25 | "version": 2 26 | }, 27 | "file_extension": ".py", 28 | "mimetype": "text/x-python", 29 | "name": "python", 30 | "nbconvert_exporter": "python", 31 | "pygments_lexer": "ipython2", 32 | "version": "2.7.13" 33 | } 34 | }, 35 | "nbformat": 4, 36 | "nbformat_minor": 0 37 | } 38 | --------------------------------------------------------------------------------