`;
38 |
39 | - {doc}`day_05`:
40 |
41 | > - {ref}`stimuli-exercise`;
42 | > - {doc}`correlation_2d_exercise`;
43 | > - {doc}`github_glm_homework` {{ -- }} {doc}`on_estimation_exercise` {{ -- }}
44 | > solution: {doc}`on_estimation_solution`;
45 |
46 | - {doc}`day_06`:
47 |
48 | > - {doc}`github_dummies_homework` {{ -- }} {doc}`on_dummies_exercise` {{ -- }}
49 | > solution: {doc}`on_dummies_solution`;
50 |
51 | - {doc}`lab_07`:
52 |
53 | > - {doc}`make_an_hrf_exercise` {{ -- }} solution: {doc}`make_an_hrf_solution`;
54 | > - {doc}`hrf_correlation_exercise` {{ -- }} solution:
55 | > {doc}`hrf_correlation_solution`;
56 |
57 | - {doc}`day_07`:
58 |
59 | > - {doc}`multi_model_homework` {{ -- }} {doc}`multi_model_exercise` {{ -- }}
60 | > solution: {doc}`multi_model_solution`;
61 |
62 | - {doc}`day_08`:
63 |
64 | > - {ref}`glmtools-exercise`;
65 |
66 | - {doc}`day_09`:
67 |
68 | > - {doc}`slice_timing_exercise` {{ -- }} solution:
69 | > {doc}`slice_timing_solution`.
70 |
71 | - {doc}`day_10`:
72 |
73 | > - {doc}`optimizing_rotation_exercise` {{ -- }} solution:
74 | > {doc}`optimizing_rotation_solution`;
75 | > - {doc}`reslicing_with_affines_exercise` {{ -- }} solution:
76 | > {doc}`reslicing_with_affines_solution`.
77 |
78 | - {doc}`lab_11`:
79 |
80 | > - {doc}`what_extra_transform_exercise` {{ -- }} solution:
81 | > {doc}`what_extra_transform_solution`;
82 | > - {doc}`optimizing_affine_exercise` {{ -- }} solution:
83 | > {doc}`optimizing_affine_solution`.
84 |
85 | - {doc}`day_11`:
86 |
87 | > - {doc}`applying_deformations_exercise` {{ -- }} solution:
88 | > {doc}`applying_deformations_solution`.
89 |
90 | - {doc}`day_12`:
91 |
92 | > - {doc}`anterior_cingulate` exercise.
93 |
94 | - {doc}`lab_12`:
95 |
96 | > - {doc}`spm_slice_timing_exercise` {{ -- }} solution:
97 | > {doc}`spm_slice_timing_solution`;
98 | > - {doc}`full_scripting` {{ -- }} solution:
99 | > {download}`nipype_ds114_sub009_t2r1.py`.
100 |
101 | ## Extra exercises
102 |
103 | - {doc}`smoothing_exercise` {{ -- }} solution: {doc}`smoothing_solution`.
104 |
--------------------------------------------------------------------------------
/floating_in_text.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.13.7
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | prereqs:
17 | - pathlib
18 | ---
19 |
20 | # Formats for floating point values in text files
21 |
22 | Let's say we have a floating point number like this:
23 |
24 | ```{python}
25 | a_number = 314.15926
26 | a_number
27 | ```
28 |
29 | We can also represent these numbers in exponential format. Exponential format
30 | breaks the number into two parts: the *significand* and the *exponent*.
31 |
32 | The significand is a floating point number with one digit before the decimal
33 | point. The exponent is an integer. For example:
34 |
35 | ```{python}
36 | exp_number = 3.1415926E2
37 | exp_number
38 | ```
39 |
40 | Here the significand is `3.1415926`, and the exponent is `2`, the value after
41 | the `E`. The number is given by `s * 10 ** e` where `s` is the significand and
42 | `e` is the exponent. In this case: `314.15926 = 3.1415926 * 10 ** 2`.
43 |
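To see this split in Python itself, we can ask for the standard `e` format code when converting the number to a string (a quick check, not part of the original exercise):

```{python}
# Format a float as significand and exponent with the 'e' format code.
a_number = 314.15926
print(f"{a_number:e}")  # 3.141593e+02
```

The `e` format rounds the significand to six decimal places by default.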
44 | This exponential format is the default format that `np.savetxt` uses to
45 | represent floating point numbers when writing to text files. For example:
46 |
47 | ```{python}
48 | import numpy as np
49 |
50 | an_array = np.array([a_number, 1.0, 2.0])
51 | an_array
52 | ```
53 |
54 | ```{python}
55 | # Save the array as a text file.
56 | np.savetxt('some_numbers.txt', an_array)
57 | ```
58 |
59 | ```{python}
60 | # Show the text in the file
61 | from pathlib import Path
62 |
63 | pth = Path('some_numbers.txt')
64 |
65 | print(pth.read_text())
66 | ```
67 |
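If you do not want the exponential default, `np.savetxt` accepts a `fmt` argument. Here is a minimal sketch writing fixed-point text instead; the filename `fixed_numbers.txt` is just for illustration:

```{python}
import numpy as np
from pathlib import Path

an_array = np.array([314.15926, 1.0, 2.0])
# Ask for fixed-point notation, 4 digits after the decimal point.
np.savetxt('fixed_numbers.txt', an_array, fmt='%.4f')
print(Path('fixed_numbers.txt').read_text())
# Clean up this temporary file too.
Path('fixed_numbers.txt').unlink()
```
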
68 | Finally, clean up the temporary file:
69 |
70 | ```{python}
71 | pth.unlink()
72 | ```
73 |
--------------------------------------------------------------------------------
/full_scripting.md:
--------------------------------------------------------------------------------
1 | # Scripting of SPM analysis with nipype
2 |
3 | Requirements:
4 |
5 | - {doc}`saving_images`;
6 | - {doc}`introducing_nipype`;
7 | - {doc}`spm_slice_timing_exercise`.
8 |
9 | Can you write a Nipype script to do all the pre-processing in SPM?
10 |
11 | You are doing an analysis for the following scans:
12 |
13 | - {download}`ds114_sub009_t2r1.nii` (functional);
14 | - {download}`ds114_sub009_highres.nii` (structural).
15 |
16 | In the following, remember you can explore the individual Nipype objects like
17 | this:
18 |
19 | >>> from nipype.interfaces import spm
20 | >>> slice_timer = spm.SliceTiming()
21 | >>> slice_timer.help()
22 | Use spm to perform slice timing correction.
23 | ...
24 |
25 | The script should do the following, using nibabel and Nipype:
26 |
27 | - copy the structural image with a prefix of `c` because SPM will modify the
28 | image during registration (it modifies the image affine);
29 | - drop the first 4 volumes from the functional run. Save the new functional
30 | image with a filename prefix of `f`;
31 | - run slice timing, assuming an ascending interleaved slice acquisition order
32 | (first, third, ..., second, fourth, ...). See
33 | {doc}`spm_slice_timing_exercise` for details. The resulting functional
34 | image will have a filename prefix of `af`;
35 | - run motion correction estimation, and write out the mean image across
36 | volumes. You will need `nipype.interfaces.spm.Realign`. The mean image
37 | will have filename prefix `meanaf`. Hint: to write only the mean image,
38 | you need an `inputs.write_which` value of `[0, 1]`. See the script
39 | fragment below;
40 | - estimate the registration of the structural image to the mean functional
41 | image: `nipype.interfaces.spm.Coregister`;
42 | - register the (registered) structural image to the template and write the
43 | slice-time corrected functional image with these estimated parameters:
44 | `nipype.interfaces.spm.Normalize12`. SPM calls this "spatial
45 | normalization". The resulting functional image will have a prefix of `waf`.
46 | Hint: I've put suggested values for the `bounding_box` input parameter in
47 | the script fragment below;
48 | - smooth the spatially normalized 4D functional image by 8mm FWHM:
49 | `nipype.interfaces.spm.Smooth`. The resulting functional image will have
50 | the prefix `swaf`.
51 |
52 | Your script will likely start something like this:
53 |
54 | ```
55 | """ Script to run SPM processing in nipype
56 | """
57 | import shutil
58 |
59 | import nibabel as nib
60 |
61 | # Import our own configuration for nipype
62 | import nipype_settings
63 |
64 | import nipype.interfaces.spm as spm
65 |
66 | # Start at the original image
67 | base_fname = 'ds114_sub009_t2r1.nii'
68 | structural_orig = 'ds114_sub009_highres.nii'
69 | structural_fname = 'cds114_sub009_highres.nii'
70 |
71 | # Copy the structural, because SPM will modify it
72 | shutil.copyfile(structural_orig, structural_fname)
73 |
74 | # Analysis parameters
75 | TR = 2.5
76 | slice_time_ref_slice = 1 # 1-based indexing
77 | n_dummies = 4
78 | # Realign "write_which" input. The 0 means 'do not write resampled versions of
79 | # any of the individual volumes'. The 1 means 'write mean across volumes after
80 | # motion correction'. See config/spm_cfg_realign.m in the SPM12 distribution.
81 | write_which = [0, 1]
82 | # Normalize write parameters. Bounding box gives extreme [[left, posterior,
83 | # inferior], [right, anterior, superior]] millimeter coordinates of the voxel
84 | # grid that SPM will use to write out the new images in template space. See
85 | # spm_preproc_write8.m for use of the bounding box values.
86 | bounding_box = [[-78., -112., -46.], [78., 76., 86.]]
87 | ```
88 |
89 | When you have finished, have a look at the solution script at
90 | {download}`nipype_ds114_sub009_t2r1.py`.
91 |
--------------------------------------------------------------------------------
/git_videos.md:
--------------------------------------------------------------------------------
1 | # First steps with git
2 |
3 | - Introduction to the git working tree, repository and staging area
4 |   (18 minutes);
5 | - The links between git commits, and git branches (9 minutes).
15 |
--------------------------------------------------------------------------------
/git_workflow_exercises.md:
--------------------------------------------------------------------------------
1 | (git-workflow-exercises)=
2 |
3 | # Git workflow exercises
4 |
5 | ## spm_funcs exercise
6 |
7 | - Fork the repository at ;
8 | - Clone your fork;
9 | - Make a *new branch to work on*;
10 | - Solve the `code/spm_funcs.py` exercise;
11 | - Commit;
12 | - Push;
13 | - Make a pull request back to
14 | .
15 |
16 | ## detector exercise
17 |
18 | - Make a *new branch to work on*;
19 | - Solve the `code/detectors.py` exercise;
20 | - Commit;
21 | - Push;
22 | - Make a pull request back to
23 | .
24 |
--------------------------------------------------------------------------------
/github_dummies_homework.md:
--------------------------------------------------------------------------------
1 | # Dummy coding GLM exercise
2 |
3 | This exercise follows the same form as {doc}`github_pca_homework` and
4 | {doc}`github_glm_homework`, with the minor difference that this time you
5 | don't have the solutions on the main web page.
6 |
7 | As before:
8 |
9 | - Make sure you have logged into your Github account;
10 |
11 | - Go to the [course github organization] page. You should see a private
12 | repository with a name like `yourname-dummies-exercise` where `yourname`
13 | is your first name.
14 |
15 | - Fork this repository to your own account;
16 |
17 | - Clone your forked repository;
18 |
19 | - Change directory into your new cloned repository:
20 |
21 | ```
22 | cd yourname-dummies-exercise
23 | ```
24 |
25 | - Make a new branch to work on, and checkout this new branch:
26 |
27 | ```
28 | git branch dummies-exercise
29 | git checkout dummies-exercise
30 | ```
31 |
32 | - Finish the dummy coding exercise by filling in `on_dummies_code.py`,
33 | including writing the text answers into the strings in the code file;
34 |
35 | - When you've finished, commit `on_dummies_code.py`, and push these changes
36 | to your github fork;
37 |
38 | - Make a pull request to the upstream repository.
39 |
40 | If you have any problems, please feel free to contact JB or me.
41 |
42 | Happy dummy GLMing.
43 |
--------------------------------------------------------------------------------
/github_glm_homework.md:
--------------------------------------------------------------------------------
1 | # GLM exercise, with some github practice
2 |
3 | This exercise follows the same form as the {doc}`github_pca_homework`.
4 |
5 | As before:
6 |
7 | - Make sure you have logged into your Github account;
8 |
9 | - Go to the [course github organization] page. You should see a private
10 | repository with a name like `yourname-glm-exercise` where `yourname` is
11 | your first name.
12 |
13 | - Fork this repository to your own account;
14 |
15 | - Clone your forked repository;
16 |
17 | - Change directory into your new cloned repository:
18 |
19 | ```
20 | cd yourname-glm-exercise
21 | ```
22 |
23 | - Make a new branch to work on, and checkout this new branch:
24 |
25 | ```
26 | git branch glm-exercise
27 | git checkout glm-exercise
28 | ```
29 |
30 | - Finish the GLM exercise by filling in `on_estimation_code.py`, including
31 | writing the text answers into the strings in the code file;
32 |
33 | - When you've finished, commit `on_estimation_code.py`, and push these
34 | changes to your github fork;
35 |
36 | - Make a pull request to the upstream repository.
37 |
38 | If you have any problems, please feel free to contact JB or me.
39 |
40 | Good GLMing.
41 |
--------------------------------------------------------------------------------
/github_pca_homework.md:
--------------------------------------------------------------------------------
1 | # PCA exercise, with some github practice
2 |
3 | This exercise is for you to:
4 |
5 | - finish the PCA exercise;
6 | - practice Github collaborative workflow;
7 | - see pull requests in action.
8 |
9 | You're going to fork a repository, finish the PCA exercise, and then make a
10 | pull request to the repository. I'll review your pull request and merge it
11 | when it is ready.
12 |
13 | I've made private repositories for each of you for the exercise. This will be
14 | the "upstream" repository that you will fork, and make a pull request to.
15 |
16 | - Make sure you have logged into your Github account;
17 |
18 | - Go to the [course github organization] page. You should see a private
19 | repository with a name like `yourname-pca-exercise` where `yourname` is
20 | your first name.
21 |
22 | - Click on this repository;
23 |
24 | - Fork the repository to your own account by clicking the "Fork" button
25 | towards the top left hand corner. Your browser should then open a page
26 | pointing to your fork of the repository;
27 |
28 | - Click the green "Clone or download" button and copy the URL to clone the
29 | forked repository;
30 |
31 | - From the terminal, run `git clone
32 | https://github.com/your-gh-user/yourname-pca-exercise.git` where the URL is
33 | the URL of your cloned repository;
34 |
35 | - Change directory into your new cloned repository:
36 |
37 | ```
38 | cd yourname-pca-exercise
39 | ```
40 |
41 | - Make a new branch to work on, and checkout this new branch:
42 |
43 | ```
44 | git branch pca-exercise
45 | git checkout pca-exercise
46 | ```
47 |
48 | - Finish the PCA exercise by filling in `pca_code.py`. I suggest you keep
49 | the exercise web-page open in your browser, and work in IPython as in:
50 |
51 | ```bash
52 | $ ipython
53 | Python 3.5.2 (default, Jul 28 2016, 21:28:00)
54 | Type "copyright", "credits" or "license" for more information.
55 |
56 | IPython 5.1.0 -- An enhanced Interactive Python.
57 | ? -> Introduction and overview of IPython's features.
58 | %quickref -> Quick reference.
59 | help -> Python's own help system.
60 | object? -> Details about 'object', use 'object??' for extra details.
61 |
62 | In [1]: %matplotlib
63 | Using matplotlib backend: MacOSX
64 |
65 | In [2]: run pca_code.py
66 |
67 | In [3]: plt.show()
68 | ```
69 |
70 | - When you've finished:
71 |
72 | ```
73 | git add pca_code.py
74 | git commit
75 | ```
76 |
77 | and fill in a message describing the commit;
78 |
79 | - Push these changes to your own repository, giving the branch name that you
80 | have been working on:
81 |
82 | ```
83 | git push origin pca-exercise --set-upstream
84 | ```
85 |
86 | - Go back to the web page for your forked repository - the URL will be
87 | something like: .
88 | Reload the page to make sure Github knows about your new branch;
89 |
90 | - From the Branch list-box near the top-left of the page, select your new
91 | `pca-exercise` branch. Click on the `pca_code.py` file to check it is what
92 | you were expecting;
93 |
94 | - Click on the "New pull request" button next to the branch name list-box.
95 | Type in a message to me, and click the green "Create pull request" button at
96 | the bottom. When you do, I'll get an email that you've made a pull
97 | request;
98 |
99 | - Now the fun begins. I'll use the Github Pull request interface to comment
100 | on your code, to give you feedback, and make suggestions for changes. This
101 | is a "code review";
102 |
103 | - To make changes, go back to your repository, edit the code, and make a new
104 | commit with the changes. When you have finished making changes:
105 |
106 | ```
107 | git push
108 | ```
109 |
110 | to push back up to your Github fork, and `pca-exercise` branch. The
111 | changes show up automatically in the Github Pull-request interface.
112 |
113 | Happy hunting.
114 |
--------------------------------------------------------------------------------
/glossary.md:
--------------------------------------------------------------------------------
1 | # Glossary
2 |
3 | :::{glossary}
4 | PATH
5 |
6 | : The list of directories in which your system will look for programs to
7 | execute. See [PATH]. When you type a command such as `ls` at the
8 | terminal prompt, this will cause your {term}`shell` to look for an
9 | {term}`executable` file called `ls` in a list of directories. The
10 | list of directories is called the system PATH. Specifically these
11 | directories are listed in the value of an {term}`environment variable`
12 | called `PATH`. Assuming you are using the default Unix `bash`
13 | shell, you can see these directories by typing:
14 |
15 | ```bash
16 | echo $PATH
17 | ```
18 |
19 | at the terminal prompt, followed by the return key. This might give
20 | you output like this:
21 |
22 | ```bash
23 | /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
24 | ```
25 |
26 | The shell will search this list of directories in order for an
27 | executable file called `ls`: first `/usr/bin`, then `/bin`, and
28 | so on. We can ask to see the full path of the program that the system
29 | finds with the `which` command:
30 |
31 | ```bash
32 | $ which ls
33 | /bin/ls
34 | ```
35 |
36 | This tells us that the system did not find an `ls` executable file in
37 | `/usr/bin`, but did find one in `/bin`, for a full path of
38 | `/bin/ls`.
39 |
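You can do the same lookup from Python's standard library; a small sketch (the specific directories and the result of looking up `ls` will vary by system):

```python
import os
import shutil

# The directories the shell searches, in order.
path_dirs = os.environ['PATH'].split(os.pathsep)
print(path_dirs[:3])
# Full path of the first matching executable - like `which ls`.
# On many Unix systems this prints something like /bin/ls.
print(shutil.which('ls'))
```
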
40 | shell
41 |
42 | : A shell is a program that gives access to the computer operating
43 | system. It is usually a "command line interface" program that runs in
44 | a terminal, accepting strings that the user types at the keyboard.
45 | The shell program interprets the string and executes commands. The
46 | most common default shell program is `bash` {{ -- }} for Bourne-Again
47 | SHell, so-called because it is an expanded variant of an older shell
48 | program, called the Bourne shell. For example, when you open a
49 | default terminal application, such as `Terminal.app` in OSX or
50 | `gnome-terminal` in Linux, you will usually see a prompt at which
51 | you can type. When you type, the program displaying the characters
52 | and interpreting them is the *shell*. When you press return at the
53 | end of a line, the shell takes the completed line, and tries to
54 | interpret it as a command. See also {term}`PATH`.
55 |
56 | environment variable
57 |
58 | : An environment variable is a key, value pair that is stored in
59 | computer memory and available to other programs running in the same
60 | environment. For example, the `PATH` environment variable is a key,
61 | value pair where the key is `PATH` and the value is a list of
62 | directories, such as `/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin`.
63 | In particular, the shell uses the value of the `PATH` environment
64 | variable as a list of directories to search for executable programs.
65 |
66 | executable
67 |
68 | : A file is *executable* if the file is correctly set up to execute as a
69 | program. On Unix systems, an executable file has to have special
70 | {term}`file permissions` that label the file as being suitable for
71 | execution.
72 |
73 | file permissions
74 |
75 | : Computer file-systems can store extra information about files,
76 | including file permissions. For example, the file permissions tell
77 | the file-system whether a particular user should be able to read the
78 | file, or write the file or execute the file as a program.
79 |
80 | voxel
81 |
82 | : Voxels are volumetric pixels - that is, they are values in a regular
83 | grid in three-dimensional space - see the [Wikipedia voxel](https://en.wikipedia.org/wiki/Voxel) entry.
84 | :::
85 |
--------------------------------------------------------------------------------
/image_header_and_affine.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.11.5
11 | kernelspec:
12 | display_name: Python 3 (ipykernel)
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # The image header and affine
18 |
19 | See: [coordinate systems and affine
20 | transforms](http://nipy.org/nibabel/coordinate_systems.html) for an
21 | introduction.
22 |
23 | ```{python}
24 | # import common modules
25 | import numpy as np
26 | np.set_printoptions(precision=4, suppress=True) # print arrays to 4DP
27 | ```
28 |
29 | ## The image affine
30 |
31 | So far we have not paid much attention to the image *header*. We first saw
32 | the image header in "What is an image?".
33 |
34 | From that exploration, we found that an image consists of:
35 |
36 | * the array data;
37 |
38 | * metadata (data about the array data).
39 |
40 | The header contains the metadata for the image.
41 |
42 | One piece of metadata is the image affine.
43 |
44 | Here we fetch the image file, and load the image.
45 |
46 | ```{python}
47 | # Load the function to fetch the data file we need.
48 | import nipraxis
49 | # Fetch structural image
50 | structural_fname = nipraxis.fetch_file('ds107_sub012_highres.nii')
51 | # Show the file name
52 | structural_fname
53 | ```
54 |
55 | Load the image:
56 |
57 | ```{python}
58 | import nibabel as nib
59 | img = nib.load(structural_fname)
60 | img.affine
61 | ```
62 |
63 | As you can imagine, nibabel is getting the affine from the header:
64 |
65 | ```{python}
66 | print(img.header)
67 | ```
68 |
69 | Notice the `srow_x, srow_y, srow_z` fields in the header, that contain the
70 | affine for this image. It is not always this simple though – see
71 | [http://nifti.nimh.nih.gov/nifti-1](http://nifti.nimh.nih.gov/nifti-1) for more
72 | details. In general, nibabel will take care of this for you, by extracting the
73 | affine from the header, and returning it via `img.affine`.
74 |
75 | ## NIfTI images can also be `.img`, `.hdr` pairs
76 |
77 | So far, all the images we have seen have been NIfTI format images, stored in a
78 | single file with a `.nii` extension. The single file contains the header
79 | information, and the image array data.
80 |
81 | The NIfTI format also allows the image to be stored as two files, one with
82 | extension `.img` storing the image array data, and another with extension
83 | `.hdr` storing the header. These are called *NIfTI pair* images.
84 |
85 | For example, consider this pair of files:
86 |
87 |
88 | ```{python}
89 | # File containing image data.
90 | struct_img_fname = nipraxis.fetch_file('ds114_sub009_highres_moved.img')
91 | print(struct_img_fname)
92 | # File containing image header.
93 | struct_hdr_fname = nipraxis.fetch_file('ds114_sub009_highres_moved.hdr')
94 | print(struct_hdr_fname)
95 | ```
96 |
97 | We now have `ds114_sub009_highres_moved.img` and
98 | `ds114_sub009_highres_moved.hdr`. These two files together form one NIfTI
99 | image. You can load these with nibabel in the usual way:
100 |
101 | ```{python}
102 | pair_img = nib.load(struct_img_fname)
103 | pair_img.affine
104 | ```
105 |
106 | This form of the NIfTI image is getting less common, because it is inconvenient
107 | to have to keep the `.img` and `.hdr` files together, but you may still find
108 | them used. Their one advantage is that, if some software wants to change
109 | only the header information, it only has to rewrite the small `.hdr` file,
110 | rather than the whole `.nii` file containing both the image data and the
111 | header.
112 |
--------------------------------------------------------------------------------
/images/arr_3d_1d_bool.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/arr_3d_1d_bool.jpg
--------------------------------------------------------------------------------
/images/arr_3d_2d_bool.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/arr_3d_2d_bool.jpg
--------------------------------------------------------------------------------
/images/arr_3d_3d_bool.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/arr_3d_3d_bool.jpg
--------------------------------------------------------------------------------
/images/arr_3d_planes.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/arr_3d_planes.jpg
--------------------------------------------------------------------------------
/images/blank_canvas.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/blank_canvas.png
--------------------------------------------------------------------------------
/images/easiness_reused.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/easiness_reused.png
--------------------------------------------------------------------------------
/images/easiness_values.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/easiness_values.png
--------------------------------------------------------------------------------
/images/gui_and_code.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/gui_and_code.png
--------------------------------------------------------------------------------
/images/image_plane_stack.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/image_plane_stack.png
--------------------------------------------------------------------------------
/images/image_plane_stack_col.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/image_plane_stack_col.png
--------------------------------------------------------------------------------
/images/image_plane_stack_row.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/image_plane_stack_row.png
--------------------------------------------------------------------------------
/images/mricron_contrast.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/mricron_contrast.png
--------------------------------------------------------------------------------
/images/mricron_display.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/mricron_display.png
--------------------------------------------------------------------------------
/images/mricron_draw.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/mricron_draw.png
--------------------------------------------------------------------------------
/images/mricron_overlays.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/mricron_overlays.png
--------------------------------------------------------------------------------
/images/mricron_y_scrolling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/mricron_y_scrolling.png
--------------------------------------------------------------------------------
/images/multi_multiply.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/multi_multiply.jpg
--------------------------------------------------------------------------------
/images/nullius_in_verba.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/nullius_in_verba.jpg
--------------------------------------------------------------------------------
/images/powershell_prompt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/powershell_prompt.png
--------------------------------------------------------------------------------
/images/powershell_windows_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/powershell_windows_10.png
--------------------------------------------------------------------------------
/images/reggie-inv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/reggie-inv.png
--------------------------------------------------------------------------------
/images/reggie.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/reggie.png
--------------------------------------------------------------------------------
/images/simple_easy_velocity.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/simple_easy_velocity.png
--------------------------------------------------------------------------------
/images/spotlight_mini_window.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/spotlight_mini_window.png
--------------------------------------------------------------------------------
/images/spotlight_on_menu.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/spotlight_on_menu.png
--------------------------------------------------------------------------------
/images/spotlight_terminal.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/spotlight_terminal.png
--------------------------------------------------------------------------------
/images/two_by_three_by_4_array.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_by_three_by_4_array.png
--------------------------------------------------------------------------------
/images/two_d_second_slice_np.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_d_second_slice_np.png
--------------------------------------------------------------------------------
/images/two_d_slice_np.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_d_slice_np.png
--------------------------------------------------------------------------------
/images/two_d_slices.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_d_slices.png
--------------------------------------------------------------------------------
/images/two_d_slices_without_annotation.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_d_slices_without_annotation.png
--------------------------------------------------------------------------------
/images/two_d_third_slice_np.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/two_d_third_slice_np.png
--------------------------------------------------------------------------------
/images/vogt_1995_alternatives.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/vogt_1995_alternatives.png
--------------------------------------------------------------------------------
/images/vogt_1995_cytoarchitecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/vogt_1995_cytoarchitecture.png
--------------------------------------------------------------------------------
/images/vogt_1995_single_cs.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/vogt_1995_single_cs.png
--------------------------------------------------------------------------------
/images/volumes_and_voxel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/volumes_and_voxel.png
--------------------------------------------------------------------------------
/images/voxels_by_time.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/voxels_by_time.png
--------------------------------------------------------------------------------
/images/xcode_cli_dialog.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/nipraxis/textbook/f06e7a327ec2ca54b4cf11f8bd9d806a671114da/images/xcode_cli_dialog.png
--------------------------------------------------------------------------------
/index_reshape.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | # Index ordering and reshape in NumPy and MATLAB
12 |
13 | Let’s make a 3D array by taking a 1D array from 0 through 23, and filling the
14 | 3D array by depth (the last axis), and then by column (the second axis) and
15 | then by row (the first axis):
16 |
17 | ```{python}
18 | # Fill in a NumPy array
19 | import numpy as np
20 | numbers = np.arange(24)
21 | py_arr = np.zeros((2, 3, 4))
22 | n_index = 0 # Python has 0-based indices
23 | for i in range(2): # row index changes slowest
24 |     for j in range(3): # then column index
25 |         for k in range(4): # depth index changes fastest
26 |             py_arr[i, j, k] = numbers[n_index]
27 |             n_index += 1
28 |
29 | print(py_arr[:, :, 0])
30 | print(py_arr[:, :, 1])
31 | ```
32 |
33 | We can do the same thing in MATLAB, or its open-source version, Octave:
34 |
35 |
41 | ```
42 | >> % Fill in a MATLAB / Octave array
43 | >> numbers = 0:23;
44 | >> m_arr = zeros(2, 3, 4);
45 | >> n_index = 1;
46 | >> for i = 1:2 % row index changes slowest
47 |      for j = 1:3 % then column index
48 |        for k = 1:4 % depth index changes fastest
49 |          m_arr(i, j, k) = numbers(n_index);
50 |          n_index = n_index + 1;
51 |        end
52 |      end
53 |    end
54 | >> m_arr(:, :, 1)
55 | ans =
56 |
57 | 0 4 8
58 | 12 16 20
59 |
60 | >> m_arr(:, :, 2)
61 | ans =
62 |
63 | 1 5 9
64 | 13 17 21
65 | ```
66 |
67 | Remember that MATLAB and Octave have 1-based indices. That is, the index of
68 | the first element on an axis is 1. Python has 0-based indices. Given that,
69 | these two arrays are the same. By “the same” I mean that `m_arr(i, j, k)
70 | == py_arr[i-1, j-1, k-1]` for any `i, j, k`. You can see this from the
71 | printout above of the first two planes of each array in Python and MATLAB.
72 |
73 | So far, we see that NumPy / Python and MATLAB indexing are the same, apart
74 | from the 0-based / 1-based difference. They differ in their default way of
75 | getting elements when doing a reshape.
76 |
77 | When NumPy or MATLAB reshapes one array into another array, it takes the
78 | elements from the first array in some order, and puts them into the new array
79 | using the same order. The default order in NumPy is to take the elements off
80 | the last axis first, then the second to last, back to the first axis. This is
81 | the same order as the loop above:
82 |
83 | ```{python}
84 | numbers = np.arange(24)
85 | reshaped = np.reshape(numbers, (2, 3, 4))
86 | # This is exactly the same as the original ``py_arr``
87 | np.all(reshaped == py_arr)
88 | ```
89 |
90 | MATLAB does a reshape using the opposite order, taking the elements off the
91 | first axis first:
92 |
93 |
96 | ```
97 | >> m_reshaped = reshape(0:23, [2 3 4]);
98 | >> m_reshaped(:, :, 1)
99 | ans =
100 |
101 | 0 2 4
102 | 1 3 5
103 |
104 | >> m_reshaped(:, :, 2)
105 | ans =
106 |
107 | 6 8 10
108 | 7 9 11
109 | ```
110 |
111 | If you prefer this ordering, you can ask NumPy to do the same, by using the
112 | `order` parameter to `reshape`:
113 |
114 | ```{python}
115 | # First axis first fetching, like MATLAB
116 | first_1_reshaped = np.reshape(numbers, (2, 3, 4), order='F')
117 | print(first_1_reshaped[:, :, 0])
118 | print(first_1_reshaped[:, :, 1])
119 | ```
120 |
--------------------------------------------------------------------------------
/installation_on_linux.md:
--------------------------------------------------------------------------------
1 | ---
2 | orphan: true
3 | ---
4 |
5 | # Installation on Linux
6 |
7 | ## Installing Python 3, git and atom
8 |
9 | ### On Ubuntu or Debian
10 |
11 | Tested on: Ubuntu 14.04, 15.04 through 16.10, 20.04; Debian 11 (Bullseye) and
12 | Unstable (Sid, March 2022).
13 |
14 | Install git and Python 3:
15 |
16 | ```bash
17 | sudo apt-get update
18 | sudo apt-get install -y git python3-dev python3-tk python3-pip
19 | ```
20 |
21 | Check your Python 3 version with:
22 |
23 | ```bash
24 | python3 --version
25 | ```
26 |
27 | This should give you a version >= 3.8. If not, ask your instructors for help.
28 |
29 | ### On Fedora
30 |
31 | Tested on Fedoras 21 through 24, 34 and 35.
32 |
33 | Install git and Python 3:
34 |
35 | ```bash
36 | sudo dnf install -y git python3-devel python3-tkinter python3-pip
37 | ```
38 |
39 | If you get `bash: dnf: command not found`, run `sudo yum install dnf` and try
40 | again.
41 |
42 | Check your Python 3 version with:
43 |
44 | ```bash
45 | python3 --version
46 | ```
47 |
48 | This should give you a version >= 3.8. If not, ask your instructors for help.
49 |
--------------------------------------------------------------------------------
/installation_on_mac.md:
--------------------------------------------------------------------------------
1 | ---
2 | orphan: true
3 | ---
4 |
5 | # Installation on Mac
6 |
7 | (terminal-app)=
8 |
9 | ## Terminal.app
10 |
11 | We'll be typing at the "terminal" prompt often during the class. In macOS, the
12 | program giving the terminal prompt is `Terminal.app`. It comes installed
13 | with macOS.
14 |
15 | You'll find it in the Utilities sub-folder of your Applications folder, but
16 | the easiest way to start it is via Spotlight.
17 |
18 | Start spotlight by either:
19 |
20 | * Clicking on the magnifying glass icon at the right of your menu bar at the
21 | top of your screen
22 |
23 | 
24 |
25 | or (the better option):
26 | * Press the command key (the key with the ⌘ symbol) and then (at the same
27 | time) the spacebar.
28 |
29 | In either case, a mini-window like this will come up:
30 |
31 | 
32 |
33 | Type `terminal` in this window. The first option that comes up is almost
34 | invariably the Terminal application:
35 |
36 | 
37 |
38 | Select this by pressing Enter, and you should see the Terminal application window, as above.
39 |
40 | Consider pinning Terminal.app to your dock by right-clicking on the Terminal
41 | icon in the dock, then choosing "Options" and "Keep in dock".
42 |
43 | ## Git
44 |
45 | Git comes with the Apple macOS command line tools.
46 |
47 | Install these by typing:
48 |
49 | ```bash
50 | xcode-select --install
51 | ```
52 |
53 | in Terminal.app. If you don't have the command line tools, you will get a dialog box like this:
54 |
55 | 
56 |
57 | Select "Install". You may need to wait a while for that to complete.
58 |
59 | When it has run, check you can run the `git` command with this, in Terminal.app:
60 |
61 | ```bash
62 | git
63 | ```
64 |
65 | It should show you the Git help message.
66 |
67 | ## Homebrew
68 |
69 | [Homebrew](https://brew.sh) is "The missing package manager for macOS". It is
70 | a system for installing many open-source software packages on macOS.
71 | We recommend Homebrew to any serious Mac user; you will need it for the
72 | instructions on this page.
73 |
74 | To install Homebrew, follow the instructions on the [homebrew home page](https://brew.sh/).
75 |
76 | ## Install Python
77 |
78 | macOS actually comes with a version of Python for its own use, but it's nearly always better to install a separate version for your own use.
79 |
80 | First, install with Homebrew.
81 |
82 | In {ref}`terminal-app`, type:
83 |
84 | ```bash
85 | brew install python
86 | ```
87 |
88 | **Check carefully for any error messages about failure, like this**:
89 |
90 | ```
91 | Error: The `brew link` step did not complete successfully
92 | The formula built, but is not symlinked into /usr/local
93 | Could not symlink bin/2to3
94 | Target /usr/local/bin/2to3
95 | already exists. You may want to remove it:
96 | rm '/usr/local/bin/2to3'
97 | ```
98 |
99 | The `2to3` command above is just one command that Python installs to your
100 | system.
101 |
102 | If you see a message like that, it means you had another, presumably older,
103 | copy of Python and its associated commands installed in your `/usr/local/bin`
104 | folder. Fix the problem by forcing Homebrew to overwrite the old copy, with
105 | the instructions you will see further down that message:
106 |
107 | ```
108 | brew link --overwrite python
109 | ```
110 |
111 | ## Set up Python for your Terminal
112 |
113 | Next, open the file `~/.bash_profile` with a text editor, for example, like this:
114 |
115 | ```
116 | touch ~/.bash_profile
117 | open -a TextEdit ~/.bash_profile
118 | ```
119 |
120 | Scroll to the end of the file, and add this line:
121 |
122 | ```
123 | export PATH=/usr/local/bin:$PATH
124 | ```
125 |
126 | Be very careful that TextEdit doesn't automatically capitalize `export` above
127 | to `Export`. Correct it again to lower case if it does.
128 |
129 | * Save, and close the text editor.
130 | * Close Terminal.app.
131 | * Start Terminal.app again, and confirm you are looking at the right
132 |   Python:
134 |
135 | ```bash
136 | which python3
137 | ```
138 |
139 | You should see:
140 |
141 | ```
142 | /usr/local/bin/python3
143 | ```
144 |
--------------------------------------------------------------------------------
/intro.md:
--------------------------------------------------------------------------------
1 | # Practice and theory of brain imaging
2 |
3 | This is a hands-on course teaching the principles of functional MRI (fMRI)
4 | data analysis. We will teach you how to work with data and code to get a
5 | deeper understanding of how fMRI methods work, how they can fail, how to fix
6 | them, and how to develop new methods. We will cover the basic concepts in
7 | neuroimaging analysis, and how they relate to the wider world of statistics,
8 | engineering and computer science. At the same time we will teach you
9 | techniques of data analysis that will make your work easier to organize,
10 | understand, explain and share. Using these techniques will give you great benefits in collaborating with others, and in making your work reproducible.
11 |
12 | At the end of the course we expect you to be able to analyze fMRI data using
13 | Python and to track and share your work with version control using git.
14 |
15 | Please see the [syllabus](./syllabus.md) for a more detailed list of
16 | subjects we will cover.
17 |
--------------------------------------------------------------------------------
/introducing_nipype.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.10.3
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | orphan: true
16 | ---
17 |
18 | # Introducing nipype
19 |
20 | [Nipype](http://nipy.org/nipype) is a Python module that provides Python
21 | interfaces to many imaging tools, including SPM, AFNI and FSL.
22 |
23 | We install it with `pip` in the usual way:
24 |
25 | ```
26 | pip3 install --user nipype
27 | ```
28 |
29 | After this has run, check that you can import nipype with:
30 |
31 | ```{python}
32 | import nipype
33 | ```
34 |
35 | We are interested in the nipype `interfaces` sub-package. Specifically,
36 | we want the interfaces to the SPM routines:
37 |
38 | ```{python}
39 | from nipype.interfaces import spm
40 | from nipype.interfaces import matlab as nim
41 | ```
42 |
43 | Our first job is to make sure that nipype can run MATLAB. Let’s check
44 | with a test call:
45 |
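The test-call code cell does not appear in this copy of the page. Based on the complete example at the end of the page, a minimal test call would look like this (it will only run on a machine with MATLAB, and nipype, installed):

```{python}
# A minimal test call (assumes MATLAB is installed and on your path).
from nipype.interfaces import matlab as nim
mlab = nim.MatlabCommand()
mlab.inputs.script = "spm ver"  # ask MATLAB to print the SPM version
mlab.run()
```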
46 | If `nipype` does not have the right command to start MATLAB, this will
47 | fail with an error. We can set the command to start MATLAB like this:
48 |
49 | ```{python}
50 | nim.MatlabCommand.set_default_matlab_cmd('/Applications/MATLAB_R2022b.app/bin/matlab')
51 | ```
52 |
53 | where `/Applications/MATLAB_R2022b.app/bin/matlab` is the path to the
54 | MATLAB application file.
55 |
56 | Check this is working by running the code above.
57 |
58 | Next we need to make sure that nipype has SPM on the MATLAB path when it
59 | is running MATLAB. Try running this command to get the SPM version.
60 |
61 | If this gives an error message, you may not have SPM set up on your
62 | MATLAB path by default. You can use Nipype to add SPM to the MATLAB path
63 | like this:
64 |
65 | ```{python}
66 | nim.MatlabCommand.set_default_paths('/Users/mb312/dev_trees/spm12')
67 | ```
68 |
69 | Another option is to use the MATLAB GUI to add this directory to the
70 | MATLAB path, and save this path for future sessions.
71 |
72 | Now try running the `spm ver` command again:
73 |
74 | We are going to put the setup we need into a Python file we can import
75 | from any script that we write that uses nipype.
76 |
77 | In your current directory, make a new file called `nipype_settings.py`
78 | with contents like this:
79 |
80 | ```python
81 | """ Defaults for using nipype
82 | """
83 | import nipype.interfaces.matlab as nim
84 | # If you needed to set the default matlab command above
85 | nim.MatlabCommand.set_default_matlab_cmd('/Applications/MATLAB_R2022b.app/bin/matlab')
86 | # If you needed to set the SPM path above.
87 | nim.MatlabCommand.set_default_paths('/Users/mb312/dev_trees/spm12')
88 | ```
89 |
90 | Now try:
91 |
92 | ```python
93 | import nipype_settings
94 | import nipype.interfaces.matlab as nim
95 | mlab = nim.MatlabCommand()
96 | mlab.inputs.script = "spm ver" # get SPM version
97 | mlab.run()
98 | ```
99 |
100 | These should run without error.
101 |
102 | ## Nipype example using Matlab and SPM
103 |
104 | See the example files in:
105 |
106 | * {download}`nipype_settings.py`
107 | * {download}`nipype_ds114_sub009_t2r1.py`.
108 |
109 |
110 | ## Installing packages for use with Nipype
111 |
112 | You can script various imaging packages with Nipype. Consider installing one or more of these packages:
113 |
114 | * [SPM](https://www.fil.ion.ucl.ac.uk/spm/software). SPM also needs an installation of Matlab.
115 | * [FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FslInstallation)
116 | * [AFNI](https://afni.nimh.nih.gov/pub/dist/doc/htmldoc/background_install/install_instructs/index.html)
117 |
--------------------------------------------------------------------------------
/length_one_tuples.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.11.5
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | ---
17 |
18 | # Length 1 tuples
19 |
20 | Remember {ref}`tuples`. For example, here are a couple of length two tuples:
21 |
22 | ```{python}
23 | first_tuple = (1, 2)
24 | first_tuple
25 | ```
26 |
27 | ```{python}
28 | second_tuple = (3, 4)
29 | ```
30 |
31 | As for lists, you can add tuples, to concatenate the contents:
32 |
33 | ```{python}
34 | tuples_together = first_tuple + second_tuple
35 | tuples_together
36 | ```
37 |
38 | ## Length 1 tuples
39 |
40 | Let us say you want to write code for a tuple with only one element.
41 |
42 | You might think this would work:
43 |
44 | ```{python}
45 | # Python interprets these parentheses as arithmetic grouping.
46 | just_a_number = (1)
47 | just_a_number
48 | ```
49 |
50 | Just a single number or string, with parentheses around it, does not make a
51 | tuple, because Python interprets the parentheses as the usual grouping for
52 | arithmetic. That means that:
53 |
54 | ```{python}
55 | (1)
56 | ```
57 |
58 | is exactly the same as:
59 |
60 | ```{python}
61 | 1
62 | ```
63 |
64 | Why? Because, Python has to decide what expressions like this mean:
65 |
66 | ```python
67 | # Wait - what do the parentheses mean?
68 | (1 + 2) + (3 + 4)
69 | ```
70 |
71 | Is this adding two one-element tuples, to give a two-element tuple `(3, 7)`? Or
72 | is it our usual mathematical expression giving 3 + 7 = 10? The designer of the Python language decided it should be an arithmetical expression.
73 |
74 | ```{python}
75 | # They mean parentheses as in arithmetic.
76 | (1 + 2) + (3 + 4)
77 | ```
78 |
79 | To form a length-1 tuple, you need to disambiguate what you mean by the parentheses, with a trailing comma:
80 |
81 | ```{python}
82 | short_tuple = (1,) # Notice the trailing comma
83 | short_tuple
84 | ```
85 |
86 | So, for example, to add two length one tuples together, as above:
87 |
88 | ```{python}
89 | # Notice the extra comma to say - we mean these to be length-1 tuples.
90 | (1 + 2,) + (3 + 4,)
91 | ```
92 |
--------------------------------------------------------------------------------
/list_comprehensions.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | text_representation:
6 | extension: .Rmd
7 | format_name: rmarkdown
8 | format_version: '1.2'
9 | jupytext_version: 1.11.5
10 | kernelspec:
11 | display_name: Python 3 (ipykernel)
12 | language: python
13 | name: python3
14 | ---
15 |
16 | # List comprehensions
17 |
18 | List comprehensions are a nice short-cut for simple `for` loops in Python.
19 |
20 | The list comprehension is a single expression that returns a list, element by
21 | element.
22 |
23 | Let’s say you wanted to create a list of values for squared numbers. You might
24 | do it like this:
25 |
26 | ```{python}
27 | squared_numbers = []
28 | for i in range(10): # numbers 0 through 9
29 | squared_numbers.append(i ** 2)
30 | squared_numbers
31 | ```
32 |
33 | It turns out this kind of thing is a very common pattern in Python. The
34 | pattern is: create an empty list, then use a for loop to fill in values for
35 | the list.
36 |
37 | List comprehensions are a short cut for that pattern:
38 |
39 | ```{python}
40 | squared_numbers = [i ** 2 for i in range(10)]
41 | squared_numbers
42 | ```
43 |
44 | The list comprehension is an *expression*, starting and ending with square
45 | brackets. The first thing inside the square brackets is the expression that
46 | will become each element of the list - in this case `i ** 2` - followed by
47 | a `for` clause - in this case `for i in range(10)` - that feeds the first
48 | expression with values to use.
49 |
50 | See the [Python docs on list comprehensions](https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions)
51 | for more detail.
52 |
53 | List comprehensions can be a little hard to read when you are not used to
54 | them. If you find them confusing, as most of us do at first, then unpack them
55 | into the equivalent `for` loop. Over time, as you get used to them, they can
56 | be easier to read than the longer `for` loop equivalent.
57 |
--------------------------------------------------------------------------------
/logistics.md:
--------------------------------------------------------------------------------
1 | # Logistics
2 |
3 | Fall 2016 Psych-214 is class number 33732.
4 |
5 | Class is 3 credits.
6 |
7 | ## Venue and time
8 |
9 | :::{note}
10 | Changed venue and time
11 |
12 | Class meets at 10 Giannini Hall, from 1 until 4 pm.
13 |
14 | See these [directions to 10 Giannini Hall](http://despolab.berkeley.edu/labcontact).
15 |
16 | The original venue and time were 3201 Tolman Hall, from 1.30 until 4.30.
17 | :::
18 |
19 | (instructors)=
20 |
21 | ## Instructors
22 |
23 | - [Matthew Brett] (matthew dot brett on gmail);
24 | - JB Poline (jbpoline on gmail).
25 |
26 | ## Grading
27 |
28 | - 27% participation {{ -- }} see {doc}`participate`;
29 | - 13% homework;
30 | - 60% final project {{ -- }} see {doc}`project_grading`.
31 |
32 | Please read the [UCB academic polices on student conduct](http://guide.berkeley.edu/academic-policies/#studentconductappealstext).
33 |
34 | ## Homework
35 |
36 | Each week, we will give you a selection of articles, chapters, tutorials and
37 | videos to study. You will usually need to know the study material for the
38 | exercises in the following class.
39 |
40 | There will be four sets of graded homework. The sets of homework do not
41 | contribute equally to the 13% homework portion of your final grade {{ -- }} see
42 | the table below. The dates for setting / returning the homework are
43 | provisional; we may change them as the class develops.
44 |
45 | | Number | Set on | Due on | % of overall grade |
46 | | ------ | ------------ | ----------- | ------------------ |
47 | | 1 | August 29 | September 9 | 3 |
48 | | 2 | September 19 | October 10 | 10 |
49 |
50 | Please submit all homework by 17:00 on the due date.
51 |
52 | ## Questions, discussion
53 |
54 | Feel free to email your instructors, but prefer the [PSYCH 214 Piazza site](http://piazza.com/berkeley/fall2016/pysch214).
55 |
56 | ## Office hours
57 |
58 | Office hours are in 10 Giannini Hall on Fridays from 10 am to 11.30.
59 |
60 | If you can't make it then, please arrange another time with your instructors.
61 |
--------------------------------------------------------------------------------
/matrix_rank.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | orphan: true
16 | ---
17 |
18 | # Matrix rank
19 |
20 | The *rank* of a matrix is the number of independent rows and / or columns of a
21 | matrix.
22 |
23 | We will soon define what we mean by the word *independent*.
24 |
25 | For a matrix with more columns than rows, it is the number of independent
26 | rows.
27 |
28 | For a matrix with more rows than columns, like a design matrix, it is the
29 | number of independent columns.
30 |
31 | In fact, linear algebra tells us that it is impossible to have more
32 | independent columns than there are rows, or more independent rows than there
33 | are columns. Try it with some test matrices.
34 |
35 | A column is *dependent* on other columns if the values in the column can
36 | be generated by a weighted sum of one or more other columns.
37 |
38 | To put this more formally - let’s say we have a matrix $\mathbf{X}$ with
39 | $M$ rows and $N$ columns. Write column $i$ of
40 | $\mathbf{X}$ as $X_{:,i}$. Column $i$ is *independent* of
41 | the rest of $\mathbf{X}$ if there is no length $N$ column vector
42 | of weights $\vec{c}$, where $c_i = 0$, such that $\mathbf{X}
43 | \cdot \vec{c} = X_{:,i}$.
44 |
45 | Let’s make a design with independent columns:
46 |
47 | ```{python}
48 | #: Standard imports
49 | import numpy as np
50 | # Make numpy print 4 significant digits for prettiness
51 | np.set_printoptions(precision=4, suppress=True)
52 | import matplotlib.pyplot as plt
53 | # Default to gray colormap
54 | import matplotlib
55 | matplotlib.rcParams['image.cmap'] = 'gray'
56 | ```
57 |
58 | ```{python}
59 | trend = np.linspace(0, 1, 10)
60 | X = np.ones((10, 3))
61 | X[:, 0] = trend
62 | X[:, 1] = trend ** 2
63 | plt.imshow(X)
64 | ```
65 |
66 | In this case, no column can be generated by a weighted sum of the other two.
67 | We can test this with `np.linalg.matrix_rank`:
68 |
69 | ```{python}
70 | import numpy.linalg as npl
71 | npl.matrix_rank(X)
72 | ```
73 |
74 | This does not mean the columns are orthogonal:
75 |
76 | ```{python}
77 | # Orthogonal columns have dot products of zero
78 | X.T @ X
79 | ```
80 |
81 | Nor does it mean that the columns have zero correlation (see
82 | [Correlation and projection](https://matthew-brett.github.io/teaching/correlation_projection.html) for the relationship between correlation and the
83 | vector dot product):
84 |
85 | ```{python}
86 | np.corrcoef(X[:,0], X[:, 1])
87 | ```
88 |
89 | As long as each column cannot be *fully* predicted by the others, the column
90 | is independent.
91 |
92 | Now let’s add a fourth column that is a weighted sum of the first three:
93 |
94 | ```{python}
95 | X_not_full_rank = np.zeros((10, 4))
96 | X_not_full_rank[:, :3] = X
97 | X_not_full_rank[:, 3] = X @ [-1, 0.5, 0.5]
98 | plt.imshow(X_not_full_rank)
99 | ```
100 |
101 | `matrix_rank` is up to the job:
102 |
103 | ```{python}
104 | npl.matrix_rank(X_not_full_rank)
105 | ```
106 |
107 | A more typical situation with design matrices is that we have some dummy
108 | variable columns coding for group membership, which sum to a column of ones.
109 |
110 | ```{python}
111 | dummies = np.kron(np.eye(3), np.ones((4, 1)))
112 | plt.imshow(dummies)
113 | ```
114 |
115 | So far, so good:
116 |
117 | ```{python}
118 | npl.matrix_rank(dummies)
119 | ```
120 |
121 | If we add a column of ones to model the mean, we now have an extra column that
122 | is a linear combination of other columns in the model:
123 |
124 | ```{python}
125 | dummies_with_mean = np.hstack((dummies, np.ones((12, 1))))
126 | plt.imshow(dummies_with_mean)
127 | ```
128 |
129 | ```{python}
130 | npl.matrix_rank(dummies_with_mean)
131 | ```
132 |
133 | A matrix is *full rank* if the matrix rank is the same as the number of
134 | columns / rows. That is, a matrix is full rank if all the columns (or rows)
135 | are independent.
136 |
137 | If a matrix is not full rank then it is *rank deficient*.
138 |
--------------------------------------------------------------------------------
/mentors.md:
--------------------------------------------------------------------------------
1 | # Project mentors
2 |
3 | Each group of students in this class will be doing a significant project
4 | analyzing FMRI data.
5 |
6 | See {doc}`projects` for the student instructions.
7 |
8 | As a mentor, you will be helping the students to design a project, and propose
9 | it in the class.
10 |
11 | When we've agreed on the project, you will be the first point of call for
12 | advice on the neuroscience and the analysis.
13 |
14 | We want you to make yourself available for at least one hour per week to meet
15 | with the students, during the semester. You'll need to budget a couple of
16 | hours for reviewing and commenting on the students' work as it develops on the
17 | web, using github.com.
18 |
19 | Please see the {ref}`project-examples` of projects from a previous class to
20 | get an idea of the kind of work we are expecting.
21 |
22 | Later on, I am sure your students will be asking for more time and help, as
23 | the project deadlines come nearer {{ -- }} see the project {ref}`project-timing`
24 | for the relevant dates.
25 |
26 | Some of these projects may turn into papers. Of course we are expecting that
27 | you will be an author on these papers, with the order to be negotiated by you
28 | all, but our default will be to assume that you, the mentors, will be the
29 | senior author on the paper unless you agree otherwise, or there's a good
30 | reason to do otherwise.
31 |
32 | We {ref}`instructors` are always happy to hear from you, and answer any of
33 | your questions, and pitch in to discussions with students.
34 |
35 | We know this will be a significant amount of work for you, and as a thank you,
36 | we will buy you a textbook of your choice at the end of the course.
37 |
--------------------------------------------------------------------------------
/methods_vs_functions.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | ---
14 |
15 | # Methods vs functions in NumPy
16 |
17 | Many things are implemented in NumPy as both *functions* and *methods*. For
18 | example, there is a `np.sum` function, that adds up all the elements:
19 |
20 | ```{python}
21 | import numpy as np
22 | ```
23 |
24 | ```{python}
25 | arr = np.array([1, 2, 0, 1])
26 | np.sum(arr)
27 | ```
28 |
29 | There is also a `sum` method of the numpy `array` object:
30 |
31 | ```{python}
32 | type(arr)
33 | ```
34 |
35 | ```{python}
36 | arr.sum()
37 | ```
38 |
39 | Nearly all the method versions do the same thing as the function versions.
40 | Examples are `mean`, `min`, `max`, `sum`, `reshape`. Choosing the
41 | method or the function will usually depend on which one is easier to read.
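
For example, we can check that a few of these pairs give the same answer:

```{python}
import numpy as np

arr = np.array([1, 2, 0, 1])
# Function form and method form give the same results.
assert np.mean(arr) == arr.mean()
assert np.min(arr) == arr.min()
assert (np.reshape(arr, (2, 2)) == arr.reshape(2, 2)).all()
```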
42 |
--------------------------------------------------------------------------------
/model_one_voxel.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Modeling a single voxel
18 |
19 | The [voxel regression page](regress_one_voxel.Rmd) has a worked
20 | example of applying [simple regression](on_regression.Rmd) to a single voxel.
21 |
22 | This page runs the same calculations, but using the [General Linear
23 | Model](glm_intro.Rmd) notation and matrix calculations.
24 |
25 | Let’s get that same voxel time course back again:
26 |
27 | ```{python}
28 | import numpy as np
29 | import matplotlib.pyplot as plt
30 | import nibabel as nib
31 | # Only show 6 decimals when printing
32 | np.set_printoptions(precision=6)
33 | ```
34 |
35 | ```{python}
36 | # Load the function to fetch the data file we need.
37 | import nipraxis
38 | # Fetch the data file.
39 | data_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
40 | img = nib.load(data_fname)
41 | data = img.get_fdata()
42 | # Knock off the first four volumes (to avoid artefact).
43 | data = data[..., 4:]
44 | # Get the voxel time course of interest.
45 | voxel_time_course = data[42, 32, 19]
46 | plt.plot(voxel_time_course)
47 | ```
48 |
49 | Load the convolved time course, and plot the voxel values against the convolved regressor:
50 |
51 | ```{python}
52 | tc_fname = nipraxis.fetch_file('ds114_sub009_t2r1_conv.txt')
53 | # Show the file name of the fetched data.
54 | convolved = np.loadtxt(tc_fname)
55 | # Knock off first 4 elements to match data.
56 | convolved = convolved[4:]
57 | # Plot.
58 | plt.scatter(convolved, voxel_time_course)
59 | plt.xlabel('Convolved prediction')
60 | plt.ylabel('Voxel values')
61 | ```
62 |
63 | As you remember, we apply the GLM by first preparing a design matrix that has one column for each *parameter* in the *model*.
64 |
65 | In our case we have two parameters, the *slope* and the *intercept*.
66 |
67 | First we make our *design matrix*. It has a column for the convolved
68 | regressor, and a column of ones:
69 |
70 | ```{python}
71 | N = len(convolved)
72 | X = np.ones((N, 2))
73 | X[:, 0] = convolved
74 | plt.imshow(X, cmap='gray', aspect=0.1, interpolation='none')
75 | ```
76 |
77 | $\newcommand{\yvec}{\vec{y}}$
78 | $\newcommand{\xvec}{\vec{x}}$
79 | $\newcommand{\evec}{\vec{\varepsilon}}$
80 | $\newcommand{Xmat}{\boldsymbol X} \newcommand{\bvec}{\vec{\beta}}$
81 | $\newcommand{\bhat}{\hat{\bvec}} \newcommand{\yhat}{\hat{\yvec}}$
82 |
83 | Our model is:
84 |
85 | $$
86 | \yvec = \Xmat \bvec + \evec
87 | $$
88 |
89 | We can get the parameter *estimates* for $\bvec$ that minimize the mean
90 | squared error (MSE) with:
91 |
92 | $$
93 | \bhat = \Xmat^+ \yvec
94 | $$
95 |
96 | where $\Xmat^+$ is the *pseudoinverse* of $\Xmat$. When $\Xmat^T \Xmat$ is
97 | invertible, the pseudoinverse is given by:
98 |
99 | $$
100 | \Xmat^+ = (\Xmat^T \Xmat)^{-1} \Xmat^T
101 | $$
102 |
103 | Let’s calculate the pseudoinverse for our design:
104 |
105 | ```{python}
106 | import numpy.linalg as npl
107 | Xp = npl.pinv(X)
108 | Xp.shape
109 | ```
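
As a check of the formula above, on a small toy design (not the scan data), `npl.pinv` matches the explicit $(\Xmat^T \Xmat)^{-1} \Xmat^T$ calculation:

```{python}
import numpy as np
import numpy.linalg as npl

# A small toy design: a linear regressor column and a column of ones.
toy_X = np.ones((5, 2))
toy_X[:, 0] = np.linspace(0, 1, 5)
# Pseudoinverse via the explicit formula.
by_formula = npl.inv(toy_X.T @ toy_X) @ toy_X.T
assert np.allclose(by_formula, npl.pinv(toy_X))
```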
110 |
111 | We calculate $\bhat$:
112 |
113 | ```{python}
114 | beta_hat = Xp @ voxel_time_course
115 | beta_hat
116 | ```
117 |
118 | We can then calculate $\yhat$ (also called the *fitted data*):
119 |
120 | ```{python}
121 | y_hat = X @ beta_hat
122 | ```
123 |
124 | Finally, we may be interested to calculate the MSE of this model:
125 |
126 | ```{python}
127 | # Residuals are actual minus fitted.
128 | e_vec = voxel_time_course - y_hat
129 | mse = np.mean(e_vec ** 2)
130 | mse
131 | ```
132 |
133 | Notice that the $\bhat$ parameters are the same as the slope and intercept from the Scipy calculation using `linregress`:
134 |
135 | ```{python}
136 | import scipy.stats as sps
137 | sps.linregress(convolved, voxel_time_course)
138 | ```
139 |
--------------------------------------------------------------------------------
/more_on_jupyter.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.1'
10 | jupytext_version: 1.2.4
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # More on the Jupyter notebook
18 |
19 | As you heard in [about the software](the_software), Jupyter is an *interface* that allows you to run Python code and see the results.
20 |
21 | It consists of two parts:
22 |
23 | * The web client (the thing you are looking at now, if you are running this
24 | notebook in Jupyter);
25 | * The kernel (that does the work of running code, and generating results).
26 |
27 | For example, consider this cell, where I make a new variable `a`, and display the value of `a`:
28 |
29 | ```{python}
30 | a = 10.50
31 | a
32 | ```
33 |
34 |
35 | I type this code in the web client.
36 |
37 | When I press Shift-Enter, or Run the cell via the web interface, this sends a *message* to the *kernel*.
38 |
39 | The message says something like:
40 |
41 | > Here is some Python code: "a = 10.50" and "a". Run this code, and show me any
42 | > results.
43 |
44 | The kernel accepts the message, runs the code in Python, and then sends back any results, in this case, as text to display (the text representation of the value 10.50).
45 |
46 | The web client shows the results.
47 |
48 | ## The prompts
49 |
50 | Notice the prompts for the code cells. Before I have run the cell, there is an empty prompt, like this `In [ ]:`. `In` means "Input", meaning, this is a cell where you input code.
51 |
52 |
53 | ```{python}
54 | b = 9.25
55 | ```
56 |
57 | Then, when you run the cell, the prompt changes to something like `In [1]:` where 1 means this is the first piece of code that the kernel ran.
58 |
59 | If there is any output, you will now see the *output* from that cell, with a prefix `Out [1]:` where 1 is the same number as the number for the input cell. This is output from the first piece of code that the kernel ran.
60 |
61 | ## Interrupting, restarting
62 |
63 |
64 | Sometimes you will find that the kernel gets stuck. That may be because the kernel is running code that takes a long time, but sometimes, it is because the kernel has crashed.
65 |
66 | In the next cell I load a function that makes the computer wait for a number of seconds.
67 |
68 | Don't worry about the `from ... import` here, we will come onto that later.
69 |
70 | ```{python}
71 | # Get the sleep function
72 | from time import sleep
73 | ```
74 |
75 | The sleep function has the effect of putting the Python process to sleep for the given number of seconds. Here I ask the Python process to sleep for 2 seconds:
76 |
77 | ```{python}
78 | sleep(2)
79 | ```
80 |
81 | I can ask the computer to sleep for much longer. Try making a new code cell and executing the code `sleep(1000)`.
82 |
83 | When you do this, notice that the prompt gets an asterisk symbol `*` instead of a number, meaning that the kernel is busy doing something (busy sleeping).
84 |
85 | You can interrupt this long-running process by clicking on the stop icon in the toolbar, or selecting "Interrupt" from the "Kernel" menu. Try it.
86 |
87 | ```{python}
88 | # Try running the code "sleep(10000)" below.
89 | ```
90 |
91 | Very rarely, the kernel crashes, so you can't interrupt it. In that case you may want to "Restart" from the "Kernel" menu. This will:
92 |
93 | * Shut down the old kernel with extreme prejudice, therefore throwing away any variables you have stored in the workspace;
94 | * Start a new kernel for the web client to talk to.
95 |
96 | If you do this, you will notice that all the variables you defined in the old kernel have disappeared. Rerun the cells that created the variables to re-create them.
97 |
--------------------------------------------------------------------------------
/multi_model_homework.md:
--------------------------------------------------------------------------------
1 | # Multiple voxel model homework
2 |
3 | This exercise follows the same form as {doc}`github_pca_homework` and
4 | {doc}`github_glm_homework`.
5 |
6 | As before:
7 |
8 | - Make sure you have logged into your Github account;
9 |
10 | - Go to the [course github organization] page. You should see a private
11 | repository with a name like `yourname-multi-model-exercise` where
12 | `yourname` is your first name.
13 |
14 | - Fork this repository to your own account;
15 |
16 | - Clone your forked repository;
17 |
18 | - Change directory into your new cloned repository:
19 |
20 | ```
21 | cd yourname-multi-model-exercise
22 | ```
23 |
24 | - Make a new branch to work on, and checkout this new branch:
25 |
26 | ```
27 | git branch multi-model-exercise
28 | git checkout multi-model-exercise
29 | ```
30 |
31 | - Finish the exercise by filling in `multi_model_code.py`;
32 |
33 | - When you've finished, commit `multi_model_code.py`, and push these changes
34 | to your github fork;
35 |
36 | - Make a pull request to the upstream repository.
37 |
38 | Any problems, please feel free to contact JB or me.
39 |
40 | Happy multi-modeling.
41 |
--------------------------------------------------------------------------------
/multi_multiply.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Estimation for many voxels at the same time
18 |
19 | We often want to fit the same design to many different voxels.
20 |
21 | Let’s make a design with a linear trend and a constant term:
22 |
23 | ```{python}
24 | import numpy as np
25 | import matplotlib.pyplot as plt
26 | # Print arrays to 2 decimal places
27 | np.set_printoptions(precision=2)
28 | ```
29 |
30 | ```{python}
31 | X = np.ones((12, 2))
32 | X[:, 0] = np.linspace(-1, 1, 12)
33 | plt.imshow(X, cmap='gray')
34 | ```
35 |
36 | To fit this design to any data, we take the pseudo-inverse:
37 |
38 | ```{python}
39 | import numpy.linalg as npl
40 | piX = npl.pinv(X)
41 | piX
42 | ```
43 |
44 | Notice the shape of the pseudo-inverse:
45 |
46 | ```{python}
47 | piX.shape
48 | ```
49 |
50 | Now let’s make some data to fit to. We will draw some samples from the standard
51 | normal distribution.
52 |
53 | ```{python}
54 | # Random number generator.
55 | rng = np.random.default_rng()
56 | # 12 random numbers from normal distribution, mean 0, std 1.
57 | y_0 = rng.normal(size=12)
58 | y_0
59 | ```
60 |
61 | ```{python}
62 | beta_0 = piX @ y_0
63 | beta_0
64 | ```
65 |
66 | We can fit this same design to another set of data, using our already-calculated pseudo-inverse.
67 |
68 | ```{python}
69 | y_1 = rng.normal(size=12)
70 | y_1
71 | ```
72 |
73 | ```{python}
74 | beta_1 = piX @ y_1
75 | beta_1
76 | ```
77 |
78 | And another!:
79 |
80 | ```{python}
81 | y_2 = rng.normal(size=12)
82 | beta_2 = piX @ y_2
83 | beta_2
84 | ```
85 |
86 | Now the trick. Because of the way that matrix multiplication works, we can
87 | fit the design to these three sets of data with a single matrix multiply:
88 |
89 | ```{python}
90 | # Stack the data vectors into columns in a 2D array.
91 | Y = np.stack([y_0, y_1, y_2], axis=1)
92 | Y
93 | ```
94 |
95 | ```{python}
96 | betas = piX @ Y
97 | betas
98 | ```
99 |
100 | Notice some features of the `betas` array:
101 |
102 | * There is one *row* per *column in the design X*.
103 | * There is one *column* per *column in the data Y*.
104 | * The first *row* of `betas` contains the parameters corresponding to the first
105 | *column* of the design. The first row therefore contains the *slope*
106 | parameters.
107 | * The second row contains the parameters corresponding to the second *column* of
108 | the design. The second row therefore contains the *intercept* parameters.
109 | * The first *column* of `betas` contains the slope, intercept for the first
110 | data vector `y_0`.
111 |
112 | 
113 |
114 | Of course this trick will work for any number of columns of Y.
115 |
116 | This trick is important because it allows us to estimate the same design on a
117 | huge number of data vectors very efficiently. In imaging, this is useful
118 | because we can arrange our image data into an array with one row per volume
119 | and one column per voxel, and use the technique here for very fast estimation.
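
For example, here is a minimal sketch of that arrangement, using a small simulated 4D array rather than real scan data:

```{python}
import numpy as np
import numpy.linalg as npl

# A small fake 4D image: 3 x 3 x 2 voxels, 12 volumes.
rng = np.random.default_rng(0)
data = rng.normal(size=(3, 3, 2, 12))
# The linear trend plus constant design from above.
X = np.ones((12, 2))
X[:, 0] = np.linspace(-1, 1, 12)
# One row per volume, one column per voxel.
Y = data.reshape(-1, 12).T
betas = npl.pinv(X) @ Y
# Two parameters (slope, intercept) for each of the 3 * 3 * 2 = 18 voxels.
assert betas.shape == (2, 18)
```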
120 |
--------------------------------------------------------------------------------
/my_script.py:
--------------------------------------------------------------------------------
1 | """ This is my script
2 |
3 | It uses mymodule
4 | """
5 |
6 | import sys
7 |
8 | import mymodule
9 |
10 |
11 | def main():
12 |     # This function is executed when we are being run as a script
13 | print(sys.argv)
14 | filename = sys.argv[1]
15 | means = mymodule.vol_means(filename)
16 | for mean in means:
17 | print(mean)
18 |
19 |
20 | if __name__ == '__main__':
21 | # We are being run as a script
22 | main()
23 |
--------------------------------------------------------------------------------
/mymodule.py:
--------------------------------------------------------------------------------
1 | """ This is mymodule
2 |
3 | It has useful functions in it.
4 | """
5 |
6 | # Don't forget to import the other modules used by the code in this module
7 | import numpy as np
8 |
9 | import nibabel as nib
10 |
11 | def vol_means(image_fname):
12 | img = nib.load(image_fname, mmap=False)
13 |     data = img.get_fdata()
14 | means = []
15 | for i in range(data.shape[-1]):
16 | vol = data[..., i]
17 | means.append(np.mean(vol))
18 | return means
19 |
--------------------------------------------------------------------------------
/nans.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.13.7
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | ---
17 |
18 | # Not a number
19 |
20 | [Not a number](https://en.wikipedia.org/wiki/NaN) is a special floating point
21 | value to signal that the result of a floating point calculation is invalid.
22 |
23 | In text we usually use NaN to refer to the Not-a-Number value.
24 |
25 | For example, dividing 0 by 0 is invalid, and returns a NaN value:
26 |
27 | ```{python}
28 | import numpy as np
29 | ```
30 |
31 | ```{python}
32 | np.array(0) / 0
33 | ```
34 |
35 | As you see above, Numpy uses the all lower-case `nan` for the NaN value.
36 |
37 | You can also find the NaN value in the Numpy module:
38 |
39 | ```{python}
40 | np.nan
41 | ```
42 |
43 |
44 | ## NaN values are not equal to anything
45 |
46 |
47 | The NaN value has some specific properties.
48 |
49 | It is not equal to anything, even itself:
50 |
51 | ```{python}
52 | np.nan == 0
53 | ```
54 |
55 | ```{python}
56 | np.nan == np.nan
57 | ```
58 |
59 | ## Detecting NaN values
60 |
61 | You found above that you cannot look for NaN values by comparing with `== np.nan`.
62 |
63 | Instead, use `np.isnan` to ask whether a number or each array
64 | element is NaN.
65 |
66 | ```{python}
67 | np.isnan([0, np.nan])
68 | ```
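
One common use of `np.isnan` is to select only the valid values from an array; NumPy also has NaN-aware functions such as `np.nanmean`:

```{python}
import numpy as np

arr = np.array([1.0, np.nan, 3.0])
# Keep only the non-NaN values, using the Boolean array from np.isnan.
valid = arr[~np.isnan(arr)]
assert (valid == np.array([1.0, 3.0])).all()
# np.nanmean ignores NaN values for you.
assert np.nanmean(arr) == 2.0
```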
69 |
--------------------------------------------------------------------------------
/newaxis.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | ---
14 |
15 | # Adding length 1 dimensions with newaxis
16 |
17 | NumPy has a nice shortcut for adding a length 1 dimension to an array. It is
18 | a little brain-bending, because it operates via array slicing:
19 |
20 | ```{python}
21 | import numpy as np
22 | ```
23 |
24 | ```{python}
25 | v = np.array([0, 3])
26 | v.shape
27 | ```
28 |
29 | ```{python}
30 | # Insert a new length 1 dimension at the beginning
31 | row_v = v[np.newaxis, :]
32 | print(row_v.shape)
33 | row_v
34 | ```
35 |
36 | ```{python}
37 | # Insert a new length 1 dimension at the end
38 | col_v = v[:, np.newaxis]
39 | print(col_v.shape)
40 | col_v
41 | ```
42 |
43 | Read this last slicing operation as “do slicing as normal, except, before
44 | slicing, insert a length 1 dimension at the position of `np.newaxis`”.
45 |
46 | In fact the name `np.newaxis` points to the familiar Python `None` object:
47 |
48 | ```{python}
49 | np.newaxis is None
50 | ```
51 |
52 | So, you also use the `np.newaxis` trick like this:
53 |
54 | ```{python}
55 | row_v = v[None, :]
56 | row_v.shape
57 | ```
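
A common reason to add length 1 dimensions is to set up broadcasting; for example, a column vector against a row vector gives a full table of pairwise sums:

```{python}
import numpy as np

v = np.array([0, 3])
w = np.array([1, 2, 3])
# Column vector plus row vector broadcasts to a 2 x 3 table of sums.
table = v[:, None] + w[None, :]
assert table.shape == (2, 3)
assert (table == np.array([[1, 2, 3], [4, 5, 6]])).all()
```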
58 |
--------------------------------------------------------------------------------
/nibabel_affines.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | ---
14 |
15 | # The [`nibabel.affines`](http://nipy.org/nibabel/reference/nibabel.affines.html#module-nibabel.affines) module
16 |
17 | ```{python}
18 | import numpy as np
19 | np.set_printoptions(precision=4, suppress=True)
20 | import nibabel as nib
21 | ```
22 |
23 | Neuroimaging images have affines, and we often need to process these
24 | affines, or apply these affines to coordinates.
25 |
26 | Nibabel has routines to do this in its `affines` submodule.
27 |
28 | The `from_matvec` function is a short-cut to make a 4 x 4 affine matrix from
29 | a 3 x 3 matrix and an (optional) vector of translations.
30 |
31 | For example, let’s say I have a 3 x 3 rotation matrix, specifying a rotation
32 | of 0.4 radians around the y axis (see Rotations and rotation matrices):
33 |
34 | ```{python}
35 | cos_a = np.cos(0.4)
36 | sin_a = np.sin(0.4)
37 | y_rotation = np.array([[ cos_a, 0, sin_a],
38 | [ 0, 1, 0],
39 | [-sin_a, 0, cos_a]])
40 | y_rotation
41 | ```
42 |
43 | I want to put this 3 x 3 matrix into a 4 x 4 affine matrix:
44 |
45 | ```{python}
46 | # Affine from a 3x3 matrix (the 'mat' in 'matvec')
47 | nib.affines.from_matvec(y_rotation)
48 | ```
49 |
50 | You can also add a translation vector in the call to `from_matvec`. The
51 | translation vector is the `vec` of `from_matvec`:
52 |
53 | ```{python}
54 | # Affine from a 3x3 matrix ('mat') and a translation vector ('vec')
55 | aff = nib.affines.from_matvec(y_rotation, [10, 20, 30])
56 | aff
57 | ```
58 |
59 | `nibabel.affines.to_matvec` does the reverse operation. It splits the
60 | affine matrix into the top left 3 x 3 matrix, and the translation vector from
61 | the last column of the affine:
62 |
63 | ```{python}
64 | mat, vec = nib.affines.to_matvec(aff)
65 | print(mat)
66 | print(vec)
67 | ```
68 |
--------------------------------------------------------------------------------
/nibabel_apply_affine.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Applying coordinate transforms with `nibabel.affines.apply_affine`
18 |
19 | We often want to apply an affine to an array of coordinates, where the last
20 | axis of the array is length 3, containing the x, y and z coordinates.
21 |
22 | Nibabel uses `nibabel.affines.apply_affine` for this.
23 |
24 | For background see: The nibabel.affines module.
25 |
26 | ```{python}
27 | import numpy as np
28 | from nibabel.affines import from_matvec, to_matvec, apply_affine
29 | ```
30 |
31 | ```{python}
32 | points = np.array([[0, 1, 2], [2, 2, 4], [3, -2, 1], [5, 3, 1]])
33 | points
34 | ```
35 |
36 | ```{python}
37 | zooms_plus_translations = from_matvec(np.diag([3, 4, 5]),
38 | [11, 12, 13])
39 | zooms_plus_translations
40 | ```
41 |
42 | ```{python}
43 | apply_affine(zooms_plus_translations, points)
44 | ```
45 |
46 | Of course, this is the same as:
47 |
48 | ```{python}
49 | mat, vec = to_matvec(zooms_plus_translations)
50 | (mat @ points.T).T + np.reshape(vec, (1, 3))
51 | ```
52 |
53 | The advantage of `nib.affines.apply_affine` is that it can deal with arrays
54 | of more than two dimensions, and it transposes the transformation matrices for
55 | you to apply the transforms correctly.
56 |
57 | A typical use is when applying extra affine transformations to an X by Y by Z
58 | by 3 array of coordinates.
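
As a sketch of what that computation looks like in plain NumPy, here is the same zooms-plus-translations affine applied to a small 2 by 2 by 2 by 3 grid of coordinates (`apply_affine` would give the same result, without the manual reshaping):

```{python}
import numpy as np

affine = np.array([[3, 0, 0, 11],
                   [0, 4, 0, 12],
                   [0, 0, 5, 13],
                   [0, 0, 0, 1]])
mat, vec = affine[:3, :3], affine[:3, 3]
# A 2 x 2 x 2 grid of (i, j, k) coordinates: shape (2, 2, 2, 3).
coords = np.indices((2, 2, 2)).transpose(1, 2, 3, 0)
# Matrix-multiply the last axis by mat, then add the translation.
new_coords = coords @ mat.T + vec
assert new_coords.shape == (2, 2, 2, 3)
assert np.allclose(new_coords[0, 0, 0], [11, 12, 13])
assert np.allclose(new_coords[1, 1, 1], [14, 16, 18])
```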
59 |
--------------------------------------------------------------------------------
/nipype_ds114_sub009_t2r1.py:
--------------------------------------------------------------------------------
1 | """ Script to run SPM processing in nipype
2 | """
3 | from pathlib import Path
4 |
5 | import nibabel as nib
6 |
7 | from nipraxis import fetch_file
8 |
9 | # Import our own configuration for nipype
10 | import nipype_settings
11 |
12 | import nipype.interfaces.spm as spm
13 |
14 |
15 | def add_pref_suf(pth, prefix='', suffix=''):
16 | """ Add prefix `prefix` and suffix `suffix` to filename `pth`
17 | """
18 | pth = Path(pth)
19 | return pth.with_name(prefix + pth.stem + suffix + pth.suffix)
20 |
21 |
22 | def copyfile(in_path, out_dir):
23 | """ Copy file `in_path` to directory `out_dir`
24 | """
25 | in_path = Path(in_path)
26 | out_path = Path(out_dir) / in_path.name
27 | out_path.write_bytes(in_path.read_bytes())
28 | return out_path
29 |
30 |
31 | out_dir = Path('spm_processing')
32 | if not out_dir.is_dir():
33 | out_dir.mkdir()
34 |
35 | func_path = copyfile(fetch_file('ds114_sub009_t2r1.nii'), out_dir)
36 | anat_path = copyfile(fetch_file('ds114_sub009_highres.nii'), out_dir)
37 |
38 | # Analysis parameters
39 | TR = 2.5
40 | slice_time_ref_slice = 1 # 1-based indexing
41 | n_dummies = 4
42 | # Realign "write_which" input. The 0 means 'do not write resampled versions of
43 | # any of the individual volumes`. The 1 means 'write mean across volumes after
44 | # motion correction'. See config/spm_cfg_realign.m in the SPM12 distribution.
45 | write_which = [0, 1]
46 | # Normalize write parameters. Bounding box gives extreme [[left, posterior,
47 | # inferior], [right, anterior, superior]] millimeter coordinates of the voxel
48 | # grid that SPM will use to write out the new images in template space. See
49 | # spm_preproc_write8.m for use of the bounding box values.
50 | bounding_box = [[-78., -112., -46.], [78., 76., 86.]]
51 |
52 |
53 | def ascending_interleaved(num_slices):
54 | """ Return acquisition ordering given number of slices
55 |
56 | Note 1-based indexing for MATLAB.
57 |
58 | Return type must be a list for nipype to use it in the SPM interface
59 | without error.
60 | """
61 | odd = range(1, num_slices + 1, 2)
62 | even = range(2, num_slices + 1, 2)
63 | return list(odd) + list(even)
64 |
65 |
66 | order_func = ascending_interleaved
67 |
68 | # Drop dummy volumes
69 | dummied = add_pref_suf(func_path, 'f')
70 | img = nib.load(func_path)
71 | dropped_img = nib.Nifti1Image(img.get_fdata()[..., n_dummies:],
72 | img.affine,
73 | img.header)
74 | nib.save(dropped_img, dummied)
75 |
76 | # Slice time correction
77 | num_slices = img.shape[2]
78 | time_for_one_slice = TR / num_slices
79 | TA = TR - time_for_one_slice
80 | st_dummied = add_pref_suf(dummied, 'a')
81 | st = spm.SliceTiming()
82 | st.inputs.in_files = dummied
83 | st.inputs.num_slices = num_slices
84 | st.inputs.time_repetition = TR
85 | st.inputs.time_acquisition = TA
86 | st.inputs.slice_order = order_func(num_slices)
87 | st.inputs.ref_slice = slice_time_ref_slice
88 | st.run()
89 |
90 | # Realign
91 | mc_st_dummied = add_pref_suf(st_dummied, 'r')
92 | realign = spm.Realign()
93 | realign.inputs.in_files = st_dummied
94 | # Do not write resliced files, do write mean image
95 | realign.inputs.write_which = write_which
96 | realign.run()
97 |
98 | # Coregistration
99 | mc_mean_path = add_pref_suf(st_dummied, 'mean')
100 | coreg = spm.Coregister()
101 | # Coregister structural to mean image from realignment
102 | coreg.inputs.target = mc_mean_path
103 | coreg.inputs.source = anat_path
104 | coreg.run()
105 |
106 | # Normalization / resampling with combined realign and normalization params
107 | seg_norm = spm.Normalize12()
108 | seg_norm.inputs.image_to_align = anat_path
109 | seg_norm.inputs.apply_to_files = st_dummied
110 | seg_norm.inputs.write_bounding_box = bounding_box
111 | seg_norm.run()
112 |
113 | # Smoothing by 8mm FWHM in x, y and z
114 | w_st_dummied = add_pref_suf(st_dummied, 'w')
115 | smooth = spm.Smooth()
116 | smooth.inputs.in_files = w_st_dummied
117 | smooth.inputs.fwhm = [8, 8, 8]
118 | smooth.run()
119 |
--------------------------------------------------------------------------------
/nipype_settings.py:
--------------------------------------------------------------------------------
1 | # Local settings for nipype: MATLAB command and SPM path.
2 | from nipype.interfaces import matlab as nim
3 |
4 | nim.MatlabCommand.set_default_matlab_cmd('/Applications/MATLAB_R2022b.app/bin/matlab')
5 | nim.MatlabCommand.set_default_paths('/Users/mb312/dev_trees/spm12')
6 |
--------------------------------------------------------------------------------
/numpy_diag.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## Diagonal matrices
14 |
15 | We often want to make matrices with all zeros except on the diagonal –
16 | diagonal matrices.
17 |
18 | Numpy does not fail us:
19 |
20 | ```{python}
21 | import numpy as np
22 | np.diag([3, 4, 5, 6])
23 | ```
24 |
25 | ```{python}
26 | np.diag([7, 8, 9, 10, 11])
27 | ```
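
`np.diag` also works in the other direction; given a 2D array, it *extracts* the diagonal:

```{python}
import numpy as np

m = np.diag([3, 4, 5, 6])
# On a 2D array, np.diag pulls out the diagonal instead.
assert (np.diag(m) == np.array([3, 4, 5, 6])).all()
```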
28 |
--------------------------------------------------------------------------------
/numpy_meshgrid.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## Making coordinate arrays with meshgrid
14 |
15 | `scipy.ndimage.affine_transform` works by taking the voxel coordinates
16 | implied by the `output_shape`, and transforming those. See:
17 | Resampling with images of different shapes.
18 |
19 | `numpy.meshgrid` is a way of making an actual coordinate grid.
20 |
21 | This is particularly useful when we want to use the more general form of image
22 | resampling in `scipy.ndimage.map_coordinates`.
23 |
24 | If we have some shape – say `output_shape` – then this implies a set
25 | of coordinates. Let’s say `output_shape = (5, 4)` – implying a 2D array.
26 |
27 | The implied coordinate grid will therefore have one coordinate for each pixel
28 | (2D voxel) in the (5, 4) array.
29 |
30 | Because this array is 2D, there are two coordinate values for each pixel. For
31 | example, the coordinate of the first element in the array is (0, 0). We can
32 | make these i- and j- coordinates with `meshgrid`:
33 |
34 | ```{python}
35 | import numpy as np
36 | i_coords, j_coords = np.meshgrid(range(5), range(4), indexing='ij')
37 | print(i_coords)
38 | print(j_coords)
39 | ```
40 |
41 | We can make this into a shape (2, 5, 4) array where the first axis contains
42 | the (i, j) coordinate.
43 |
44 | ```{python}
45 | coordinate_grid = np.array([i_coords, j_coords])
46 | coordinate_grid.shape
47 | ```
48 |
49 | Because we have not done any transformation on the coordinates, the i, j
50 | coordinate will be the same as the index we use to look it up:
51 |
52 | ```{python}
53 | print(coordinate_grid[:, 0, 0])
54 | print(coordinate_grid[:, 1, 0])
55 | print(coordinate_grid[:, 0, 1])
56 | ```
57 |
58 | This is the coordinate grid *implied by* a shape of (5, 4).
59 |
60 | Now imagine I wanted to do a transformation on these coordinates. Say I wanted
61 | to add 2 to the first (i) coordinate:
62 |
63 | ```{python}
64 | coordinate_grid[0, :, :] += 2
65 | ```
66 |
67 | Now my coordinate grid expresses a *mapping* between a given ($i, j$)
68 | coordinate, and the new coordinate ($i', j'$). I look up the new
69 | coordinate using the $i, j$ index into the coordinate grid:
70 |
71 | ```{python}
72 | print(coordinate_grid[:, 0, 0])  # look up new coordinate for (0, 0)
73 | print(coordinate_grid[:, 1, 0])  # look up new coordinate for (1, 0)
74 | print(coordinate_grid[:, 0, 1])  # look up new coordinate for (0, 1)
75 | ```
76 |
77 | This means we can use these coordinate grids as a *mapping* from an input set
78 | of coordinates to an output set of coordinates, for each pixel / voxel.
79 |
80 | As you can imagine, meshgrid extends to three dimensions or more:
81 |
82 | ```{python}
83 | output_shape = (5, 6, 7)
84 | I, J, K = output_shape
85 | i_coords, j_coords, k_coords = np.meshgrid(range(I),
86 | range(J),
87 | range(K),
88 | indexing='ij')
89 | coordinate_grid = np.array([i_coords, j_coords, k_coords])
90 | coordinate_grid.shape
91 | ```
92 |
93 | ```{python}
94 | print(coordinate_grid[:, 0, 0, 0])
95 | print(coordinate_grid[:, 1, 0, 0])
96 | print(coordinate_grid[:, 0, 1, 0])
97 | print(coordinate_grid[:, 0, 0, 1])
98 | ```
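
To see such a mapping in use, here is a minimal nearest-neighbor resampling sketch in plain NumPy; `scipy.ndimage.map_coordinates` does the general, interpolating version of this lookup:

```{python}
import numpy as np

image = np.arange(20).reshape(5, 4)
i_coords, j_coords = np.meshgrid(range(5), range(4), indexing='ij')
# Shift the i coordinate by 2, clipping at the bottom edge.
shifted_i = np.clip(i_coords + 2, 0, 4)
# Look up each output pixel at its transformed coordinate.
resampled = image[shifted_i, j_coords]
# Row 0 of the output now holds row 2 of the input.
assert (resampled[0] == image[2]).all()
```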
99 |
--------------------------------------------------------------------------------
/numpy_random.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Numpy random number generators
18 |
19 | ```{python}
20 | #: standard imports
21 | import numpy as np
22 | # print arrays to 4 decimal places
23 | np.set_printoptions(precision=4, suppress=True)
24 | ```
25 |
26 | We often need random numbers, for tests and for taking random samples, and for
27 | other things. `np.random` is a submodule within numpy:
28 |
29 | ```{python}
30 | type(np.random)
31 | ```
32 |
33 | It contains a function that will create a *random number generator*.
34 |
35 | ```{python}
36 | # Make a random number generator.
37 | rng = np.random.default_rng()
38 | type(rng)
39 | ```
40 |
41 | This generator is an object that has a set of methods for returning random
42 | numbers of various sorts. For example, to return a single random number from
43 | the default normal distribution (mean 0, variance 1):
44 |
45 | ```{python}
46 | rng.normal()
47 | ```
48 |
49 | You can set the mean and standard deviation with the first two input parameters:
50 |
51 | ```{python}
52 | # Random number from distribution with mean 15, standard deviation 2
53 | rng.normal(15, 2)
54 | ```
55 |
56 | To return an 8 by 5 array of random numbers from the same distribution:
57 |
58 | ```{python}
59 | rng.normal(15, 2, size=(8, 5))
60 | ```
61 |
62 | A 5 by 3 array of random numbers from the standard normal distribution, with
63 | mean 0 and variance 1:
64 |
65 | ```{python}
66 | rng.normal(size=(5, 3))
67 | ```
68 |
69 | ## Making random numbers predictable
70 |
71 |
72 | Sometimes you want to make sure that the random numbers are predictable, in
73 | that you will always get the same set of random numbers from a series of calls
74 | to the `rng` methods. You can achieve this by giving the random number
75 | generator a *seed* when you create it. This is an integer that sets the
76 | random number generator into a predictable state, such that it will always
77 | return the same sequence of random numbers from this point:
78 |
79 | ```{python}
80 | # Set the state of the random number generator on creation.
81 | new_rng = np.random.default_rng(seed=42)
82 | # One set of random numbers
83 | first_random_arr = new_rng.normal(size=(4, 2))
84 | first_random_arr
85 | ```
86 |
87 | ```{python}
88 | # Another set
89 | second_random_arr = new_rng.normal(size=(4, 2))
90 | second_random_arr
91 | ```
92 |
93 | ```{python}
94 | # Make another random number generator with the same seed.
95 | new_rng2 = np.random.default_rng(seed=42)
96 | # The same as "first_random_arr" above.
97 | new_rng2.normal(size=(4, 2))
98 | ```
99 |
100 | ```{python}
101 | # The same as "second_random_arr" above.
102 | new_rng2.normal(size=(4, 2))
103 | ```
104 |
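The generator has many other sampling methods besides `normal`; for example (a quick sketch - the particular values depend on the seed):

```python
import numpy as np

rng = np.random.default_rng(seed=42)
# Random integers from 0 up to (but not including) 10.
ints = rng.integers(0, 10, size=5)
# Random floats from the uniform distribution on [0, 1).
floats = rng.uniform(size=3)
# Random sample (with replacement) from a sequence.
letters = rng.choice(['a', 'b', 'c'], size=2)
```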
--------------------------------------------------------------------------------
/numpy_squeeze.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | orphan: true
16 | ---
17 |
18 | # Removing length 1 axes with `numpy.squeeze`
19 |
20 | Sometimes we find ourselves with arrays with length-1 axes - and we want to
21 | remove these axes. For example:
22 |
23 | ```{python}
24 | import numpy as np
25 | ```
26 |
27 | ```{python}
28 | rng = np.random.default_rng()
29 | arr = rng.normal(size=(4, 1, 6))
30 | arr.shape
31 | ```
32 |
33 | ```{python}
34 | squeezed = np.squeeze(arr)
35 | squeezed.shape
36 | ```
37 |
38 | ```{python}
39 | arr = np.zeros((1, 3, 1, 7))
40 | arr.shape
41 | ```
42 |
43 | ```{python}
44 | np.squeeze(arr).shape
45 | ```
46 |
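You can also pass an `axis` argument to `np.squeeze` to remove only a particular length-1 axis, leaving the others in place:

```python
import numpy as np

arr = np.zeros((1, 3, 1, 7))
# Remove only the first length-1 axis; the axis at position 2 remains.
np.squeeze(arr, axis=0).shape
```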
--------------------------------------------------------------------------------
/numpy_transpose.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## `numpy.transpose` for swapping axes
14 |
15 | Numpy allows you to swap axes without costing anything in memory, and very
16 | little in time.
17 |
18 | The obvious axis swap is a 2D array transpose:
19 |
20 | ```{python}
21 | import numpy as np
22 | arr = np.reshape(np.arange(10), (5, 2))
23 | arr
24 | ```
25 |
26 | ```{python}
27 | arr.T
28 | ```
29 |
30 | The `transpose` method - and the `np.transpose` function - does the same
31 | thing as the `.T` attribute above:
32 |
33 | ```{python}
34 | arr.transpose()
35 | ```
36 |
37 | The advantage of `transpose` over the `.T` attribute is that it allows you
38 | to move axes into any arbitrary order.
39 |
40 | For example, let’s say you had a 3D array:
41 |
42 | ```{python}
43 | arr = np.reshape(np.arange(24), (2, 3, 4))
44 | arr
45 | ```
46 |
47 | ```{python}
48 | arr.shape
49 | ```
50 |
51 | ```{python}
52 | arr[:, :, 0]
53 | ```
54 |
55 | `transpose` allows you to re-order these axes as you like. For example,
56 | maybe you wanted to take the current last axis, and make it the first axis.
57 | You pass `transpose` the order of the axes that you want:
58 |
59 | ```{python}
60 | new_arr = arr.transpose(2, 0, 1)
61 | ```
62 |
63 | ```{python}
64 | new_arr
65 | ```
66 |
67 | ```{python}
68 | new_arr.shape
69 | ```
70 |
71 | ```{python}
72 | new_arr[0, :, :]
73 | ```
74 |
75 | Notice that the contents of each axis have not changed, just their positions.
76 | `new_arr[i, :, :]` is the same as `arr[:, :, i]` for any `i`.
77 |
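We can check this claim with a quick loop over all values of `i`:

```python
import numpy as np

arr = np.reshape(np.arange(24), (2, 3, 4))
new_arr = arr.transpose(2, 0, 1)
# The last axis (length 4) has become the first axis.
for i in range(4):
    assert np.all(new_arr[i, :, :] == arr[:, :, i])
```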
--------------------------------------------------------------------------------
/on_correlation.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.13.7
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | ---
17 |
18 | # On correlation
19 |
20 | This is an old page, now retired.
21 |
22 | See {ref}`on-correlation`.
23 |
--------------------------------------------------------------------------------
/on_loops.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## “for” and “while”, “break” and “else:”
14 |
15 | In the Brisk introduction to Python, we saw the use of `break` in `for` and `while`
16 | loops.
17 |
18 | `for` and `while` loops that use `break` can be followed by `else:`
19 | clauses. The `else:` clause executes only when there was no `break`
20 | during the loop.
21 |
22 | In the next fragment, we are doing an inefficient search for prime numbers
23 | from 2 through 30. In this basic `for` loop, we use the `is_prime`
24 | variable as a flag to indicate whether we have found the current number to be
25 | prime:
26 |
27 | ```{python}
28 | primes = []
29 | for x in range(2, 30):
30 | # Assume x is prime until shown otherwise
31 | is_prime = True
32 | for p in primes:
33 | # x exactly divisible by prime -> x not prime
34 | if (x % p) == 0:
35 | is_prime = False
36 | break
37 | if is_prime:
38 | primes.append(x)
39 |
40 | print("Primes in 2 through 30", primes)
41 | ```
42 |
43 | Using a flag variable like `is_prime` is a common pattern, so Python allows
44 | us to do the same thing with an extra `else:` clause:
45 |
46 | ```{python}
47 | primes = []
48 | for x in range(2, 30):
49 | for p in primes:
50 | # x exactly divisible by prime -> x not prime
51 | if (x % p) == 0:
52 | break
53 | else:
54 | # The else: block executes if there was no 'break' in the previous loop
55 | primes.append(x)
56 |
57 | print("Primes in 2 through 30", primes)
58 | ```
59 |
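The same `else:` logic applies to `while` loops. In this small sketch, the `else:` block runs because the loop condition became false without a `break`:

```python
i = 0
while i < 3:
    i += 1
else:
    # Runs because the loop ended without a break.
    print('No break - i is', i)
```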
--------------------------------------------------------------------------------
/os_path.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | text_representation:
6 | extension: .Rmd
7 | format_name: rmarkdown
8 | format_version: '1.2'
9 | jupytext_version: 1.11.5
10 | kernelspec:
11 | display_name: Python 3 (ipykernel)
12 | language: python
13 | name: python3
14 | ---
15 |
16 | # Using the `os.path` module
17 |
18 | The `os.path` module is one of two ways of manipulating and using file
19 | paths in Python. See the [path manipulation](path_manipulation.Rmd) page
20 | for more discussion, and [pathlib](pathlib.Rmd) page for a tutorial on the
21 | alternative `pathlib` approach.
22 |
23 | The primary documentation for `os.path` is in the Python standard library reference.
24 |
25 |
26 | This page covers:
27 |
28 | * `os.path`
29 | * `dirname, basename, join, splitext, abspath`
30 |
31 | ```{python}
32 | import os.path
33 | ```
34 |
35 | In IPython, you can tab complete on `os.path` to list the functions and
36 | attributes there.
37 |
38 | The first function we will use from `os.path` is `dirname`. To avoid
39 | typing `os.path` all the time, we import `os.path` with the shortened name
40 | `op`:
41 |
42 | ```{python}
43 | import os.path as op
44 | op.dirname
45 | ```
46 |
47 | The `dirname` function gives the directory name from a full file path. It
48 | works correctly for Unix paths on Unix machines, and Windows paths on Windows
49 | machines:
50 |
51 | ```{python}
52 | # On Unix
53 | op.dirname('/a/full/path/then_filename.txt')
54 | ```
55 |
56 | You'll see something like this as output if you run the equivalent on Windows,
57 | where `op.dirname('c:\\a\\full\\path\\then_filename.txt')` will give:
58 |
59 | ```
60 | 'c:\\a\\full\\path'
61 | ```
62 |
63 | Notice that, on Windows, you need to use double backslashes, because the
64 | backslash in a Python string is a way of *escaping* a character — meaning, to
65 | specify you mean exactly that character. Double backslash has the meaning
66 | "Yes, I do mean exactly backslash for the next character".
67 |
68 | `dirname` also works for relative paths. A relative path is one where the
69 | starting directory is relative to the current directory, rather than absolute
70 | (specified from the root of the file system):
71 |
72 | ```{python}
73 | # On Unix
74 | op.dirname('relative/path/then_filename.txt')
75 | ```
76 |
77 | Use `basename` to get the filename rather than the directory name:
78 |
79 | ```{python}
80 | # On Unix
81 | op.basename('/a/full/path/then_filename.txt')
82 | ```
83 |
84 | Sometimes you want to join one or more directory names with a filename to get
85 | a path. Windows and Unix have different characters to separate directories
86 | in a path. Windows uses the backslash: `\`, Unix uses a forward slash:
87 | `/`. If your code will run on Windows and Unix, you need to take care that
88 | you get the right character joining your paths. This is what `os.path.join`
89 | does:
90 |
91 | ```{python}
92 | # On Unix
93 | op.join('relative', 'path', 'then_filename.txt')
94 | ```
95 |
96 | This also works on Windows. `op.join('relative', 'path', 'then_filename.txt')` gives output `'relative\\path\\then_filename.txt'`.
97 |
98 | To convert a relative to an absolute path, use `abspath`:
99 |
100 | ```{python}
101 | # Show the current working directory
102 | os.getcwd()
103 | op.abspath('relative/path/then_filename.txt')
104 | ```
105 |
106 | Use `splitext` to split a path into two parts: the path plus filename root,
107 | and the file extension:
108 |
109 | ```{python}
110 | op.splitext('relative/path/then_filename.txt')
111 | ```
112 |
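A common use of `splitext` is swapping one extension for another. This sketch (with a made-up filename) combines it with string concatenation:

```python
import os.path as op

path = 'relative/path/then_filename.txt'
root, ext = op.splitext(path)
# Replace the .txt extension with .csv.
new_path = root + '.csv'
new_path
```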
--------------------------------------------------------------------------------
/packages_namespaces.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## Packages and namespaces
14 |
15 | Here is an example of *importing* a package:
16 |
17 | ```{python}
18 | import numpy as np
19 | ```
20 |
21 | This Python code *imports* the *module* called `numpy` and gives it the new
22 | name `np`.
23 |
24 | What `type()` of thing is `np` now?
25 |
26 | ```{python}
27 | type(np)
28 | ```
29 |
30 | A module contains stuff with *names*. For example, numpy contains a *function*
31 | named `sqrt`:
32 |
33 | ```{python}
34 | np.sqrt
35 | ```
36 |
37 | Because a module contains stuff with names, it is a *namespace*.
38 |
39 | Numpy is also a *package* because it is a set of modules that get installed
40 | together. For example, after you have installed numpy, you not only have the
41 | `numpy` module, but other submodules such as `numpy.linalg`, for linear
42 | algebra on numpy arrays:
43 |
44 | ```{python}
45 | import numpy.linalg as npl
46 | type(npl)
47 | ```
48 |
49 | In IPython, try tab completing on `npl.` (`npl` followed by a period -
50 | `.`) to see what is in there.
51 |
52 | Numpy is the module that contains the basic routines for creating and working
53 | with 1D, 2D, 3D - and in fact ND - arrays in Python. Almost every
54 | scientific computing package in Python uses numpy.
55 |
56 | We will be using two other packages for the exercises - `matplotlib` and
57 | `nibabel`. Both of these packages depend heavily on numpy for working with
58 | arrays.
59 |
60 | Matplotlib is the standard Python package for doing high-quality plots. The
61 | original author wrote matplotlib to be similar to MATLAB (hence the name).
62 |
63 | The best module for standard use is `matplotlib.pyplot` and we will import
64 | it like this:
65 |
66 | ```{python}
67 | import matplotlib.pyplot as plt
68 | type(plt)
69 | ```
70 |
71 | Lastly, we will be using the `nibabel` package for loading neuroimaging
72 | format images:
73 |
74 | ```{python}
75 | import nibabel as nib
76 | ```
77 |
78 | ## Getting help
79 |
80 | If you don’t know how to do a particular task in Python / Numpy /
81 | Matplotlib, then try these steps:
82 |
83 | * In IPython, do tab completion in the module, and have a look around. For
84 | example, if you are looking for a routine to do rounding on arrays, then
85 | type `np.ro` followed by the Tab key, and you will see `np.round` as one of
86 | the suggestions;
87 |
88 | * In IPython, get the help for particular functions or classes with the
89 | question mark at the end of the function or class name - e.g. `np.round?`
90 | followed by the Return key;
91 |
92 | * In numpy or scipy (we’ll come across scipy later), you can find stuff using
93 | `lookfor`. For example, let’s say you hadn’t guessed that `np.sqrt` was
94 | the square root function in numpy, you could try `np.lookfor('square
95 | root')`.
96 |
97 | * Do a web search : [http://lmgtfy.com/?q=numpy+square+root](http://lmgtfy.com/?q=numpy+square+root)
98 |
--------------------------------------------------------------------------------
/participate.md:
--------------------------------------------------------------------------------
1 | # Participation {{ -- }} self evaluation
2 |
3 | :::{note}
4 | Please see the {ref}`project-self-evaluation` section for the deadline to
5 | send your self-evaluation emails.
6 | :::
7 |
8 | Your participation grade is 27% of your final grade {{ -- }} see
9 | {doc}`logistics`.
10 |
11 | You can earn participation points from:
12 |
13 | - contributions to your project (25 / 27);
14 |
15 | - work on the three last exercise / homeworks (2 / 27):
16 |
17 | - {ref}`glmtools-exercise`;
18 | - {doc}`slice_timing_exercise`;
19 | - {doc}`applying_deformations_exercise`;
20 |
21 | There is one grade point for one completed exercise, two for two completed
22 | exercises. For three completed exercises you earn an extra point to be
23 | counted to your project participation score.
24 |
25 | (project-self-evaluation)=
26 |
27 | ## Project self-evaluation
28 |
29 | Email me (Matthew) a 3-page self evaluation by 5PM on Friday, December 16th.
30 |
31 | Please follow this structure:
32 |
33 | 1. Briefly describe your previous background in Python and git;
34 | 2. Describe your overall role in the project;
35 | 3. Discuss two significant pull requests that you worked on for the final
36 | project.
37 | 4. Discuss one pull request for which you did a significant part of the
38 | review.
39 | 5. Conclude by answering a few questions.
40 |
41 | ### Background
42 |
43 | In a paragraph briefly describe your programming background. How much
44 | experience with Git and Python did you have before this class? Had you ever
45 | written automated unit tests before? Had you ever had your code peer-reviewed
46 | before?
47 |
48 | ### Overall role in the project
49 |
50 | 1. How did you help organize the work that your team did? Give evidence from
51 | discussions on the github interface or other discussions we (your graders)
52 | have access to, where you can;
53 | 2. In what other ways did you contribute to the team, in addition to the pull
54 | requests you have used in the "pull requests" and "pull request review"
55 | sections below? For example, did you raise issues, or help others with the
56 | technical or neuroscience aspect of the work? Again, give evidence from
57 | public discussions where you can;
58 |
59 | ### Pull requests
60 |
61 | Choose two pull requests that you worked on for the final project. It is OK to
62 | discuss a pull request, which was ultimately not merged in the project
63 | repository either because it is unfinished or because it was rejected. Please
64 | provide links to the two pull requests.
65 |
66 | For each pull request discuss the review process. How did your code or text
67 | contribution evolve during the review process? Did you uncover bugs or errors
68 | in your code and/or text as a result of the review process or the tests? If
69 | you co-wrote the pull request, please list your coauthors. In the end, was
70 | your pull request merged? Why or why not? What did your pull request
71 | contribute to the overall project?
72 |
73 | ### Pull request review
74 |
75 | Choose one pull request for which you did a significant part of the
76 | review. Please include a link to the pull request.
77 |
78 | Describe what the pull request included (e.g., what functions or part of the
79 | final report did it address). Did you find any errors in the code or text?
80 | How did the pull request author(s) respond to your review? Were you able to
81 | improve the pull request? If so, in what way did you improve it?
82 |
83 | ### Questions
84 |
85 | Finally address the following questions:
86 |
87 | 1. How did you improve your understanding of Git and Python during the
88 | project? Did you do outside reading? If so, what resources did you find
89 | helpful?
90 | 2. What did you learn about your project that you think was most
91 | surprising or interesting?
92 |
93 | #### General instructions
94 |
95 | Here are a few things to keep in mind while working on your self
96 | evaluation:
97 |
98 | 1. Outline constraints you faced, as well as reasons performance was hampered;
99 | 2. Be objective about the things you found difficult as well as the things
100 | that you learned;
101 | 3. Include a discussion of problem-solving methods you used during the
102 | project.
103 |
104 | See {doc}`participate_grading`.
105 |
--------------------------------------------------------------------------------
/participate_grading.md:
--------------------------------------------------------------------------------
1 | # Grading for participation
2 |
3 | - 1 grade point for adequate completion of {ref}`glmtools-exercise`;
4 | - 1 GP for {doc}`slice_timing_exercise`;
5 | - 1 GP for {doc}`applying_deformations_exercise`.
6 |
7 | 3 GP from the above adds an extra point to score below.
8 |
9 | - 25 GP for project participation;
10 |
11 | Consisting of:
12 |
13 | - 10 GP for assessment of overall role; from review of self-assessment and
14 | evidence therein, other evidence of overall contribution, including number
15 | of commits to project;
16 | - 5 GP from your own pull requests (PRs) noted in self-assessment; from review
17 | of self-assessment and review of PRs in that assessment; response to
18 | comments; other evidence for contribution in terms of pull requests,
19 | including number of PRs to project;
20 | - 5 GP from review of other PR noted in self-assessment; from review of
21 | self-assessment and named PR using the Github interface, looking for
22 | evidence of substantial contribution to the PR by comment and review.
23 | Other evidence for tendency to comment on others' work as measured by Github
24 | interface, particularly difference between number of PRs involving you,
25 | minus the number of PRs you authored;
26 | - 5 GP for the quality of your self-assessment writeup: quality of the
27 | writing, level of reflection on performance, and quality of the evidence
28 | presented.
29 |
--------------------------------------------------------------------------------
/path_manipulation.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | text_representation:
6 | extension: .Rmd
7 | format_name: rmarkdown
8 | format_version: '1.2'
9 | jupytext_version: 1.11.5
10 | kernelspec:
11 | display_name: Python 3 (ipykernel)
12 | language: python
13 | name: python3
14 | ---
15 |
16 | # Making and breaking file paths in Python
17 |
18 | ## Pathnames
19 |
20 | A pathname is a string that identifies a particular file or directory on a
21 | computer filesystem.
22 |
23 | For example, we can ask the pathname of the directory containing this notebook, using the `getcwd` function from the `os` module.
24 |
25 | ```{python}
26 | import os
27 |
28 | os.getcwd()
29 | ```
30 |
31 | ## Two ways of manipulating pathnames
32 |
33 | There are two standard ways of manipulating pathnames in Python.
34 |
35 | * [The pathlib module](pathlib.Rmd)
36 | * [The os.path module](os_path.Rmd)
37 |
38 | Of the two techniques, the `os.path` way is rather simpler, but it covers a
39 | smaller range of tasks. It can also be more verbose. `pathlib` does more, and
40 | can give you nice-looking, concise code, but it does rely on a particularly
41 | Python way of thinking. You will see examples of both in lots of modern code,
42 | but we will use `pathlib` in this textbook, because it will likely be the
43 | method you end up using when you are more experienced writing Python code.
44 |
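As a quick taste of the difference, here is the same task - getting the filename from a path - in both styles (the path itself is just an example):

```python
import os.path as op
from pathlib import Path

path = 'data/subject1/anatomical.nii'
# os.path style: functions operating on strings.
op.basename(path)
# pathlib style: methods and attributes on Path objects.
Path(path).name
```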
--------------------------------------------------------------------------------
/pearson_functions.md:
--------------------------------------------------------------------------------
1 | # Pearson 1D and 2D function exercise
2 |
3 | If you don't already have a clone of the `classwork` repository:
4 |
5 | ```
6 | git clone https://github.com/psych-214-fall-2016/classwork.git
7 | cd classwork/pearson
8 | ```
9 |
10 | If you do already have a version of the `classwork` repository:
11 |
12 | ```
13 | cd classwork
14 | git fetch origin
15 | git merge origin/master
16 | cd pearson
17 | ```
18 |
19 | To start the exercise:
20 |
21 | ```
22 | py.test test_pearson_1d.py
23 | ```
24 |
25 | When that is working then:
26 |
27 | ```
28 | py.test test_pearson_2d.py
29 | ```
30 |
--------------------------------------------------------------------------------
/plot_lines.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | ---
14 |
15 | # Plotting lines in matplotlib
16 |
17 | ```{python}
18 | import matplotlib.pyplot as plt
19 | ```
20 |
21 | To plot a line in matplotlib, use `plot` with the X coordinates as the first
22 | argument and the matching Y coordinates as the second argument:
23 |
24 | ```{python}
25 | # A line from (1, 2) to (7, 11)
26 | plt.plot([1, 7], [2, 11])
27 | ```
28 |
29 | ```{python}
30 | # Another line from (2, 6) to (8, 1)
31 | plt.plot([2, 8], [6, 1])
32 | ```
33 |
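Successive `plot` calls draw onto the same axes. Here is a sketch adding labels and a legend (the label text is arbitrary):

```python
import matplotlib.pyplot as plt

plt.plot([1, 7], [2, 11], label='first line')
plt.plot([2, 8], [6, 1], label='second line')
plt.legend()
```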
--------------------------------------------------------------------------------
/printing_floating.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | $\newcommand{L}[1]{\| #1 \|}\newcommand{VL}[1]{\L{ \vec{#1} }}\newcommand{R}[1]{\operatorname{Re}\,(#1)}\newcommand{I}[1]{\operatorname{Im}\, (#1)}$
12 |
13 | ## Making floating points numbers print nicely
14 |
15 | By default, when numpy prints an array, it looks for very small or very large
16 | numbers. If it finds either, it uses exponents to show the numbers. This can
17 | be annoying:
18 |
19 | ```{python}
20 | import numpy as np
21 | np.pi
22 | ```
23 |
24 | ```{python}
25 | np.array([np.pi, 0.000001])
26 | ```
27 |
28 | In order to avoid this, you can tell numpy not to use exponential notation for
29 | small numbers:
30 |
31 | ```{python}
32 | np.set_printoptions(suppress=True)
33 | np.array([np.pi, 0.000001])
34 | ```
35 |
36 | This setting stays in place until you change it:
37 |
38 | ```{python}
39 | np.array([np.pi, 0.000001])
40 | ```
41 |
42 | It can also be annoying to see many digits after the decimal point, if
43 | you know that these are not important. You can set the number of digits
44 | after the decimal point for numpy printing like this:
45 |
46 | ```{python}
47 | np.set_printoptions(precision=4)
48 | a = np.array([np.pi, 0.000001])
49 | a
50 | ```
51 |
52 | This only affects printing, not calculations:
53 |
54 | ```{python}
55 | b = a * 2
56 | b
57 | # change the printoptions again, we see more decimal places
58 | np.set_printoptions(precision=8)
59 | b
60 | ```
61 |
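If you only want the settings to apply temporarily, `np.printoptions` (note: not `set_printoptions`) is a context manager that restores the previous settings on exit:

```python
import numpy as np

a = np.array([np.pi, 0.000001])
with np.printoptions(precision=2, suppress=True):
    # Inside the block: 2 decimal places, no exponents.
    print(a)
# Outside the block: back to the previous settings.
print(a)
```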
--------------------------------------------------------------------------------
/reading_text.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.11.5
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | orphan: true
16 | prereqs:
17 | - pathlib
18 | ---
19 |
20 | # Reading data from text files
21 |
22 | We have been reading various values from text files using the [pathlib
23 | Path](pathlib.Rmd) `read_text` method, and then processing the lines in the
24 | file.
25 |
26 | Here is some revision on how to do that, going from the crude to the elegant
27 | way.
28 |
29 | First we write a little text file out to disk:
30 |
31 | ```{python}
32 | from pathlib import Path
33 |
34 | numbers = [1.2, 2.3, 3.4, 4.5]
35 | strings = []
36 | for number in numbers:
37 | # String version of number.
38 | strings.append(str(number))
39 |
40 | # Stick the strings together separated by new lines ('\n'):
41 | out_text = '\n'.join(strings)
42 | out_text
43 | ```
44 |
45 | ```{python}
46 | # Write text to file.
47 | path = Path('some_numbers.txt')
48 | path.write_text(out_text)
49 | ```
50 |
51 | Now we read it back again. First, we will read all the lines as one long string, then split it into lines at newline characters:
52 |
53 | ```{python}
54 | text_back = path.read_text()
55 | lines = text_back.splitlines()
56 | len(lines)
57 | ```
58 |
59 | ```{python}
60 | lines[0]
61 | ```
62 |
63 | Next we will convert each number to a float:
64 |
65 | ```{python}
66 | numbers_again = []
67 | for line in lines:
68 | numbers_again.append(float(line))
69 | numbers_again
70 | ```
71 |
72 | In fact, we can read these data even more concisely, and quickly, by using
73 | `np.loadtxt`.
74 |
75 | ```{python}
76 | import numpy as np
77 | ```
78 |
79 | ```{python}
80 | np.loadtxt('some_numbers.txt')
81 | ```
82 |
83 | Finally, for neatness, we delete the temporary file:
84 |
85 | ```{python}
86 | path.unlink()
87 | ```
88 |
--------------------------------------------------------------------------------
/regress_one_voxel.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Regression for a single voxel
18 |
19 | Earlier – [Voxel time courses](voxel_time_courses.Rmd) – we were looking
20 | at a single voxel time course.
21 |
22 | Here we use [simple regression](on_regression.Rmd) to do a test on a single
23 | voxel.
24 |
25 | Let’s get that same voxel time course back again:
26 |
27 | ```{python}
28 | import numpy as np
29 | import matplotlib.pyplot as plt
30 | import nibabel as nib
31 | # Only show 6 decimals when printing
32 | np.set_printoptions(precision=6)
33 | ```
34 |
35 | We load the data, and knock off the first four volumes, to remove the
36 | artefact we discovered in the First go at brain activation exercise:
37 |
38 | ```{python}
39 | # Load the function to fetch the data file we need.
40 | import nipraxis
41 | # Fetch the data file.
42 | data_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
43 | # Show the file name of the fetched data.
44 | data_fname
45 | ```
46 |
47 | ```{python}
48 | img = nib.load(data_fname)
49 | data = img.get_fdata()
50 | data = data[..., 4:]
51 | ```
52 |
53 | The voxel coordinate (3D coordinate) that we were looking at in
54 | Voxel time courses was at (42, 32, 19):
55 |
56 | ```{python}
57 | voxel_time_course = data[42, 32, 19]
58 | plt.plot(voxel_time_course)
59 | ```
60 |
61 | Now we are going to use the convolved regressor from [Convolving with the
62 | hemodynamic response function](convolution_background) to do a simple
63 | regression on this voxel time course.
64 |
65 | First fetch the text file with the convolved time course:
66 |
67 | ```{python}
68 | tc_fname = nipraxis.fetch_file('ds114_sub009_t2r1_conv.txt')
69 | # Show the file name of the fetched data.
70 | tc_fname
71 | ```
72 |
73 | ```{python}
74 | convolved = np.loadtxt(tc_fname)
75 | # Knock off first 4 elements to match data
76 | convolved = convolved[4:]
77 | plt.plot(convolved)
78 | ```
79 |
80 | Next we make a scatter plot of the voxel values against the convolved prediction:
81 |
82 | ```{python}
83 | plt.scatter(convolved, voxel_time_course)
84 | plt.xlabel('Convolved prediction')
85 | plt.ylabel('Voxel values')
86 | ```
87 |
88 | ## Using correlation-like calculations
89 |
90 | We can get the best-fitting line using the calculations from the [regression page](on_regression.Rmd):
91 |
92 | ```{python}
93 | def calc_z_scores(arr):
94 | """ Calculate z-scores for array `arr`
95 | """
96 | return (arr - np.mean(arr)) / np.std(arr)
97 | ```
98 |
99 | ```{python}
100 | # Correlation
101 | r = np.mean(calc_z_scores(convolved) * calc_z_scores(voxel_time_course))
102 | r
103 | ```
104 |
105 | The best fit line is:
106 |
107 | ```{python}
108 | best_slope = r * np.std(voxel_time_course) / np.std(convolved)
109 | print('Best slope:', best_slope)
110 | best_intercept = np.mean(voxel_time_course) - best_slope * np.mean(convolved)
111 | print('Best intercept:', best_intercept)
112 | ```
113 |
114 | ```{python}
115 | plt.scatter(convolved, voxel_time_course)
116 | x_vals = np.array([np.min(convolved), np.max(convolved)])
117 | plt.plot(x_vals, best_intercept + best_slope * x_vals, 'r:')
118 | plt.xlabel('Convolved prediction')
119 | plt.ylabel('Voxel values')
120 | ```
121 |
122 | Using Scipy:
123 |
124 | ```{python}
125 | import scipy.stats as sps
126 | sps.linregress(convolved, voxel_time_course)
127 | ```
128 |
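As a check on the logic above, the z-score calculation of the slope and intercept should agree with `scipy.stats.linregress`. Here is a small self-contained sketch, using made-up data rather than the image data (the variable names `x` and `y` are arbitrary):

```{python}
import numpy as np
import scipy.stats as sps

# Made-up predictor and noisy outcome.
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2 * x + 1 + rng.normal(size=100)

def calc_z_scores(arr):
    """ Calculate z-scores for array `arr` """
    return (arr - np.mean(arr)) / np.std(arr)

# Correlation, slope and intercept, calculated as in the text above.
r = np.mean(calc_z_scores(x) * calc_z_scores(y))
slope = r * np.std(y) / np.std(x)
intercept = np.mean(y) - slope * np.mean(x)

# Scipy's least-squares fit gives the same slope and intercept.
result = sps.linregress(x, y)
```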
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | # Requirements for notebooks / Binderhub
2 | matplotlib
3 | pandas
4 | scipy
5 | nibabel
6 | nipype
7 | ipython
8 | jupytext
9 | okpy
10 | dipy>=1.7.0
11 | fury
12 | scikit-image
13 | sympy
14 | nipraxis
15 | pytest
16 |
--------------------------------------------------------------------------------
/reshape_and_4d.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Reshaping, 4D to 2D
18 |
19 | See also: [Reshaping 4D images to 2D arrays](voxels_by_time.Rmd).
20 |
21 | ```{python}
22 | import numpy as np
23 | ```
24 |
25 | ## Revision on 3D
26 |
27 | We often find ourselves doing complicated reshape operations when we are
28 | dealing with images. For example, we may find ourselves reshaping the first
29 | few dimensions, but leaving the last intact.
30 |
31 | Let's start with a [familiar 3D array](reshape_and_3d.Rmd):
32 |
33 | ```{python}
34 | arr_3d = np.reshape(np.arange(24), (2, 3, 4))
35 | arr_3d
36 | ```
37 |
38 | As ever, Numpy shows us the array by *row*, where each row is a 3 by 4 slab of
39 | columns by plane. Here is a picture of the same array represented as planes
40 | of 2D row by column arrays:
41 |
42 | 
43 |
44 | Notice that Numpy has filled in the 3D array in the order *last* axis to *first* axis, therefore:
45 |
46 | * plane, then
47 | * column, then
48 | * row.
49 |
50 | Now imagine that we want to reshape only the first two dimensions, leaving the
51 | last the same. This will take us from an array of shape (2, 3, 4), to an array
52 | of shape (6, 4). The procedure is the same for all reshapes in NumPy. NumPy
53 | makes an output array of shape (6, 4), then reads the elements from the
54 | input in order, with the last axis varying fastest, and writes them into the
55 | output in the same order, again with the last axis varying fastest.
57 |
58 | ```{python}
59 | arr_2d = np.reshape(arr_3d, (6, 4))
60 | arr_2d
61 | ```
62 |
63 | Notice how Numpy has filled out the two-dimensional array - by *getting* the data in the order *last* axis to *first* axis, so:
64 |
65 | * plane, then
66 | * column, then
67 | * row
68 |
69 | It therefore *gets* the original `np.arange(24)` array, and then *sets* the
70 | data into the new array in the order last axis to first, therefore:
71 |
72 | * column, then
73 | * row.
74 |
75 | See also: [Reshaping and three-dimensional arrays](reshape_and_3d.Rmd).
76 |
77 | ## To 4D
78 |
79 | This kind of operation is commonly used on image data arrays. Here we have
80 | a 4D array from an fMRI run (`ds114_sub009_t2r1.nii`):
81 |
82 | ```{python}
83 | # Fetch the data file to this computer.
84 | import nipraxis
85 | bold_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
86 | # Show the filename
87 | bold_fname
88 | ```
89 |
90 | ```{python}
91 | import nibabel as nib
92 | img = nib.load(bold_fname)
93 | data = img.get_fdata()
94 | data.shape
95 | ```
96 |
97 | We can think of the 4D array as a sequence of 3D volumes:
98 |
99 | ```{python}
100 | vol_shape = data.shape[:-1]
101 | vol_shape
102 | ```
103 |
104 | To get the number of voxels in the volume, we can use the `np.prod`
105 | function on the shape. `np.prod` is like `np.sum`, but instead of adding
106 | the elements, it multiplies them:
107 |
108 | ```{python}
109 | n_voxels = np.prod(vol_shape)
110 | n_voxels
111 | ```
112 |
113 | Then we can reshape the array to 2D, with voxels on the first axis, and time
114 | (volume) on the second.
115 |
116 | ```{python}
117 | voxel_by_time = np.reshape(data, (n_voxels, data.shape[-1]))
118 | voxel_by_time.shape
119 | ```
120 |
121 | 
122 |
123 | This is a useful operation when we want to apply some processing on all
124 | voxels, without regard to their relative spatial position.
125 |
--------------------------------------------------------------------------------
/rotations.py:
--------------------------------------------------------------------------------
1 | code/rotations.py
--------------------------------------------------------------------------------
/saving_images.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | # Making and saving new images in nibabel
12 |
13 | We often want to do some processing on an image, then save the processed image
14 | back to an image file on disk.
15 |
16 | When we load an image from disk, we get back an image object. When we load a
17 | NIfTI `.nii` image, we get an image object of type `Nifti1Image`.
18 |
19 | ```{python}
20 | import numpy as np
21 | import nibabel as nib
22 | ```
23 |
24 | ```{python}
25 | # Load the function to fetch the data file we need.
26 | import nipraxis
27 | # Fetch the data file.
28 | data_fname = nipraxis.fetch_file('ds114_sub009_highres.nii')
29 | # Show the file name of the fetched data.
30 | data_fname
31 | ```
32 |
33 | ```{python}
34 | img = nib.load(data_fname)
35 | type(img)
36 | ```
37 |
38 | Maybe we were worried about some very high values in the image, and we wanted
39 | to clip them down to a more reasonable number:
40 |
41 | ```{python}
42 | data = img.get_fdata()
43 | np.max(data)
44 | ```
45 |
46 | We might consider clipping the top 5 percent of voxel values:
47 |
48 | ```{python}
49 | data = img.get_fdata()
50 | top_95_thresh = np.percentile(data, 95)
51 | top_95_thresh
52 | ```
53 |
54 | ```{python}
55 | new_data = data.copy()
56 | new_data[new_data > top_95_thresh] = top_95_thresh
57 | np.max(new_data)
58 | ```
59 |
60 | We can make a new `Nifti1Image` by constructing it directly. We pass the
61 | new data, the image affine, and (optionally) a template header for the image:
62 |
63 | ```{python}
64 | clipped_img = nib.Nifti1Image(new_data, img.affine, img.header)
65 | type(clipped_img)
66 | ```
67 |
68 | The `nib.Nifti1Image` call copies the passed header, and adapts it to the
69 | new image data shape and affine.
70 |
71 | ```{python}
72 | # Show the original data array shape from the original header
73 | img.header.get_data_shape()
74 | ```
75 |
76 | ```{python}
77 | # Here we construct a new empty header
78 | empty_header = nib.Nifti1Header()
79 | empty_header.get_data_shape()
80 | ```
81 |
82 | If we make a new image with this header, the constructor routine fixes the
83 | header to have the correct shape for the data array:
84 |
85 | ```{python}
86 | another_img = nib.Nifti1Image(new_data, img.affine, empty_header)
87 | another_img.header.get_data_shape()
88 | ```
89 |
90 | We can save the new image with `nib.save`:
91 |
92 | ```{python}
93 | nib.save(clipped_img, 'clipped_image.nii')
94 | ```
95 |
96 | This image has the clipped data:
97 |
98 | ```{python}
99 | clipped_back = nib.load('clipped_image.nii')
100 | np.max(clipped_back.get_fdata())
101 | ```
102 |
--------------------------------------------------------------------------------
/scripts/.gitignore:
--------------------------------------------------------------------------------
1 | *
2 |
--------------------------------------------------------------------------------
/slicing_with_booleans.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | text_representation:
6 | extension: .Rmd
7 | format_name: rmarkdown
8 | format_version: '1.2'
9 | jupytext_version: 1.11.5
10 | kernelspec:
11 | display_name: Python 3 (ipykernel)
12 | language: python
13 | name: python3
14 | ---
15 |
16 | # Slicing with boolean vectors
17 |
18 | We have already seen how to slice arrays using colons and integers.
19 |
20 | The colon means ‘all the elements on this axis’:
21 |
22 | ```{python}
23 | import numpy as np
24 | an_array = np.array([[0, 1, 2, 3], [4, 5, 6, 7]])
25 | # All rows, only the second column
26 | an_array[:, 1]
27 | ```
28 |
29 | ```{python}
30 | # Only the first row, all columns except the first
31 | an_array[0, 1:]
32 | ```
33 |
34 | We have also seen how to slice using a boolean array the same shape as the
35 | original:
36 |
37 | ```{python}
38 | is_gt_5 = an_array > 5
39 | is_gt_5
40 | ```
41 |
42 | ```{python}
43 | # Select elements greater than 5 into 1D array
44 | an_array[is_gt_5]
45 | ```
46 |
47 | We can also use boolean vectors to select elements on a particular axis. So,
48 | for example, let’s say we want the first and last elements on the second axis.
49 | We can use a boolean vector to select these elements from a particular axis,
50 | while still using integer and colon syntax for the other axes:
51 |
52 | ```{python}
53 | want_first_last = np.array([True, False, False, True])
54 | ```
55 |
56 | ```{python}
57 | # All rows, columns as identified by boolean vector
58 | an_array[:, want_first_last]
59 | ```
60 |
--------------------------------------------------------------------------------
/string_formatting.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.13.7
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | ---
17 |
18 | # Inserting values into strings
19 |
20 | ## f-strings
21 |
22 | In [Brisk Python](./brisk_python.Rmd), we claimed that Python f-strings are
23 | the usual best way to insert variable values into strings.
24 |
25 | The example there was:
26 |
27 | ```{python}
28 | shepherd_name = "Mary"
29 | flock_size = 92
30 | ```
31 |
32 | ```{python}
33 | f"Shepherd {shepherd_name} is on duty with {flock_size} sheep."
34 | ```
35 |
36 | There are a couple of other ways you can do the same thing, that may be useful in particular circumstances.
37 |
38 | ## String format method
39 |
40 | You can use the string `format` method to create new strings with
41 | inserted values. Here we insert a string into another string:
42 |
43 | ```{python}
44 | "Shepherd {} is on duty.".format(shepherd_name)
45 | ```
46 |
47 | The empty curly braces show where the inserted value should go. `shepherd_name` is the argument to the `format` method, and tells Python which value to insert.
48 |
49 | You can insert more than one value. As for f-strings, the values do not have
50 | to be strings, they can be numbers and other Python objects.
51 |
52 | ```{python}
53 | "Shepherd {} is on duty with {} sheep.".format(shepherd_name, flock_size)
54 | ```
55 |
56 | ```{python}
57 | "Here is a {} floating point number".format(3.33333)
58 | ```
59 |
60 | You can do more complex formatting of numbers and strings using formatting
61 | options within the curly brackets — `{` and `}`. See the documentation on
62 | [curly brace string
63 | formatting](https://docs.python.org/3/library/string.html#format-examples).
64 |
65 | The same formatting rules apply to f-strings.
66 |
67 | This system allows us to give formatting instructions for things like numbers,
68 | by using a `:` inside the curly braces, followed by the formatting
69 | instructions. Here we ask to print an integer (`d`), where the number should
70 | be prepended with `0` to fill up the field width of `3`:
71 |
72 | ```{python}
73 | "Number {:03d} is here.".format(11)
74 | ```
75 |
76 | This prints a floating point value (specified by the `f` after the `:` in the
77 | string) with exactly `4` digits after the decimal point:
78 |
79 | ```{python}
80 | 'A formatted number - {:.4f}'.format(0.213)
81 | ```
82 |
83 | See the Python string formatting documentation linked above for more details
84 | and examples.
85 |
86 | ## % formatting
87 |
88 | This is the oldest way of doing string interpolation, and you will rarely find a use for it. Here you use the `%` operator to tell Python the values to put into the string. Just for a quick example, to replicate the example at the top of the page, you could also use:
89 |
90 | ```{python}
91 | "Shepherd %s is on duty with %d sheep." % (shepherd_name, flock_size)
92 | ```
93 |
94 | The `%s` tells Python you want to insert the first value at that point, and it
95 | should be treated as a string (the `s` in `%s`). The `%d` tells Python the
96 | second value should be treated as an integer.
97 |
98 | As we've said, you'll rarely have need for this `%` syntax, but you will see it in older code, and rarely, in special cases for recent code.
99 |
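Returning to the format specifications above: the same specifications work identically inside f-string curly braces, after a `:` following the value:

```{python}
number = 11
f"Number {number:03d} is here, and {0.213:.4f} is a formatted number."
```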
--------------------------------------------------------------------------------
/string_literals.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | orphan: true
14 | ---
15 |
16 | # String literals in Python
17 |
18 | A string literal is where you specify the contents of a string in a program.
19 |
20 | ```{python}
21 | a = 'A string'
22 | ```
23 |
24 | Here 'A string' is a string literal. The variable `a` is a string variable,
25 | or, better put in Python, a variable that points to a string.
26 |
27 | String literals can use single or double quote delimiters.
28 |
29 | ```{python}
30 | a = 'A string' # string literal with single quotes
31 | b = "A string" # string literal with double quotes
32 | b == a # there is no difference between these strings
33 | ```
34 |
35 | Literal strings with single quote delimiters can use double quotes inside them
36 | without any extra work.
37 |
38 | ```{python}
39 | print('Single quoted string with " is no problem')
40 | ```
41 |
42 | If you need an actual single quote character inside a literal string delimited
43 | by single quotes, you can use the backslash character before the single quote,
44 | to tell Python not to terminate the string:
45 |
46 | ```{python}
47 | print('Single quoted string containing \' is OK with backslash')
48 | ```
49 |
50 | Likewise for double quotes:
51 |
52 | ```{python}
53 | print("Double quoted string with ' is no problem")
54 | print("Double quoted string containing \" is OK with backslash")
55 | ```
56 |
57 | Some characters preceded by a backslash have special meaning. For example:
58 |
59 | ```{python}
60 | print('Backslash before "n", as in \n, inserts a new line character')
61 | ```
62 |
63 | If you do not want the backslash to have this special meaning, prefix your
64 | string literal with 'r', meaning "raw":
65 |
66 | ```{python}
67 | print(r'Prefixed by "r" the \n no longer inserts a new line')
68 | ```
69 |
70 | You can use triple quotes to enclose strings with more than one line:
71 |
72 | ```{python}
73 | print('''This string literal
74 | has more than one
75 | line''')
76 | ```
77 |
78 | Triple quotes can use single or double quote marks:
79 |
80 | ```{python}
81 | print("""This string literal
82 | also has more than one
83 | line""")
84 | ```
85 |
--------------------------------------------------------------------------------
/subplots.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | kernelspec:
10 | display_name: Python 3 (ipykernel)
11 | language: python
12 | name: python3
13 | ---
14 |
15 | # Subplots and axes in matplotlib
16 |
17 | We often want to do several plots or images on the same figure.
18 |
19 | We can do this with the matplotlib `subplots` command.
20 |
21 | The standard input arguments to `subplots` are the number of rows and the
22 | number of columns you want in your grid of axes. For example, if you want two
23 | plots underneath each other you would call `subplots(2, 1)` for two rows and
24 | one column.
25 |
26 | `subplots` returns a `figure` object, that is, an object representing the
27 | figure containing the axes. It also returns an array of `axes`, that is,
28 | objects representing the axes on which we can plot. The axes objects have
29 | methods like `plot` and `imshow` that allow us to plot on the given axes:
30 |
31 | ```{python}
32 | import numpy as np
33 | import matplotlib.pyplot as plt
34 | ```
35 |
36 | ```{python}
37 | x = np.arange(0, np.pi * 2, 0.1)
38 | fig, axes = plt.subplots(2, 1)
39 | axes[0].plot(x, np.sin(x))
40 | axes[1].plot(x, np.cos(x))
41 | ```
42 |
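The axes objects also have methods such as `set_title`, `set_xlabel` and `set_ylabel`, corresponding to the `plt.title`, `plt.xlabel` and `plt.ylabel` functions. Here is a sketch extending the example above:

```{python}
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, np.pi * 2, 0.1)
fig, axes = plt.subplots(2, 1)
axes[0].plot(x, np.sin(x))
axes[0].set_title('sin(x)')
axes[1].plot(x, np.cos(x))
axes[1].set_title('cos(x)')
# Increase spacing so the lower title does not overlap the upper plot.
fig.tight_layout()
```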
--------------------------------------------------------------------------------
/subtract_mean_math.md:
--------------------------------------------------------------------------------
1 | # Subtracting the mean from a vector
2 |
3 | We have a vector $\vec{x}$ with $n$ elements: $[x_1, x_2, ..., x_n]$.
4 |
5 | The mean is:
6 |
7 | $$
8 | \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i
9 | $$
10 |
11 | From here I will abbreviate $\sum_{i=1}^n x_i$ as $\sum x_i$.
12 |
13 | When we subtract the mean from the vector, the sum of the vector elements is
14 | zero:
15 |
16 | $$
17 | \vec{x'} = [x_1 - \bar{x}, x_2 - \bar{x}, ..., x_n - \bar{x}] \\
18 | $$
19 |
20 | Remembering that $\sum c = n c$ where $c$ is a constant (see: [algebra
21 | of sums][algebra of sums]):
22 |
23 | $$
24 | \sum x'_i =
25 | \sum \left[ x_i - \bar{x} \right] \\
26 | = \sum x_i - \sum \bar{x} \\
27 | = \sum x_i - n \bar{x} \\
28 | = \sum x_i - n \frac{1}{n} \sum x_i \\
29 | = 0.
30 | $$
31 |
32 | Looking at the problem from the other way round, we could have worked out what
33 | scalar value $c$ we need to subtract from a vector $\vec{x}$ such that
34 | $\sum \left( x_i - c \right) = 0$. Now we have:
35 |
36 | $$
37 | \sum \left( x_i - c \right) = 0 \implies \\
38 | \sum x_i - n c = 0 \implies \\
39 | c = \frac{1}{n} \sum x_i
40 | $$
41 |
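As a quick numeric check of this result, we can subtract the mean from a vector with NumPy (the values here are made up):

```python
import numpy as np

x = np.array([2.0, 5.0, 1.0, 7.0])
x_minus_mean = x - np.mean(x)
# The sum of the mean-subtracted vector is zero,
# up to floating point error.
print(np.sum(x_minus_mean))
```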
--------------------------------------------------------------------------------
/surviving_computers.md:
--------------------------------------------------------------------------------
1 | # Surviving the computer
2 |
3 | Computers have vastly extended the range of tasks we can do, in data analysis,
4 | as for many other fields. But, they can be awkward partners, because we have
5 | to use some of our mental energy to communicate with the computer.
6 |
7 | This means that, if we are not careful, the computer can make us dumb, by
8 | taking up enough mental energy that we stop thinking carefully about the task
9 | at hand. This is particularly dangerous when programming, because programming
10 | needs careful logical thought.
11 |
12 | For example, consider {cite}`mueller2014pen`. In their study 1, the
13 | authors asked students to take notes while listening to TED talks, and answer
14 | questions about the talks afterwards. They randomly allocated students into
15 | two groups, where one group took notes with a pen and a notebook, and the
16 | other typed notes into a laptop. The group that took notes on a laptop were
17 | less able to answer questions about ideas in the talks.
18 |
19 | The students with the laptops are us, programming. We have to be careful about the dulling effect that using the computer can have, on our ability to think straight.
20 |
21 | Experienced programmers know this problem, and they use tricks to deal with it.
22 |
23 | Among these tricks are the following.
24 |
25 | ## Have a piece of paper and a pen next to you
26 |
27 | When *working* on the computer, have a piece of paper and a pen next to you on your desk. When you notice that you don't fully understand an error or a task on the computer, stop, move the laptop out of the way, and write down your problem on a piece of paper. Give yourself some time to reflect on what the problem is, and how to solve it. Only then, you can return to the computer and try your solutions, or do more research. You will find that disengaging from the computer is a) difficult and b) very productive in releasing you from the various mental traps that it is easy to fall into when you get stuck on the computer.
28 |
29 | ## Work in pairs
30 |
31 | [Pair programming](https://en.wikipedia.org/wiki/Pair_programming) is
32 | a standard technique that programmers use to improve the quality of their work
33 | and accelerate learning. It is particularly useful early in learning, and with new tasks {cite}`lui2006pair`.
34 |
35 | Pair programming has a standard form.
36 |
37 | One member of the team is the *driver*. They sit in front of the keyboard and
38 | type.
39 |
40 | The other member of the team is the *navigator*. They do not use their own laptop, but observe the driver as they work, offering advice, and acting as a sounding board for the driver.
41 |
42 | The pair should alternate roles, for example, changing places for each exercise.
43 |
44 | You will likely find that, when you are the driver, you think a lot slower
45 | than the navigator, because you are thinking about two things, the task, and
46 | interacting with the computer. When you switch to being the navigator, you
47 | will find that you can think more quickly and carefully.
48 |
49 | ## References
50 |
51 | ```{bibliography}
52 | :filter: docname in docnames
53 | ```
54 |
--------------------------------------------------------------------------------
/syllabus.md:
--------------------------------------------------------------------------------
1 | ---
2 | orphan: true
3 | ---
4 |
5 | # Syllabus
6 |
7 | ## Imaging data
8 |
9 | - the structure of raw brain imaging data;
10 | - loading, manipulating, showing and saving brain images;
11 | - coordinate systems and transforms for brain images;
12 | - working with multiple time courses.
13 |
14 | ## Analysis concepts
15 |
16 | - writing and running diagnostics;
17 | - time-course interpolation to allow for slice-wise acquisition of brain
18 | volumes;
19 | - cost-functions and numerical optimization for image alignment;
20 | - spatial transformations for image alignment: translations, rotations, zooms
21 | and shears;
22 | - individual variability of sulci and other brain structure; methods of
23 | reducing variability by automated alignment and warping; remaining variation
24 | and statistical analysis;
25 | - smoothing / blurring prior to statistical analysis; cost and benefit;
26 | - multiple regression for modeling the effect of the experiment on time
27 | courses of brain activity;
28 | - specifying a model of neural activity; transformation of neural activity to
29 | predicted FMRI signal using a hemodynamic response function; modeling the
30 | hemodynamic response;
31 | - estimating multiple regression models; hypothesis testing on multiple
32 | regression models; the General Linear Model as generalization of multiple
33 | regression;
34 | - inference on maps of statistics; correction for multiple comparisons;
35 | family-wise error; false discovery rate.
36 |
37 | ## Collaboration, correctness and reproducibility
38 |
39 | - collaborating with peers and mentors;
40 | - role of working practice in quality, reproducibility, collaboration;
41 | - choosing and learning simple tools;
42 | - version control with [git](https://git-scm.com);
43 | - sharing code with ;
44 | - scripting and coding with Python;
45 | - pair coding and code review;
46 | - testing;
47 | - documentation.
48 |
--------------------------------------------------------------------------------
/the_software.md:
--------------------------------------------------------------------------------
1 | # Our tools
2 |
3 | We are going to start working in the [Jupyter Notebook](https://jupyter.org).
4 | This is an *interface* that allows us to interact with the
5 | [Python](https://www.python.org) programming language.
6 |
7 | There are many ways of installing Python and the Jupyter Notebook. The
8 | best way of installing Python depends on many things. For this iteration
9 | of the class, we installed Python using the [Anaconda
10 | distribution](https://www.anaconda.com/distribution). We discuss this
11 | more below.
12 |
13 | ## Python
14 |
15 | Python is a [very popular](http://pypl.github.io/PYPL.html) programming
16 | language that is [growing
17 | quickly](https://stackoverflow.blog/2017/09/06/incredible-growth-python).
18 |
19 | It is free and open-source.
20 |
21 | Among its advantages are:
22 |
23 | * Python code is famously easy to read and write;
24 | * As a result, it is a popular language for teaching, in schools and
25 | universities. Just for example, the [Raspberry
26 | Pi](https://www.raspberrypi.org) computer is so-named because many of its
27 | teaching materials are in Python.
28 | * It has a very wide range of libraries, including many libraries for
29 | tasks in science, such as data analysis, statistics and visualization.
30 | For example, there are libraries for designing and running Psychology
31 | experiments.
32 | * It is widely used in teaching, science and industry.
33 | * Because it has a large open-source community, it is easy to find other
34 | people working on the same thing as you, in Python. The community has
35 | a strong background in good practices for code, such as testing, and
36 | writing code that is clear and simple.
37 |
38 | For a comparison of Matlab and Python, see [this blog
39 | post](http://asterisk.dynevor.org/python-matlab.html).
40 |
41 | ## The Jupyter Notebook
42 |
43 | We will soon hear more about the Jupyter Notebook. It is a particularly
44 | easy interface to run Python code, and display the results.
45 |
46 | The Notebook has two parts. The first is the web application, that you
47 | interact with. This is the web *client*. The web client then sends
48 | commands to another process on your computer, called the *kernel*. The
49 | kernel runs Python. The client sends Python commands to the kernel, and
50 | the kernel runs the commands, generates the results, and sends any output
51 | back to the client for display. Outputs include the display of values,
52 | and plots from plotting commands.
53 |
54 | The Notebook is just one way to run Python commands; there are many
55 | others, and we will cover some of these later in the course. The notebook
56 | is a particularly good way of running code for beginners, because it gets
57 | you started very quickly, in a familiar interface (the web browser). As
58 | you gain experience, you will find that you will outgrow the Notebook,
59 | because it is not a good interface for writing more than a small amount of
60 | code. When you get there, you will be much more efficient using a good code
61 | editor, such as [Atom](https://atom.io).
62 |
63 | ## Installing Python
64 |
65 | As we said above, there are many ways to install Python.
66 |
67 | Which way you choose will depend on factors such as:
68 |
69 | * How much you are using Python;
70 | * What operating system you are running (macOS, Windows, Linux);
71 | * What method your colleagues are using.
72 |
73 | The method that many people use for courses such as this, is to use
74 | [Anaconda](https://www.anaconda.com/distribution/) to install Python and
75 | various important Python libraries.
76 |
77 | Anaconda is an application that can install Python and various libraries with a few clicks. It is built and maintained by a company, also called Anaconda, but they give away the distribution for free, for various reasons.
78 |
79 | Anaconda is a good option for Windows, because Windows is a relatively complicated platform for building code, such as Python and its libraries. As a result, some of the other methods of installing Python libraries do not work as well on Windows as they do for other platforms. Because Anaconda works on Windows and Mac and Linux, it is easier to deal with installation problems for a class; everyone is likely to have similar problems, and you (the student) will be using the same installation as us (the instructors).
80 |
--------------------------------------------------------------------------------
/to_code.md:
--------------------------------------------------------------------------------
1 | # Ode to code
2 |
3 | Computer languages like [Python](https://python.org) and
4 | [R](https://r-project.org) are just that: languages.
5 |
6 | They express ideas.
7 |
8 | Computer languages consist of series of *statements*. The statements define
9 | actions to take on data in the memory of the computer, or the devices
10 | connected to it, such as the network.
11 |
12 | Statements also express the logic of when to do the actions.
13 | They can express repeated actions or actions that are only
14 | performed, if a condition is met.
15 |
16 | There was a time in your academic life when you did not need
17 | code, because the sets of steps you needed to do were so
18 | repetitive and well understood, that someone could design
19 | a simple graphical user interface (GUI) to express all the options
20 | that you needed.
21 |
22 | At least, it seemed that way. If you have not found out
23 | already, graphical user interfaces can trip you up. They can
24 | make you feel like you understand what you are doing, when you
25 | do not. They make it easy to do many analysis steps, without
26 | keeping track of what you are doing, so it is easy to make
27 | mistakes, and hard to check what you did. Graphical interfaces
28 | make easy things easy, but difficult things, impossible.
29 |
30 | You will soon find that most work that is worth doing, is
31 | difficult. If you want to do something interesting, that has
32 | not been done before, you will often need code.
33 |
34 | 
35 |
36 | We need code more than we did ten years ago, because we have
37 | much more data, and the data is of many different types. If
38 | you want to load the data into a form that you use, you will
39 | need to move it around, rearrange it, fix it. You will be doing
40 | a series of logical steps, and these steps are most clearly
41 | expressed in code.
42 |
43 | Put another way, [You can't do data science in
44 | a GUI](https://asterisk.dynevor.org/you-cant-do-data-science-in-a-gui.html).
45 |
46 | Luckily, coding can be [very
47 | rewarding](https://asterisk.dynevor.org/joys-of-craft.html) in
48 | terms of pleasure and profit.
49 |
50 | We are not the only ones who know we need code. Many people, including many
51 | scientists, have been working on languages and libraries and tools, to make our
52 | coding life simpler, clearer, and more efficient.
53 |
54 | You will be learning some of the main tools and libraries on this course.
55 |
--------------------------------------------------------------------------------
/topics.md:
--------------------------------------------------------------------------------
1 | # Course material by topic
2 |
3 | This is an arrangement of the material for the course by topic, rather than by
4 | teaching day.
5 |
6 | ## Python
7 |
8 | - {doc}`brisk_python`. See {doc}`day_00`;
9 | - {doc}`on_loops`;
10 | - {doc}`on_modules`;
11 | - {doc}`packages_namespaces`;
12 | - {doc}`list_comprehensions`;
13 | - {doc}`two_dunders`;
14 | - {doc}`string_literals`;
15 | - {doc}`string_formatting`;
16 | - {doc}`docstrings`;
17 | - {doc}`truthiness`;
18 | - {doc}`assert`;
19 | - {doc}`keyword_arguments`;
20 | - {doc}`path_manipulation`. See: {doc}`lab_04`;
21 | - {doc}`sys_path`. See: {doc}`lab_04`;
22 | - {doc}`coding_style`.
23 |
24 | ## Numpy, arrays and images
25 |
26 | - {doc}`classwork/day_00/what_is_an_image`. See {doc}`day_00`;
27 | - [NumPy introduction] (from the [scipy lecture notes], hereafter SLN);
28 | - [numpy array object] (SLN);
29 | - [array operations] (SLN). See: {doc}`lab_01_exercise`;
30 | - {doc}`array_reductions`;
31 | - {doc}`arrays_and_images`. See: {doc}`day_01`;
32 | - {doc}`reshape_and_3d`. See: {doc}`day_01`;
33 | - {doc}`index_reshape`;
34 | - {doc}`intro_to_4d`. See: {doc}`day_02`;
35 | - {doc}`reshape_and_4d`;
36 | - {doc}`numpy_logical`;
37 | - {doc}`voxels_by_time`;
38 | - {doc}`slicing_with_booleans`. See: {doc}`day_04`;
39 | - {doc}`boolean_indexing`;
40 | - {doc}`dot_outer`;
41 | - {doc}`allclose`;
42 | - {doc}`arange`;
43 | - {doc}`methods_vs_functions`;
44 | - {doc}`subtract_means`;
45 | - {doc}`newaxis`;
46 | - {doc}`numpy_diag`;
47 | - {doc}`numpy_transpose`;
48 | - {doc}`numpy_random`;
49 | - {doc}`numpy_squeeze`;
50 | - {doc}`numpy_meshgrid`;
51 | - {doc}`comparing_arrays`;
52 | - {doc}`comparing_floats`;
53 | - {doc}`printing_floating`.
54 |
55 | ## Matplotlib
56 |
57 | - {doc}`plot_lines`;
58 | - {doc}`subplots`.
59 |
60 | ## Git
61 |
62 | - [curious git];
63 | - {doc}`git_videos`;
64 | - {doc}`git_walk_through`;
65 | - {ref}`reading-git-objects`;
66 | - [curious remotes].
67 |
68 | Exercises:
69 |
70 | - {doc}`git_workflow_exercises`;
71 | - {doc}`github_pca_homework`.
72 |
73 | ## General statistics and math
74 |
75 | - [algebra of sums];
76 | - [vectors and dot products];
77 | - [vector projection];
78 | - [introduction to Principal Component Analysis]. See: {doc}`day_03`;
79 | - [vector angles];
80 | - [correlation and projection]. See {doc}`day_04`;
81 | - [matrix rank];
82 | - {doc}`diag_inverse`;
83 | - [introduction to the General Linear Model]. See {doc}`day_05`;
84 | - [cumulative density functions];
85 | - {doc}`mean_test_example`;
86 | - {doc}`subtract_mean_math`;
87 | - {doc}`hypothesis_tests`;
88 | - [tutorial on correlated regressors];
89 | - [tutorial on convolution].
90 |
91 | ## Image processing and spatial transformations
92 |
93 | - {doc}`otsu_threshold`;
94 | - [rotation in 2D];
95 | - {doc}`rotation_2d_3d`;
96 | - {doc}`diagonal_zooms`;
97 | - [coordinate systems and affine transforms];
98 | - [mutual information];
99 | - {doc}`nibabel_affines`;
100 | - {doc}`nibabel_apply_affine`;
101 | - {doc}`resampling_with_ndimage`;
102 | - {doc}`map_coordinates`;
103 | - {doc}`saving_images`;
104 | - [introduction to smoothing];
105 | - [smoothing as convolution];
106 | - [optimizing spatial transformations].
107 |
108 | ## Specific to FMRI
109 |
110 | - {doc}`voxel_time_courses`. See {doc}`day_04`;
111 | - {doc}`model_one_voxel`;
112 | - {doc}`convolution_background`;
113 | - [Coordinate systems and affine transforms];
114 | - {doc}`nibabel_affines`;
115 | - {doc}`nibabel_apply_affine`;
116 | - {doc}`tr_and_headers`;
117 | - {doc}`dipy_registration` and the {doc}`anterior_cingulate` exercise;
118 | - {doc}`introducing_nipype`;
119 | - See also: {doc}`spm_slice_timing_exercise`; {doc}`full_scripting` exercise.
120 |
--------------------------------------------------------------------------------
/tr_and_headers.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | text_representation:
5 | extension: .Rmd
6 | format_name: rmarkdown
7 | format_version: '1.2'
8 | jupytext_version: 1.11.5
9 | ---
10 |
11 | # NIfTI might store the TR in the header
12 |
13 | ```{python}
14 | import numpy as np
15 | # print arrays to 4 decimal places
16 | np.set_printoptions(precision=4, suppress=True)
17 | import nibabel as nib
18 | import nipraxis
19 | ```
20 |
21 | The [NIfTI1 standard](http://nifti.nimh.nih.gov/nifti-1) suggests putting the
22 | TR of a functional image into the voxel dimension field of the header.
23 |
24 | You can get the voxel (plus TR) dimensions with the `get_zooms` method of the
25 | header object:
26 |
27 | ```{python}
28 | func_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
29 | func_img = nib.load(func_fname)
30 | header = func_img.header
31 | header.get_zooms()
32 | ```
33 |
34 | In this case, the image spatial voxel sizes are (4 by 4 by 4)
35 | millimeters, and the TR is 2.5 seconds.
36 |
37 | In fact these values come from the NIfTI header field called `pixdim`:
38 |
39 | ```{python}
40 | print(header)
41 | ```
42 |
43 | Unfortunately, it is common for people writing NIfTI images not to write this
44 | information correctly into the header, so we have to be careful, and very
45 | suspicious if the TR value appears to be 0 or 1.
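Given that, it can be worth a small sanity check before trusting the header. Here is a minimal sketch; the helper name `check_tr` is our own invention, not a nibabel function:

```{python}
def check_tr(zooms):
    """ Return TR from `zooms` if plausible, otherwise raise an error.

    Assumes the TR is the last value in `zooms`, as returned by
    ``header.get_zooms()`` on a 4D image.
    """
    tr = zooms[-1]
    if tr in (0, 1):  # Suspicious values; header probably not filled in.
        raise ValueError(f'Implausible TR of {tr}; check the image metadata')
    return tr

check_tr((4.0, 4.0, 4.0, 2.5))
```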
46 |
--------------------------------------------------------------------------------
/truthiness.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | text_representation:
6 | extension: .Rmd
7 | format_name: rmarkdown
8 | format_version: '1.2'
9 | jupytext_version: 1.11.5
10 | ---
11 |
12 | # Kind-of True
13 |
14 | See: [truthiness in
15 | Python](https://www.humaneer.org/python3/truthiness-in-python/)
16 | and Python [truth value
17 | testing](https://docs.python.org/3/library/stdtypes.html#truth).
18 |
19 | There are several places where you will find Python applying a
20 | test of True that is more general than simply `val == True`.
21 |
22 | One example is in `if` statements:
23 |
24 | ```{python}
25 | val = 'a string'  # A non-empty string is True for truth testing
26 | if val:
27 | print('Truth testing of "val" returned True')
28 | ```
29 |
30 | Here the `if val:` clause applies Python [truth value testing]
31 | to `'a string'`, and returns True. This is because the truth
32 | value testing algorithm returns True for a non-empty string,
33 | and False for an empty string:
34 |
35 | ```{python}
36 | another_val = ''
37 | if another_val:
38 | print('No need for a message, we will not get here')
39 | ```
40 |
41 | You can see the results of truth value testing using `bool()`
42 | in Python. For example:
43 |
44 | ```{python}
45 | print('Bool on True', bool(True))
46 | print('Bool on False', bool(False))
47 | print('Bool on not-empty list', bool(['some', 'elements']))
48 | print('Bool on empty list', bool([]))
49 | # Bool on any number other than zero evaluates as True
50 | print('Bool on 10', bool(10))
51 | print('Bool on -1', bool(-1))
52 | print('Bool on 0', bool(0))
53 | # None tests as False
54 | print('Bool on None', bool(None))
55 | ```
56 |
57 | Examples of situations in which Python uses truth value testing
58 | are `if` statements, `while` statements and
59 | {doc}`assert statements <assert>`.
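For example, truth value testing is what makes this common `while` loop pattern work, running until the list is empty:

```{python}
to_process = ['a', 'b', 'c']
processed = []
# The loop condition is the list itself: True while it has
# elements, False once it is empty.
while to_process:
    processed.append(to_process.pop())
print(processed)  # Shows: ['c', 'b', 'a']
```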
60 |
--------------------------------------------------------------------------------
/using_module.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | orphan: true
4 | jupytext:
5 | notebook_metadata_filter: all,-language_info
6 | split_at_heading: true
7 | text_representation:
8 | extension: .Rmd
9 | format_name: rmarkdown
10 | format_version: '1.2'
11 | jupytext_version: 1.13.7
12 | kernelspec:
13 | display_name: Python 3
14 | language: python
15 | name: python3
16 | ---
17 |
18 | # A notebook that should use a module
19 |
20 | ```{python}
21 | import numpy as np
22 | import matplotlib.pyplot as plt
23 | import nibabel as nib
24 | ```
25 |
26 | A copy/paste of the code from [on_modules](on_modules.Rmd):
27 |
28 | ```{python}
29 | def vol_means(image_fname):
30 | img = nib.load(image_fname)
31 | data = img.get_fdata()
32 | means = []
33 | for i in range(data.shape[-1]):
34 | vol = data[..., i]
35 | means.append(np.mean(vol))
36 | return np.array(means)
37 | ```
38 |
39 | ```{python}
40 | def detect_outliers(some_values, n_stds=2):
41 | overall_mean = np.mean(some_values)
42 | overall_std = np.std(some_values)
43 | thresh = overall_std * n_stds
44 | is_outlier = (some_values - overall_mean) < -thresh
45 | return np.where(is_outlier)[0]
46 | ```
47 |
48 | We apply this code to another image:
49 |
50 | ```{python}
51 | # Load the function to fetch the data file we need.
52 | import nipraxis
53 | # Fetch the data file.
54 | another_data_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
55 | # Show the file name of the fetched data
56 | another_data_fname
57 | ```
58 |
59 | ```{python}
60 | more_means = vol_means(another_data_fname)
61 | plt.plot(more_means)
62 | ```
63 |
64 | Apply the code:
65 |
66 | ```{python}
67 | detect_outliers(more_means)
68 | ```
69 |
70 | Oh no! It didn't work. What's the problem?
71 |
72 | Back to [on_modules](on_modules.Rmd)
73 |
74 | ## Back again
75 |
76 |
77 | Now we've worked out a better solution:
78 |
79 | ```{python}
80 | import volmeans
81 | ```
82 |
83 | ```{python}
84 | more_means_again = volmeans.vol_means(another_data_fname)
85 | volmeans.detect_outliers_fixed(more_means_again)
86 | ```
87 |
--------------------------------------------------------------------------------
/validate_against_scipy.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | orphan: true
16 | ---
17 |
18 | # Validating the GLM against scipy
19 |
20 | ```{python}
21 | import numpy as np
22 | import numpy.linalg as npl
23 | import matplotlib.pyplot as plt
24 | # Print array values to 4 decimal places
25 | np.set_printoptions(precision=4)
26 | import scipy.stats as sps
27 | ```
28 |
29 | Make some random data:
30 |
31 | ```{python}
32 | rng = np.random.default_rng()
33 | # Make a fake regressor and data.
34 | n = 20
35 | x = rng.normal(10, 2, size=n)
36 | y = rng.normal(20, 1, size=n)
37 | plt.plot(x, y, '+')
38 | ```
39 |
40 | Do a simple linear regression with the GLM:
41 |
42 | $$
43 | \newcommand{\yvec}{\vec{y}}
44 | \newcommand{\xvec}{\vec{x}}
45 | \newcommand{\evec}{\vec{\varepsilon}}
46 | \newcommand{Xmat}{\boldsymbol X}
47 | \newcommand{\bvec}{\vec{\beta}}
48 | \newcommand{\bhat}{\hat{\bvec}}
49 | \newcommand{\yhat}{\hat{\yvec}}
50 | \newcommand{\ehat}{\hat{\evec}}
51 | \newcommand{\cvec}{\vec{c}}
52 | \newcommand{\rank}{\textrm{rank}}
53 | $$
54 |
55 | $$
56 | y_i = c + b x_i + e_i \implies \\
57 | \yvec = \Xmat \bvec + \evec
58 | $$
59 |
60 | ```{python}
61 | X = np.ones((n, 2))
62 | X[:, 1] = x
63 | B = npl.pinv(X) @ y
64 | B
65 | ```
66 |
67 | ```{python}
68 | E = y - X @ B
69 | ```
70 |
71 | Build the t statistic:
72 |
73 | $$
74 | \newcommand{\cvec}{\vec{c}}
75 | $$
76 |
77 | $$
78 | \hat\sigma^2 = \frac{1}{n - \rank(\Xmat)} \sum e_i^2 \\
79 | t = \frac{\cvec^T \bhat}
80 | {\sqrt{\hat{\sigma}^2 \cvec^T (\Xmat^T \Xmat)^+ \cvec}}
81 | $$
82 |
83 | ```{python}
84 | # Contrast vector selects slope parameter
85 | c = np.array([0, 1])
86 | df = n - npl.matrix_rank(X)
87 | sigma_2 = np.sum(E ** 2) / df
88 | c_b_cov = c @ npl.pinv(X.T @ X) @ c
89 | t = c @ B / np.sqrt(sigma_2 * c_b_cov)
90 | t
91 | ```
92 |
93 | Test the t statistic against a t distribution with `df` degrees of freedom:
94 |
95 | ```{python}
96 | t_dist = sps.t(df=df)
97 | # One-tailed p value, testing whether the slope is greater than zero.
98 | p_value = 1 - t_dist.cdf(t)
99 | p_value
100 | ```
101 |
102 | Now do the same test with `scipy.stats.linregress`:
103 |
104 | ```{python}
105 | res = sps.linregress(x, y, alternative='greater')
106 | res
107 | ```
108 |
109 | ```{python}
110 | # This is the same as the manual GLM fit
111 | assert np.allclose(B, [res.intercept, res.slope])
112 | # Scipy's p value is also one-tailed, given alternative='greater' above.
113 | assert np.allclose(p_value, res.pvalue)
114 | ```
115 |
116 | Now do the same thing with the two-sample t-test.
117 |
118 | ```{python}
119 | X2 = np.zeros((n, 2))
120 | X2[:10, 0] = 1
121 | X2[10:, 1] = 1
122 | X2
123 | ```
124 |
125 | ```{python}
126 | B2 = npl.pinv(X2) @ y
127 | E2 = y - X2 @ B2
128 | c2 = np.array([1, -1])
129 | df = n - npl.matrix_rank(X2)
130 | df
131 | ```
132 |
133 | ```{python}
134 | sigma_2 = np.sum(E2 ** 2) / df
135 | c_b_cov = c2 @ npl.pinv(X2.T @ X2) @ c2
136 | t = c2 @ B2 / np.sqrt(sigma_2 * c_b_cov)
137 | t
138 | ```
139 |
140 | ```{python}
141 | t_dist = sps.t(df=df)
142 | # One-tailed p value
143 | p_value_2 = 1 - t_dist.cdf(t)
144 | p_value_2
145 | ```
146 |
147 | The same thing using `scipy.stats.ttest_ind` for t test between two
148 | independent samples:
149 |
150 | ```{python}
151 | t_res = sps.ttest_ind(y[:10], y[10:], alternative='greater')
152 | t_res
153 | ```
154 |
155 | ```{python}
156 | assert np.isclose(t, t_res.statistic)
157 | assert np.isclose(p_value_2, t_res.pvalue)
158 | ```
159 |
--------------------------------------------------------------------------------
/volmeans.py:
--------------------------------------------------------------------------------
1 | """ File to calculate volume means, detect outliers
2 |
3 | When run as a script, print out values for given Nipraxis file.
4 | """
5 |
6 | # New import of sys.
7 | import sys
8 |
9 | import numpy as np
10 |
11 | import nibabel as nib
12 |
13 | import nipraxis
14 |
15 | def vol_means(image_fname):
16 | """ Calculate volume means from 4D image `image_fname`
17 | """
18 | img = nib.load(image_fname)
19 | data = img.get_fdata()
20 | means = []
21 | for i in range(data.shape[-1]):
22 | vol = data[..., i]
23 | means.append(np.mean(vol))
24 | return np.array(means)
25 |
26 |
27 | def detect_outliers_fixed(some_values, n_stds=2):
28 |     """ Return indices of values more than `n_stds` deviations from mean.
29 |     """
28 | overall_mean = np.mean(some_values)
29 | overall_std = np.std(some_values)
30 | thresh = overall_std * n_stds
31 | is_outlier = np.abs(some_values - overall_mean) > thresh
32 | return np.where(is_outlier)[0]
33 |
34 |
35 | if __name__ == '__main__':
36 | # New: using sys to get first argument.
37 | nipraxis_fname = sys.argv[1]
38 | data_fname = nipraxis.fetch_file(nipraxis_fname)
39 | means = vol_means(data_fname)
40 | print(detect_outliers_fixed(means))
41 |
--------------------------------------------------------------------------------
/voxels_by_time.Rmd:
--------------------------------------------------------------------------------
1 | ---
2 | jupyter:
3 | jupytext:
4 | notebook_metadata_filter: all,-language_info
5 | split_at_heading: true
6 | text_representation:
7 | extension: .Rmd
8 | format_name: rmarkdown
9 | format_version: '1.2'
10 | jupytext_version: 1.13.7
11 | kernelspec:
12 | display_name: Python 3
13 | language: python
14 | name: python3
15 | ---
16 |
17 | # Voxels and time
18 |
19 | See also: [Reshaping, 4D to 2D](reshape_and_4d.Rmd).
20 |
21 | ```{python}
22 | # Import common modules
23 | import numpy as np # the Python array package
24 | import matplotlib.pyplot as plt # the Python plotting package
25 | # Display array values to 4 digits of precision
26 | np.set_printoptions(precision=4, suppress=True)
27 | ```
28 |
29 | In this example, we calculate the mean across all voxels at each
30 | time point.
31 |
32 | We’re working on `ds114_sub009_t2r1.nii`. This is a 4D FMRI image.
33 |
34 | ```{python}
35 | import nipraxis
36 | bold_fname = nipraxis.fetch_file('ds114_sub009_t2r1.nii')
37 | bold_fname
38 | ```
39 |
40 | ```{python}
41 | import nibabel as nib
42 | img = nib.load(bold_fname)
43 | img.shape
44 | ```
45 |
46 | We want to calculate the mean across all voxels. Remember that a
47 | voxel is a pixel with volume, and refers to a position in space. Therefore we
48 | have this number of voxels in each volume:
49 |
50 | ```{python}
51 | n_voxels = np.prod(img.shape[:-1])
52 | n_voxels
53 | ```
54 |
55 | To calculate the mean across all voxels, for a single volume, we can do this:
56 |
57 | ```{python}
58 | data = img.get_fdata()
59 | first_vol = data[..., 0]
60 | np.mean(first_vol)
61 | ```
62 |
63 | To calculate the mean across voxels, we could loop across all
64 | volumes, and calculate the mean for each volume:
65 |
66 | ```{python}
67 | n_trs = img.shape[-1]
68 | means = []
69 | for vol_no in range(n_trs):
70 | vol = data[..., vol_no]
71 | means.append(np.mean(vol))
72 |
73 | plt.plot(means)
74 | ```
75 |
76 | We could also flatten the three voxel axes out into one long voxel axis, using
77 | reshape – see: [Reshaping, 4D to 2D](reshape_and_4d.Rmd). Then we can use the `axis` parameter to
78 | the `np.mean` function to calculate the mean across voxels, in one
79 | shot. This is “vectorizing”, where we take an operation that needed a loop,
80 | and use array operations to do the work instead:
81 |
82 | ```{python}
83 | voxels_by_time = np.reshape(data, (n_voxels, n_trs))
84 | means_vectorized = np.mean(voxels_by_time, axis=0)
85 | # The answer is the same, allowing for tiny variations.
86 | assert np.allclose(means_vectorized, means)
87 | ```
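As an aside, we do not have to compute `n_voxels` ourselves: NumPy can infer one axis length if we pass `-1` to `reshape`. This is standard NumPy behavior, shown here on a small made-up array rather than the course data:

```{python}
import numpy as np

# A small made-up 4D array: 2 x 3 x 4 voxels over 5 time points.
small_data = np.arange(2 * 3 * 4 * 5).reshape((2, 3, 4, 5)).astype(float)
# -1 asks NumPy to infer the length of the voxel axis.
small_2d = np.reshape(small_data, (-1, small_data.shape[-1]))
small_2d.shape
```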
88 |
--------------------------------------------------------------------------------