├── .Rhistory
├── .editorconfig
├── .github
│   └── workflows
│       ├── README.md
│       ├── pr-close-signal.yaml
│       ├── pr-comment.yaml
│       ├── pr-post-remove-branch.yaml
│       ├── pr-preflight.yaml
│       ├── pr-receive.yaml
│       ├── sandpaper-main.yaml
│       ├── sandpaper-version.txt
│       ├── update-cache.yaml
│       └── update-workflows.yaml
├── .gitignore
├── .zenodo.json
├── AUTHORS
├── CITATION
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE.md
├── README.md
├── config.yaml
├── episodes
│   ├── 01-introduction.md
│   ├── 02-image-basics.md
│   ├── 03-skimage-images.md
│   ├── 04-drawing.md
│   ├── 05-creating-histograms.md
│   ├── 06-blurring.md
│   ├── 07-thresholding.md
│   ├── 08-connected-components.md
│   ├── 09-challenges.md
│   ├── data
│   │   ├── beads.jpg
│   │   ├── board.jpg
│   │   ├── centers.txt
│   │   ├── chair.jpg
│   │   ├── colonies-01.tif
│   │   ├── colonies-02.tif
│   │   ├── colonies-03.tif
│   │   ├── eight.tif
│   │   ├── gaussian-original.png
│   │   ├── letterA.tif
│   │   ├── maize-root-cluster.jpg
│   │   ├── maize-roots-grayscale.jpg
│   │   ├── maize-seedlings.tif
│   │   ├── plant-seedling.jpg
│   │   ├── remote-control.jpg
│   │   ├── shapes-01.jpg
│   │   ├── shapes-02.jpg
│   │   ├── sudoku.png
│   │   ├── tree.jpg
│   │   ├── trial-016.jpg
│   │   ├── trial-020.jpg
│   │   ├── trial-216.jpg
│   │   ├── trial-293.jpg
│   │   ├── wellplate-01.jpg
│   │   └── wellplate-02.tif
│   ├── fig
│   │   ├── 3D_petri_after_blurring.png
│   │   ├── 3D_petri_before_blurring.png
│   │   ├── Gaussian_2D.png
│   │   ├── Normal_Distribution_PDF.svg
│   │   ├── beads-canny-ui.png
│   │   ├── beads-out.png
│   │   ├── black-and-white-edge-pixels.jpg
│   │   ├── black-and-white-gradient.png
│   │   ├── black-and-white.jpg
│   │   ├── blur-demo.gif
│   │   ├── board-coordinates.jpg
│   │   ├── board-final.jpg
│   │   ├── cartesian-coordinates.png
│   │   ├── cat-corner-blue.png
│   │   ├── cat-eye-pixels.jpg
│   │   ├── cat.jpg
│   │   ├── chair-layers-rgb.png
│   │   ├── chair-original.jpg
│   │   ├── checkerboard-blue-channel.png
│   │   ├── checkerboard-green-channel.png
│   │   ├── checkerboard-red-channel.png
│   │   ├── checkerboard.png
│   │   ├── colonies-01-gray.png
│   │   ├── colonies-01-histogram.png
│   │   ├── colonies-01-mask.png
│   │   ├── colonies-01-summary.png
│   │   ├── colonies-01.jpg
│   │   ├── colonies-02-summary.png
│   │   ├── colonies-02.jpg
│   │   ├── colonies-03-summary.png
│   │   ├── colonies-03.jpg
│   │   ├── colonies01.png
│   │   ├── colony-mask.png
│   │   ├── colour-table.png
│   │   ├── combination.png
│   │   ├── drawing-practice.jpg
│   │   ├── eight.png
│   │   ├── five.png
│   │   ├── four-maize-roots-binary-improved.jpg
│   │   ├── four-maize-roots-binary.jpg
│   │   ├── four-maize-roots.jpg
│   │   ├── gaussian-blurred.png
│   │   ├── gaussian-kernel.png
│   │   ├── grayscale.png
│   │   ├── image-coordinates.png
│   │   ├── jupyter_overview.png
│   │   ├── left-hand-coordinates.png
│   │   ├── maize-root-cluster-histogram.png
│   │   ├── maize-root-cluster-mask.png
│   │   ├── maize-root-cluster-selected.png
│   │   ├── maize-root-cluster-threshold.jpg
│   │   ├── maize-roots-threshold.png
│   │   ├── maize-seedling-enlarged.jpg
│   │   ├── maize-seedling-original.jpg
│   │   ├── maize-seedlings-mask.png
│   │   ├── maize-seedlings-masked.jpg
│   │   ├── maize-seedlings.jpg
│   │   ├── petri-blurred-intensities-plot.png
│   │   ├── petri-dish.png
│   │   ├── petri-original-intensities-plot.png
│   │   ├── petri-selected-pixels-marker.png
│   │   ├── plant-seedling-colour-histogram.png
│   │   ├── plant-seedling-grayscale-histogram-mask.png
│   │   ├── plant-seedling-grayscale-histogram.png
│   │   ├── plant-seedling-grayscale.png
│   │   ├── quality-histogram.jpg
│   │   ├── quality-jpg.jpg
│   │   ├── quality-original.jpg
│   │   ├── quality-tif.jpg
│   │   ├── rectangle-gaussian-blurred.png
│   │   ├── remote-control-masked.jpg
│   │   ├── shapes-01-areas-histogram.png
│   │   ├── shapes-01-canny-edge-output.png
│   │   ├── shapes-01-canny-edges.png
│   │   ├── shapes-01-canny-track-edges.png
│   │   ├── shapes-01-cca-detail.png
│   │   ├── shapes-01-filtered-objects.png
│   │   ├── shapes-01-grayscale.png
│   │   ├── shapes-01-histogram.png
│   │   ├── shapes-01-labeled.png
│   │   ├── shapes-01-mask.png
│   │   ├── shapes-01-objects-coloured-by-area.png
│   │   ├── shapes-01-selected.png
│   │   ├── shapes-02-histogram.png
│   │   ├── shapes-02-mask.png
│   │   ├── shapes-02-selected.png
│   │   ├── source
│   │   │   └── 06-blurring
│   │   │       └── create_blur_animation.py
│   │   ├── sudoku-gray.png
│   │   ├── three-colours.png
│   │   ├── wellplate-01-masked.jpg
│   │   ├── wellplate-02-histogram.png
│   │   ├── wellplate-02-masked.jpg
│   │   ├── wellplate-02.jpg
│   │   └── zero.png
│   └── files
│       ├── cheatsheet.html
│       ├── cheatsheet.pdf
│       └── environment.yml
├── image-processing.Rproj
├── index.md
├── instructors
│   └── instructor-notes.md
├── learners
│   ├── discuss.md
│   ├── edge-detection.md
│   ├── prereqs.md
│   ├── reference.md
│   └── setup.md
├── profiles
│   └── learner-profiles.md
└── site
    └── README.md
/.Rhistory:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/.Rhistory
--------------------------------------------------------------------------------
/.editorconfig:
--------------------------------------------------------------------------------
1 | root = true
2 |
3 | [*]
4 | charset = utf-8
5 | insert_final_newline = true
6 | trim_trailing_whitespace = true
7 |
8 | [*.md]
9 | indent_size = 2
10 | indent_style = space
11 | max_line_length = 100 # Please keep this in sync with bin/lesson_check.py!
12 | trim_trailing_whitespace = false # keep trailing spaces in markdown - 2+ spaces are translated to a hard break (<br/>)
13 |
14 | [*.r]
15 | max_line_length = 80
16 |
17 | [*.py]
18 | indent_size = 4
19 | indent_style = space
20 | max_line_length = 79
21 |
22 | [*.sh]
23 | end_of_line = lf
24 |
25 | [Makefile]
26 | indent_style = tab
27 |
--------------------------------------------------------------------------------
/.github/workflows/README.md:
--------------------------------------------------------------------------------
1 | # Carpentries Workflows
2 |
3 | This directory contains workflows to be used for Lessons using the {sandpaper}
4 | lesson infrastructure. Two of these workflows require R (`sandpaper-main.yaml`
5 | and `pr-receive.yaml`) and the rest are bots to handle pull request management.
6 |
7 | These workflows will likely change as {sandpaper} evolves, so it is important to
8 | keep them up-to-date. To do this in your lesson you can do the following in your
9 | R console:
10 |
11 | ```r
12 | # Install/Update sandpaper
13 | options(repos = c(carpentries = "https://carpentries.r-universe.dev/",
14 | CRAN = "https://cloud.r-project.org"))
15 | install.packages("sandpaper")
16 |
17 | # update the workflows in your lesson
18 | library("sandpaper")
19 | update_github_workflows()
20 | ```
21 |
22 | Inside this folder, you will find a file called `sandpaper-version.txt`, which
23 | will contain a version number for sandpaper. This will be used in the future to
24 | alert you if a workflow update is needed.
25 |
26 | What follows are the descriptions of the workflow files:
27 |
28 | ## Deployment
29 |
30 | ### 01 Build and Deploy (sandpaper-main.yaml)
31 |
32 | This is the main driver that will only act on the main branch of the repository.
33 | This workflow does the following:
34 |
35 | 1. checks out the lesson
36 | 2. provisions the following resources
37 | - R
38 | - pandoc
39 | - lesson infrastructure (stored in a cache)
40 | - lesson dependencies if needed (stored in a cache)
41 | 3. builds the lesson via `sandpaper:::ci_deploy()`
42 |
43 | #### Caching
44 |
45 | This workflow has two caches; one cache is for the lesson infrastructure and
46 | the other is for the lesson dependencies if the lesson contains rendered
47 | content. These caches are invalidated by new versions of the infrastructure and
48 | the `renv.lock` file, respectively. If there is a problem with the cache,
49 | manual invalidation is necessary. You will need maintainer access to the repository,
50 | and you can either go to the actions tab and [click on the caches button to find
51 | and invalidate the failing cache](https://github.blog/changelog/2022-10-20-manage-caches-in-your-actions-workflows-from-web-interface/)
52 | or by setting the `CACHE_VERSION` secret to the current date (which will
53 | invalidate all of the caches).
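If you prefer the command line, setting the `CACHE_VERSION` secret to the current date can be sketched with the GitHub CLI (this assumes `gh` is installed and authenticated with maintainer access; the `gh secret set` call is commented out so the sketch is safe to run as-is):

```shell
# Invalidate all workflow caches by bumping CACHE_VERSION to today's date.
STAMP=$(date +%Y-%m-%d)
# gh secret set CACHE_VERSION --body "$STAMP"   # uncomment to apply
echo "CACHE_VERSION would be set to $STAMP"
```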
54 |
55 | ## Updates
56 |
57 | ### Setup Information
58 |
59 | These workflows run on a schedule and at the maintainer's request. Because they
60 | create pull requests that update workflows/require the downstream actions to run,
61 | they need a special repository/organization secret token called
62 | `SANDPAPER_WORKFLOW` and it must have the `public_repo` and `workflow` scopes.
63 |
64 | This can be an individual user token, OR it can be a trusted bot account. If you
65 | have a repository in one of the official Carpentries accounts, then you do not
66 | need to worry about this token being present because the Carpentries Core Team
67 | will take care of supplying this token.
68 |
69 | If you want to use your personal account: you can go to
70 |
71 | to create a token. Once you have created your token, you should copy it to your
72 | clipboard and then go to your repository's settings > secrets > actions and
73 | create or edit the `SANDPAPER_WORKFLOW` secret, pasting in the generated token.
74 |
75 | If you do not specify your token correctly, the runs will not fail and they will
76 | give you instructions to provide the token for your repository.
77 |
78 | ### 02 Maintain: Update Workflow Files (update-workflows.yaml)
79 |
80 | The {sandpaper} repository was designed to do as much as possible to separate
81 | the tools from the content. For local builds, this is absolutely true, but
82 | there is a minor issue when it comes to workflow files: they must live inside
83 | the repository.
84 |
85 | This workflow ensures that the workflow files are up-to-date. It works by
86 | downloading the update-workflows.sh script from GitHub and running it. The
87 | script will do the following:
88 |
89 | 1. check the recorded version of sandpaper against the current version on GitHub
90 | 2. update the files if there is a difference in versions
91 |
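The version comparison in step 1 can be sketched as follows. This is an illustrative shell sketch, not the actual update-workflows.sh; the "current" version string here is hypothetical:

```shell
# Compare the recorded sandpaper version against a (hypothetical) latest
# release and decide whether the workflow files need updating.
recorded="0.16.12"   # e.g. the contents of .github/workflows/sandpaper-version.txt
current="0.16.13"    # hypothetical current version fetched from GitHub
if [ "$recorded" != "$current" ]; then
  action="update"
else
  action="skip"
fi
echo "workflow files: $action"
```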
92 | After the files are updated, if there are any changes, they are pushed to a
93 | branch called `update/workflows` and a pull request is created. Maintainers are
94 | encouraged to review the changes and accept the pull request if the outputs
95 | are okay.
96 |
97 | This update is run weekly or on demand.
98 |
99 | ### 03 Maintain: Update Package Cache (update-cache.yaml)
100 |
101 | For lessons that have generated content, we use {renv} to ensure that the output
102 | is stable. This is controlled by a single lockfile which documents the packages
103 | needed for the lesson and the version numbers. This workflow is skipped in
104 | lessons that do not have generated content.
105 |
106 | Because the lessons need to remain current with the package ecosystem, it's a
107 | good idea to make sure these packages can be updated periodically. The
108 | update cache workflow will do this by checking for updates, applying them in a
109 | branch called `update/packages` and creating a pull request with _only the
110 | lockfile changed_.
111 |
112 | From here, the markdown documents will be rebuilt and you can inspect what has
113 | changed based on how the packages have updated.
114 |
115 | ## Pull Request and Review Management
116 |
117 | Because our lessons execute code, pull requests are a security risk for any
118 | lesson and thus have security measures associated with them. **Do not merge any
119 | pull request that does not pass the checks or has not been commented on by a bot.**
120 |
121 | This series of workflows all go together and are described in the following
122 | diagram and the below sections:
123 |
124 | 
125 |
126 | ### Pre Flight Pull Request Validation (pr-preflight.yaml)
127 |
128 | This workflow runs every time a pull request is created and its purpose is to
129 | validate that the pull request is okay to run. This means the following things:
130 |
131 | 1. The pull request does not contain modified workflow files
132 | 2. If the pull request contains modified workflow files, it does not contain
133 | modified content files (such as a situation where @carpentries-bot will
134 | make an automated pull request)
135 | 3. The pull request does not contain an invalid commit hash (e.g. from a fork
136 | that was made before a lesson was transitioned from styles to use the
137 | workbench).
138 |
139 | Once the checks are finished, a comment is issued to the pull request, which
140 | will allow maintainers to determine if it is safe to run the
141 | "Receive Pull Request" workflow from new contributors.
142 |
143 | ### Receive Pull Request (pr-receive.yaml)
144 |
145 | **Note of caution:** This workflow runs arbitrary code by anyone who creates a
146 | pull request. GitHub has safeguarded the token used in this workflow to have no
147 | privileges in the repository, but we have taken precautions to protect against
148 | spoofing.
149 |
150 | This workflow is triggered with every push to a pull request. If this workflow
151 | is already running and a new push is sent to the pull request, the workflow
152 | running from the previous push will be cancelled and a new workflow run will be
153 | started.
154 |
155 | The first step of this workflow is to check if it is valid (e.g. that no
156 | workflow files have been modified). If there are workflow files that have been
157 | modified, a comment is made that indicates that the workflow is not run. If
158 | both a workflow file and lesson content are modified, an error will occur.
159 |
160 | The second step (if valid) is to build the generated content from the pull
161 | request. This builds the content and uploads three artifacts:
162 |
163 | 1. The pull request number (pr)
164 | 2. A summary of changes after the rendering process (diff)
165 | 3. The rendered files (build)
166 |
167 | Because this workflow builds generated content, it follows the same general
168 | process as the `sandpaper-main` workflow with the same caching mechanisms.
169 |
170 | The artifacts produced are used by the next workflow.
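For example, downstream jobs read the pull request number back out of the `pr` artifact. A minimal sketch of that step, using a stand-in file instead of a real downloaded artifact (the file name `NR` matches the one this workflow writes):

```shell
# The `pr` artifact is a zip containing a single file holding the PR number.
# Simulate the unzipped artifact with a stand-in file, then read it back the
# same way the comment workflow does.
printf '42' > NR      # stand-in for the file extracted from pr.zip
NUM=$(<./NR)
echo "PR number: $NUM"
```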
171 |
172 | ### Comment on Pull Request (pr-comment.yaml)
173 |
174 | This workflow is triggered if the `pr-receive.yaml` workflow is successful.
175 | The steps in this workflow are:
176 |
177 | 1. Test if the workflow is valid and comment the validity of the workflow to the
178 | pull request.
179 | 2. If it is valid: create an orphan branch with two commits: the current state
180 | of the repository and the proposed changes.
181 | 3. If it is valid: update the pull request comment with the summary of changes
182 |
183 | Importantly: if the pull request is invalid, the branch is not created so any
184 | malicious code is not published.
185 |
186 | From here, the maintainer can request changes from the author and eventually
187 | either merge or reject the PR. When this happens, if the PR was valid, the
188 | preview branch needs to be deleted.
189 |
190 | ### Send Close PR Signal (pr-close-signal.yaml)
191 |
192 | Triggered any time a pull request is closed. This emits an artifact that is the
193 | pull request number for the next action.
194 |
195 | ### Remove Pull Request Branch (pr-post-remove-branch.yaml)
196 |
197 | Triggered by `pr-close-signal.yaml`. This removes the temporary branch associated with
198 | the pull request (if it was created).
199 |
--------------------------------------------------------------------------------
/.github/workflows/pr-close-signal.yaml:
--------------------------------------------------------------------------------
1 | name: "Bot: Send Close Pull Request Signal"
2 |
3 | on:
4 | pull_request:
5 | types:
6 | [closed]
7 |
8 | jobs:
9 | send-close-signal:
10 | name: "Send closing signal"
11 | runs-on: ubuntu-22.04
12 | if: ${{ github.event.action == 'closed' }}
13 | steps:
14 | - name: "Create PRtifact"
15 | run: |
16 | mkdir -p ./pr
17 | printf ${{ github.event.number }} > ./pr/NUM
18 | - name: Upload Diff
19 | uses: actions/upload-artifact@v4
20 | with:
21 | name: pr
22 | path: ./pr
23 |
--------------------------------------------------------------------------------
/.github/workflows/pr-comment.yaml:
--------------------------------------------------------------------------------
1 | name: "Bot: Comment on the Pull Request"
2 |
3 | # read-write repo token
4 | # access to secrets
5 | on:
6 | workflow_run:
7 | workflows: ["Receive Pull Request"]
8 | types:
9 | - completed
10 |
11 | concurrency:
12 | group: pr-${{ github.event.workflow_run.pull_requests[0].number }}
13 | cancel-in-progress: true
14 |
15 |
16 | jobs:
17 | # Pull requests are valid if:
18 | # - they match the sha of the workflow run head commit
19 | # - they are open
20 | # - no .github files were committed
21 | test-pr:
22 | name: "Test if pull request is valid"
23 | runs-on: ubuntu-22.04
24 | if: >
25 | github.event.workflow_run.event == 'pull_request' &&
26 | github.event.workflow_run.conclusion == 'success'
27 | outputs:
28 | is_valid: ${{ steps.check-pr.outputs.VALID }}
29 | payload: ${{ steps.check-pr.outputs.payload }}
30 | number: ${{ steps.get-pr.outputs.NUM }}
31 | msg: ${{ steps.check-pr.outputs.MSG }}
32 | steps:
33 | - name: 'Download PR artifact'
34 | id: dl
35 | uses: carpentries/actions/download-workflow-artifact@main
36 | with:
37 | run: ${{ github.event.workflow_run.id }}
38 | name: 'pr'
39 |
40 | - name: "Get PR Number"
41 | if: ${{ steps.dl.outputs.success == 'true' }}
42 | id: get-pr
43 | run: |
44 | unzip pr.zip
45 | echo "NUM=$(<./NR)" >> $GITHUB_OUTPUT
46 |
47 | - name: "Fail if PR number was not present"
48 | id: bad-pr
49 | if: ${{ steps.dl.outputs.success != 'true' }}
50 | run: |
51 | echo '::error::A pull request number was not recorded. The pull request that triggered this workflow is likely malicious.'
52 | exit 1
53 | - name: "Get Invalid Hashes File"
54 | id: hash
55 | run: |
56 | echo "json<<EOF
57 | $(curl -sL https://files.carpentries.org/invalid-hashes.json)
58 | EOF" >> $GITHUB_OUTPUT
59 | - name: "Check PR"
60 | id: check-pr
61 | if: ${{ steps.dl.outputs.success == 'true' }}
62 | uses: carpentries/actions/check-valid-pr@main
63 | with:
64 | pr: ${{ steps.get-pr.outputs.NUM }}
65 | sha: ${{ github.event.workflow_run.head_sha }}
66 | headroom: 3 # if it's within the last three commits, we can keep going, because it's likely rapid-fire
67 | invalid: ${{ fromJSON(steps.hash.outputs.json)[github.repository] }}
68 | fail_on_error: true
69 |
70 | # Create an orphan branch on this repository with two commits
71 | # - the current HEAD of the md-outputs branch
72 | # - the output from running the current HEAD of the pull request through
73 | # the md generator
74 | create-branch:
75 | name: "Create Git Branch"
76 | needs: test-pr
77 | runs-on: ubuntu-22.04
78 | if: ${{ needs.test-pr.outputs.is_valid == 'true' }}
79 | env:
80 | NR: ${{ needs.test-pr.outputs.number }}
81 | permissions:
82 | contents: write
83 | steps:
84 | - name: 'Checkout md outputs'
85 | uses: actions/checkout@v4
86 | with:
87 | ref: md-outputs
88 | path: built
89 | fetch-depth: 1
90 |
91 | - name: 'Download built markdown'
92 | id: dl
93 | uses: carpentries/actions/download-workflow-artifact@main
94 | with:
95 | run: ${{ github.event.workflow_run.id }}
96 | name: 'built'
97 |
98 | - if: ${{ steps.dl.outputs.success == 'true' }}
99 | run: unzip built.zip
100 |
101 | - name: "Create orphan and push"
102 | if: ${{ steps.dl.outputs.success == 'true' }}
103 | run: |
104 | cd built/
105 | git config --local user.email "actions@github.com"
106 | git config --local user.name "GitHub Actions"
107 | CURR_HEAD=$(git rev-parse HEAD)
108 | git checkout --orphan md-outputs-PR-${NR}
109 | git add -A
110 | git commit -m "source commit: ${CURR_HEAD}"
111 | ls -A | grep -v '^.git$' | xargs -I _ rm -r '_'
112 | cd ..
113 | unzip -o -d built built.zip
114 | cd built
115 | git add -A
116 | git commit --allow-empty -m "differences for PR #${NR}"
117 | git push -u --force --set-upstream origin md-outputs-PR-${NR}
118 |
119 | # Comment on the Pull Request with a link to the branch and the diff
120 | comment-pr:
121 | name: "Comment on Pull Request"
122 | needs: [test-pr, create-branch]
123 | runs-on: ubuntu-22.04
124 | if: ${{ needs.test-pr.outputs.is_valid == 'true' }}
125 | env:
126 | NR: ${{ needs.test-pr.outputs.number }}
127 | permissions:
128 | pull-requests: write
129 | steps:
130 | - name: 'Download comment artifact'
131 | id: dl
132 | uses: carpentries/actions/download-workflow-artifact@main
133 | with:
134 | run: ${{ github.event.workflow_run.id }}
135 | name: 'diff'
136 |
137 | - if: ${{ steps.dl.outputs.success == 'true' }}
138 | run: unzip ${{ github.workspace }}/diff.zip
139 |
140 | - name: "Comment on PR"
141 | id: comment-diff
142 | if: ${{ steps.dl.outputs.success == 'true' }}
143 | uses: carpentries/actions/comment-diff@main
144 | with:
145 | pr: ${{ env.NR }}
146 | path: ${{ github.workspace }}/diff.md
147 |
148 | # Comment if the PR is open and matches the SHA, but the workflow files have
149 | # changed
150 | comment-changed-workflow:
151 | name: "Comment if workflow files have changed"
152 | needs: test-pr
153 | runs-on: ubuntu-22.04
154 | if: ${{ always() && needs.test-pr.outputs.is_valid == 'false' }}
155 | env:
156 | NR: ${{ github.event.workflow_run.pull_requests[0].number }}
157 | body: ${{ needs.test-pr.outputs.msg }}
158 | permissions:
159 | pull-requests: write
160 | steps:
161 | - name: 'Check for spoofing'
162 | id: dl
163 | uses: carpentries/actions/download-workflow-artifact@main
164 | with:
165 | run: ${{ github.event.workflow_run.id }}
166 | name: 'built'
167 |
168 | - name: 'Alert if spoofed'
169 | id: spoof
170 | if: ${{ steps.dl.outputs.success == 'true' }}
171 | run: |
172 | echo 'body<<EOF' >> $GITHUB_ENV
173 | echo '' >> $GITHUB_ENV
174 | echo '## :x: DANGER :x:' >> $GITHUB_ENV
175 | echo 'This pull request has modified workflows that created output. Close this now.' >> $GITHUB_ENV
176 | echo '' >> $GITHUB_ENV
177 | echo 'EOF' >> $GITHUB_ENV
178 |
179 | - name: "Comment on PR"
180 | id: comment-diff
181 | uses: carpentries/actions/comment-diff@main
182 | with:
183 | pr: ${{ env.NR }}
184 | body: ${{ env.body }}
185 |
--------------------------------------------------------------------------------
/.github/workflows/pr-post-remove-branch.yaml:
--------------------------------------------------------------------------------
1 | name: "Bot: Remove Temporary PR Branch"
2 |
3 | on:
4 | workflow_run:
5 | workflows: ["Bot: Send Close Pull Request Signal"]
6 | types:
7 | - completed
8 |
9 | jobs:
10 | delete:
11 | name: "Delete branch from Pull Request"
12 | runs-on: ubuntu-22.04
13 | if: >
14 | github.event.workflow_run.event == 'pull_request' &&
15 | github.event.workflow_run.conclusion == 'success'
16 | permissions:
17 | contents: write
18 | steps:
19 | - name: 'Download artifact'
20 | uses: carpentries/actions/download-workflow-artifact@main
21 | with:
22 | run: ${{ github.event.workflow_run.id }}
23 | name: pr
24 | - name: "Get PR Number"
25 | id: get-pr
26 | run: |
27 | unzip pr.zip
28 | echo "NUM=$(<./NUM)" >> $GITHUB_OUTPUT
29 | - name: 'Remove branch'
30 | uses: carpentries/actions/remove-branch@main
31 | with:
32 | pr: ${{ steps.get-pr.outputs.NUM }}
33 |
--------------------------------------------------------------------------------
/.github/workflows/pr-preflight.yaml:
--------------------------------------------------------------------------------
1 | name: "Pull Request Preflight Check"
2 |
3 | on:
4 | pull_request_target:
5 | branches:
6 | ["main"]
7 | types:
8 | ["opened", "synchronize", "reopened"]
9 |
10 | jobs:
11 | test-pr:
12 | name: "Test if pull request is valid"
13 | if: ${{ github.event.action != 'closed' }}
14 | runs-on: ubuntu-22.04
15 | outputs:
16 | is_valid: ${{ steps.check-pr.outputs.VALID }}
17 | permissions:
18 | pull-requests: write
19 | steps:
20 | - name: "Get Invalid Hashes File"
21 | id: hash
22 | run: |
23 | echo "json<<EOF
24 | $(curl -sL https://files.carpentries.org/invalid-hashes.json)
25 | EOF" >> $GITHUB_OUTPUT
26 | - name: "Check PR"
27 | id: check-pr
28 | uses: carpentries/actions/check-valid-pr@main
29 | with:
30 | pr: ${{ github.event.number }}
31 | invalid: ${{ fromJSON(steps.hash.outputs.json)[github.repository] }}
32 | fail_on_error: true
33 | - name: "Comment result of validation"
34 | id: comment-diff
35 | if: ${{ always() }}
36 | uses: carpentries/actions/comment-diff@main
37 | with:
38 | pr: ${{ github.event.number }}
39 | body: ${{ steps.check-pr.outputs.MSG }}
40 |
--------------------------------------------------------------------------------
/.github/workflows/pr-receive.yaml:
--------------------------------------------------------------------------------
1 | name: "Receive Pull Request"
2 |
3 | on:
4 | pull_request:
5 | types:
6 | [opened, synchronize, reopened]
7 |
8 | concurrency:
9 | group: ${{ github.ref }}
10 | cancel-in-progress: true
11 |
12 | jobs:
13 | test-pr:
14 | name: "Record PR number"
15 | if: ${{ github.event.action != 'closed' }}
16 | runs-on: ubuntu-22.04
17 | outputs:
18 | is_valid: ${{ steps.check-pr.outputs.VALID }}
19 | steps:
20 | - name: "Record PR number"
21 | id: record
22 | if: ${{ always() }}
23 | run: |
24 | echo ${{ github.event.number }} > ${{ github.workspace }}/NR # 2022-03-02: artifact name fixed to be NR
25 | - name: "Upload PR number"
26 | id: upload
27 | if: ${{ always() }}
28 | uses: actions/upload-artifact@v4
29 | with:
30 | name: pr
31 | path: ${{ github.workspace }}/NR
32 | - name: "Get Invalid Hashes File"
33 | id: hash
34 | run: |
35 | echo "json<<EOF
36 | $(curl -sL https://files.carpentries.org/invalid-hashes.json)
37 | EOF" >> $GITHUB_OUTPUT
38 | - name: "echo output"
39 | run: |
40 | echo "${{ steps.hash.outputs.json }}"
41 | - name: "Check PR"
42 | id: check-pr
43 | uses: carpentries/actions/check-valid-pr@main
44 | with:
45 | pr: ${{ github.event.number }}
46 | invalid: ${{ fromJSON(steps.hash.outputs.json)[github.repository] }}
47 |
48 | build-md-source:
49 | name: "Build markdown source files if valid"
50 | needs: test-pr
51 | runs-on: ubuntu-22.04
52 | if: ${{ needs.test-pr.outputs.is_valid == 'true' }}
53 | env:
54 | GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
55 | RENV_PATHS_ROOT: ~/.local/share/renv/
56 | CHIVE: ${{ github.workspace }}/site/chive
57 | PR: ${{ github.workspace }}/site/pr
58 | MD: ${{ github.workspace }}/site/built
59 | steps:
60 | - name: "Check Out Main Branch"
61 | uses: actions/checkout@v4
62 |
63 | - name: "Check Out Staging Branch"
64 | uses: actions/checkout@v4
65 | with:
66 | ref: md-outputs
67 | path: ${{ env.MD }}
68 |
69 | - name: "Set up R"
70 | uses: r-lib/actions/setup-r@v2
71 | with:
72 | use-public-rspm: true
73 | install-r: false
74 |
75 | - name: "Set up Pandoc"
76 | uses: r-lib/actions/setup-pandoc@v2
77 |
78 | - name: "Setup Lesson Engine"
79 | uses: carpentries/actions/setup-sandpaper@main
80 | with:
81 | cache-version: ${{ secrets.CACHE_VERSION }}
82 |
83 | - name: "Setup Package Cache"
84 | uses: carpentries/actions/setup-lesson-deps@main
85 | with:
86 | cache-version: ${{ secrets.CACHE_VERSION }}
87 |
88 | - name: "Validate and Build Markdown"
89 | id: build-site
90 | run: |
91 | sandpaper::package_cache_trigger(TRUE)
92 | sandpaper::validate_lesson(path = '${{ github.workspace }}')
93 | sandpaper:::build_markdown(path = '${{ github.workspace }}', quiet = FALSE)
94 | shell: Rscript {0}
95 |
96 | - name: "Generate Artifacts"
97 | id: generate-artifacts
98 | run: |
99 | sandpaper:::ci_bundle_pr_artifacts(
100 | repo = '${{ github.repository }}',
101 | pr_number = '${{ github.event.number }}',
102 | path_md = '${{ env.MD }}',
103 | path_pr = '${{ env.PR }}',
104 | path_archive = '${{ env.CHIVE }}',
105 | branch = 'md-outputs'
106 | )
107 | shell: Rscript {0}
108 |
109 | - name: "Upload PR"
110 | uses: actions/upload-artifact@v4
111 | with:
112 | name: pr
113 | path: ${{ env.PR }}
114 | overwrite: true
115 |
116 | - name: "Upload Diff"
117 | uses: actions/upload-artifact@v4
118 | with:
119 | name: diff
120 | path: ${{ env.CHIVE }}
121 | retention-days: 1
122 |
123 | - name: "Upload Build"
124 | uses: actions/upload-artifact@v4
125 | with:
126 | name: built
127 | path: ${{ env.MD }}
128 | retention-days: 1
129 |
130 | - name: "Teardown"
131 | run: sandpaper::reset_site()
132 | shell: Rscript {0}
133 |
--------------------------------------------------------------------------------
/.github/workflows/sandpaper-main.yaml:
--------------------------------------------------------------------------------
1 | name: "01 Build and Deploy Site"
2 |
3 | on:
4 | push:
5 | branches:
6 | - main
7 | - master
8 | schedule:
9 | - cron: '0 0 * * 2'
10 | workflow_dispatch:
11 | inputs:
12 | name:
13 | description: 'Who triggered this build?'
14 | required: true
15 | default: 'Maintainer (via GitHub)'
16 | reset:
17 | description: 'Reset cached markdown files'
18 | required: false
19 | default: false
20 | type: boolean
21 | jobs:
22 | full-build:
23 | name: "Build Full Site"
24 |
25 | # 2024-10-01: ubuntu-latest is now 24.04 and R is not installed by default in the runner image
26 | # pin to 22.04 for now
27 | runs-on: ubuntu-22.04
28 | permissions:
29 | checks: write
30 | contents: write
31 | pages: write
32 | env:
33 | GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
34 | RENV_PATHS_ROOT: ~/.local/share/renv/
35 | steps:
36 |
37 | - name: "Checkout Lesson"
38 | uses: actions/checkout@v4
39 |
40 | - name: "Set up R"
41 | uses: r-lib/actions/setup-r@v2
42 | with:
43 | use-public-rspm: true
44 | install-r: false
45 |
46 | - name: "Set up Pandoc"
47 | uses: r-lib/actions/setup-pandoc@v2
48 |
49 | - name: "Setup Lesson Engine"
50 | uses: carpentries/actions/setup-sandpaper@main
51 | with:
52 | cache-version: ${{ secrets.CACHE_VERSION }}
53 |
54 | - name: "Setup Package Cache"
55 | uses: carpentries/actions/setup-lesson-deps@main
56 | with:
57 | cache-version: ${{ secrets.CACHE_VERSION }}
58 |
59 | - name: "Deploy Site"
60 | run: |
61 | reset <- "${{ github.event.inputs.reset }}" == "true"
62 | sandpaper::package_cache_trigger(TRUE)
63 | sandpaper:::ci_deploy(reset = reset)
64 | shell: Rscript {0}
65 |
--------------------------------------------------------------------------------
/.github/workflows/sandpaper-version.txt:
--------------------------------------------------------------------------------
1 | 0.16.12
2 |
--------------------------------------------------------------------------------
/.github/workflows/update-cache.yaml:
--------------------------------------------------------------------------------
1 | name: "03 Maintain: Update Package Cache"
2 |
3 | on:
4 | workflow_dispatch:
5 | inputs:
6 | name:
7 | description: 'Who triggered this build (enter github username to tag yourself)?'
8 | required: true
9 | default: 'monthly run'
10 | schedule:
11 | # Run every tuesday
12 | - cron: '0 0 * * 2'
13 |
14 | jobs:
15 | preflight:
16 | name: "Preflight Check"
17 | runs-on: ubuntu-22.04
18 | outputs:
19 | ok: ${{ steps.check.outputs.ok }}
20 | steps:
21 | - id: check
22 | run: |
23 | if [[ ${{ github.event_name }} == 'workflow_dispatch' ]]; then
24 | echo "ok=true" >> $GITHUB_OUTPUT
25 | echo "Running on request"
26 | # using single brackets here to avoid 08 being interpreted as octal
27 | # https://github.com/carpentries/sandpaper/issues/250
28 | elif [ `date +%d` -le 7 ]; then
29 | # If the Tuesday lands in the first week of the month, run it
30 | echo "ok=true" >> $GITHUB_OUTPUT
31 | echo "Running on schedule"
32 | else
33 | echo "ok=false" >> $GITHUB_OUTPUT
34 | echo "Not Running Today"
35 | fi
36 |
37 | check_renv:
38 | name: "Check if We Need {renv}"
39 | runs-on: ubuntu-22.04
40 | needs: preflight
41 | if: ${{ needs.preflight.outputs.ok == 'true'}}
42 | outputs:
43 | needed: ${{ steps.renv.outputs.exists }}
44 | steps:
45 | - name: "Checkout Lesson"
46 | uses: actions/checkout@v4
47 | - id: renv
48 | run: |
49 | if [[ -d renv ]]; then
50 | echo "exists=true" >> $GITHUB_OUTPUT
51 | fi
52 |
53 | check_token:
54 | name: "Check SANDPAPER_WORKFLOW token"
55 | runs-on: ubuntu-22.04
56 | needs: check_renv
57 | if: ${{ needs.check_renv.outputs.needed == 'true' }}
58 | outputs:
59 | workflow: ${{ steps.validate.outputs.wf }}
60 | repo: ${{ steps.validate.outputs.repo }}
61 | steps:
62 | - name: "validate token"
63 | id: validate
64 | uses: carpentries/actions/check-valid-credentials@main
65 | with:
66 | token: ${{ secrets.SANDPAPER_WORKFLOW }}
67 |
68 | update_cache:
69 | name: "Update Package Cache"
70 | needs: check_token
71 | if: ${{ needs.check_token.outputs.repo == 'true' }}
72 | runs-on: ubuntu-22.04
73 | env:
74 | GITHUB_PAT: ${{ secrets.GITHUB_TOKEN }}
75 | RENV_PATHS_ROOT: ~/.local/share/renv/
76 | steps:
77 |
78 | - name: "Checkout Lesson"
79 | uses: actions/checkout@v4
80 |
81 | - name: "Set up R"
82 | uses: r-lib/actions/setup-r@v2
83 | with:
84 | use-public-rspm: true
85 | install-r: false
86 |
87 | - name: "Update {renv} deps and determine if a PR is needed"
88 | id: update
89 | uses: carpentries/actions/update-lockfile@main
90 | with:
91 | cache-version: ${{ secrets.CACHE_VERSION }}
92 |
93 | - name: Create Pull Request
94 | id: cpr
95 | if: ${{ steps.update.outputs.n > 0 }}
96 | uses: carpentries/create-pull-request@main
97 | with:
98 | token: ${{ secrets.SANDPAPER_WORKFLOW }}
99 | delete-branch: true
100 | branch: "update/packages"
101 | commit-message: "[actions] update ${{ steps.update.outputs.n }} packages"
102 | title: "Update ${{ steps.update.outputs.n }} packages"
103 | body: |
104 | :robot: This is an automated build
105 |
106 | This will update ${{ steps.update.outputs.n }} packages in your lesson with the following versions:
107 |
108 | ```
109 | ${{ steps.update.outputs.report }}
110 | ```
111 |
112 | :stopwatch: In a few minutes, a comment will appear that will show you how the output has changed based on these updates.
113 |
114 | If you want to inspect these changes locally, you can use the following code to check out a new branch:
115 |
116 | ```bash
117 | git fetch origin update/packages
118 | git checkout update/packages
119 | ```
120 |
121 | - Auto-generated by [create-pull-request][1] on ${{ steps.update.outputs.date }}
122 |
123 | [1]: https://github.com/carpentries/create-pull-request/tree/main
124 | labels: "type: package cache"
125 | draft: false
126 |
--------------------------------------------------------------------------------
/.github/workflows/update-workflows.yaml:
--------------------------------------------------------------------------------
1 | name: "02 Maintain: Update Workflow Files"
2 |
3 | on:
4 | workflow_dispatch:
5 | inputs:
6 | name:
7 | description: 'Who triggered this build? (Enter your GitHub username to tag yourself.)'
8 | required: true
9 | default: 'weekly run'
10 | clean:
11 | description: 'Workflow files/file extensions to clean (no wildcards, enter "" for none)'
12 | required: false
13 | default: '.yaml'
14 | schedule:
15 | # Run every Tuesday
16 | - cron: '0 0 * * 2'
17 |
18 | jobs:
19 | check_token:
20 | name: "Check SANDPAPER_WORKFLOW token"
21 | runs-on: ubuntu-22.04
22 | outputs:
23 | workflow: ${{ steps.validate.outputs.wf }}
24 | repo: ${{ steps.validate.outputs.repo }}
25 | steps:
26 | - name: "validate token"
27 | id: validate
28 | uses: carpentries/actions/check-valid-credentials@main
29 | with:
30 | token: ${{ secrets.SANDPAPER_WORKFLOW }}
31 |
32 | update_workflow:
33 | name: "Update Workflow"
34 | runs-on: ubuntu-22.04
35 | needs: check_token
36 | if: ${{ needs.check_token.outputs.workflow == 'true' }}
37 | steps:
38 | - name: "Checkout Repository"
39 | uses: actions/checkout@v4
40 |
41 | - name: Update Workflows
42 | id: update
43 | uses: carpentries/actions/update-workflows@main
44 | with:
45 | clean: ${{ github.event.inputs.clean }}
46 |
47 | - name: Create Pull Request
48 | id: cpr
49 | if: "${{ steps.update.outputs.new }}"
50 | uses: carpentries/create-pull-request@main
51 | with:
52 | token: ${{ secrets.SANDPAPER_WORKFLOW }}
53 | delete-branch: true
54 | branch: "update/workflows"
55 | commit-message: "[actions] update sandpaper workflow to version ${{ steps.update.outputs.new }}"
56 | title: "Update Workflows to Version ${{ steps.update.outputs.new }}"
57 | body: |
58 | :robot: This is an automated build
59 |
60 | Update Workflows from sandpaper version ${{ steps.update.outputs.old }} -> ${{ steps.update.outputs.new }}
61 |
62 | - Auto-generated by [create-pull-request][1] on ${{ steps.update.outputs.date }}
63 |
64 | [1]: https://github.com/carpentries/create-pull-request/tree/main
65 | labels: "type: template and tools"
66 | draft: false
67 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # sandpaper files
2 | episodes/*html
3 | site/*
4 | !site/README.md
5 |
6 | # History files
7 | .Rhistory
8 | .Rapp.history
9 | # Session Data files
10 | .RData
11 | # User-specific files
12 | .Ruserdata
13 | # Example code in package build process
14 | *-Ex.R
15 | # Output files from R CMD build
16 | /*.tar.gz
17 | # Output files from R CMD check
18 | /*.Rcheck/
19 | # RStudio files
20 | .Rproj.user/
21 | # produced vignettes
22 | vignettes/*.html
23 | vignettes/*.pdf
24 | # OAuth2 token, see https://github.com/hadley/httr/releases/tag/v0.3
25 | .httr-oauth
26 | # knitr and R markdown default cache directories
27 | *_cache/
28 | /cache/
29 | # Temporary files created by R markdown
30 | *.utf8.md
31 | *.knit.md
32 | # R Environment Variables
33 | .Renviron
34 | # pkgdown site
35 | docs/
36 | # translation temp files
37 | po/*~
38 | # renv detritus
39 | renv/sandbox/
40 | *.pyc
41 | *~
42 | .DS_Store
43 | .ipynb_checkpoints
44 | .sass-cache
45 | .jekyll-cache/
46 | .jekyll-metadata
47 | __pycache__
48 | _site
49 | .Rproj.user
50 | .bundle/
51 | .vendor/
52 | vendor/
53 | .docker-vendor/
54 | Gemfile.lock
55 | .*history
56 | .vscode/*
57 |
--------------------------------------------------------------------------------
/.zenodo.json:
--------------------------------------------------------------------------------
1 | {
2 | "contributors": [
3 | {
4 | "type": "Editor",
5 | "name": "Toby Hodges",
6 | "orcid": "0000-0003-1766-456X"
7 | },
8 | {
9 | "type": "Editor",
10 | "name": "Ulf Schiller",
11 | "orcid": "0000-0001-8941-1284"
12 | },
13 | {
14 | "type": "Editor",
15 | "name": "Robert Turner"
16 | },
17 | {
18 | "type": "Editor",
19 | "name": "David Palmquist"
20 | },
21 | {
22 | "type": "Editor",
23 | "name": "Kimberly Meechan",
24 | "orcid": "0000-0003-4939-4170"
25 | }
26 | ],
27 | "creators": [
28 | {
29 | "name": "Ulf Schiller",
30 | "orcid": "0000-0001-8941-1284"
31 | },
32 | {
33 | "name": "Robert Turner"
34 | },
35 | {
36 | "name": "Candace Makeda Moore"
37 | },
38 | {
39 | "name": "Marianne Corvellec",
40 | "orcid": "0000-0002-1994-3581"
41 | },
42 | {
43 | "name": "Toby Hodges",
44 | "orcid": "0000-0003-1766-456X"
45 | },
46 | {
47 | "name": "govekk"
48 | },
49 | {
50 | "name": "Ruobin Qi",
51 | "orcid": "0000-0001-9072-9484"
52 | },
53 | {
54 | "name": "Marco Dalla Vecchia",
55 | "orcid": "0000-0002-0192-8439"
56 | }
57 | ],
58 | "license": {
59 | "id": "CC-BY-4.0"
60 | }
61 | }
62 |
--------------------------------------------------------------------------------
/AUTHORS:
--------------------------------------------------------------------------------
1 | Mark Meysenburg, mark.meysenburg@doane.edu
2 |
--------------------------------------------------------------------------------
/CITATION:
--------------------------------------------------------------------------------
1 | Image Processing workshop citation
2 |
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Contributor Code of Conduct"
3 | ---
4 |
5 | As contributors and maintainers of this project,
6 | we pledge to follow [The Carpentries Code of Conduct][coc].
7 |
8 | Instances of abusive, harassing, or otherwise unacceptable behavior
9 | may be reported by following our [reporting guidelines][coc-reporting].
10 |
11 |
12 | [coc-reporting]: https://docs.carpentries.org/topic_folders/policies/incident-reporting.html
13 | [coc]: https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html
14 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | ## Contributing
2 |
3 | [The Carpentries][cp-site] ([Software Carpentry][swc-site], [Data
4 | Carpentry][dc-site], and [Library Carpentry][lc-site]) are open source
5 | projects, and we welcome contributions of all kinds: new lessons, fixes to
6 | existing material, bug reports, and reviews of proposed changes are all
7 | welcome.
8 |
9 | ### Contributor Agreement
10 |
11 | By contributing, you agree that we may redistribute your work under [our
12 | license](LICENSE.md). In exchange, we will address your issues and/or assess
13 | your change proposal as promptly as we can, and help you become a member of our
14 | community. Everyone involved in [The Carpentries][cp-site] agrees to abide by
15 | our [code of conduct](CODE_OF_CONDUCT.md).
16 |
17 | ### How to Contribute
18 |
19 | The easiest way to get started is to file an issue to tell us about a spelling
20 | mistake, some awkward wording, or a factual error. This is a good way to
21 | introduce yourself and to meet some of our community members.
22 |
23 | 1. If you do not have a [GitHub][github] account, you can [send us comments by
24 | email][contact]. However, we will be able to respond more quickly if you use
25 | one of the other methods described below.
26 |
27 | 2. If you have a [GitHub][github] account, or are willing to [create
28 | one][github-join], but do not know how to use Git, you can report problems
29 | or suggest improvements by [creating an issue][issues]. This allows us to
30 | assign the item to someone and to respond to it in a threaded discussion.
31 |
32 | 3. If you are comfortable with Git, and would like to add or change material,
33 | you can submit a pull request (PR). Instructions for doing this are
34 | [included below](#using-github).
35 |
36 | Note: if you want to build the website locally, please refer to [The Workbench
37 | documentation][template-doc].
38 |
39 | ### Where to Contribute
40 |
41 | 1. If you wish to change this lesson, add issues and pull requests here.
42 | 2. If you wish to change the template used for workshop websites, please refer
43 | to [The Workbench documentation][template-doc].
44 |
45 |
46 | ### What to Contribute
47 |
48 | There are many ways to contribute, from writing new exercises and improving
49 | existing ones to updating or filling in the documentation and submitting [bug
50 | reports][issues] about things that do not work, are not clear, or are missing.
51 | If you are looking for ideas, please see [the list of issues for this
52 | repository][repo], or the issues for [Data Carpentry][dc-issues], [Library
53 | Carpentry][lc-issues], and [Software Carpentry][swc-issues] projects.
54 |
55 | Comments on issues and reviews of pull requests are just as welcome: we are
56 | smarter together than we are on our own. **Reviews from novices and newcomers
57 | are particularly valuable**: it's easy for people who have been using these
58 | lessons for a while to forget how impenetrable some of this material can be, so
59 | fresh eyes are always welcome.
60 |
61 | ### What *Not* to Contribute
62 |
63 | Our lessons already contain more material than we can cover in a typical
64 | workshop, so we are usually *not* looking for more concepts or tools to add to
65 | them. As a rule, if you want to introduce a new idea, you must (a) estimate how
66 | long it will take to teach and (b) explain what you would take out to make room
67 | for it. The first encourages contributors to be honest about requirements; the
68 | second, to think hard about priorities.
69 |
70 | We are also not looking for exercises or other material that only run on one
71 | platform. Our workshops typically contain a mixture of Windows, macOS, and
72 | Linux users; in order to be usable, our lessons must run equally well on all
73 | three.
74 |
75 | ### Using GitHub
76 |
77 | If you choose to contribute via GitHub, you may want to look at [How to
78 | Contribute to an Open Source Project on GitHub][how-contribute]. In brief, we
79 | use [GitHub flow][github-flow] to manage changes:
80 |
81 | 1. Create a new branch in your desktop copy of this repository for each
82 | significant change.
83 | 2. Commit the change in that branch.
84 | 3. Push that branch to your fork of this repository on GitHub.
85 | 4. Submit a pull request from that branch to the [upstream repository][repo].
86 | 5. If you receive feedback, make changes on your desktop and push to your
87 | branch on GitHub: the pull request will update automatically.
88 |
89 | NB: The published copy of the lesson is usually in the `main` branch.
90 |
91 | Each lesson has a team of maintainers who review issues and pull requests or
92 | encourage others to do so. The maintainers are community volunteers, and have
93 | final say over what gets merged into the lesson.
94 |
95 | ### Other Resources
96 |
97 | The Carpentries is a global organisation with volunteers and learners all over
98 | the world. We share values of inclusivity and a passion for sharing knowledge,
99 | teaching and learning. There are several ways to connect with The Carpentries
100 | community, including via social
101 | media, Slack, newsletters, and email lists. You can also [reach us by
102 | email][contact].
103 |
104 | [repo]: https://github.com/datacarpentry/image-processing
105 | [contact]: mailto:team@carpentries.org
106 | [cp-site]: https://carpentries.org/
107 | [dc-issues]: https://github.com/issues?q=user%3Adatacarpentry
108 | [dc-lessons]: https://datacarpentry.org/lessons/
109 | [dc-site]: https://datacarpentry.org/
110 | [discuss-list]: https://lists.software-carpentry.org/listinfo/discuss
111 | [github]: https://github.com
112 | [github-flow]: https://guides.github.com/introduction/flow/
113 | [github-join]: https://github.com/join
114 | [how-contribute]: https://egghead.io/courses/how-to-contribute-to-an-open-source-project-on-github
115 | [issues]: https://carpentries.org/help-wanted-issues/
116 | [lc-issues]: https://github.com/issues?q=user%3ALibraryCarpentry
117 | [swc-issues]: https://github.com/issues?q=user%3Aswcarpentry
118 | [swc-lessons]: https://software-carpentry.org/lessons/
119 | [swc-site]: https://software-carpentry.org/
120 | [lc-site]: https://librarycarpentry.org/
121 | [template-doc]: https://carpentries.github.io/workbench/
122 |
--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: "Licenses"
3 | ---
4 |
5 | ## Instructional Material
6 |
7 | All Carpentries (Software Carpentry, Data Carpentry, and Library Carpentry)
8 | instructional material is made available under the [Creative Commons
9 | Attribution license][cc-by-human]. The following is a human-readable summary of
10 | (and not a substitute for) the [full legal text of the CC BY 4.0
11 | license][cc-by-legal].
12 |
13 | You are free:
14 |
15 | - to **Share**---copy and redistribute the material in any medium or format
16 | - to **Adapt**---remix, transform, and build upon the material
17 |
18 | for any purpose, even commercially.
19 |
20 | The licensor cannot revoke these freedoms as long as you follow the license
21 | terms.
22 |
23 | Under the following terms:
24 |
25 | - **Attribution**---You must give appropriate credit (mentioning that your work
26 | is derived from work that is Copyright (c) The Carpentries and, where
27 | practical, linking to the original source), provide a [link to the
28 | license][cc-by-human], and indicate if changes were made. You may do so in
29 | any reasonable manner, but not in any way that suggests the licensor endorses
30 | you or your use.
31 |
32 | - **No additional restrictions**---You may not apply legal terms or
33 | technological measures that legally restrict others from doing anything the
34 | license permits. With the understanding that:
35 |
36 | Notices:
37 |
38 | * You do not have to comply with the license for elements of the material in
39 | the public domain or where your use is permitted by an applicable exception
40 | or limitation.
41 | * No warranties are given. The license may not give you all of the permissions
42 | necessary for your intended use. For example, other rights such as publicity,
43 | privacy, or moral rights may limit how you use the material.
44 |
45 | ## Software
46 |
47 | Except where otherwise noted, the example programs and other software provided
48 | by The Carpentries are made available under the [OSI][osi]-approved [MIT
49 | license][mit-license].
50 |
51 | Permission is hereby granted, free of charge, to any person obtaining a copy of
52 | this software and associated documentation files (the "Software"), to deal in
53 | the Software without restriction, including without limitation the rights to
54 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
55 | of the Software, and to permit persons to whom the Software is furnished to do
56 | so, subject to the following conditions:
57 |
58 | The above copyright notice and this permission notice shall be included in all
59 | copies or substantial portions of the Software.
60 |
61 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
62 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
63 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
64 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
65 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
66 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
67 | SOFTWARE.
68 |
69 | ## Trademark
70 |
71 | "The Carpentries", "Software Carpentry", "Data Carpentry", and "Library
72 | Carpentry" and their respective logos are registered trademarks of
73 | [The Carpentries, Inc.][carpentries].
74 |
75 | [cc-by-human]: https://creativecommons.org/licenses/by/4.0/
76 | [cc-by-legal]: https://creativecommons.org/licenses/by/4.0/legalcode
77 | [mit-license]: https://opensource.org/licenses/mit-license.html
78 | [carpentries]: https://carpentries.org
79 | [osi]: https://opensource.org
80 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [](https://slack-invite.carpentries.org/)
2 | [](https://carpentries.slack.com/archives/C027H977ZGU)
3 |
4 | # Image Processing with Python
5 |
6 | A lesson teaching foundational image processing skills with Python and [scikit-image](https://scikit-image.org/).
7 |
8 | ## Lesson Content
9 |
10 | This lesson introduces fundamental concepts in image handling and processing. Learners will gain the skills needed to load images into Python, to select, summarise, and modify specific regions in these images, and to identify and extract objects within an image for further analysis.
11 |
12 | The lesson assumes a working knowledge of Python and some previous exposure to the Bash shell.
13 | A detailed list of prerequisites can be found in [`learners/prereqs.md`](learners/prereqs.md).
14 |
15 | ## Contribution
16 |
17 | - Make a suggestion or correct an error by [raising an Issue](https://github.com/datacarpentry/image-processing/issues).
18 |
19 | ## Code of Conduct
20 |
21 | All participants should agree to abide by [The Carpentries Code of Conduct](https://docs.carpentries.org/topic_folders/policies/code-of-conduct.html).
22 |
23 | ## Lesson Maintainers
24 |
25 | The Image Processing with Python lesson is currently being maintained by:
26 |
27 | - [Kimberly Meechan](https://github.com/K-Meech)
28 | - [Ulf Schiller](https://github.com/uschille)
29 | - [Marco Dalla Vecchia](https://github.com/marcodallavecchia)
30 |
31 | The lesson is built on content originally developed by [Mark Meysenburg](https://github.com/mmeysenburg), [Tessa Durham Brooks](https://github.com/tessalea), [Dominik Kutra](https://github.com/k-dominik), [Constantin Pape](https://github.com/constantinpape), and [Erin Becker](https://github.com/ebecker).
32 |
--------------------------------------------------------------------------------
/config.yaml:
--------------------------------------------------------------------------------
1 | #------------------------------------------------------------
2 | # Values for this lesson.
3 | #------------------------------------------------------------
4 |
5 | # Which carpentry is this (swc, dc, lc, or cp)?
6 | # swc: Software Carpentry
7 | # dc: Data Carpentry
8 | # lc: Library Carpentry
9 | # cp: Carpentries (to use for instructor training for instance)
10 | # incubator: The Carpentries Incubator
11 | carpentry: 'dc'
12 |
13 | # Overall title for pages.
14 | title: 'Image Processing with Python'
15 |
16 | # Date the lesson was created (YYYY-MM-DD, this is empty by default)
17 | created: '2017-03-14'
18 |
19 | # Comma-separated list of keywords for the lesson
20 | keywords: 'software, data, lesson, The Carpentries'
21 |
22 | # Life cycle stage of the lesson
23 | # possible values: pre-alpha, alpha, beta, stable
24 | life_cycle: 'stable'
25 |
26 | # License of the lesson materials (recommended CC-BY 4.0)
27 | license: 'CC-BY 4.0'
28 |
29 | # Link to the source repository for this lesson
30 | source: 'https://github.com/datacarpentry/image-processing'
31 |
32 | # Default branch of your lesson
33 | branch: 'main'
34 |
35 | # Who to contact if there are any issues
36 | contact: 'team@carpentries.org'
37 |
38 | # Navigation ------------------------------------------------
39 | #
40 | # Use the following menu items to specify the order of
41 | # individual pages in each dropdown section. Leave blank to
42 | # include all pages in the folder.
43 | #
44 | # Example -------------
45 | #
46 | # episodes:
47 | # - introduction.md
48 | # - first-steps.md
49 | #
50 | # learners:
51 | # - setup.md
52 | #
53 | # instructors:
54 | # - instructor-notes.md
55 | #
56 | # profiles:
57 | # - one-learner.md
58 | # - another-learner.md
59 |
60 | # Order of episodes in your lesson
61 | episodes:
62 | - 01-introduction.md
63 | - 02-image-basics.md
64 | - 03-skimage-images.md
65 | - 04-drawing.md
66 | - 05-creating-histograms.md
67 | - 06-blurring.md
68 | - 07-thresholding.md
69 | - 08-connected-components.md
70 | - 09-challenges.md
71 |
72 | # Information for Learners
73 | learners:
74 |
75 | # Information for Instructors
76 | instructors:
77 |
78 | # Learner Profiles
79 | profiles:
80 |
81 | # Customisation ---------------------------------------------
82 | #
83 | # This space below is where custom yaml items (e.g. pinning
84 | # sandpaper and varnish versions) should live
85 |
86 |
87 | url: 'https://datacarpentry.github.io/image-processing'
88 | analytics: carpentries
89 | lang: en
90 |
--------------------------------------------------------------------------------
/episodes/01-introduction.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Introduction
3 | teaching: 5
4 | exercises: 0
5 | ---
6 |
7 | ::::::::::::::::::::::::::::::::::::::: objectives
8 |
9 | - Recognise scientific questions that could be solved with image processing / computer vision.
10 | - Recognise morphometric problems (those dealing with the number, size, or shape of the objects in an image).
11 |
12 | ::::::::::::::::::::::::::::::::::::::::::::::::::
13 |
14 | :::::::::::::::::::::::::::::::::::::::: questions
15 |
16 | - What sort of scientific questions can we answer with image processing / computer vision?
17 | - What are morphometric problems?
18 |
19 | ::::::::::::::::::::::::::::::::::::::::::::::::::
20 |
21 | As computer systems have become faster and more powerful,
22 | and cameras and other imaging systems have become commonplace
23 | in many other areas of life,
24 | the need has grown for researchers to be able to
25 | process and analyse image data.
26 | Considering the large volumes of data that can be involved -
27 | high-resolution images that take up a lot of disk space/virtual memory,
28 | and/or collections of many images that must be processed together -
29 | and the time-consuming and error-prone nature of manual processing,
30 | it can be advantageous or even necessary for this processing and analysis
31 | to be automated as a computer program.
32 |
33 | This lesson introduces an open source toolkit for processing image data:
34 | the Python programming language
35 | and [the *scikit-image* (`skimage`) library](https://scikit-image.org/).
36 | With careful experimental design,
37 | Python code can be a powerful instrument in answering many different kinds of questions.
38 |
39 | ## Uses of Image Processing in Research
40 |
41 | Automated processing can be used to analyse many different properties of an image,
42 | including the distribution and change in colours in the image,
43 | the number, size, position, orientation, and shape of objects in the image,
44 | and even - when combined with machine learning techniques for object recognition -
45 | the type of objects in the image.
46 |
47 | Some examples of image processing methods applied in research include:
48 |
49 | - [Imaging a black hole](https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85)
50 | - [Segmentation of liver and vessels from CT images](https://doi.org/10.1016/j.cmpb.2017.12.008)
51 | - [Monitoring wading birds in the Everglades using drones](https://dx.doi.org/10.1002/rse2.421) ([Blog article summarizing the paper](https://jabberwocky.weecology.org/2024/07/29/monitoring-wading-birds-in-near-real-time-using-drones-and-computer-vision/))
52 | - [Estimating the population of emperor penguins](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3325796/)
53 | - [Global-scale analysis of marine plankton diversity](https://www.cell.com/cell/fulltext/S0092-8674\(19\)31124-9)
54 |
55 | With this lesson,
56 | we aim to provide a thorough grounding in the fundamental concepts and skills
57 | of working with image data in Python.
58 | Most of the examples used in this lesson focus on
59 | one particular class of image processing technique, *morphometrics*,
60 | but what you will learn can be used to solve a much wider range of problems.
61 |
62 | ## Morphometrics
63 |
64 | Morphometrics involves counting the number of objects in an image,
65 | analyzing the size of the objects,
66 | or analyzing the shape of the objects.
67 | For example, we might be interested in automatically counting
68 | the number of bacterial colonies growing in a Petri dish,
69 | as shown in this image:
70 |
71 | {alt='Bacteria colony'}
72 |
73 | We could use image processing to find the colonies, count them,
74 | and then highlight their locations on the original image,
75 | resulting in an image like this:
76 |
77 | {alt='Colonies counted'}
78 |
79 | ::::::::::::::::::::::::::::::::::::::::: callout
80 |
81 | ## Why write a program to do that?
82 |
83 | Note that you can easily manually count the number of bacteria colonies
84 | shown in the morphometric example above.
85 | Why should we learn how to write a Python program to do a task
86 | we could easily perform with our own eyes?
87 | There are at least two reasons to learn how to perform tasks like these
88 | with Python and scikit-image:
89 |
90 | 1. What if there are many more bacteria colonies in the Petri dish?
91 | For example, suppose the image looked like this:
92 |
93 | {alt='Bacteria colony'}
94 |
95 | Manually counting the colonies in that image would present more of a challenge.
96 | A Python program using scikit-image could count the number of colonies more accurately,
97 | and much more quickly, than a human could.
98 |
99 | 2. What if you have hundreds, or thousands, of images to consider?
100 | Imagine having to manually count colonies on several thousand images
101 | like those above.
102 | A Python program using scikit-image could move through all of the images in seconds;
103 | how long would a graduate student require to do the task?
104 | Which process would be more accurate and repeatable?
105 |
106 | As you can see, the simple image processing / computer vision techniques you
107 | will learn during this workshop can be very valuable tools for scientific
108 | research.
109 |
110 |
111 | ::::::::::::::::::::::::::::::::::::::::::::::::::
112 |
113 | As we move through this workshop,
114 | we will learn image analysis methods useful for many different scientific problems.
115 | These will be linked together
116 | and applied to a real problem in the final end-of-workshop
117 | [capstone challenge](09-challenges.md).
118 |
119 | Let's get started,
120 | by learning some basics about how images are represented and stored digitally.
121 |
122 | :::::::::::::::::::::::::::::::::::::::: keypoints
123 |
124 | - Simple Python and scikit-image techniques can be used to solve genuine image analysis problems.
125 | - Morphometric problems involve the number, shape, and / or size of the objects in an image.
126 |
127 | ::::::::::::::::::::::::::::::::::::::::::::::::::
128 |
--------------------------------------------------------------------------------
/episodes/04-drawing.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Drawing and Bitwise Operations
3 | teaching: 45
4 | exercises: 45
5 | ---
6 |
7 | ::::::::::::::::::::::::::::::::::::::: objectives
8 |
9 | - Create a blank, black scikit-image image.
10 | - Draw rectangles and other shapes on scikit-image images.
11 | - Explain how a white shape on a black background can be used as a mask to select specific parts of an image.
12 | - Use bitwise operations to apply a mask to an image.
13 |
14 | ::::::::::::::::::::::::::::::::::::::::::::::::::
15 |
16 | :::::::::::::::::::::::::::::::::::::::: questions
17 |
18 | - How can we draw on scikit-image images and use bitwise operations and masks to select certain parts of an image?
19 |
20 | ::::::::::::::::::::::::::::::::::::::::::::::::::
21 |
22 | The next series of episodes covers a basic toolkit of scikit-image operators.
23 | With these tools,
24 | we will be able to create programs to perform simple analyses of images
25 | based on changes in colour or shape.
26 |
27 | ## First, import the packages needed for this episode
28 |
29 | ```python
30 | import imageio.v3 as iio
31 | import ipympl
32 | import matplotlib.pyplot as plt
33 | import numpy as np
34 | import skimage as ski
35 |
36 | %matplotlib widget
37 | ```
38 |
39 | Here, we import the same packages as earlier in the lesson.
40 |
41 | ## Drawing on images
42 |
43 | Often we wish to select only a portion of an image to analyze,
44 | and ignore the rest.
45 | Creating a rectangular sub-image with slicing,
46 | as we did in [the *Working with scikit-image* episode](03-skimage-images.md)
47 | is one option for simple cases.
48 | Another option is to create another special image,
49 | of the same size as the original,
50 | with white pixels indicating the region to save and black pixels everywhere else.
51 | Such an image is called a *mask*.
52 | In preparing a mask, we sometimes need to be able to draw a shape -
53 | a circle or a rectangle, say -
54 | on a black image.
55 | scikit-image provides tools to do that.
56 |
57 | Consider this image of maize seedlings:
58 |
59 | {alt='Maize seedlings'}
60 |
61 | Now, suppose we want to analyze only the area of the image containing the roots
62 | themselves;
63 | we do not care to look at the kernels,
64 | or anything else about the plants.
65 | Further, we wish to exclude the frame of the container holding the seedlings as well.
66 | Hovering over the image with our mouse could tell us that
67 | the upper-left coordinate of the sub-area we are interested in is *(44, 357)*,
68 | while the lower-right coordinate is *(720, 740)*.
69 | These coordinates are shown in *(x, y)* order.
70 |
71 | A Python program to create a mask to select only that area of the image would
72 | start with a now-familiar section of code to open and display the original
73 | image:
74 |
75 | ```python
76 | # Load and display the original image
77 | maize_seedlings = iio.imread(uri="data/maize-seedlings.tif")
78 |
79 | fig, ax = plt.subplots()
80 | ax.imshow(maize_seedlings)
81 | ```
82 |
83 | We load and display the initial image in the same way we have done before.
84 |
85 | NumPy allows indexing of images/arrays with "boolean" arrays of the same size.
86 | Indexing with a boolean array is also called mask indexing.
87 | The "pixels" in such a mask array can only take two values: `True` or `False`.
88 | When indexing an image with such a mask,
89 | only pixel values at positions where the mask is `True` are accessed.
90 | But first, we need to generate a mask array of the same size as the image.
91 | Luckily, the NumPy library provides a function to create just such an array.
92 | The next section of code shows how:
93 |
94 | ```python
95 | # Create the basic mask
96 | mask = np.ones(shape=maize_seedlings.shape[0:2], dtype="bool")
97 | ```
98 |
99 | The first argument to the `ones()` function is the shape of the original image,
100 | so that our mask will be exactly the same size as the original.
101 | Notice that we have only used the first two indices of our shape.
102 | We omitted the channel dimension.
103 | Indexing with such a mask will change all channel values simultaneously.
104 | The second argument, `dtype = "bool"`,
105 | indicates that the elements in the array should be booleans -
106 | i.e., values are either `True` or `False`.
107 | Thus, even though we use `np.ones()` to create the mask,
108 | its pixel values are in fact not `1` but `True`.
109 | You could check this, e.g., by `print(mask[0, 0])`.
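To convince ourselves of this, we can experiment with a tiny boolean array. The small array and values below are purely illustrative, not part of the lesson's data:

```python
import numpy as np

# a tiny boolean "mask", all True, analogous to the one created above
mask = np.ones(shape=(2, 3), dtype="bool")
print(mask.dtype)  # bool
print(mask[0, 0])  # True

# indexing a same-sized array with the mask selects only the True positions
values = np.array([[1, 2, 3], [4, 5, 6]])
mask[0, 0] = False
print(values[mask])  # [2 3 4 5 6]
```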
110 |
111 | Next, we draw a filled rectangle on the mask:
112 |
113 | ```python
114 | # Draw filled rectangle on the mask image
115 | rr, cc = ski.draw.rectangle(start=(357, 44), end=(740, 720))
116 | mask[rr, cc] = False
117 |
118 | # Display mask image
119 | fig, ax = plt.subplots()
120 | ax.imshow(mask, cmap="gray")
121 | ```
122 |
123 | Here is what our constructed mask looks like:
124 | {alt='Maize image mask' .image-with-shadow}
125 |
126 | The parameters of the `rectangle()` function, `(357, 44)` and `(740, 720)`,
127 | are the coordinates of the upper-left (`start`) and lower-right (`end`) corners
128 | of a rectangle in *(ry, cx)* order.
129 | The function returns the rectangle as row (`rr`) and column (`cc`) coordinate arrays.
130 |
131 | ::::::::::::::::::::::::::::::::::::::::: callout
132 |
133 | ## Check the documentation!
134 |
135 | When using a scikit-image function for the first time - or the fifth time -
136 | it is wise to check how the function is used, via
137 | [the scikit-image documentation](https://scikit-image.org/docs/dev/user_guide)
138 | or other usage examples on programming-related sites such as
139 | [Stack Overflow](https://stackoverflow.com/).
140 | Basic information about scikit-image functions can be found interactively in Python,
141 | via commands like `help(ski)` or `help(ski.draw.rectangle)`.
142 | Take notes in your lab notebook.
143 | And, it is always wise to run some test code to verify
144 | that the functions your program uses are behaving in the manner you intend.
145 |
146 |
147 | ::::::::::::::::::::::::::::::::::::::::::::::::::
148 |
149 | ::::::::::::::::::::::::::::::::::::::::: callout
150 |
151 | ## Variable naming conventions!
152 |
153 | You may have wondered why we called the return values of the rectangle function
154 | `rr` and `cc`.
155 | As you may have guessed, `r` is short for `row` and `c` is short for `column`.
156 | However, the rectangle function returns multiple rows and columns;
157 | thus we follow a convention of doubling the letter `r` to `rr` (and `c` to `cc`)
158 | to indicate that these are arrays of multiple values.
159 | It might have been even clearer to name those variables `rows` and `columns`;
160 | however, those names are also much longer.
161 | Whatever you decide to do, try to stick to some already existing conventions,
162 | such that it is easier for other people to understand your code.
163 |
164 |
165 | ::::::::::::::::::::::::::::::::::::::::::::::::::
166 |
167 | ::::::::::::::::::::::::::::::::::::::: challenge
168 |
169 | ## Other drawing operations (15 min)
170 |
171 | There are other functions for drawing on images,
172 | in addition to the `ski.draw.rectangle()` function.
173 | We can draw circles, lines, text, and other shapes as well.
174 | These drawing functions may be useful later on, to help annotate images
175 | that our programs produce.
176 | Practice some of these functions here.
177 |
178 | Circles can be drawn with the `ski.draw.disk()` function,
179 | which takes two parameters:
180 | the (ry, cx) point of the centre of the circle,
181 | and the radius of the circle.
182 | There is an optional `shape` parameter that can be supplied to this function.
183 | It will limit the output coordinates for cases where the circle
184 | dimensions exceed the ones of the image.
185 |
186 | Lines can be drawn with the `ski.draw.line()` function,
187 | which takes four parameters:
188 | the (ry, cx) coordinate of one end of the line,
189 | and the (ry, cx) coordinate of the other end of the line.
190 |
191 | Other drawing functions supported by scikit-image can be found in
192 | [the scikit-image reference pages](https://scikit-image.org/docs/dev/api/skimage.draw.html?highlight=draw#module-skimage.draw).
193 |
194 | First let's make an empty, black image with a size of 800x600 pixels.
195 | Recall that a colour image has three channels for the colours red, green, and blue
196 | (RGB, cf. [Image Basics](02-image-basics.md)).
197 | Hence we need to create a 3D array of shape `(600, 800, 3)` where the last dimension represents the RGB colour channels.
198 |
199 | ```python
200 | # create the black canvas
201 | canvas = np.zeros(shape=(600, 800, 3), dtype="uint8")
202 | ```
203 |
204 | Now your task is to draw some other coloured shapes and lines on the image,
205 | perhaps something like this:
206 |
207 | {alt='Sample shapes'}
208 |
209 | ::::::::::::::: solution
210 |
211 | ## Solution
212 |
213 | Drawing a circle:
214 |
215 | ```python
216 | # Draw a blue circle with centre (200, 300) in (ry, cx) coordinates, and radius 100
217 | rr, cc = ski.draw.disk(center=(200, 300), radius=100, shape=canvas.shape[0:2])
218 | canvas[rr, cc] = (0, 0, 255)
219 | ```
220 |
221 | Drawing a line:
222 |
223 | ```python
224 | # Draw a green line from (400, 200) to (500, 700) in (ry, cx) coordinates
225 | rr, cc = ski.draw.line(r0=400, c0=200, r1=500, c1=700)
226 | canvas[rr, cc] = (0, 255, 0)
227 | ```
228 |
229 | ```python
230 | # Display the image
231 | fig, ax = plt.subplots()
232 | ax.imshow(canvas)
233 | ```
234 |
235 | We could expand this solution, if we wanted,
236 | to draw rectangles, circles and lines at random positions within our black canvas.
237 | To do this, we could use the `random` Python module,
238 | and the function `random.randrange`,
239 | which can produce random numbers within a certain range.
240 |
241 | Let's draw 15 randomly placed circles:
242 |
243 | ```python
244 | import random
245 |
246 | # create the black canvas
247 | canvas = np.zeros(shape=(600, 800, 3), dtype="uint8")
248 |
249 | # draw a blue circle at a random location 15 times
250 | for i in range(15):
251 | rr, cc = ski.draw.disk(center=(
252 | random.randrange(600),
253 | random.randrange(800)),
254 | radius=50,
255 | shape=canvas.shape[0:2],
256 | )
257 | canvas[rr, cc] = (0, 0, 255)
258 |
259 | # display the results
260 | fig, ax = plt.subplots()
261 | ax.imshow(canvas)
262 | ```
263 |
264 | We could expand this even further to also
265 | randomly choose whether to plot a rectangle, a circle, or a line.
266 | Again, we do this with the `random` module,
267 | now using the function `random.random`
268 | that returns a random number between 0.0 and 1.0.
269 |
270 | ```python
271 | import random
272 |
273 | # Draw 15 random shapes (rectangle, circle or line) at random positions
274 | for i in range(15):
275 | # generate a random number between 0.0 and 1.0 and use this to decide if we
276 |     # want a circle, a line or a rectangle
277 | x = random.random()
278 | if x < 0.33:
279 | # draw a blue circle at a random location
280 | rr, cc = ski.draw.disk(center=(
281 | random.randrange(600),
282 | random.randrange(800)),
283 | radius=50,
284 | shape=canvas.shape[0:2],
285 | )
286 | color = (0, 0, 255)
287 | elif x < 0.66:
288 | # draw a green line at a random location
289 | rr, cc = ski.draw.line(
290 | r0=random.randrange(600),
291 | c0=random.randrange(800),
292 | r1=random.randrange(600),
293 | c1=random.randrange(800),
294 | )
295 | color = (0, 255, 0)
296 | else:
297 | # draw a red rectangle at a random location
298 | rr, cc = ski.draw.rectangle(
299 | start=(random.randrange(600), random.randrange(800)),
300 | extent=(50, 50),
301 | shape=canvas.shape[0:2],
302 | )
303 | color = (255, 0, 0)
304 |
305 | canvas[rr, cc] = color
306 |
307 | # display the results
308 | fig, ax = plt.subplots()
309 | ax.imshow(canvas)
310 | ```
311 |
312 | :::::::::::::::::::::::::
313 |
314 | ::::::::::::::::::::::::::::::::::::::::::::::::::
315 |
316 | ## Image modification
317 |
318 | All that remains is the task of modifying the image using our mask in such a
319 | way that the areas with `True` pixels in the mask are not shown in the image
320 | any more.
321 |
322 | ::::::::::::::::::::::::::::::::::::::: challenge
323 |
324 | ## How does a mask work? (optional, not included in timing)
325 |
326 | Now, consider the mask image we created above.
327 | The values of the mask that correspond to the portion of the image
328 | we are interested in are all `False`,
329 | while the values of the mask that correspond to the portion of the image we
330 | want to remove are all `True`.
331 |
332 | How do we change the original image using the mask?
333 |
334 | ::::::::::::::: solution
335 |
336 | ## Solution
337 |
338 | When indexing the image using the mask, we access only those pixels at
339 | positions where the mask is `True`.
340 | So, when indexing with the mask,
341 | one can set those values to 0, and effectively remove them from the image.
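A minimal NumPy sketch of this idea, using a tiny made-up "image":

```python
import numpy as np

image = np.array([[10, 20], [30, 40]])
mask = np.array([[True, False], [False, True]])

# pixels at positions where the mask is True are set to 0 (black)
image[mask] = 0
print(image)
# [[ 0 20]
#  [30  0]]
```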
342 |
343 |
344 |
345 | :::::::::::::::::::::::::
346 |
347 | ::::::::::::::::::::::::::::::::::::::::::::::::::
348 |
349 | Now we can write a Python program to use a mask to retain only the portions
350 | of our maize roots image that actually contain the seedling roots.
351 | We load the original image and create the mask in the same way as before:
352 |
353 | ```python
354 | # Load the original image
355 | maize_seedlings = iio.imread(uri="data/maize-seedlings.tif")
356 |
357 | # Create the basic mask
358 | mask = np.ones(shape=maize_seedlings.shape[0:2], dtype="bool")
359 |
360 | # Draw a filled rectangle on the mask image
361 | rr, cc = ski.draw.rectangle(start=(357, 44), end=(740, 720))
362 | mask[rr, cc] = False
363 | ```
364 |
365 | Then, we use NumPy indexing to remove the portions of the image,
366 | where the mask is `True`:
367 |
368 | ```python
369 | # Apply the mask
370 | maize_seedlings[mask] = 0
371 | ```
372 |
373 | Finally, we display the masked image.
374 |
375 | ```python
376 | fig, ax = plt.subplots()
377 | ax.imshow(maize_seedlings)
378 | ```
379 |
380 | The resulting masked image should look like this:
381 |
382 | {alt='Applied mask'}
383 |
384 | ::::::::::::::::::::::::::::::::::::::: challenge
385 |
386 | ## Masking an image of your own (optional, not included in timing)
387 |
388 | Now, it is your turn to practice.
389 | Using your mobile phone, tablet, webcam, or digital camera,
390 | take an image of an object with a simple overall geometric shape
391 | (think rectangular or circular).
392 | Copy that image to your computer, write some code to make a mask,
393 | and apply it to select the part of the image containing your object.
394 | For example, here is an image of a remote control:
395 |
396 | {alt='Remote control image'}
397 |
398 | And, here is the end result of a program masking out everything but the remote:
399 |
400 | {alt='Remote control masked'}
401 |
402 | ::::::::::::::: solution
403 |
404 | ## Solution
405 |
406 | Here is a Python program to produce the cropped remote control image shown above.
407 | Of course, your program should be tailored to your image.
408 |
409 | ```python
410 | # Load the image
411 | remote = iio.imread(uri="data/remote-control.jpg")
412 | remote = np.array(remote)
413 |
414 | # Create the basic mask
415 | mask = np.ones(shape=remote.shape[0:2], dtype="bool")
416 |
417 | # Draw a filled rectangle on the mask image
418 | rr, cc = ski.draw.rectangle(start=(93, 1107), end=(1821, 1668))
419 | mask[rr, cc] = False
420 |
421 | # Apply the mask
422 | remote[mask] = 0
423 |
424 | # Display the result
425 | fig, ax = plt.subplots()
426 | ax.imshow(remote)
427 | ```
428 |
429 | :::::::::::::::::::::::::
430 |
431 | ::::::::::::::::::::::::::::::::::::::::::::::::::
432 |
433 | ::::::::::::::::::::::::::::::::::::::: challenge
434 |
435 | ## Masking a 96-well plate image (30 min)
436 |
437 | Consider this image of a 96-well plate that has been scanned on a flatbed scanner.
438 |
439 | ```python
440 | # Load the image
441 | wellplate = iio.imread(uri="data/wellplate-01.jpg")
442 | wellplate = np.array(wellplate)
443 |
444 | # Display the image
445 | fig, ax = plt.subplots()
446 | ax.imshow(wellplate)
447 | ```
448 |
449 | {alt='96-well plate'}
450 |
451 | Suppose that we are interested in the colours of the solutions in each of the wells.
452 | We *do not* care about the colour of the rest of the image,
453 | i.e., the plastic that makes up the well plate itself.
454 |
455 | Your task is to write some code that will produce a mask that will
456 | mask out everything except for the wells.
457 | To help with this, you should use the text file `data/centers.txt` that contains
458 | the (cx, ry) coordinates of the centre of each of the 96 wells in this image.
459 | You may assume that each of the wells has a radius of 16 pixels.
460 |
461 | Your program should produce output that looks like this:
462 |
463 | {alt='Masked 96-well plate'}
464 |
465 | *Hint: You can load `data/centers.txt` using*:
466 |
467 | ```python
468 | # load the well coordinates as a NumPy array
469 | centers = np.loadtxt("data/centers.txt", delimiter=" ")
470 | ```
471 |
472 | ::::::::::::::: solution
473 |
474 | ## Solution
475 |
476 | ```python
477 | # load the well coordinates as a NumPy array
478 | centers = np.loadtxt("data/centers.txt", delimiter=" ")
479 |
480 | # read in original image
481 | wellplate = iio.imread(uri="data/wellplate-01.jpg")
482 | wellplate = np.array(wellplate)
483 |
484 | # create the mask image
485 | mask = np.ones(shape=wellplate.shape[0:2], dtype="bool")
486 |
487 | # iterate through the well coordinates
488 | for cx, ry in centers:
489 | # draw a circle on the mask at the well center
490 | rr, cc = ski.draw.disk(center=(ry, cx), radius=16, shape=wellplate.shape[:2])
491 | mask[rr, cc] = False
492 |
493 | # apply the mask
494 | wellplate[mask] = 0
495 |
496 | # display the result
497 | fig, ax = plt.subplots()
498 | ax.imshow(wellplate)
499 | ```
500 |
501 | :::::::::::::::::::::::::
502 |
503 | ::::::::::::::::::::::::::::::::::::::::::::::::::
504 |
505 | ::::::::::::::::::::::::::::::::::::::: challenge
506 |
507 | ## Masking a 96-well plate image, take two (optional, not included in timing)
508 |
509 | If you spent some time looking at the contents of
510 | the `data/centers.txt` file from the previous challenge,
511 | you may have noticed that the centres of each well in the image are very regular.
512 | *Assuming* that the images are scanned in such a way that
513 | the wells are always in the same place,
514 | and that the image is perfectly oriented
515 | (i.e., it does not slant one way or another),
516 | we could produce our well plate mask without having to
517 | read in the coordinates of the centres of each well.
518 | Assume that the centre of the upper left well in the image is at
519 | location cx = 91 and ry = 108, and that there are
520 | 70 pixels between each centre in the cx dimension and
521 | 72 pixels between each centre in the ry dimension.
522 | Each well still has a radius of 16 pixels.
523 | Write a Python program that produces the same output image as in the previous challenge,
524 | but *without* having to read in the `centers.txt` file.
525 | *Hint: use nested for loops.*
526 |
527 | ::::::::::::::: solution
528 |
529 | ## Solution
530 |
531 | Here is a Python program that is able to create the masked image without
532 | having to read in the `centers.txt` file.
533 |
534 | ```python
535 | # read in original image
536 | wellplate = iio.imread(uri="data/wellplate-01.jpg")
537 | wellplate = np.array(wellplate)
538 |
539 | # create the mask image
540 | mask = np.ones(shape=wellplate.shape[0:2], dtype="bool")
541 |
542 | # upper left well coordinates
543 | cx0 = 91
544 | ry0 = 108
545 |
546 | # spaces between wells
547 | deltaCX = 70
548 | deltaRY = 72
549 |
550 | cx = cx0
551 | ry = ry0
552 |
553 | # iterate each row and column
554 | for row in range(12):
555 | # reset cx to leftmost well in the row
556 | cx = cx0
557 | for col in range(8):
558 |
559 | # ... and drawing a circle on the mask
560 |         # draw a circle on the mask at the well centre
561 | mask[rr, cc] = False
562 | cx += deltaCX
563 | # after one complete row, move to next row
564 | ry += deltaRY
565 |
566 | # apply the mask
567 | wellplate[mask] = 0
568 |
569 | # display the result
570 | fig, ax = plt.subplots()
571 | ax.imshow(wellplate)
572 | ```
573 |
574 | :::::::::::::::::::::::::
575 |
576 | ::::::::::::::::::::::::::::::::::::::::::::::::::
577 |
578 | :::::::::::::::::::::::::::::::::::::::: keypoints
579 |
580 | - We can use the NumPy `zeros()` function to create a blank, black image.
581 | - We can draw on scikit-image images with functions such as `ski.draw.rectangle()`, `ski.draw.disk()`, `ski.draw.line()`, and more.
582 | - The drawing functions return indices to pixels that can be set directly.
583 |
584 | ::::::::::::::::::::::::::::::::::::::::::::::::::
585 |
--------------------------------------------------------------------------------
/episodes/05-creating-histograms.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Creating Histograms
3 | teaching: 40
4 | exercises: 40
5 | ---
6 |
7 | ::::::::::::::::::::::::::::::::::::::: objectives
8 |
9 | - Explain what a histogram is.
10 | - Load an image in grayscale format.
11 | - Create and display grayscale and colour histograms for entire images.
12 | - Create and display grayscale and colour histograms for certain areas of images, via masks.
13 |
14 | ::::::::::::::::::::::::::::::::::::::::::::::::::
15 |
16 | :::::::::::::::::::::::::::::::::::::::: questions
17 |
18 | - How can we create grayscale and colour histograms to understand the distribution of colour values in an image?
19 |
20 | ::::::::::::::::::::::::::::::::::::::::::::::::::
21 |
22 | In this episode, we will learn how to use scikit-image functions to create and
23 | display histograms for images.
24 |
25 | ## First, import the packages needed for this episode
26 |
27 | ```python
28 | import imageio.v3 as iio
29 | import ipympl
30 | import matplotlib.pyplot as plt
31 | import numpy as np
32 | import skimage as ski
33 |
34 | %matplotlib widget
35 | ```
36 |
37 | ## Introduction to Histograms
38 |
39 | As it pertains to images, a *histogram* is a graphical representation showing
40 | how frequently various colour values occur in the image.
41 | We saw in
42 | [the *Image Basics* episode](02-image-basics.md)
43 | that we could use a histogram to visualise
44 | the differences in uncompressed and compressed image formats.
45 | If your project involves detecting colour changes between images,
46 | histograms will prove to be very useful,
47 | and histograms are also quite handy as a preparatory step before performing
48 | [thresholding](07-thresholding.md).
49 |
50 | ## Grayscale Histograms
51 |
52 | We will start with grayscale images,
53 | and then move on to colour images.
54 | We will use this image of a plant seedling as an example:
55 | {alt='Plant seedling'}
56 |
57 | Here we load the image in grayscale instead of full colour, and display it:
58 |
59 | ```python
60 | # read the image of a plant seedling as grayscale from the outset
61 | plant_seedling = iio.imread(uri="data/plant-seedling.jpg", mode="L")
62 |
63 | # convert the image to float dtype with a value range from 0 to 1
64 | plant_seedling = ski.util.img_as_float(plant_seedling)
65 |
66 | # display the image
67 | fig, ax = plt.subplots()
68 | ax.imshow(plant_seedling, cmap="gray")
69 | ```
70 |
71 | {alt='Plant seedling'}
72 |
73 | Again, we use the `iio.imread()` function to load our image.
74 | Then, we convert the grayscale image of integer dtype, with 0-255 range, into
75 | a floating-point one with 0-1 range, by calling the function
76 | `ski.util.img_as_float`. We can also calculate histograms for 8-bit images, as we will see in the
77 | subsequent exercises.
78 |
79 | We now use the function `np.histogram` to compute the histogram of our image
80 | which, after all, is a NumPy array:
81 |
82 | ```python
83 | # create the histogram
84 | histogram, bin_edges = np.histogram(plant_seedling, bins=256, range=(0, 1))
85 | ```
86 |
87 | The parameter `bins` determines the number of "bins" to use for the histogram.
88 | We pass in `256` because we want to see the pixel count for each of
89 | the 256 possible values in the grayscale image.
90 |
91 | The parameter `range` is the range of values each of the pixels in the image can have.
92 | Here, we pass 0 and 1,
93 | which is the value range of our input image after conversion to floating-point.
94 |
95 | The first output of the `np.histogram` function is a one-dimensional NumPy array
96 | with 256 entries,
97 | representing the number of pixels in the bin corresponding to each index.
98 | I.e., the first number in the array is
99 | the number of pixels found in the first bin (intensity values near 0.0),
100 | and the final number in the array is
101 | the number of pixels found in the last bin (intensity values near 1.0).
102 | The second output of `np.histogram` is
103 | a one-dimensional array with the 257 bin edges
104 | (one more than the histogram itself).
105 | There are no gaps between the bins, which means that the end of the first bin
106 | is the start of the second, and so on.
107 | For the last bin, the array also has to contain its upper edge,
108 | so it has one more element than the histogram.
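We can check these properties on a tiny made-up array; the values below are purely illustrative:

```python
import numpy as np

values = np.array([0.1, 0.3, 0.3, 0.9])
histogram, bin_edges = np.histogram(values, bins=4, range=(0, 1))

print(histogram)  # [1 2 0 1]
print(len(bin_edges))  # 5, one more than the number of bins
print(bin_edges)  # [0.   0.25 0.5  0.75 1.  ]
```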
109 |
110 | Next, we turn our attention to displaying the histogram,
111 | by taking advantage of the plotting facilities of the Matplotlib library.
112 |
113 | ```python
114 | # configure and draw the histogram figure
115 | fig, ax = plt.subplots()
116 | ax.set_title("Grayscale Histogram")
117 | ax.set_xlabel("grayscale value")
118 | ax.set_ylabel("pixel count")
119 | ax.set_xlim([0.0, 1.0]) # <- named arguments do not work here
120 |
121 | ax.plot(bin_edges[0:-1], histogram) # <- or here
122 | ```
123 |
124 | We create the plot with `plt.subplots()`,
125 | then label the figure and the coordinate axes with `ax.set_title()`,
126 | `ax.set_xlabel()`, and `ax.set_ylabel()` functions.
127 | The last step in the preparation of the figure is to
128 | set the limits on the values on the x-axis with
129 | the `ax.set_xlim([0.0, 1.0])` function call.
130 |
131 | ::::::::::::::::::::::::::::::::::::::::: callout
132 |
133 | ## Variable-length argument lists
134 |
135 | Note that we cannot use named parameters for the
136 | `ax.set_xlim()` or `ax.plot()` functions.
137 | This is because these functions are defined to take an arbitrary number of
138 | *unnamed* arguments.
139 | The designers wrote the functions this way because they are very versatile,
140 | and creating named parameters for all of the possible ways to use them
141 | would be complicated.
142 |
143 |
144 | ::::::::::::::::::::::::::::::::::::::::::::::::::
145 |
146 | Finally, we create the histogram plot itself with
147 | `ax.plot(bin_edges[0:-1], histogram)`.
148 | We use the **left** bin edges as x-positions for the histogram values by
149 | indexing the `bin_edges` array to ignore the last value
150 | (the **right** edge of the last bin).
151 | When we run the program on this image of a plant seedling,
152 | it produces this histogram:
153 |
154 | {alt='Plant seedling histogram'}
155 |
156 | ::::::::::::::::::::::::::::::::::::::::: callout
157 |
158 | ## Histograms in Matplotlib
159 |
160 | Matplotlib provides a dedicated function to compute and display histograms:
161 | `ax.hist()`.
162 | We will not use it in this lesson in order to understand how to
163 | calculate histograms in more detail.
164 | In practice, it is a good idea to use this function,
165 | because it visualises histograms more appropriately than `ax.plot()`.
166 | Here, you could use it by calling
167 | `ax.hist(image.flatten(), bins=256, range=(0, 1))`
168 | instead of
169 | `np.histogram()` and `ax.plot()`
170 | (`*.flatten()` is a NumPy array method that converts our two-dimensional
171 | image into a one-dimensional array).
172 |
173 | ::::::::::::::::::::::::::::::::::::::::::::::::::
174 |
175 | ::::::::::::::::::::::::::::::::::::::: challenge
176 |
177 | ## Using a mask for a histogram (15 min)
178 |
179 | Looking at the histogram above,
180 | you will notice that there is a large number of very dark pixels,
181 | as indicated in the chart by the spike around the grayscale value 0.12.
182 | That is not so surprising, since the original image is mostly black background.
183 | What if we want to focus more closely on the leaf of the seedling?
184 | That is where a mask enters the picture!
185 |
186 | First, hover over the plant seedling image with your mouse to determine the
187 | *(x, y)* coordinates of a bounding box around the leaf of the seedling.
188 | Then, using techniques from
189 | [the *Drawing and Bitwise Operations* episode](04-drawing.md),
190 | create a mask with a white rectangle covering that bounding box.
191 |
192 | After you have created the mask, apply it to the input image before passing
193 | it to the `np.histogram` function.
194 |
195 | ::::::::::::::: solution
196 |
197 | ## Solution
198 |
199 | ```python
200 |
201 | # read the image as grayscale from the outset
202 | plant_seedling = iio.imread(uri="data/plant-seedling.jpg", mode="L")
203 |
204 | # convert the image to float dtype with a value range from 0 to 1
205 | plant_seedling = ski.util.img_as_float(plant_seedling)
206 |
207 | # display the image
208 | fig, ax = plt.subplots()
209 | ax.imshow(plant_seedling, cmap="gray")
210 |
211 | # create mask here, using np.zeros() and ski.draw.rectangle()
212 | mask = np.zeros(shape=plant_seedling.shape, dtype="bool")
213 | rr, cc = ski.draw.rectangle(start=(199, 410), end=(384, 485))
214 | mask[rr, cc] = True
215 |
216 | # display the mask
217 | fig, ax = plt.subplots()
218 | ax.imshow(mask, cmap="gray")
219 |
220 | # mask the image and create the new histogram
221 | histogram, bin_edges = np.histogram(plant_seedling[mask], bins=256, range=(0.0, 1.0))
222 |
223 | # configure and draw the histogram figure
224 | fig, ax = plt.subplots()
225 |
226 | ax.set_title("Grayscale Histogram")
227 | ax.set_xlabel("grayscale value")
228 | ax.set_ylabel("pixel count")
229 | ax.set_xlim([0.0, 1.0])
230 | ax.plot(bin_edges[0:-1], histogram)
231 |
232 | ```
233 |
234 | Your histogram of the masked area should look something like this:
235 |
236 | {alt='Grayscale histogram of masked area'}
237 |
238 |
239 | :::::::::::::::::::::::::
240 |
241 | ::::::::::::::::::::::::::::::::::::::::::::::::::
242 |
243 | ## Colour Histograms
244 |
245 | We can also create histograms for full colour images,
246 | in addition to grayscale histograms.
247 | We have seen colour histograms before,
248 | in [the *Image Basics* episode](02-image-basics.md).
249 | A program to create colour histograms starts in a familiar way:
250 |
251 | ```python
252 | # read original image, in full color
253 | plant_seedling = iio.imread(uri="data/plant-seedling.jpg")
254 |
255 | # display the image
256 | fig, ax = plt.subplots()
257 | ax.imshow(plant_seedling)
258 | ```
259 |
260 | We read the original image, now in full colour, and display it.
261 |
262 | Next, we create the histogram, by calling the `np.histogram` function three
263 | times, once for each of the channels.
264 | We obtain the individual channels, by slicing the image along the last axis.
265 | For example, we can obtain the red colour channel by calling
266 | `r_chan = image[:, :, 0]`.
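To see what such a slice yields, consider a tiny made-up RGB image; the values are purely illustrative:

```python
import numpy as np

# a 2x2 image with three colour channels, where every pixel is pure red
image = np.zeros(shape=(2, 2, 3), dtype="uint8")
image[:, :, 0] = 255

r_chan = image[:, :, 0]
print(r_chan.shape)  # (2, 2)
print(r_chan.max())  # 255
print(image[:, :, 1].max())  # 0, the green channel is untouched
```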
267 |
268 | ```python
269 | # tuple to select colors of each channel line
270 | colors = ("red", "green", "blue")
271 |
272 | # create the histogram plot, with three lines, one for
273 | # each color
274 | fig, ax = plt.subplots()
275 | ax.set_xlim([0, 256])
276 | for channel_id, color in enumerate(colors):
277 | histogram, bin_edges = np.histogram(
278 | plant_seedling[:, :, channel_id], bins=256, range=(0, 256)
279 | )
280 | ax.plot(bin_edges[0:-1], histogram, color=color)
281 |
282 | ax.set_title("Color Histogram")
283 | ax.set_xlabel("Color value")
284 | ax.set_ylabel("Pixel count")
285 | ```
286 |
287 | We will draw the histogram line for each channel in a different colour,
288 | and so we create a tuple of the colours to use for the three lines with the
289 |
290 | `colors = ("red", "green", "blue")`
291 |
292 | line of code.
293 | Then, we limit the range of the x-axis with the `ax.set_xlim()` function call.
294 |
295 | Next, we use the `for` control structure to iterate through the three channels,
296 | plotting an appropriately-coloured histogram line for each.
297 | This may be new Python syntax for you,
298 | so we will take a moment to discuss what is happening in the `for` statement.
299 |
300 | The Python built-in `enumerate()` function takes an iterable and returns an
301 | *iterator* of *tuples*, where the first element of each tuple is the index and the second is the corresponding item.
302 |
303 | ::::::::::::::::::::::::::::::::::::::::: callout
304 |
305 | ## Iterators, tuples, and `enumerate()`
306 |
307 | In Python, an *iterator*, or an *iterable object*, is
308 | something that can be iterated over with the `for` control structure.
309 | A *tuple* is a sequence of objects, just like a list.
310 | However, a tuple cannot be changed,
311 | and a tuple is indicated by parentheses instead of square brackets.
312 | The `enumerate()` function takes an iterable object,
313 | and returns an iterator of tuples consisting of
314 | the 0-based index and the corresponding object.
315 |
316 | For example, consider this small Python program:
317 |
318 | ```python
319 | letters = ("a", "b", "c", "d", "e")
320 |
321 | for x in enumerate(letters):
322 | print(x)
323 | ```
324 |
325 | Executing this program would produce the following output:
326 |
327 | ```output
328 | (0, 'a')
329 | (1, 'b')
330 | (2, 'c')
331 | (3, 'd')
332 | (4, 'e')
333 | ```
334 |
335 | ::::::::::::::::::::::::::::::::::::::::::::::::::
336 |
337 | In our colour histogram program, we are using a tuple, `(channel_id, color)`,
338 | as the `for` variable.
339 | The first time through the loop, the `channel_id` variable takes the value `0`,
340 | referring to the position of the red colour channel,
341 | and the `color` variable contains the string `"red"`.
342 | The second time through the loop the values are the green channel's index `1` and
343 | `"green"`, and the third time they are the blue channel's index `2` and `"blue"`.
344 |
345 | Inside the `for` loop, our code looks much like it did for the
346 | grayscale example. We calculate the histogram for the current channel
347 | with the
348 |
349 | `histogram, bin_edges = np.histogram(image[:, :, channel_id], bins=256, range=(0, 256))`
350 |
351 | function call,
352 | and then add a histogram line of the correct colour to the plot with the
353 |
354 | `ax.plot(bin_edges[0:-1], histogram, color=color)`
355 |
356 | function call.
357 | Note the use of our loop variables, `channel_id` and `color`.
358 |
359 | Finally we label our axes and display the histogram, shown here:
360 |
361 | {alt='Colour histogram'}
362 |
363 | ::::::::::::::::::::::::::::::::::::::: challenge
364 |
365 | ## Colour histogram with a mask (25 min)
366 |
367 | We can also apply a mask to the images we compute colour histograms for,
368 | in the same way we did for grayscale histograms.
369 | Consider this image of a well plate,
370 | where various chemical sensors have been applied to water and
371 | various concentrations of hydrochloric acid and sodium hydroxide:
372 |
373 | ```python
374 | # read the image
375 | wellplate = iio.imread(uri="data/wellplate-02.tif")
376 |
377 | # display the image
378 | fig, ax = plt.subplots()
379 | ax.imshow(wellplate)
380 | ```
381 |
382 | {alt='Well plate image'}
383 |
384 | Suppose we are interested in the colour histogram of one of the sensors in the
385 | well plate image,
386 | specifically, the seventh well from the left in the topmost row,
387 | which shows Erythrosin B reacting with water.
388 |
389 | Hover over the image with your mouse to find the centre of that well
390 | and the radius (in pixels) of the well.
391 | Then create a circular mask to select only the desired well.
392 | Then, use that mask to apply the colour histogram operation to that well.
393 |
394 | Your masked image should look like this:
395 |
396 | {alt='Masked well plate'}
397 |
398 | And, the program should produce a colour histogram that looks like this:
399 |
400 | {alt='Well plate histogram'}
401 |
402 | ::::::::::::::: solution
403 |
404 | ## Solution
405 |
406 | ```python
407 | # create a circular mask to select the 7th well in the first row
408 | mask = np.zeros(shape=wellplate.shape[0:2], dtype="bool")
409 | circle = ski.draw.disk(center=(240, 1053), radius=49, shape=wellplate.shape[0:2])
410 | mask[circle] = True
411 |
412 | # just for display:
413 | # make a copy of the image, call it masked_image, and
414 | # zero values where mask is False
415 | masked_img = np.array(wellplate)
416 | masked_img[~mask] = 0
417 |
418 | # create a new figure and display masked_img, to verify the
419 | # validity of your mask
420 | fig, ax = plt.subplots()
421 | ax.imshow(masked_img)
422 |
423 | # list to select colors of each channel line
424 | colors = ("red", "green", "blue")
425 |
426 | # create the histogram plot, with three lines, one for
427 | # each color
428 | fig, ax = plt.subplots()
429 | ax.set_xlim([0, 256])
430 | for (channel_id, color) in enumerate(colors):
431 | # use your circular mask to apply the histogram
432 | # operation to the 7th well of the first row
433 | histogram, bin_edges = np.histogram(
434 | wellplate[:, :, channel_id][mask], bins=256, range=(0, 256)
435 | )
436 |
437 |     ax.plot(bin_edges[0:-1], histogram, color=color)
438 |
439 | ax.set_xlabel("color value")
440 | ax.set_ylabel("pixel count")
441 |
442 | ```
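
Note that the expression `wellplate[:, :, channel_id][mask]` uses *boolean indexing*: it returns a 1-D array containing only the pixels where the mask is `True`, which `np.histogram()` can bin like any other array. A small sketch (with made-up values, not the well plate data):

```python
import numpy as np

channel = np.array([[10, 200],
                    [30, 40]])
mask = np.array([[True, False],
                 [True, True]])

# boolean indexing flattens the selected pixels into a 1-D array
print(channel[mask])  # [10 30 40]
```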
443 |
444 | :::::::::::::::::::::::::
445 |
446 | ::::::::::::::::::::::::::::::::::::::::::::::::::
447 |
448 | :::::::::::::::::::::::::::::::::::::::: keypoints
449 |
450 | - In many cases, we can load images in grayscale by passing the `mode="L"` argument to the `iio.imread()` function.
451 | - We can create histograms of images with the `np.histogram` function.
452 | - We can display histograms using `ax.plot()` with the `bin_edges` and `histogram` values returned by `np.histogram()`.
453 | - The plot can be customised using `ax.set_xlabel()`, `ax.set_ylabel()`, `ax.set_xlim()`, `ax.set_ylim()`, and `ax.set_title()`.
454 | - We can separate the colour channels of an RGB image using slicing operations and create histograms for each colour channel separately.
455 |
456 | ::::::::::::::::::::::::::::::::::::::::::::::::::
457 |
--------------------------------------------------------------------------------
/episodes/06-blurring.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Blurring Images
3 | teaching: 35
4 | exercises: 25
5 | ---
6 |
7 | ::::::::::::::::::::::::::::::::::::::: objectives
8 |
9 | - Explain why applying a low-pass blurring filter to an image is beneficial.
10 | - Apply a Gaussian blur filter to an image using scikit-image.
11 |
12 | ::::::::::::::::::::::::::::::::::::::::::::::::::
13 |
14 | :::::::::::::::::::::::::::::::::::::::: questions
15 |
16 | - How can we apply a low-pass blurring filter to an image?
17 |
18 | ::::::::::::::::::::::::::::::::::::::::::::::::::
19 |
20 | In this episode, we will learn how to use scikit-image functions to blur images.
21 |
22 | When processing an image, we are often interested in identifying objects
23 | represented within it so that we can perform some further analysis of these
24 | objects, e.g., by counting them, measuring their sizes, etc.
25 | An important concept associated with the identification of objects in an image
26 | is that of *edges*: the lines that represent a transition from one group of
27 | similar pixels in the image to another different group.
28 | One example of an edge is the pixels that represent
29 | the boundaries of an object in an image,
30 | where the background of the image ends and the object begins.
31 |
32 | When we blur an image,
33 | we make the colour transition from one side of an edge in the image to another
34 | smooth rather than sudden.
35 | The effect is to average out rapid changes in pixel intensity.
36 | Blurring is a very common operation we need to perform before other tasks such as
37 | [thresholding](07-thresholding.md).
38 | There are several different blurring functions in the `ski.filters` module,
39 | so we will focus on just one here, the *Gaussian blur*.
40 |
41 | ::::::::::::::::::::::::::::::::::::::::: callout
42 |
43 | ## Filters
44 |
45 | In the day-to-day, macroscopic world,
46 | we have physical filters which separate out objects by size.
47 | A filter with small holes allows only small objects through,
48 | leaving larger objects behind.
49 | This is a good analogy for image filters.
50 | A high-pass filter will retain the smaller details in an image,
51 | filtering out the larger ones.
52 | A low-pass filter retains the larger features,
53 | analogous to what's left behind by a physical filter mesh.
54 | *High-* and *low*-pass, here,
55 | refer to high and low *spatial frequencies* in the image.
56 | Details associated with high spatial frequencies are small;
57 | many of these features would fit across an image.
58 | Features associated with low spatial frequencies are large -
59 | maybe a couple of big features per image.
60 |
61 | ::::::::::::::::::::::::::::::::::::::::::::::::::
62 |
63 | ::::::::::::::::::::::::::::::::::::::::: callout
64 |
65 | ## Blurring
66 |
67 | To blur is to make something less clear or distinct.
68 | This could be interpreted quite broadly in the context of image analysis -
69 | anything that reduces or distorts the detail of an image might apply.
70 | Applying a low-pass filter, which removes detail occurring at high spatial frequencies,
71 | is perceived as a blurring effect.
72 | A Gaussian blur is a filter that makes use of a Gaussian kernel.
73 |
74 | ::::::::::::::::::::::::::::::::::::::::::::::::::
75 |
76 | ::::::::::::::::::::::::::::::::::::::::: callout
77 |
78 | ## Kernels
79 |
80 | A kernel can be used to implement a filter on an image.
81 | A kernel, in this context,
82 | is a small matrix which is combined with the image using
83 | a mathematical technique: *convolution*.
84 | Different sizes, shapes and contents of kernel produce different effects.
85 | The kernel can be thought of as a little image in itself,
86 | and will favour features of similar size and shape in the main image.
87 | On convolution with an image, a big, blobby kernel will retain
88 | big, blobby, low spatial frequency features.
89 |
90 | ::::::::::::::::::::::::::::::::::::::::::::::::::
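
To make the idea of convolution concrete, here is a minimal sketch, using plain NumPy and ignoring the image border, that convolves a tiny single-channel "image" with a 3 × 3 mean kernel (the arrays here are invented for illustration, not taken from the lesson data):

```python
import numpy as np

# a tiny 5x5 "image" with a single bright pixel in the centre
image = np.zeros((5, 5))
image[2, 2] = 9.0

# a 3x3 mean kernel: every neighbour weighted equally
kernel = np.ones((3, 3)) / 9.0

# slide the kernel over the interior pixels, taking a weighted average each time
result = np.zeros_like(image)
for r in range(1, 4):
    for c in range(1, 4):
        region = image[r - 1:r + 2, c - 1:c + 2]
        result[r, c] = np.sum(region * kernel)

print(result[2, 2])  # approximately 1.0: the bright pixel's intensity is spread out
```

A Gaussian kernel works the same way, except that its weights are larger near the centre rather than uniform.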
91 |
92 | ## Gaussian blur
93 |
94 | Consider this image of a cat,
95 | in particular the area of the image outlined by the white square.
96 |
97 | {alt='Cat image'}
98 |
99 | Now, zoom in on the area of the cat's eye, as shown in the left-hand image below.
100 | When we apply a filter, we consider each pixel in the image, one at a time.
101 | In this example, the pixel we are currently working on is highlighted in red,
102 | as shown in the right-hand image.
103 |
104 | {alt='Cat eye pixels'}
105 |
106 | When we apply a filter, we consider rectangular groups of pixels surrounding
107 | each pixel in the image, in turn.
108 | The *kernel* is another group of pixels (a separate matrix / small image),
109 | of the same dimensions as the rectangular group of pixels in the image,
110 | that moves along with the pixel being worked on by the filter.
111 | The width and height of the kernel must be an odd number,
112 | so that the pixel being worked on is always in its centre.
113 | In the example shown above, the kernel is square, with a dimension of seven pixels.
114 |
115 | To apply the kernel to the current pixel,
116 | an average of the colour values of the pixels surrounding it is calculated,
117 | weighted by the values in the kernel.
118 | In a Gaussian blur, the pixels nearest the centre of the kernel are
119 | given more weight than those far away from the centre.
120 | The rate at which this weight diminishes is determined by a Gaussian function, hence the name
121 | Gaussian blur.
122 |
123 | A Gaussian function describes the probability density of the normal distribution, or "bell curve".
124 | {alt='Gaussian function'}
125 |
126 | | *[https://en.wikipedia.org/wiki/Gaussian\_function#/media/File:Normal\_Distribution\_PDF.svg](https://en.wikipedia.org/wiki/Gaussian_function#/media/File:Normal_Distribution_PDF.svg)* |
127 |
128 | The shape of the function is described by a mean value μ, and a variance value σ². The mean determines the central point of the bell curve on the X axis, and the variance describes the spread of the curve.
129 |
130 | In fact, when using Gaussian functions in Gaussian blurring, we use a 2D Gaussian function to account for X and Y dimensions, but the same rules apply. The mean μ is always 0, and represents the middle of the 2D kernel. Increasing values of σ² in either dimension increases the amount of blurring in that dimension.
131 |
132 | {alt='2D Gaussian function'}
133 |
134 | | *[https://commons.wikimedia.org/wiki/File:Gaussian\_2D.png](https://commons.wikimedia.org/wiki/File:Gaussian_2D.png)* |
135 |
136 | The averaging is done on a channel-by-channel basis,
137 | and the average channel values become the new value for the pixel in
138 | the filtered image.
139 | Larger kernels have more values factored into the average, and this implies
140 | that a larger kernel will blur the image more than a smaller kernel.
141 |
142 | To get an idea of how this works,
143 | consider this plot of the two-dimensional Gaussian function:
144 |
145 | {alt='2D Gaussian function'}
146 |
147 | Imagine that plot laid over the kernel for the Gaussian blur filter.
148 | The height of the plot corresponds to the weight given to the underlying pixel
149 | in the kernel.
150 | I.e., the pixels close to the centre become more important to
151 | the filtered pixel colour than the pixels close to the outer limits of the kernel.
152 | The shape of the Gaussian function is controlled via its standard deviation,
153 | or sigma.
154 | A large sigma value results in a flatter shape,
155 | while a smaller sigma value results in a more pronounced peak.
156 | The mathematics involved in the Gaussian blur filter are not quite that simple,
157 | but this explanation gives you the basic idea.
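
To see how sigma shapes those weights, here is a small sketch, in plain NumPy rather than scikit-image's internal implementation, that builds a normalised 2D Gaussian kernel:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # size must be odd so that the kernel has a well-defined centre pixel
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()  # normalise so the weights sum to 1

k = gaussian_kernel(7, 1.0)
print(k[3, 3] > k[0, 0])                        # True: the centre outweighs the corners
print(gaussian_kernel(7, 3.0)[3, 3] < k[3, 3])  # True: a larger sigma gives a flatter peak
```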
158 |
159 | To illustrate the blurring process,
160 | consider the blue channel colour values from the seven-by-seven region
161 | of the cat image above:
162 |
163 | {alt='Image corner pixels'}
164 |
165 | The filter is going to determine the new blue channel value for the centre
166 | pixel -- the one that currently has the value 86. The filter calculates a
167 | weighted average of all the blue channel values in the kernel
168 | giving higher weight to the pixels near the centre of the
169 | kernel.
170 |
171 | {alt='Image multiplication'}
172 |
173 | This weighted average, the sum of the multiplications,
174 | becomes the new value for the centre pixel (3, 3).
175 | The same process would be used to determine the green and red channel values,
176 | and then the kernel would be moved over to apply the filter to the
177 | next pixel in the image.
178 |
179 | :::::::::::::::::::::::::::::::::::::::: instructor
180 |
181 | ## Terminology about image boundaries
182 |
183 | Take care to avoid mixing up the term "edge" to describe the edges of objects
184 | *within* an image and the outer boundaries of the images themselves.
185 | Lack of a clear distinction here may be confusing for learners.
186 |
187 | :::::::::::::::::::::::::::::::::::::::::::::::::::
188 |
189 | ::::::::::::::::::::::::::::::::::::::::: callout
190 |
191 | ## Image edges
192 |
193 | Something different needs to happen for pixels near the outer limits of the image,
194 | since the kernel for the filter may be partially off the image.
195 | For example, what happens when the filter is applied to
196 | the upper-left pixel of the image?
197 | Here are the blue channel pixel values for the upper-left pixel of the cat image,
198 | again assuming a seven-by-seven kernel:
199 |
200 | ```output
201 | x x x x x x x
202 | x x x x x x x
203 | x x x x x x x
204 | x x x 4 5 9 2
205 | x x x 5 3 6 7
206 | x x x 6 5 7 8
207 | x x x 5 4 5 3
208 | ```
209 |
210 | The upper-left pixel is the one with value 4.
211 | Since the pixel is at the upper-left corner,
212 | there are no pixels underneath much of the kernel;
213 | here, this is represented by x's.
214 | So, what does the filter do in that situation?
215 |
216 | The default mode is to fill in the *nearest* pixel value from the image.
217 | For each of the missing x's the image value closest to the x is used.
218 | If we fill in a few of the missing pixels, you will see how this works:
219 |
220 | ```output
221 | x x x 4 x x x
222 | x x x 4 x x x
223 | x x x 4 x x x
224 | 4 4 4 4 5 9 2
225 | x x x 5 3 6 7
226 | x x x 6 5 7 8
227 | x x x 5 4 5 3
228 | ```
229 |
230 | Another strategy to fill those missing values is
231 | to *reflect* the pixels that are in the image to fill in for the pixels that
232 | are missing from the kernel.
233 |
234 | ```output
235 | x x x 5 x x x
236 | x x x 6 x x x
237 | x x x 5 x x x
238 | 2 9 5 4 5 9 2
239 | x x x 5 3 6 7
240 | x x x 6 5 7 8
241 | x x x 5 4 5 3
242 | ```
243 |
244 | A similar process would be used to fill in all of the other missing pixels from
245 | the kernel. Other *border modes* are available; you can learn more about them
246 | in [the scikit-image documentation](https://scikit-image.org/docs/dev/user_guide).
247 |
248 | ::::::::::::::::::::::::::::::::::::::::::::::::::
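
The two border strategies above can be previewed directly with NumPy's `np.pad()`, whose `edge` and `reflect` modes correspond to the "nearest" and "reflect" fillings shown (scikit-image's filters apply such padding internally):

```python
import numpy as np

# the top row of blue channel values from the example above
row = np.array([4, 5, 9, 2])

# "nearest": repeat the closest image value into the missing positions
print(np.pad(row, 3, mode="edge"))     # [4 4 4 4 5 9 2 2 2 2]

# "reflect": mirror the image values at the border
print(np.pad(row, 3, mode="reflect"))  # [2 9 5 4 5 9 2 9 5 4]
```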
249 |
250 | Let's consider a very simple image to see blurring in action. The animation below shows how the blur kernel (large red square) moves along the image on the left in order to calculate the corresponding values for the blurred image (yellow central square) on the right. In this simple case, the original image is single-channel, but blurring would work likewise on a multi-channel image.
251 |
252 | {alt='Blur demo animation'}
253 |
254 | scikit-image has built-in functions to perform blurring for us, so we do not have to
255 | perform all of these mathematical operations ourselves. Let's work through
256 | an example of blurring an image with the scikit-image Gaussian blur function.
257 |
258 | First, import the packages needed for this episode:
259 |
260 | ```python
261 | import imageio.v3 as iio
262 | import ipympl
263 | import matplotlib.pyplot as plt
264 | import skimage as ski
265 |
266 | %matplotlib widget
267 | ```
268 |
269 | Then, we load the image, and display it:
270 |
271 | ```python
272 | image = iio.imread(uri="data/gaussian-original.png")
273 |
274 | # display the image
275 | fig, ax = plt.subplots()
276 | ax.imshow(image)
277 | ```
278 |
279 | {alt='Original image'}
280 |
281 | Next, we apply the gaussian blur:
282 |
283 | ```python
284 | sigma = 3.0
285 |
286 | # apply Gaussian blur, creating a new image
287 | blurred = ski.filters.gaussian(
288 | image, sigma=(sigma, sigma), truncate=3.5, channel_axis=-1)
289 | ```
290 |
291 | The first two arguments to `ski.filters.gaussian()` are the image to blur,
292 | `image`, and a tuple defining the sigma to use in ry- and cx-direction,
293 | `(sigma, sigma)`.
294 | The third parameter `truncate` is meant to pass the radius of the kernel in
295 | number of sigmas.
296 | A Gaussian function is defined from -infinity to +infinity, but our kernel
297 | (which must have a finite, smaller size) can only approximate the real function.
298 | Therefore, we must choose a certain distance from the centre of the function
299 | where we stop this approximation, and set the final size of our kernel.
300 | In the above example, we set `truncate` to 3.5,
301 | which means the kernel width will be roughly 2 \* sigma \* 3.5 pixels,
302 | rounded in practice to an odd integer so that the kernel has a well-defined centre pixel.
303 | For example, a `sigma` of 1.0 gives a kernel roughly 7 pixels wide, and a `sigma` of 2.0 one roughly 14 pixels wide.
304 | The default value for `truncate` in scikit-image is 4.0.
305 |
306 | The last argument we passed to `ski.filters.gaussian()` is used to
307 | specify the dimension which contains the (colour) channels.
308 | Here, it is the last dimension;
309 | recall that, in Python, the `-1` index refers to the last position.
310 | In this case, the last dimension is the third dimension (index `2`), since our
311 | image has three dimensions:
312 |
313 | ```python
314 | print(image.ndim)
315 | ```
316 |
317 | ```output
318 | 3
319 | ```
320 |
321 | Finally, we display the blurred image:
322 |
323 | ```python
324 | # display blurred image
325 | fig, ax = plt.subplots()
326 | ax.imshow(blurred)
327 | ```
328 |
329 | {alt='Blurred image'}
330 |
331 |
332 | ## Visualising Blurring
333 |
334 | It is often said that "a picture is worth a thousand words".
335 | What is actually happening to the image pixels when we apply blurring may be
336 | difficult to grasp. Let's now visualise the effects of blurring from a different
337 | perspective.
338 |
339 | Let's use the petri-dish image from previous episodes:
340 |
341 | {alt='Bacteria colony'}
344 |
345 | What we want to see here is the pixel intensities from a lateral perspective:
346 | we want to see the profile of intensities.
347 | For instance, let's look for the intensities of the pixels along the horizontal
348 | line at `Y=150`:
349 |
350 | ```python
351 | # read colonies color image and convert to grayscale
352 | image = iio.imread('data/colonies-01.tif')
353 | image_gray = ski.color.rgb2gray(image)
354 |
355 | # define the pixels for which we want to view the intensity (profile)
356 | xmin, xmax = (0, image_gray.shape[1])
357 | Y = ymin = ymax = 150
358 |
359 | # view the image indicating the profile pixels position
360 | fig, ax = plt.subplots()
361 | ax.imshow(image_gray, cmap='gray')
362 | ax.plot([xmin, xmax], [ymin, ymax], color='red')
363 | ```
364 |
365 | {alt='Bacteria colony image with selected pixels marker'}
370 |
371 | We can see the intensity of those pixels with a simple line plot:
372 |
373 | ```python
374 | # select the vector of pixels along "Y"
375 | image_gray_pixels_slice = image_gray[Y, :]
376 |
377 | # guarantee the intensity values are in the [0:255] range (unsigned integers)
378 | image_gray_pixels_slice = ski.img_as_ubyte(image_gray_pixels_slice)
379 |
380 | fig, ax = plt.subplots()
381 | ax.plot(image_gray_pixels_slice, color='red')
382 | ax.set_ylim(255, 0)
383 | ax.set_ylabel('L')
384 | ax.set_xlabel('X')
385 | ```
386 |
387 | {alt='Pixel intensities profile in original image'}
392 |
393 | Now, how does the same set of pixels look in the corresponding *blurred* image?
394 |
395 | ```python
396 | # first, create a blurred version of (grayscale) image
397 | image_blur = ski.filters.gaussian(image_gray, sigma=3)
398 |
399 | # like before, plot the pixels profile along "Y"
400 | image_blur_pixels_slice = image_blur[Y, :]
401 | image_blur_pixels_slice = ski.img_as_ubyte(image_blur_pixels_slice)
402 |
403 | fig, ax = plt.subplots()
404 | ax.plot(image_blur_pixels_slice, color='red')
405 | ax.set_ylim(255, 0)
406 | ax.set_ylabel('L')
407 | ax.set_xlabel('X')
408 | ```
409 |
410 | {alt='Pixel intensities profile in blurred image'}
415 |
416 | And that is why *blurring* is also called *smoothing*.
417 | This is how low-pass filters affect neighbouring pixels.
418 |
419 | Now that we have seen the effects of blurring an image from
420 | two different perspectives, front and lateral, let's take
421 | yet another look using a 3D visualisation.
422 |
423 | :::::::::::::::::::::::::::::::::::::::::: callout
424 |
425 | ### 3D Plots with matplotlib
426 | The code to generate these 3D plots is outside the scope of this lesson
427 | but can be viewed by following the links in the captions.
428 |
429 | ::::::::::::::::::::::::::::::::::::::::::::::::::
430 |
431 |
432 | ![A 3D surface plot of pixel intensities across the whole Petri dish image before blurring.
435 | Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
436 | ](fig/3D_petri_before_blurring.png){
437 | alt='3D surface plot showing pixel intensities across the whole example Petri dish image before blurring'
438 | }
439 |
440 | ![A 3D surface plot of pixel intensities across the whole Petri dish image after blurring.
445 | Image credit: [Carlos H Brandt](https://github.com/chbrandt/).
446 | ](fig/3D_petri_after_blurring.png){
447 | alt='3D surface plot illustrating the smoothing effect on pixel intensities across the whole example Petri dish image after blurring'
448 | }
449 |
450 |
451 |
452 | ::::::::::::::::::::::::::::::::::::::: challenge
453 |
454 | ## Experimenting with sigma values (10 min)
455 |
456 | The size and shape of the kernel used to blur an image can have a
457 | significant effect on the result of the blurring and any downstream analysis
458 | carried out on the blurred image.
459 | The next two exercises ask you to experiment with the sigma values of the kernel,
460 | which is a good way to develop your understanding of how the choice of kernel
461 | can influence the result of blurring.
462 |
463 | First, try running the code above with a range of smaller and larger sigma values.
464 | Generally speaking, what effect does the sigma value have on the
465 | blurred image?
466 |
467 | ::::::::::::::: solution
468 |
469 | ## Solution
470 |
471 | Generally speaking, the larger the sigma value, the more blurry the result.
472 | A larger sigma will tend to get rid of more noise in the image, which will
473 | help for other operations we will cover soon, such as thresholding.
474 | However, a larger sigma also tends to eliminate some of the detail from
475 | the image. So, we must strike a balance with the sigma value used for
476 | blur filters.
477 |
478 |
479 |
480 | :::::::::::::::::::::::::
481 |
482 | ::::::::::::::::::::::::::::::::::::::::::::::::::
483 |
484 | ::::::::::::::::::::::::::::::::::::::: challenge
485 |
486 | ## Experimenting with kernel shape (10 min - optional, not included in timing)
487 |
488 | Now, what is the effect of applying an asymmetric kernel when blurring an image?
489 | Try running the code above with different sigmas in the ry and cx direction.
490 | For example, a sigma of 1.0 in the ry direction, and 6.0 in the cx direction.
491 |
492 | ::::::::::::::: solution
493 |
494 | ## Solution
495 |
496 | ```python
497 | # apply Gaussian blur, with a sigma of 1.0 in the ry direction, and 6.0 in the cx direction
498 | blurred = ski.filters.gaussian(
499 | image, sigma=(1.0, 6.0), truncate=3.5, channel_axis=-1
500 | )
501 |
502 | # display blurred image
503 | fig, ax = plt.subplots()
504 | ax.imshow(blurred)
505 | ```
506 |
507 | {alt='Rectangular kernel blurred image'}
508 |
509 | These unequal sigma values produce a kernel that is rectangular instead of square.
510 | The result is an image that is much more blurred in the X direction than in the
511 | Y direction.
512 | For most use cases, a uniform blurring effect is desirable and
513 | this kind of asymmetric blurring should be avoided.
514 | However, it can be helpful in specific circumstances, e.g., when noise is present in
515 | your image in a particular pattern or orientation, such as vertical lines,
516 | or when you want to
517 | [remove uniform noise without blurring edges present in the image in a particular orientation](https://www.researchgate.net/publication/228567435_An_edge_detection_algorithm_based_on_rectangular_Gaussian_kernels_for_machine_vision_applications).
518 |
519 | :::::::::::::::::::::::::
520 |
521 | ::::::::::::::::::::::::::::::::::::::::::::::::::
522 |
523 | ## Other methods of blurring
524 |
525 | The Gaussian blur is a way to apply a low-pass filter in scikit-image.
526 | It is often used to remove Gaussian (i.e., random) noise in an image.
527 | For other kinds of noise, e.g., "salt and pepper", a
528 | median filter is typically used.
529 | See [the `skimage.filters` documentation](https://scikit-image.org/docs/dev/api/skimage.filters.html#module-skimage.filters)
530 | for a list of available filters.
531 |
532 | :::::::::::::::::::::::::::::::::::::::: keypoints
533 |
534 | - Applying a low-pass blurring filter smooths edges and removes noise from an image.
535 | - Blurring is often used as a first step before we perform thresholding or edge detection.
536 | - The Gaussian blur can be applied to an image with the `ski.filters.gaussian()` function.
537 | - Larger sigma values may remove more noise, but they will also remove detail from an image.
538 |
539 | ::::::::::::::::::::::::::::::::::::::::::::::::::
540 |
--------------------------------------------------------------------------------
/episodes/09-challenges.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Capstone Challenge
3 | teaching: 10
4 | exercises: 40
5 | ---
6 |
7 | ::::::::::::::::::::::::::::::::::::::: objectives
8 |
9 | - Bring together everything you've learnt so far to count bacterial colonies in three images.
10 |
11 | ::::::::::::::::::::::::::::::::::::::::::::::::::
12 |
13 | :::::::::::::::::::::::::::::::::::::::: questions
14 |
15 | - How can we automatically count bacterial colonies with image analysis?
16 |
17 | ::::::::::::::::::::::::::::::::::::::::::::::::::
18 |
19 | In this episode, we will provide a final challenge for you to attempt,
20 | based on all the skills you have acquired so far.
21 | This challenge will be related to the shape of objects in images (*morphometrics*).
22 |
23 | ## Morphometrics: Bacteria Colony Counting
24 |
25 | As mentioned in [the workshop introduction](01-introduction.md),
26 | your morphometric challenge is to determine how many bacteria colonies are in
27 | each of these images:
28 |
29 | {alt='Colony image 1'}
30 |
31 | {alt='Colony image 2'}
32 |
33 | {alt='Colony image 3'}
34 |
35 | The image files can be found at
36 | `data/colonies-01.tif`,
37 | `data/colonies-02.tif`,
38 | and `data/colonies-03.tif`.
39 |
40 | ::::::::::::::::::::::::::::::::::::::: challenge
41 |
42 | ## Morphometrics for bacterial colonies
43 |
44 | Write a Python program that uses scikit-image to
45 | count the number of bacteria colonies in each image,
46 | and for each, produce a new image that highlights the colonies.
47 | The image should look similar to this one:
48 |
49 | {alt='Sample morphometric output'}
50 |
51 | Additionally, print out the number of colonies for each image.
52 |
53 | Use what you have learnt about [histograms](05-creating-histograms.md),
54 | [thresholding](07-thresholding.md) and
55 | [connected component analysis](08-connected-components.md).
56 | Try to put your code into a re-usable function,
57 | so that it can be applied conveniently to any image file.
58 |
59 | ::::::::::::::: solution
60 |
61 | ## Solution
62 |
63 | First, let's work through the process for one image:
64 |
65 | ```python
66 | import imageio.v3 as iio
67 | import ipympl
68 | import matplotlib.pyplot as plt
69 | import numpy as np
70 | import skimage as ski
71 |
72 | %matplotlib widget
73 |
74 | bacteria_image = iio.imread(uri="data/colonies-01.tif")
75 |
76 | # display the image
77 | fig, ax = plt.subplots()
78 | ax.imshow(bacteria_image)
79 | ```
80 |
81 | {alt='Colony image 1'}
82 |
83 | Next, we need to threshold the image to create a mask that covers only
84 | the dark bacterial colonies.
85 | This is easier using a grayscale image, so we convert it here:
86 |
87 | ```python
88 | gray_bacteria = ski.color.rgb2gray(bacteria_image)
89 |
90 | # display the gray image
91 | fig, ax = plt.subplots()
92 | ax.imshow(gray_bacteria, cmap="gray")
93 | ```
94 |
95 | {alt='Gray Colonies'}
96 |
97 | Next, we blur the image and create a histogram:
98 |
99 | ```python
100 | blurred_image = ski.filters.gaussian(gray_bacteria, sigma=1.0)
101 | histogram, bin_edges = np.histogram(blurred_image, bins=256, range=(0.0, 1.0))
102 | fig, ax = plt.subplots()
103 | ax.plot(bin_edges[0:-1], histogram)
104 | ax.set_title("Graylevel histogram")
105 | ax.set_xlabel("gray value")
106 | ax.set_ylabel("pixel count")
107 | ax.set_xlim(0, 1.0)
108 | ```
109 |
110 | {alt='Histogram image'}
111 |
112 | In this histogram, we see three peaks -
113 | the left one (i.e. the darkest pixels) is our colonies,
114 | the central peak is the yellow/brown culture medium in the dish,
115 | and the right one (i.e. the brightest pixels) is the white image background.
116 | Therefore, we choose a threshold that selects the small left peak:
117 |
118 | ```python
119 | mask = blurred_image < 0.2
120 | fig, ax = plt.subplots()
121 | ax.imshow(mask, cmap="gray")
122 | ```
123 |
124 | {alt='Colony mask image'}
125 |
126 | This mask shows us where the colonies are in the image -
127 | but how can we count how many there are?
128 | This requires connected component analysis:
129 |
130 | ```python
131 | labeled_image, count = ski.measure.label(mask, return_num=True)
132 | print(count)
133 | ```
134 |
135 | Finally, we create the summary image of the coloured colonies on top of
136 | the grayscale image:
137 |
138 | ```python
139 | # color each of the colonies a different color
140 | colored_label_image = ski.color.label2rgb(labeled_image, bg_label=0)
141 | # give our grayscale image rgb channels, so we can add the colored colonies
142 | summary_image = ski.color.gray2rgb(gray_bacteria)
143 | summary_image[mask] = colored_label_image[mask]
144 |
145 | # plot overlay
146 | fig, ax = plt.subplots()
147 | ax.imshow(summary_image)
148 | ```
149 |
150 | {alt='Sample morphometric output'}
151 |
152 | Now that we've completed the task for one image,
153 | we need to repeat this for the remaining two images.
154 | This is a good point to collect the lines above into a re-usable function:
155 |
156 | ```python
157 | def count_colonies(image_filename):
158 | bacteria_image = iio.imread(image_filename)
159 | gray_bacteria = ski.color.rgb2gray(bacteria_image)
160 | blurred_image = ski.filters.gaussian(gray_bacteria, sigma=1.0)
161 | mask = blurred_image < 0.2
162 | labeled_image, count = ski.measure.label(mask, return_num=True)
163 | print(f"There are {count} colonies in {image_filename}")
164 |
165 | colored_label_image = ski.color.label2rgb(labeled_image, bg_label=0)
166 | summary_image = ski.color.gray2rgb(gray_bacteria)
167 | summary_image[mask] = colored_label_image[mask]
168 | fig, ax = plt.subplots()
169 | ax.imshow(summary_image)
170 | ```
171 |
172 | Now we can do this analysis on all the images via a for loop:
173 |
174 | ```python
175 | for image_filename in ["data/colonies-01.tif", "data/colonies-02.tif", "data/colonies-03.tif"]:
176 | count_colonies(image_filename=image_filename)
177 | ```
178 |
179 | ![](fig/colonies-01-summary.png){alt='Colony 1 output'}
180 | ![](fig/colonies-02-summary.png){alt='Colony 2 output'}
181 | ![](fig/colonies-03-summary.png){alt='Colony 3 output'}
182 |
183 | You'll notice that for the images with more colonies, the results aren't perfect.
184 | For example, some small colonies are missing,
185 | and there are likely some small black spots being labelled incorrectly as colonies.
186 | You could expand this solution to, for example,
187 | use an automatically determined threshold for each image,
188 | which may fit each image better.
189 | Also, you could filter out colonies below a certain size
190 | (as we did in [the *Connected Component Analysis* episode](08-connected-components.md)).
191 | You'll also see that some touching colonies are merged into one big colony.
192 | This could be fixed with more complicated segmentation methods
193 | (outside of the scope of this lesson) like
194 | [watershed](https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_watershed.html).
195 |
196 | :::::::::::::::::::::::::
197 |
198 | ::::::::::::::::::::::::::::::::::::::::::::::::::
199 |
200 | ::::::::::::::::::::::::::::::::::::::: challenge
201 |
202 | ## Colony counting with minimum size and automated threshold (optional, not included in timing)
203 |
204 | Modify your function from the previous exercise for colony counting to (i) exclude objects smaller
205 | than a specified size and (ii) use an automated thresholding approach, e.g. Otsu, to mask the
206 | colonies.
207 |
208 | ::::::::::::::::::::::::::::::::::::::: solution
209 |
210 | Here is a modified function with the requested features. Note that when calculating the Otsu
211 | threshold we don't include the very bright pixels outside the dish.
212 |
213 | ```python
214 | def count_colonies_enhanced(image_filename, sigma=1.0, min_colony_size=10, connectivity=2):
215 |
216 | bacteria_image = iio.imread(image_filename)
217 | gray_bacteria = ski.color.rgb2gray(bacteria_image)
218 | blurred_image = ski.filters.gaussian(gray_bacteria, sigma=sigma)
219 |
220 | # create mask excluding the very bright pixels outside the dish
221 | # we don't want to include these when calculating the automated threshold
222 | mask = blurred_image < 0.90
223 | # calculate an automated threshold value within the dish using the Otsu method
224 | t = ski.filters.threshold_otsu(blurred_image[mask])
225 | # update mask to select pixels both within the dish and less than t
226 | mask = np.logical_and(mask, blurred_image < t)
227 | # remove objects smaller than specified area
228 | mask = ski.morphology.remove_small_objects(mask, min_size=min_colony_size)
229 |
230 | labeled_image, count = ski.measure.label(mask, connectivity=connectivity, return_num=True)
231 | print(f"There are {count} colonies in {image_filename}")
232 | colored_label_image = ski.color.label2rgb(labeled_image, bg_label=0)
233 | summary_image = ski.color.gray2rgb(gray_bacteria)
234 | summary_image[mask] = colored_label_image[mask]
235 | fig, ax = plt.subplots()
236 | ax.imshow(summary_image)
237 | ```
238 |
239 | :::::::::::::::::::::::::
240 |
241 | ::::::::::::::::::::::::::::::::::::::::::::::::::
242 |
243 | :::::::::::::::::::::::::::::::::::::::: keypoints
244 |
245 | - Using thresholding, connected component analysis and other tools we can automatically segment images of bacterial colonies.
246 | - These methods are useful for many scientific problems, especially those involving morphometrics.
247 |
248 | ::::::::::::::::::::::::::::::::::::::::::::::::::
249 |
--------------------------------------------------------------------------------
/episodes/data/beads.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/beads.jpg
--------------------------------------------------------------------------------
/episodes/data/board.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/board.jpg
--------------------------------------------------------------------------------
/episodes/data/centers.txt:
--------------------------------------------------------------------------------
1 | 91 108
2 | 161 108
3 | 231 108
4 | 301 108
5 | 371 108
6 | 441 108
7 | 511 108
8 | 581 108
9 | 91 180
10 | 161 180
11 | 231 180
12 | 301 180
13 | 371 180
14 | 441 180
15 | 511 180
16 | 581 180
17 | 91 252
18 | 161 252
19 | 231 252
20 | 301 252
21 | 371 252
22 | 441 252
23 | 511 252
24 | 581 252
25 | 91 324
26 | 161 324
27 | 231 324
28 | 301 324
29 | 371 324
30 | 441 324
31 | 511 324
32 | 581 324
33 | 91 396
34 | 161 396
35 | 231 396
36 | 301 396
37 | 371 396
38 | 441 396
39 | 511 396
40 | 581 396
41 | 91 468
42 | 161 468
43 | 231 468
44 | 301 468
45 | 371 468
46 | 441 468
47 | 511 468
48 | 581 468
49 | 91 540
50 | 161 540
51 | 231 540
52 | 301 540
53 | 371 540
54 | 441 540
55 | 511 540
56 | 581 540
57 | 91 612
58 | 161 612
59 | 231 612
60 | 301 612
61 | 371 612
62 | 441 612
63 | 511 612
64 | 581 612
65 | 91 684
66 | 161 684
67 | 231 684
68 | 301 684
69 | 371 684
70 | 441 684
71 | 511 684
72 | 581 684
73 | 91 756
74 | 161 756
75 | 231 756
76 | 301 756
77 | 371 756
78 | 441 756
79 | 511 756
80 | 581 756
81 | 91 828
82 | 161 828
83 | 231 828
84 | 301 828
85 | 371 828
86 | 441 828
87 | 511 828
88 | 581 828
89 | 91 900
90 | 161 900
91 | 231 900
92 | 301 900
93 | 371 900
94 | 441 900
95 | 511 900
96 | 581 900
--------------------------------------------------------------------------------
/episodes/data/chair.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/chair.jpg
--------------------------------------------------------------------------------
/episodes/data/colonies-01.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/colonies-01.tif
--------------------------------------------------------------------------------
/episodes/data/colonies-02.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/colonies-02.tif
--------------------------------------------------------------------------------
/episodes/data/colonies-03.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/colonies-03.tif
--------------------------------------------------------------------------------
/episodes/data/eight.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/eight.tif
--------------------------------------------------------------------------------
/episodes/data/gaussian-original.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/gaussian-original.png
--------------------------------------------------------------------------------
/episodes/data/letterA.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/letterA.tif
--------------------------------------------------------------------------------
/episodes/data/maize-root-cluster.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/maize-root-cluster.jpg
--------------------------------------------------------------------------------
/episodes/data/maize-roots-grayscale.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/maize-roots-grayscale.jpg
--------------------------------------------------------------------------------
/episodes/data/maize-seedlings.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/maize-seedlings.tif
--------------------------------------------------------------------------------
/episodes/data/plant-seedling.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/plant-seedling.jpg
--------------------------------------------------------------------------------
/episodes/data/remote-control.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/remote-control.jpg
--------------------------------------------------------------------------------
/episodes/data/shapes-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/shapes-01.jpg
--------------------------------------------------------------------------------
/episodes/data/shapes-02.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/shapes-02.jpg
--------------------------------------------------------------------------------
/episodes/data/sudoku.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/sudoku.png
--------------------------------------------------------------------------------
/episodes/data/tree.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/tree.jpg
--------------------------------------------------------------------------------
/episodes/data/trial-016.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/trial-016.jpg
--------------------------------------------------------------------------------
/episodes/data/trial-020.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/trial-020.jpg
--------------------------------------------------------------------------------
/episodes/data/trial-216.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/trial-216.jpg
--------------------------------------------------------------------------------
/episodes/data/trial-293.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/trial-293.jpg
--------------------------------------------------------------------------------
/episodes/data/wellplate-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/wellplate-01.jpg
--------------------------------------------------------------------------------
/episodes/data/wellplate-02.tif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/data/wellplate-02.tif
--------------------------------------------------------------------------------
/episodes/fig/3D_petri_after_blurring.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/3D_petri_after_blurring.png
--------------------------------------------------------------------------------
/episodes/fig/3D_petri_before_blurring.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/3D_petri_before_blurring.png
--------------------------------------------------------------------------------
/episodes/fig/Gaussian_2D.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/Gaussian_2D.png
--------------------------------------------------------------------------------
/episodes/fig/beads-canny-ui.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/beads-canny-ui.png
--------------------------------------------------------------------------------
/episodes/fig/beads-out.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/beads-out.png
--------------------------------------------------------------------------------
/episodes/fig/black-and-white-edge-pixels.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/black-and-white-edge-pixels.jpg
--------------------------------------------------------------------------------
/episodes/fig/black-and-white-gradient.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/black-and-white-gradient.png
--------------------------------------------------------------------------------
/episodes/fig/black-and-white.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/black-and-white.jpg
--------------------------------------------------------------------------------
/episodes/fig/blur-demo.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/blur-demo.gif
--------------------------------------------------------------------------------
/episodes/fig/board-coordinates.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/board-coordinates.jpg
--------------------------------------------------------------------------------
/episodes/fig/board-final.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/board-final.jpg
--------------------------------------------------------------------------------
/episodes/fig/cartesian-coordinates.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/cartesian-coordinates.png
--------------------------------------------------------------------------------
/episodes/fig/cat-corner-blue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/cat-corner-blue.png
--------------------------------------------------------------------------------
/episodes/fig/cat-eye-pixels.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/cat-eye-pixels.jpg
--------------------------------------------------------------------------------
/episodes/fig/cat.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/cat.jpg
--------------------------------------------------------------------------------
/episodes/fig/chair-layers-rgb.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/chair-layers-rgb.png
--------------------------------------------------------------------------------
/episodes/fig/chair-original.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/chair-original.jpg
--------------------------------------------------------------------------------
/episodes/fig/checkerboard-blue-channel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/checkerboard-blue-channel.png
--------------------------------------------------------------------------------
/episodes/fig/checkerboard-green-channel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/checkerboard-green-channel.png
--------------------------------------------------------------------------------
/episodes/fig/checkerboard-red-channel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/checkerboard-red-channel.png
--------------------------------------------------------------------------------
/episodes/fig/checkerboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/checkerboard.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-01-gray.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-01-gray.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-01-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-01-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-01-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-01-mask.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-01-summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-01-summary.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-01.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-01.jpg
--------------------------------------------------------------------------------
/episodes/fig/colonies-02-summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-02-summary.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-02.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-02.jpg
--------------------------------------------------------------------------------
/episodes/fig/colonies-03-summary.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-03-summary.png
--------------------------------------------------------------------------------
/episodes/fig/colonies-03.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies-03.jpg
--------------------------------------------------------------------------------
/episodes/fig/colonies01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colonies01.png
--------------------------------------------------------------------------------
/episodes/fig/colony-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colony-mask.png
--------------------------------------------------------------------------------
/episodes/fig/colour-table.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/colour-table.png
--------------------------------------------------------------------------------
/episodes/fig/combination.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/combination.png
--------------------------------------------------------------------------------
/episodes/fig/drawing-practice.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/drawing-practice.jpg
--------------------------------------------------------------------------------
/episodes/fig/eight.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/eight.png
--------------------------------------------------------------------------------
/episodes/fig/five.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/five.png
--------------------------------------------------------------------------------
/episodes/fig/four-maize-roots-binary-improved.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/four-maize-roots-binary-improved.jpg
--------------------------------------------------------------------------------
/episodes/fig/four-maize-roots-binary.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/four-maize-roots-binary.jpg
--------------------------------------------------------------------------------
/episodes/fig/four-maize-roots.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/four-maize-roots.jpg
--------------------------------------------------------------------------------
/episodes/fig/gaussian-blurred.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/gaussian-blurred.png
--------------------------------------------------------------------------------
/episodes/fig/gaussian-kernel.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/gaussian-kernel.png
--------------------------------------------------------------------------------
/episodes/fig/grayscale.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/grayscale.png
--------------------------------------------------------------------------------
/episodes/fig/image-coordinates.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/image-coordinates.png
--------------------------------------------------------------------------------
/episodes/fig/jupyter_overview.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/jupyter_overview.png
--------------------------------------------------------------------------------
/episodes/fig/left-hand-coordinates.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/left-hand-coordinates.png
--------------------------------------------------------------------------------
/episodes/fig/maize-root-cluster-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-root-cluster-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/maize-root-cluster-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-root-cluster-mask.png
--------------------------------------------------------------------------------
/episodes/fig/maize-root-cluster-selected.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-root-cluster-selected.png
--------------------------------------------------------------------------------
/episodes/fig/maize-root-cluster-threshold.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-root-cluster-threshold.jpg
--------------------------------------------------------------------------------
/episodes/fig/maize-roots-threshold.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-roots-threshold.png
--------------------------------------------------------------------------------
/episodes/fig/maize-seedling-enlarged.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-seedling-enlarged.jpg
--------------------------------------------------------------------------------
/episodes/fig/maize-seedling-original.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-seedling-original.jpg
--------------------------------------------------------------------------------
/episodes/fig/maize-seedlings-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-seedlings-mask.png
--------------------------------------------------------------------------------
/episodes/fig/maize-seedlings-masked.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-seedlings-masked.jpg
--------------------------------------------------------------------------------
/episodes/fig/maize-seedlings.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/maize-seedlings.jpg
--------------------------------------------------------------------------------
/episodes/fig/petri-blurred-intensities-plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/petri-blurred-intensities-plot.png
--------------------------------------------------------------------------------
/episodes/fig/petri-dish.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/petri-dish.png
--------------------------------------------------------------------------------
/episodes/fig/petri-original-intensities-plot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/petri-original-intensities-plot.png
--------------------------------------------------------------------------------
/episodes/fig/petri-selected-pixels-marker.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/petri-selected-pixels-marker.png
--------------------------------------------------------------------------------
/episodes/fig/plant-seedling-colour-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/plant-seedling-colour-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/plant-seedling-grayscale-histogram-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/plant-seedling-grayscale-histogram-mask.png
--------------------------------------------------------------------------------
/episodes/fig/plant-seedling-grayscale-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/plant-seedling-grayscale-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/plant-seedling-grayscale.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/plant-seedling-grayscale.png
--------------------------------------------------------------------------------
/episodes/fig/quality-histogram.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/quality-histogram.jpg
--------------------------------------------------------------------------------
/episodes/fig/quality-jpg.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/quality-jpg.jpg
--------------------------------------------------------------------------------
/episodes/fig/quality-original.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/quality-original.jpg
--------------------------------------------------------------------------------
/episodes/fig/quality-tif.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/quality-tif.jpg
--------------------------------------------------------------------------------
/episodes/fig/rectangle-gaussian-blurred.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/rectangle-gaussian-blurred.png
--------------------------------------------------------------------------------
/episodes/fig/remote-control-masked.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/remote-control-masked.jpg
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-areas-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-areas-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-canny-edge-output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-canny-edge-output.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-canny-edges.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-canny-edges.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-canny-track-edges.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-canny-track-edges.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-cca-detail.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-cca-detail.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-filtered-objects.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-filtered-objects.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-grayscale.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-grayscale.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-labeled.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-labeled.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-mask.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-objects-coloured-by-area.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-objects-coloured-by-area.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-01-selected.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-01-selected.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-02-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-02-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-02-mask.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-02-mask.png
--------------------------------------------------------------------------------
/episodes/fig/shapes-02-selected.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/shapes-02-selected.png
--------------------------------------------------------------------------------
/episodes/fig/source/06-blurring/create_blur_animation.py:
--------------------------------------------------------------------------------
1 | ### METADATA
2 | # author: Marco Dalla Vecchia @marcodallavecchia
3 | # description: Simple blurring animation of a simple image
4 | # data-source: letterA.tif was created using ImageJ (https://imagej.net/ij/)
5 | ###
6 |
7 | ### INFO
8 | # This script creates the animated illustration of blurring in episode 6
9 | ###
10 |
11 | ### USAGE
12 | # The script was written in Python 3.12 and required the following Python packages:
13 | # - numpy==2.2.3
14 | # - scipy==1.15.2
15 | # - matplotlib==3.10.1
16 | # - tqdm==4.67.1
17 | #
18 | # The script can be executed with
19 | # $ python create_blur_animation.py
20 | # The output animation will be saved directly in the fig folder where the markdown lesson file will pick it up
21 | ###
22 |
23 | ### POTENTIAL IMPROVEMENTS
24 | # - Change colors for rectangular patches in animation
25 | # - Ask for image input instead of hard-coding it
26 | # - Ask for FPS as input
27 | # - Ask for animation format output
28 |
29 | # Import packages
30 | import numpy as np
31 | from scipy.ndimage import convolve
32 | from matplotlib import pyplot as plt
33 | from matplotlib import patches as p
34 | from matplotlib.animation import FuncAnimation
35 | from tqdm import tqdm
36 |
37 | # Path to input and output images
38 | data_path = "../../../data/"
39 | fig_path = "../../../fig/"
40 | input_file = data_path + "letterA.tif"
41 | output_file = fig_path + "blur-demo.gif"
42 |
43 | # Change colors here to improve accessibility
44 | kernel_color = "tab:red"
45 | center_color = "tab:olive"
46 | kernel_size = 3
47 |
48 | ### ANIMATION FUNCTIONS
49 | def init():
50 | """
51 | Initialization function
52 | - Set image array data
53 | - Autoscale image display
54 | - Set XY coordinates of rectangular patches
55 | """
56 | im.set_array(img_convolved)
57 | im.autoscale()
58 | k_rect.set_xy((-0.5, -0.5))
59 | c_rect1.set_xy((kernel_size / 2 - 1, kernel_size / 2 - 1))
60 | return [im, k_rect, c_rect1]
61 |
62 | def update(frame):
63 | """
64 | Animation update function. For every frame do the following:
65 | - Update X and Y coordinates of rectangular patch for kernel
66 | - Update X and Y coordinates of rectangular patch for central pixel
67 | - Update blurred image frame
68 | """
69 | pbar.update(1)
70 | row = (frame % total_frames) // (img_pad.shape[1] - kernel_size + 1)  # row index: divide by the number of output columns
71 | col = (frame % total_frames) % (img_pad.shape[1] - kernel_size + 1)  # col index
72 |
73 | k_rect.set_x(col - 0.5)
74 | c_rect1.set_x(col + (kernel_size/2 - 1))
75 | k_rect.set_y(row - 0.5)
76 | c_rect1.set_y(row + (kernel_size/2 - 1))
77 |
78 | im.set_array(all_frames[frame])
79 | im.autoscale()
80 |
81 | return [im, k_rect, c_rect1]
82 |
83 | # MAIN PROGRAM
84 | if __name__ == "__main__":
85 |
86 | print(f"Creating blurring animation with kernel size: {kernel_size}")
87 |
88 | # Load image
89 | img = plt.imread(input_file)
90 |
91 | ### HERE WE USE THE CONVOLVE FUNCTION TO GET THE FINAL BLURRED IMAGE
92 | # I chose a simple mean filter (equal kernel weights)
93 | kernel = np.ones(shape=(kernel_size, kernel_size)) / kernel_size ** 2 # create kernel
94 | # convolve the image, i.e., apply mean filter
95 | img_convolved = convolve(img, kernel, mode='constant', cval=0) # pad borders with zero like below for consistency
96 |
97 |
98 | ### HERE WE CONVOLVE MANUALLY STEP-BY-STEP TO CREATE ANIMATION
99 | img_pad = np.pad(img, (int(np.ceil(kernel_size/2) - 1), int(np.ceil(kernel_size/2) - 1))) # Pad image to deal with borders
100 | new_img = np.zeros(img.shape, dtype=np.uint16) # this will be the blurred final image
101 |
102 | # add first frame with complete blurred image for print version of GIF
103 | all_frames = [img_convolved]
104 |
105 | # precompute animation frames and append to the list
106 | total_frames = (img_pad.shape[0] - kernel_size + 1) * (img_pad.shape[1] - kernel_size + 1) # total frames if by chance image is not squared
107 | for frame in range(total_frames):
108 | row = (frame % total_frames) // (img_pad.shape[1] - kernel_size + 1) # row index: divide by the number of output columns
109 | col = (frame % total_frames) % (img_pad.shape[1] - kernel_size + 1) # col index
110 | img_chunk = img_pad[row:row + kernel_size, col:col + kernel_size] # get current image chunk inside the kernel
111 | new_img[row, col] = np.mean(img_chunk).astype(np.uint16) # calculate its mean -> mean filter
112 | all_frames.append(new_img.copy()) # append to animation frames list
113 |
114 | # We now have an extra frame
115 | total_frames += 1
116 |
117 | ### FROM HERE WE START CREATING THE ANIMATION
118 | # Initialize canvas
119 | f, (ax1, ax2) = plt.subplots(ncols=2, figsize=(10, 5))
120 |
121 | # Display the padded image -> this one won't change during the animation
122 | ax1.imshow(img_pad, cmap="gray")
123 | # Initialize the blurred image -> this is the first frame with already the final result
124 | im = ax2.imshow(img_convolved, animated=True, cmap="gray")
125 |
126 | # Define rectangular patches to identify moving kernel
127 | k_rect = p.Rectangle((-0.5, -0.5), kernel_size, kernel_size, linewidth=2, edgecolor=kernel_color, facecolor="none", alpha=0.8) # kernel rectangle
128 | c_rect1 = p.Rectangle(((kernel_size/2 - 1), (kernel_size/2 - 1)), 1, 1, linewidth=2, edgecolor=center_color, facecolor="none") # central pixel rectangle
129 | # Add them to the figure
130 | ax1.add_patch(k_rect)
131 | ax1.add_patch(c_rect1)
132 |
133 | # Fix limits of the image on the right (without padding) so that it is the same size as the image on the left (with padding)
134 | ax2.set(
135 | ylim=((img_pad.shape[0] - kernel_size / 2), -kernel_size / 2),
136 | xlim=(-kernel_size / 2, (img_pad.shape[1] - kernel_size / 2))
137 | )
138 |
139 | # We don't need to see the ticks
140 | ax1.axis("off")
141 | ax2.axis("off")
142 |
143 | # Create progress bar to visualize animation progress
144 | pbar = tqdm(total=total_frames)
145 |
146 | ### HERE WE CREATE THE ANIMATION
147 | # Use FuncAnimation to create the animation
148 | ani = FuncAnimation(
149 | f, update,
150 | frames=range(total_frames),
151 | interval=50, # we could change the animation speed
152 | init_func=init,
153 | blit=True
154 | )
155 |
156 | # Export animation
157 | plt.tight_layout()
158 | ani.save(output_file)
159 | pbar.close()
160 | print("Animation exported")
161 |
--------------------------------------------------------------------------------
/episodes/fig/sudoku-gray.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/sudoku-gray.png
--------------------------------------------------------------------------------
/episodes/fig/three-colours.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/three-colours.png
--------------------------------------------------------------------------------
/episodes/fig/wellplate-01-masked.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/wellplate-01-masked.jpg
--------------------------------------------------------------------------------
/episodes/fig/wellplate-02-histogram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/wellplate-02-histogram.png
--------------------------------------------------------------------------------
/episodes/fig/wellplate-02-masked.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/wellplate-02-masked.jpg
--------------------------------------------------------------------------------
/episodes/fig/wellplate-02.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/wellplate-02.jpg
--------------------------------------------------------------------------------
/episodes/fig/zero.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/fig/zero.png
--------------------------------------------------------------------------------
/episodes/files/cheatsheet.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/datacarpentry/image-processing/3982f8cd4ac96720d1dfb1846ffbf0c721bdeae2/episodes/files/cheatsheet.pdf
--------------------------------------------------------------------------------
/episodes/files/environment.yml:
--------------------------------------------------------------------------------
1 | name: dc-image
2 | channels:
3 | - conda-forge
4 | dependencies:
5 | - python>=3.11
6 | - jupyterlab
7 | - numpy
8 | - matplotlib
9 | - scikit-image
10 | - ipympl
11 | - imageio
12 |
--------------------------------------------------------------------------------
/image-processing.Rproj:
--------------------------------------------------------------------------------
1 | Version: 1.0
2 |
3 | RestoreWorkspace: Default
4 | SaveWorkspace: Default
5 | AlwaysSaveHistory: Default
6 |
7 | EnableCodeIndexing: Yes
8 | UseSpacesForTab: Yes
9 | NumSpacesForTab: 2
10 | Encoding: UTF-8
11 |
12 | RnwWeave: Sweave
13 | LaTeX: pdfLaTeX
14 |
15 | AutoAppendNewline: Yes
16 | StripTrailingWhitespace: Yes
17 |
18 | BuildType: Website
19 |
--------------------------------------------------------------------------------
/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | site: sandpaper::sandpaper_site
3 | ---
4 |
5 | This lesson shows how to use Python and scikit-image to do basic image processing.
6 |
7 | :::::::::::::::::::::::::::::::::::::::::: prereq
8 |
9 | ## Prerequisites
10 |
11 | This lesson assumes you have a working knowledge of Python and some previous exposure to the Bash shell.
12 | These requirements can be fulfilled by:
13 | a) completing a Software Carpentry Python workshop **or**
14 | b) completing a Data Carpentry Ecology workshop (with Python) **and** a Data Carpentry Genomics workshop **or**
15 | c) independent exposure to both Python and the Bash shell.
16 |
17 | If you're unsure whether you have enough experience to participate in this workshop, please read over
18 | [this detailed list](learners/prereqs.md), which gives all of the functions, operators, and other concepts you will need
19 | to be familiar with.
20 |
21 |
22 | ::::::::::::::::::::::::::::::::::::::::::::::::::
23 |
24 | Before following the lesson, please [make sure you have the software and data required](learners/setup.md).
25 |
--------------------------------------------------------------------------------
/instructors/instructor-notes.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Instructor Notes
3 | ---
4 |
5 | ## Estimated Timings
6 |
7 | This is a relatively new curriculum.
8 | The estimated timings for each episode are based on limited experience
9 | and should be taken as a rough guide only.
10 | If you teach the curriculum,
11 | the Maintainers would be delighted to receive feedback with
12 | information about the time that was required
13 | for teaching and exercises in each episode of your workshop.
14 |
15 | Please [open an issue on the repository](https://github.com/datacarpentry/image-processing/issues/new/choose)
16 | to share your experience with the lesson Maintainers.
17 |
18 | ## Working with Jupyter notebooks
19 |
20 | - This lesson is designed to be taught using Jupyter notebooks. We recommend that instructors guide learners to create a new Jupyter notebook for each episode.
21 |
22 | - Python `import` statements typically appear in the first code block near the top of each episode. In some cases, the purpose of specific libraries is briefly explained as part of the exercises.
23 |
24 | - The possibility of executing the code cells in a notebook in arbitrary order can cause confusion. Using the "restart kernel and run all cells" feature is one way to accomplish linear execution of the notebook and may help locate and identify coding issues.
25 |
26 | - Many episodes in this lesson load image files from disk. To avoid name clashes in episodes that load multiple image files, we have used unique variable names (instead of generic names such as `image` or `img`). When copying code snippets between exercises, the variable names may have to be changed. The maintainers are keen to receive feedback on whether this convention proves practical in workshops.
27 |
28 | ## Working with imageio and skimage
29 |
30 | - `imageio.v3` allows loading images in different modes by passing the `mode=` argument to `imread()`. Depending on the image file and mode, the `dtype` of the resulting Numpy array can differ (e.g., `dtype('uint8')` or `dtype('float64')`). In the lesson, `skimage.util.img_as_ubyte()` and `skimage.util.img_as_float()` are used to convert the data type when necessary.
31 |
32 | - Some `skimage` functions implicitly convert the pixel values to floating-point numbers. Several callout boxes have been added throughout the lesson to raise awareness, but this may still prompt questions from learners.
33 |
34 | - In certain situations, `imread()` returns a read-only array. This depends on the image file type and on the backend (e.g., Pillow). If a read-only error is encountered, `image = np.array(image)` can be used to create a writable copy of the array before manipulating its pixel values.
35 |
36 | - Be aware that learners might get surprising results in the *Keeping only low intensity pixels* exercise, if `plt.imshow` is called without the `vmax` parameter.
37 | A detailed explanation is given in the *Plotting single channel images (cmap, vmin, vmax)* callout box.
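
The dtype conversion and read-only issues noted above can be illustrated with a numpy-only sketch (the scaling shown only approximates what `skimage.util.img_as_ubyte()` does; the exact rounding may differ):

```python
import numpy as np

# A float image with values in [0.0, 1.0], as some imread backends return
img_float = np.array([[0.0, 0.5], [0.25, 1.0]])

# Approximately what skimage.util.img_as_ubyte() does: scale to [0, 255] as uint8
img_ubyte = np.round(img_float * 255).astype(np.uint8)
print(img_ubyte)  # values 0, 128, 64, 255 as uint8

# If imread() returns a read-only array, make a writable copy before editing pixels
readonly = np.arange(4).reshape(2, 2)
readonly.setflags(write=False)      # simulate a read-only result from imread()
writable = np.array(readonly)       # fresh, writable copy
writable[0, 0] = 99
```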
38 |
39 | ## Additional resources
40 |
41 | - A cheat-sheet with graphics illustrating some concepts in this lesson is available:
42 | - [Cheat-sheet HTML for viewing in browser](../episodes/files/cheatsheet.html).
43 | - [PDF version for printing](../episodes/files/cheatsheet.pdf).
44 |
45 |
46 | ## Questions from Learners
47 |
48 | ### Q: Where would I find out that coordinates are `x,y` not `r,c`?
49 |
50 | A: In an image viewer, hover your cursor over the top-left corner (the origin), then move down and see which number increases.
51 |
52 | ### Q: Why does saving the image take such a long time? (skimage-images/saving images PNG example)
53 |
54 | A: It is a large image.
55 |
56 | ### Q: Are the coordinates represented `x,y` or `r,c` in the code (e.g. in `array.shape`)?
57 |
58 | A: With numpy arrays, always `r,c`, unless clearly specified otherwise; coordinates are only represented as `x,y` when the image is displayed by a viewer.
59 | The take-home message is: don't rely on it - always check!
60 |
61 | ### Q: What if I want to increase size? How does `skimage` upsample? (image resizing)
62 |
63 | A: When resizing or rescaling an image, `skimage` performs interpolation to up-size or down-size the image. Technically, this is done by fitting a [spline](https://en.wikipedia.org/wiki/Spline_\(mathematics\)) function to the image data. The spline function is based on the intensity values in the original image and can be used to approximate the intensity at any given coordinate in the resized/rescaled image. Note that the intensity values in the new image are an approximation of the original values but should not be treated as the actual, observed data. `skimage.transform.resize` has a number of optional parameters that allow the user to control, e.g., the order of the spline interpolation. The [scikit-image documentation](https://scikit-image.org/docs/stable/api/skimage.transform.html#skimage.transform.resize) provides additional information on other parameters.
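
A 1-D analogue of this interpolation can be sketched with numpy alone. This uses linear (order-1) interpolation rather than the higher-order splines `skimage.transform.resize` uses by default, but the principle is the same: new sample values are approximated between the original ones.

```python
import numpy as np

# Upsample a "row" of 4 pixel values to 7 samples over the same range
original = np.array([0.0, 10.0, 20.0, 30.0])
new_positions = np.linspace(0, 3, 7)  # 7 evenly spaced coordinates
upsampled = np.interp(new_positions, np.arange(4), original)  # linear interpolation
print(upsampled)  # 0, 5, 10, 15, 20, 25, 30: approximated values, not observed data
```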
64 |
65 | ### Q: Why are some lines missing from the sudoku image when it is displayed inline in a Jupyter Notebook? (skimage-images/low intensity pixels exercise)
66 |
67 | A: They are actually present in the image but not shown due to interpolation.
68 |
69 | ### Q: Does blurring take values of pixels already blurred, or is blurring done on original pixel values only?
70 |
71 | A: Blurring is done on original pixel values only.
72 |
73 | ### Q: Can you blur while retaining edges?
74 |
75 | A: Yes, many different filters/kernels exist, some of which are designed to be edge-preserving.
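
One edge-preserving option is a median filter. A minimal numpy-only sketch, using a made-up 1-D signal, shows why it behaves differently from a mean filter:

```python
import numpy as np

# A step edge (0 -> 100) with one noisy pixel (the 30)
signal = np.array([0, 0, 30, 0, 0, 100, 100, 100], dtype=float)

def sliding_filter(sig, reduce):
    """Apply a size-3 sliding-window filter; endpoints are left unchanged."""
    out = sig.copy()
    for i in range(1, len(sig) - 1):
        out[i] = reduce(sig[i - 1:i + 2])
    return out

mean_blur = sliding_filter(signal, np.mean)      # smears both the noise and the edge
median_blur = sliding_filter(signal, np.median)  # removes the noise, keeps the edge sharp

print(median_blur)  # the step from 0 to 100 is still abrupt
```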
76 |
77 | ## Troubleshooting
78 |
79 | Learners have reported that, on some operating systems, Shift\+Enter does not run a cell in Jupyter while the caps lock key is active.
80 |
81 |
82 |
--------------------------------------------------------------------------------
/learners/discuss.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Discussion
3 | ---
4 |
5 | ## Choice of Image Processing Library
6 |
7 | This lesson was originally designed to use [OpenCV](https://opencv.org/)
8 | and the [`opencv-python`](https://pypi.org/project/opencv-python/)
9 | library ([see the last version of the lesson repository to use
10 | OpenCV](https://github.com/datacarpentry/image-processing/tree/770a2416fb5c6bd5a4b8e728b3e338667e47b0ed)).
11 |
12 | In 2019-2020 the lesson was adapted to use
13 | [scikit-image](https://scikit-image.org/), as this library has proven
14 | easier to install and enjoys more extensive documentation and support.
15 |
16 | ## Choice of Image Viewer
17 |
18 | When the lesson was first adapted to use scikit-image (see above),
19 | `skimage.viewer.ImageViewer` was used to inspect images. [This viewer
20 | is deprecated](https://scikit-image.org/docs/stable/user_guide/visualization.html)
21 | and the lesson maintainers chose to leverage `matplotlib.pyplot.imshow`
22 | with the pan/zoom and mouse-location tools built into the [Matplotlib
23 | GUI](https://matplotlib.org/stable/users/interactive.html). The
24 | [`ipympl` package](https://github.com/matplotlib/ipympl) is required
25 | to enable the interactive features of Matplotlib in Jupyter notebooks
26 | and in Jupyter Lab. This package is included in the setup
27 | instructions, and the backend can be enabled using the `%matplotlib widget` magic.
28 |
29 | The maintainers discussed the possibility of using [napari](https://napari.org/)
30 | as an image viewer in the lesson, acknowledging its growing popularity
31 | and some of the advantages it holds over the Matplotlib-based
32 | approach, especially for working with image data in more than two
33 | dimensions. However, at the time of discussion, napari was still in
34 | an alpha state of development, and could not be relied on for easy and
35 | error-free installation on all operating systems, which makes it less
36 | well-suited to use in an official Data Carpentry curriculum.
37 |
38 | The lesson Maintainers and/or Curriculum Advisory Committee (when it
39 | exists) will monitor the progress of napari and other image viewers,
40 | and may opt to adopt a new platform in future.
41 |
--------------------------------------------------------------------------------
/learners/edge-detection.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: 'Extra Episode: Edge Detection'
3 | teaching: 0
4 | exercises: 0
5 | ---
6 |
7 | :::::::::::::::::::::::::::::: questions
8 |
9 | - How can we automatically detect the edges of the objects in an image?
10 |
11 | ::::::::::::::::::::::::::::::::::::::::
12 |
13 | :::::::::::::::::::::::::::::: objectives
14 |
15 | - Apply Canny edge detection to an image.
16 | - Explain how we can use sliders to expedite finding appropriate parameter values
17 | for our scikit-image function calls.
18 | - Create scikit-image windows with sliders and associated callback functions.
19 |
20 | :::::::::::::::::::::::::::::::::::::::::
21 |
22 | In this episode, we will learn how to use scikit-image functions to apply *edge
23 | detection* to an image.
24 | In edge detection, we find the boundaries or edges of objects in an image,
25 | by determining where the brightness of the image changes dramatically.
26 | Edge detection can be used to extract the structure of objects in an image.
27 | If we are interested in the number,
28 | size,
29 | shape,
30 | or relative location of objects in an image,
31 | edge detection allows us to focus on the most helpful parts of the image,
32 | while ignoring the parts that will not help us.
33 |
34 | For example, once we have found the edges of the objects in the image
35 | (or once we have converted the image to binary using thresholding),
36 | we can use that information to find the image *contours*,
37 | which we will learn about in
38 | [the *Connected Component Analysis* episode](../episodes/08-connected-components.md).
39 | With the contours,
40 | we can do things like count the number of objects in the image,
41 | measure the size of the objects, classify their shapes, and so on.
42 |
43 | As was the case for blurring and thresholding,
44 | there are several different methods in scikit-image that can be used for edge detection,
45 | so we will examine only one in detail.
46 |
47 | ## Introduction to edge detection
48 |
49 | To begin our introduction to edge detection,
50 | let us look at an image with a very simple edge -
51 | this grayscale image of two overlapped pieces of paper,
52 | one black and one white:
53 |
54 | {alt='Black and white image'}
55 |
56 | The obvious edge in the image is the vertical line
57 | between the black paper and the white paper.
58 | To our eyes,
59 | there is a quite sudden change between the black pixels and the white pixels.
60 | But, at a pixel-by-pixel level, is the transition really that sudden?
61 |
62 | If we zoom in on the edge more closely, as in this image, we can see
63 | that the edge between the black and white areas of the image is not a clear-cut line.
64 |
65 | {alt='Black and white edge pixels'}
66 |
67 | We can learn more about the edge by examining the colour values of some of the pixels.
68 | Imagine a short line segment,
69 | halfway down the image and straddling the edge between the black and white paper.
70 | This plot shows the pixel values
71 | (between 0 and 255, since this is a grayscale image)
72 | for forty pixels spanning the transition from black to white.
73 |
74 | {alt='Gradient near transition'}
75 |
76 | It is obvious that the "edge" here is not so sudden!
77 | So, any scikit-image method to detect edges in an image must be able to
78 | decide where the edge is, and place appropriately-coloured pixels in that location.
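The shape of this profile is straightforward to reproduce. The short sketch below uses a small synthetic image, rather than the photograph above (so no data file is assumed), to show the idea: slice one row out of the image array and plot its values.

```python
import numpy as np
import matplotlib.pyplot as plt

# a synthetic "soft edge": dark on the left, bright on the right,
# with a gradual ramp in between, as in a slightly blurred photograph
ramp = np.clip(np.linspace(-2.0, 3.0, 40), 0.0, 1.0)
image = np.tile(ramp, (20, 1)) * 255

# the profile of one row, halfway down the image
profile = image[10, :]

plt.plot(profile)
plt.xlabel("pixel position along the segment")
plt.ylabel("intensity (0-255)")
```

Plotting a row of a real grayscale image works the same way, once the image has been read into a NumPy array.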
79 |
80 | ## Canny edge detection
81 |
82 | Our edge detection method in this workshop is *Canny edge detection*,
83 | created by John Canny in 1986.
84 | This method uses a series of steps, some incorporating other types of edge detection.
85 | The `skimage.feature.canny()` function performs the following steps:
86 |
87 | 1. A Gaussian blur
88 | (that is characterised by the `sigma` parameter,
89 | see [*Blurring Images*](../episodes/06-blurring.md))
90 | is applied to remove noise from the image.
91 | (So if we are doing edge detection via this function,
92 | we should not perform our own blurring step.)
93 | 2. Sobel edge detection is performed on both the x and y dimensions,
94 | to find the intensity gradients of the edges in the image.
95 | Sobel edge detection computes
96 | the derivative of a curve fitting the gradient between light and dark areas
97 | in an image, and then finds the peak of the derivative,
98 | which is interpreted as the location of an edge pixel.
99 | 3. Pixels that are not at a local maximum of the gradient magnitude,
100 | considered along the direction of the gradient, are removed.
101 | This is called *non-maximum suppression*, and
102 | the result is edge lines that are thinner than those produced by other methods.
103 | 4. A double threshold is applied to determine potential edges.
104 | Here extraneous pixels caused by noise or milder colour variation than desired
105 | are eliminated.
106 | If a pixel's gradient value - based on the Sobel differential -
107 | is above the high threshold value,
108 | it is considered a strong candidate for an edge.
109 | If the gradient is below the low threshold value, it is turned off.
110 | If the gradient is in between,
111 | the pixel is considered a weak candidate for an edge pixel.
112 | 5. Final detection of edges is performed using *hysteresis*.
113 | Here, weak candidate pixels are examined, and
114 | if they are connected to strong candidate pixels,
115 | they are considered to be edge pixels;
116 | the remaining, non-connected weak candidates are turned off.
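Steps four and five can also be explored on their own: scikit-image exposes the double-threshold-and-hysteresis logic as the `skimage.filters.apply_hysteresis_threshold()` function. The sketch below applies it to a small, made-up array of gradient magnitudes; the threshold values are chosen purely for illustration.

```python
import numpy as np
import skimage.filters

# synthetic "gradient magnitudes": one strong response (0.9) flanked by
# weak responses (0.3) and one value (0.1) below the low threshold
gradient = np.array(
    [
        [0.0, 0.0, 0.0, 0.0],
        [0.3, 0.9, 0.3, 0.1],
        [0.0, 0.0, 0.0, 0.0],
    ]
)

# weak pixels (between low and high) are kept only if they are
# connected to a strong pixel (above high); the rest are turned off
edges = skimage.filters.apply_hysteresis_threshold(gradient, low=0.2, high=0.5)
print(edges.astype(int))
# → [[0 0 0 0]
#    [1 1 1 0]
#    [0 0 0 0]]
```

Note how the weak 0.3 responses survive because they touch the strong 0.9 response, while the isolated 0.1 value is discarded.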
117 |
118 | For a user of the `skimage.feature.canny()` edge detection function,
119 | there are three important parameters to pass in:
120 | `sigma` for the Gaussian filter in step one and
121 | the low and high threshold values used in step four of the process.
122 | These values generally are determined empirically,
123 | based on the contents of the image(s) to be processed.
124 |
125 | The following program illustrates how the `skimage.feature.canny()` method
126 | can be used to detect the edges in an image.
127 | We will execute the program on the `data/shapes-01.jpg` image,
128 | which we used before in
129 | [the *Thresholding* episode](../episodes/07-thresholding.md):
130 |
131 | {alt='coloured shapes'}
132 |
133 | We are interested in finding the edges of the shapes in the image,
134 | and so the colours are not important.
135 | Our strategy will be to read the image as grayscale,
136 | and then apply Canny edge detection.
137 | Note that when reading the image with `iio.imread(..., mode="L")`
138 | the image is converted to a grayscale image of the same dtype.
139 |
140 | This program takes four command-line arguments:
141 | the filename of the image to process, the sigma for the Gaussian blur,
142 | and then two arguments related to the double thresholding
143 | in step four of the Canny edge detection process.
144 | These are the low and high threshold values for that step.
145 | After the required libraries are imported,
146 | the program reads the command-line arguments and
147 | saves them in their respective variables.
148 |
149 | ```python
150 | """Python script to demonstrate Canny edge detection.
151 |
152 | usage: python CannyEdge.py <filename> <sigma> <low_threshold> <high_threshold>
153 | """
154 | import imageio.v3 as iio
155 | import matplotlib.pyplot as plt
156 | import skimage.feature
157 | import sys
158 |
159 | # read command-line arguments
160 | filename = sys.argv[1]
161 | sigma = float(sys.argv[2])
162 | low_threshold = float(sys.argv[3])
163 | high_threshold = float(sys.argv[4])
164 | ```
165 |
166 | Next, the original image is read, in grayscale, and displayed.
167 |
168 | ```python
169 | # load and display original image as grayscale
170 | image = iio.imread(uri=filename, mode="L")
171 | plt.imshow(image, cmap="gray")
172 | ```
173 |
174 | Then, we apply Canny edge detection with this function call:
175 |
176 | ```python
177 | edges = skimage.feature.canny(
178 | image=image,
179 | sigma=sigma,
180 | low_threshold=low_threshold,
181 | high_threshold=high_threshold,
182 | )
183 | ```
184 |
185 | As we are using it here, the `skimage.feature.canny()` function takes four parameters.
186 | The first parameter is the input image.
187 | The `sigma` parameter determines
188 | the amount of Gaussian smoothing that is applied to the image.
189 | The next two parameters are the low and high threshold values
190 | for the fourth step of the process.
191 |
192 | The result of this call is a binary image.
193 | In the image, the edges detected by the process are white,
194 | while everything else is black.
195 |
196 | Finally, the program displays the `edges` image,
197 | showing the edges that were found in the original.
198 |
199 | ```python
200 | # display edges
201 | plt.imshow(edges, cmap="gray")
202 | ```
203 |
204 | Here is the result, for the coloured shape image above,
205 | with sigma value 2.0, low threshold value 0.1 and high threshold value 0.3:
206 |
207 | {alt='Output file of Canny edge detection'}
208 |
209 | Note that the edge output shown in a scikit-image window may look significantly
210 | worse than the same image saved to a file,
211 | due to resampling artefacts in the interactive image viewer.
212 | The image above is the edges of the coloured shapes image, saved in a PNG file.
213 | Here is how the same image looks when displayed in a scikit-image output window:
214 |
215 | {alt='Output window of Canny edge detection'}
216 |
217 | ## Interacting with the image viewer using viewer plugins
218 |
219 | As we have seen, for a user of the `skimage.feature.canny()` edge detection function,
220 | three important parameters to pass in are sigma,
221 | and the low and high threshold values used in step four of the process.
222 | These values generally are determined empirically,
223 | based on the contents of the image(s) to be processed.
224 |
225 | Here is an image of some glass beads that we can use as
226 | input into a Canny edge detection program:
227 |
228 | {alt='Beads image'}
229 |
230 | We could use the `code/edge-detection/CannyEdge.py` program above
231 | to find edges in this image.
232 | To find acceptable values for the thresholds,
233 | we would have to run the program over and over again,
234 | trying different threshold values and examining the resulting image,
235 | until we find a combination of parameters that works best for the image.
236 |
237 | *Or*, we can write a Python program and
238 | create a viewer plugin that uses scikit-image *sliders*,
239 | which allow us to vary the function parameters while the program is running.
240 | In other words, we can write a program that presents us with a window like this:
241 |
242 | {alt='Canny UI'}
243 |
244 | Then, when we run the program, we can use the sliders to
245 | vary the values of the sigma and threshold parameters
246 | until we are satisfied with the results.
247 | After we have determined suitable values,
248 | we can use the simpler program to utilise the parameters without
249 | bothering with the user interface and sliders.
250 |
251 | Here is a Python program that shows how to apply Canny edge detection,
252 | and how to add sliders to the user interface.
253 | There are four parts to this program,
254 | making it a bit (but only a *bit*)
255 | more complicated than the programs we have looked at so far.
256 | The added complexity comes from setting up the sliders for the parameters
257 | that were previously read from the command line.
258 | In particular, we have added:
259 |
260 | - The `canny()` filter function that returns an edge image,
261 | - The `cannyPlugin` plugin object, to which we add
262 | - The sliders for *sigma*, and *low* and *high threshold* values, and
263 | - The main program, i.e., the code that is executed when the program runs.
264 |
265 | We will look at the main program part first, and then return to writing the plugin.
266 | The first several lines of the main program are easily recognisable at this point:
267 | saving the command-line argument,
268 | reading the image in grayscale,
269 | and creating a window.
270 |
271 | ```python
272 | """Python script to demonstrate Canny edge detection with sliders to adjust the thresholds.
273 |
274 | usage: python CannyTrack.py <filename>
275 | """
276 | import imageio.v3 as iio
277 | import matplotlib.pyplot as plt
278 | import skimage.feature
279 | import skimage.viewer
280 | import sys
281 |
282 |
283 | filename = sys.argv[1]
284 | image = iio.imread(uri=filename, mode="L")
285 | viewer = skimage.viewer.ImageViewer(image=image)
286 | ```
287 |
288 | The `skimage.viewer.plugins.Plugin` class is designed to manipulate images.
289 | It takes an `image_filter` argument in the constructor that should be a function.
290 | This function should produce a new image as an output,
291 | given an image as the first argument,
292 | which then will be automatically displayed in the image viewer.
293 |
294 | ```python
295 | # Create the plugin and give it a name
296 | canny_plugin = skimage.viewer.plugins.Plugin(image_filter=skimage.feature.canny)
297 | canny_plugin.name = "Canny Filter Plugin"
298 | ```
299 |
300 | We want to modify the parameters of the filter function interactively.
301 | scikit-image allows us to further enrich the plugin by adding widgets, like
302 | `skimage.viewer.widgets.Slider`,
303 | `skimage.viewer.widgets.CheckBox`,
304 | `skimage.viewer.widgets.ComboBox`.
305 | Whenever a widget belonging to the plugin is updated,
306 | the filter function is called with the updated parameters.
307 | This function is also called a callback function.
308 | The following code adds sliders for `sigma`, `low_threshold` and `high_threshold`.
309 |
310 | ```python
311 | # Add sliders for the parameters
312 | canny_plugin += skimage.viewer.widgets.Slider(
313 | name="sigma", low=0.0, high=7.0, value=2.0
314 | )
315 | canny_plugin += skimage.viewer.widgets.Slider(
316 | name="low_threshold", low=0.0, high=1.0, value=0.1
317 | )
318 | canny_plugin += skimage.viewer.widgets.Slider(
319 | name="high_threshold", low=0.0, high=1.0, value=0.2
320 | )
321 | ```
322 |
323 | A slider is a widget that lets you choose a number by dragging a handle along a line.
324 | On the left side of the line, we have the lowest value,
325 | on the right side the highest value that can be chosen.
326 | The range of values in between is distributed equally along this line.
327 | All three sliders are constructed in the same way:
328 | The first argument is the name of the parameter that is tweaked by the slider.
329 | With the arguments `low`, and `high`,
330 | we supply the limits for the range of numbers that is represented by the slider.
331 | The `value` argument specifies the initial value of that parameter,
332 | i.e. where the handle is located when the plugin is started.
333 | Adding the slider to the plugin makes the values available as
334 | parameters to the filter function.
335 |
336 | ::::::::::::::::::::::::::::::::::::::::: callout
337 |
338 | ## How does the plugin know how to call the filter function with the parameters?
339 |
340 | The filter function will be called with the slider parameters
341 | according to their *names* as *keyword* arguments.
342 | So it is very important to name the sliders appropriately.
343 |
344 | ::::::::::::::::::::::::::::::::::::::::::::::::::
345 |
346 | Finally, we add the plugin to the viewer and display the resulting user interface:
347 |
348 | ```python
349 | # add the plugin to the viewer and show the window
350 | viewer += canny_plugin
351 | viewer.show()
352 | ```
353 |
354 | Here is the result of running the preceding program on the beads image,
355 | with a sigma value of 1.0,
356 | low threshold value 0.1 and high threshold value 0.3.
357 | The image below shows the detected edges, saved to an output file.
358 |
359 | {alt='Beads edges (file)'}
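If `skimage.viewer` is not available in your environment (it was deprecated and later removed from scikit-image), a comparable slider can be sketched with Matplotlib's own widgets. The function below is only an illustrative alternative, not part of the lesson code; it mirrors the sigma slider of the plugin above.

```python
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
import skimage.feature

def show_canny_ui(image):
    """Display the Canny edges of `image` with a slider to adjust sigma."""
    fig, ax = plt.subplots()
    fig.subplots_adjust(bottom=0.2)
    display = ax.imshow(skimage.feature.canny(image, sigma=2.0), cmap="gray")

    # a separate, small axes at the bottom of the figure holds the slider
    slider_ax = fig.add_axes([0.2, 0.05, 0.6, 0.04])
    sigma_slider = Slider(slider_ax, "sigma", valmin=0.0, valmax=7.0, valinit=2.0)

    def update(value):
        # recompute and redraw the edge image whenever the slider moves
        display.set_data(skimage.feature.canny(image, sigma=value))
        fig.canvas.draw_idle()

    sigma_slider.on_changed(update)
    plt.show()
    return sigma_slider  # keep a reference so the widget stays responsive
```

Sliders for the low and high thresholds could be added in the same way, each in its own axes.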
360 |
361 | ::::::::::::::::::::::::::::::::::::::: challenge
362 |
363 | ## Applying Canny edge detection to another image (5 min)
364 |
365 | Now, run the program above on the image of coloured shapes,
366 | `data/shapes-01.jpg`.
367 | Use a sigma of 1.0 and adjust low and high threshold sliders
368 | to produce an edge image that looks like this:
369 |
370 | {alt='coloured shape edges'}
371 |
372 | What values for the low and high threshold values did you use to
373 | produce an image similar to the one above?
374 |
375 | ::::::::::::::: solution
376 |
377 | ## Solution
378 |
379 | The coloured shape edge image above was produced with a low threshold
380 | value of 0.05 and a high threshold value of 0.07.
381 | You may be able to achieve similar results with other threshold values.
382 |
383 | :::::::::::::::::::::::::
384 |
385 | ::::::::::::::::::::::::::::::::::::::::::::::::::
386 |
387 | ::::::::::::::::::::::::::::::::::::::: challenge
388 |
389 | ## Using sliders for thresholding (30 min)
390 |
391 | Now, let us apply what we know about creating sliders to another,
392 | similar situation.
393 | Consider this image of a collection of maize seedlings,
394 | and suppose we wish to use simple fixed-level thresholding to
395 | mask out everything that is not part of one of the plants.
396 |
397 | {alt='Maize roots image'}
398 |
399 | To perform the thresholding, we could first create a histogram,
400 | then examine it, and select an appropriate threshold value.
401 | Here, however, let us create an application with a slider to set the threshold value.
402 | Create a program that reads in the image,
403 | displays it in a window with a slider,
404 | and allows the slider value to vary the threshold value used.
405 | You will find the image at `data/maize-roots-grayscale.jpg`.
406 |
407 | ::::::::::::::: solution
408 |
409 | ## Solution
410 |
411 | Here is a program that uses a slider to vary the threshold value used in
412 | a simple, fixed-level thresholding process.
413 |
414 | ```python
415 | """Python program to use a slider to control fixed-level thresholding value.
416 |
417 | usage: python interactive_thresholding.py
418 | """
419 |
420 | import imageio.v3 as iio
421 | import skimage.filters
422 | import skimage.viewer
423 | import sys
424 |
425 | filename = sys.argv[1]
426 |
427 |
428 | def filter_function(image, sigma, threshold):
429 | masked = image.copy()
430 | masked[skimage.filters.gaussian(image, sigma=sigma) <= threshold] = 0
431 | return masked
432 |
433 |
434 | smooth_threshold_plugin = skimage.viewer.plugins.Plugin(
435 | image_filter=filter_function
436 | )
437 |
438 | smooth_threshold_plugin.name = "Smooth and Threshold Plugin"
439 |
440 | smooth_threshold_plugin += skimage.viewer.widgets.Slider(
441 | "sigma", low=0.0, high=7.0, value=1.0
442 | )
443 | smooth_threshold_plugin += skimage.viewer.widgets.Slider(
444 | "threshold", low=0.0, high=1.0, value=0.5
445 | )
446 |
447 | image = iio.imread(uri=filename, mode="L")
448 |
449 | viewer = skimage.viewer.ImageViewer(image=image)
450 | viewer += smooth_threshold_plugin
451 | viewer.show()
452 | ```
453 |
454 | Here is the output of the program,
455 | blurring with a sigma of 1.5 and a threshold value of 0.45:
456 |
457 | {alt='Thresholded maize roots'}
458 |
459 | :::::::::::::::::::::::::
460 |
461 | ::::::::::::::::::::::::::::::::::::::::::::::::::
462 |
463 | Keep this plugin technique in your image processing "toolbox."
464 | You can use sliders (or other interactive elements,
465 | see [the scikit-image documentation](https://scikit-image.org/docs/dev/api/skimage.viewer.widgets.html))
466 | to vary other kinds of parameters, such as sigma for blurring,
467 | binary thresholding values, and so on.
468 | A few minutes developing a program to tweak parameters like this can
469 | save you the hassle of repeatedly running a program from the command line
470 | with different parameter values.
471 | Furthermore, scikit-image already comes with a few viewer plugins that you can
472 | check out in [the documentation](https://scikit-image.org/docs/dev/api/skimage.viewer.plugins.html).
473 |
474 | ## Other edge detection functions
475 |
476 | As with blurring, there are other options for finding edges in skimage.
477 | These include `skimage.filters.sobel()`,
478 | which you will recognise as part of the Canny method.
479 | Another choice is `skimage.filters.laplace()`.
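As a rough illustration of what these functions return, the sketch below applies both to a small synthetic image containing a single vertical step edge. Both return floating-point arrays in which larger magnitudes indicate stronger edge responses.

```python
import numpy as np
import skimage.filters

# synthetic image with a vertical step edge down the middle
image = np.zeros((10, 10))
image[:, 5:] = 1.0

sobel_edges = skimage.filters.sobel(image)      # gradient magnitude
laplace_edges = skimage.filters.laplace(image)  # second derivative

# the Sobel response is strongest at the columns adjacent to the step
print(int(np.argmax(sobel_edges[5])))
```

Unlike `skimage.feature.canny()`, these filters do not threshold or thin their output; that is left to us.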
480 |
481 | :::::::::::::::::::::::::::::: keypoints
482 |
483 | - The `skimage.viewer.ImageViewer` is extended using a `skimage.viewer.plugins.Plugin`.
484 | - We supply a filter function callback when creating a Plugin.
485 | - Parameters of the callback function are manipulated interactively by creating sliders
486 | with the `skimage.viewer.widgets.Slider` class and adding them to the plugin.
487 |
488 | ::::::::::::::::::::::::::::::::::::::::
489 |
--------------------------------------------------------------------------------
/learners/prereqs.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Prerequisites
3 | ---
4 |
5 | This lesson assumes you have a working knowledge of Python and some previous exposure to the Bash shell.
6 |
7 | These requirements can be fulfilled by:
8 |
9 | 1. completing a Software Carpentry Python workshop **or**
10 | 2. completing a Data Carpentry Ecology workshop (with Python) **and** a Data Carpentry Genomics workshop **or**
11 | 3. coursework in or independent learning of both Python and the Bash shell.
12 |
13 | ### Bash shell skills
14 |
15 | The skill set listed below is covered in any Software Carpentry workshop, as well
16 | as in Data Carpentry's Genomics workshop. These skills can also be learned
17 | through coursework or independent learning.
18 |
19 | Be able to:
20 |
21 | - Identify and navigate to your home directory.
22 | - Identify your current working directory.
23 | - Navigate between directories using `pwd`, `ls`, `cd`, and `cd ..`.
24 | - Run a Python script from the command line.
25 |
26 | ### Python skills
27 |
28 | The skill set listed below is covered in both Software Carpentry's Python workshop and
29 | in Data Carpentry's Ecology workshop with Python. These skills can also be learned
30 | through coursework or independent learning.
31 |
32 | Be able to:
33 |
34 | - Use the assignment operator to create `int`, `float`, and `str` variables.
35 | - Perform basic arithmetic operations (e.g. addition, subtraction) on variables.
36 | - Convert strings to ints or floats where appropriate.
37 | - Create a `list` and alter lists by appending, inserting, or removing values.
38 | - Use indexing and slicing to access elements of strings, lists, and NumPy arrays.
39 | - Use good coding practices to comment your code and choose appropriate variable names.
40 | - Write a `for` loop that increments a variable.
41 | - Write conditional statements using `if`, `elif`, and `else`.
42 | - Use comparison operators (`==`, `!=`, `<`, `<=`, `>`, `>=`) in conditional statements.
43 | - Read data from a file using `read()`, `readline()`, and `readlines()`.
44 | - Open, read from, write to, and close input and output files.
45 | - Use `print()` and `len()` to inspect variables.
46 |
47 | The following skills are useful, but not required:
48 |
49 | - Apply a function to an entire NumPy array or to a single array axis.
50 | - Write a user-defined function.
51 |
52 | If you are signed up, or considering signing up for a workshop, and aren't sure whether you meet these requirements, please
53 | get in touch with the workshop instructors or host.
54 |
--------------------------------------------------------------------------------
/learners/reference.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: 'Reference'
3 | ---
4 |
5 | ## Glossary
6 |
7 | (Some definitions are taken from [Glosario](https://glosario.carpentries.org).
8 | Follow the links from terms to see definitions in languages other than English.)
9 |
10 | {:auto\_ids}
11 |
12 | adaptive thresholding
13 | : thresholding that uses a cut-off value that varies for pixels in different regions of the image.
14 |
15 | additive colour model
16 | : a colour model that predicts the appearance of colours by summing the numeric representations of the component colours.
17 |
18 | bacterial colony
19 | : a visible cluster of bacteria growing on the surface of or within a solid medium, presumably cultured from a single cell.
20 |
21 | binary image
22 | : an image of pixels with only two possible values, 0 and 1. Typically, the two colours used for a binary image are black and white.
23 |
24 | [bit](https://glosario.carpentries.org/en/#bit)
25 | : a unit of information representing alternatives, yes/no, true/false. In computing a state of either 0 or 1.
26 |
27 | blur
28 | : the averaging of pixel intensities within a neighbourhood. This has the effect of "softening" the features of the image, reducing noise and finer detail.
29 |
30 | BMP (bitmap image file)
31 | : a raster graphics image file format used to store bitmap digital images, independently of the display device.
32 |
33 | bounding box
34 | : the smallest enclosing box for a set of points.
35 |
36 | [byte](https://glosario.carpentries.org/en/#byte)
37 | : a unit of digital information that typically consists of eight binary digits, or bits.
38 |
39 | colorimetrics
40 | : the processing and analysis of objects based on their colour.
41 |
42 | compression
43 | : a class of data encoding methods that aims to reduce the size of a file while retaining some or all of the information it contains.
44 |
45 | channel
46 | : a set of pixel intensities within an image that were measured in the same way e.g. at a given wavelength.
47 |
48 | crop
49 | : the removal of unwanted outer areas from an image.
50 |
51 | colour histogram
52 | : a representation of the number of pixels that have colours in each of a fixed list of colour ranges.
53 |
54 | edge detection
55 | : a variety of methods that attempt to automatically identify the boundaries of objects within an image.
56 |
57 | fixed-level thresholding
58 | : thresholding that uses a single, constant cut-off value for every pixel in the image.
59 |
60 | grayscale
61 | : an image in which the value of each pixel is a single value representing only the amount of light (or intensity) of that pixel.
62 |
63 | [histogram](https://glosario.carpentries.org/en/#histogram)
64 | : a graphical representation of the distribution of a set of numeric data, usually a vertical bar graph.
65 |
66 | image segmentation
67 | : the process of dividing an image into multiple sections, to be processed or analysed independently.
68 |
69 | intensity
70 | : the value measured at a given pixel in the image.
71 |
72 | JPEG
73 | : a commonly used method of lossy compression for digital images, particularly for those produced by digital photography.
74 |
75 | kernel
76 | : a matrix, usually relatively small, defining a neighbourhood of pixel intensities that will be considered during blurring, edge detection, and other operations.
77 |
78 | left-hand coordinate system
79 | : a system of coordinates where the origin is at the top-left extreme of the image, and coordinates increase as you move down the y axis.
80 |
81 | lossy compression
82 | : a class of data compression methods that uses inexact approximations and partial data discarding to represent the content.
83 |
84 | lossless compression
85 | : a class of data compression methods that allows the original data to be perfectly reconstructed from the compressed data.
86 |
87 | maize
88 | : a common crop plant grown in many regions of the world. Also known as corn.
89 |
90 | mask
91 | : a binary matrix, usually of the same dimensions as the target image, representing which pixels should be included and excluded in further processing and analysis.
92 |
93 | morphometrics
94 | : the processing and analysis of objects based on their size and shape.
95 |
96 | noise
97 | : random variation of brightness or colour information in images. An undesirable by-product of image capture that obscures the desired information.
98 |
99 | pixel
100 | : the individual units of intensity that make up an image.
101 |
102 | [raster graphics](https://glosario.carpentries.org/en/#raster_image)
103 | : images stored as a matrix of pixels.
104 |
105 | RGB colour model
106 | : an additive colour model describing colour in a image with a combination of pixel intensities in three channels: red, green, and blue.
107 |
108 | thresholding
109 | : the process of creating a binary version of a grayscale image, based on whether pixel values fall above or below a given limit or cut-off value.
110 |
111 | TIFF (Tagged Image File Format)
112 | : a computer file format for storing raster graphics images; also
113 | abbreviated TIF.
114 |
115 | titration
116 | : a common laboratory method of quantitative chemical analysis to determine the concentration of an identified analyte (a substance to be analyzed).
117 |
118 |
119 |
--------------------------------------------------------------------------------
/learners/setup.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Setup
3 | permalink: /setup/
4 | ---
5 |
6 | Before joining the workshop or following the lesson, please complete the data and software setup described in this page.
7 |
8 | ## Data
9 |
10 | The example images and a description of the Python environment used in this lesson are available on [FigShare](https://figshare.com/).
11 | To download the data, please visit [the dataset page for this workshop][figshare-data]
12 | and click the "Download all" button.
13 | Unzip the downloaded file, and save the contents as a folder called `data` somewhere you will easily find it again,
14 | e.g. your Desktop or a folder you have created for use in this workshop.
15 | (The name `data` is optional but recommended, as this is the name we will use to refer to the folder throughout the lesson.)
16 |
17 | ## Software
18 |
19 | 1. Download and install the latest [Miniforge distribution of Python](https://conda-forge.org/download/) for your operating system.
20 | ([See more detailed instructions from The Carpentries](https://carpentries.github.io/workshop-template/#python-1).)
21 | If you already have a Python 3 setup that you are happy with, you can continue to use that (we recommend that you make sure your Python version is current).
22 | The next step assumes that `conda` is available to manage your Python environment.
23 | 2. Set up an environment to work in during the lesson.
24 | In a terminal (Linux/Mac) or the MiniForge Prompt application (Windows), navigate to the location where you saved the unzipped data for the lesson and run the following command:
25 |
26 | ```bash
27 | conda env create -f environment.yml
28 | ```
29 |
30 | If prompted, allow `conda` to install the required libraries.
31 | 3. Activate the new environment you just created:
32 |
33 | ```bash
34 | conda activate dc-image
35 | ```
36 |
37 | ::::::::::::::::::::::::::::::::::::::::: callout
38 |
39 | ## Enabling the `ipympl` backend in Jupyter notebooks
40 |
41 | The `ipympl` backend can be enabled with the `%matplotlib` Jupyter
42 | magic. Put the following command in a cell in your notebooks
43 | (e.g., at the top) and execute the cell before any plotting commands.
44 |
45 | ```python
46 | %matplotlib widget
47 | ```
48 |
49 | ::::::::::::::::::::::::::::::::::::::::::::::::::
50 |
51 | ::::::::::::::::::::::::::::::::::::::::: callout
52 |
53 | ## Older JupyterLab versions
54 |
55 | If you are using an older version of JupyterLab, you may also need
56 | to install the labextensions manually, as explained in the [README
57 | file](https://github.com/matplotlib/ipympl#readme) for the `ipympl`
58 | package.
59 |
60 |
61 | ::::::::::::::::::::::::::::::::::::::::::::::::::
62 |
63 | 4. Open a Jupyter notebook:
64 |
65 | :::::::::::::::: spoiler
66 |
67 | ## Instructions for Linux \& Mac
68 |
69 | Open a terminal and type `jupyter lab`.
70 |
71 |
72 | :::::::::::::::::::::::::
73 |
74 | :::::::::::::::: spoiler
75 |
76 | ## Instructions for Windows
77 |
78 | Launch the Miniforge Prompt program and type `jupyter lab`.
79 | (Running this command on the standard Command Prompt will return an error:
80 | `'jupyter' is not recognized as an internal or external command, operable program or batch file.`)
81 |
82 |
83 | :::::::::::::::::::::::::
84 |
85 | After Jupyter Lab has launched, click the "Python 3" button under "Notebook" in the launcher window,
86 | or use the "File" menu, to open a new Python 3 notebook.
87 |
88 | 5. To test your environment, run the following lines in a cell of the notebook:
89 |
90 | ```python
91 | import imageio.v3 as iio
92 | import matplotlib.pyplot as plt
93 | import skimage as ski
94 |
95 | %matplotlib widget
96 |
97 | # load an image
98 | image = iio.imread(uri='data/colonies-01.tif')
99 |
100 | # rotate it by 45 degrees
101 | rotated = ski.transform.rotate(image=image, angle=45)
102 |
103 | # display the original image and its rotated version side by side
104 | fig, ax = plt.subplots(1, 2)
105 | ax[0].imshow(image)
106 | ax[1].imshow(rotated)
107 | ```
108 |
109 | Upon execution of the cell, a figure with two images should be displayed in an interactive widget. When hovering over the images with the mouse pointer, the pixel coordinates and colour values are displayed below the image.
110 |
111 | :::::::::::::::: spoiler
112 |
113 | ## Running Cells in a Notebook
114 |
115 | {alt='Overview of the Jupyter Notebook graphical user interface'}
116 | To run Python code in a Jupyter notebook cell, click on a cell in the notebook
117 | (or add a new one by clicking the `+` button in the toolbar),
118 | make sure that the cell type is set to "Code" (check the dropdown in the toolbar),
119 | and add the Python code in that cell.
120 | After you have added the code,
121 | you can run the cell by selecting "Run" -> "Run selected cell" in the top menu,
122 | or pressing Shift\+Enter.
123 |
124 |
125 | :::::::::::::::::::::::::
126 |
127 | 6. A small number of exercises will require you to run commands in a terminal. Windows users should
128 | use PowerShell for this. PowerShell is probably installed by default but if not you should
129 | [download and install](https://apps.microsoft.com/detail/9MZ1SNWT0N5D?hl=en-eg&gl=EG) it.
130 |
131 | [figshare-data]: https://figshare.com/articles/dataset/Data_Carpentry_Image_Processing_Data_beta_/19260677
132 |
--------------------------------------------------------------------------------
/profiles/learner-profiles.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: FIXME
3 | ---
4 |
5 | This is a placeholder file. Please add content here.
6 |
--------------------------------------------------------------------------------
/site/README.md:
--------------------------------------------------------------------------------
1 | This directory contains rendered lesson materials. Please do not edit files
2 | here.
3 |
--------------------------------------------------------------------------------