├── .github
│   ├── ISSUE_TEMPLATE
│   │   ├── bug_report.md
│   │   └── feature_request.md
│   └── workflows
│       ├── test_and_deploy.yml
│       └── try_simpleitk.yml
├── .gitignore
├── .napari-hub
│   └── DESCRIPTION.md
├── .napari
│   └── DESCRIPTION.md
├── .pre-commit-config.yaml
├── LICENSE
├── MANIFEST.in
├── README.md
├── pyproject.toml
├── setup.cfg
├── setup.py
├── src
│   └── napari_nd_annotator
│       ├── __init__.py
│       ├── _helper_functions.py
│       ├── _napari_version.py
│       ├── _tests
│       │   ├── __init__.py
│       │   └── test_widget.py
│       ├── _widgets
│       │   ├── __init__.py
│       │   ├── _utils
│       │   │   ├── __init__.py
│       │   │   ├── blur_slider.py
│       │   │   ├── callbacks.py
│       │   │   ├── changeable_color_box.py
│       │   │   ├── collapsible_widget.py
│       │   │   ├── delayed_executor.py
│       │   │   ├── help_dialog.py
│       │   │   ├── image_processing_widget.py
│       │   │   ├── napari_slider.py
│       │   │   ├── persistence
│       │   │   │   ├── __init__.py
│       │   │   │   ├── default_widget_values.yaml
│       │   │   │   └── persistent_widget_state.py
│       │   │   ├── progress_widget.py
│       │   │   ├── symmetric_range_slider.py
│       │   │   └── widget_with_layer_list.py
│       │   ├── annotator_module.py
│       │   ├── interpolation_overlay
│       │   │   ├── __init__.py
│       │   │   ├── interpolation_overlay.py
│       │   │   └── vispy_interpolation_overlay.py
│       │   ├── interpolation_widget.py
│       │   ├── minimal_contour_overlay
│       │   │   ├── __init__.py
│       │   │   ├── minimal_contour_overlay.py
│       │   │   └── vispy_minimal_contour_overlay.py
│       │   ├── minimal_contour_widget.py
│       │   ├── minimal_surface_widget.py
│       │   ├── object_list.py
│       │   ├── projections.py
│       │   └── resources
│       │       ├── __init__.py
│       │       ├── interpolation
│       │       │   ├── __init__.py
│       │       │   ├── interpolate.svg
│       │       │   └── interpolate_button.qss
│       │       └── mc_contour
│       │           ├── __init__.py
│       │           ├── mc_contour.svg
│       │           └── mc_contour_button.qss
│       ├── examples
│       │   └── new_pipeline_example.py
│       ├── mean_contour
│       │   ├── __init__.py
│       │   ├── cEssentials.py
│       │   ├── cEssentialscy.pxd
│       │   ├── cEssentialscy.pyx
│       │   ├── contourcy.pyx
│       │   ├── interpHelper.py
│       │   ├── interp_test.py
│       │   ├── meanContour.py
│       │   ├── reconstructioncy.pyx
│       │   ├── rk.py
│       │   ├── settings.json
│       │   ├── settings.py
│       │   └── util.py
│       ├── minimal_contour
│       │   ├── Eikonal.cpp
│       │   ├── Eikonal.h
│       │   ├── __init__.py
│       │   ├── _eikonal_wrapper.pyx
│       │   ├── commontype.h
│       │   ├── feature_extractor.py
│       │   └── feature_manager.py
│       └── napari.yaml
└── tox.ini
/.github/ISSUE_TEMPLATE/bug_report.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Bug report
3 | about: Create a report to help us improve
4 | title: ''
5 | labels: bug
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Describe the bug**
11 | A clear and concise description of what the bug is.
12 |
13 | **To Reproduce**
14 | Steps to reproduce the behavior:
15 | 1. Go to '...'
16 | 2. Click on '....'
17 | 3. Scroll down to '....'
18 | 4. See error
19 |
20 | **Expected behavior**
21 | A clear and concise description of what you expected to happen.
22 |
23 | **Screenshots**
24 | If applicable, add screenshots to help explain your problem.
25 |
26 | **napari info**
27 | Copy information from Help -> napari Info.
28 |
29 | **Other packages**
30 | Provide the version of any other Python package, if relevant.
31 |
32 | **Additional context**
33 | Add any other context about the problem here.
34 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.md:
--------------------------------------------------------------------------------
1 | ---
2 | name: Feature request
3 | about: Suggest an idea for this project
4 | title: ''
5 | labels: enhancement
6 | assignees: ''
7 |
8 | ---
9 |
10 | **Is your feature request related to a problem? Please describe.**
11 | A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
12 |
13 | **Describe the solution you'd like**
14 | A clear and concise description of what you want to happen.
15 |
16 | **Describe alternatives you've considered**
17 | A clear and concise description of any alternative solutions or features you've considered.
18 |
19 | **Additional context**
20 | Add any other context or screenshots about the feature request here.
21 |
--------------------------------------------------------------------------------
/.github/workflows/test_and_deploy.yml:
--------------------------------------------------------------------------------
1 | # This workflow will upload a Python Package using Twine when a release is created
2 | # For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
3 |
4 | name: tests
5 |
6 | on:
7 | push:
8 | branches:
9 | - main
10 | - npe2
11 | tags:
12 | - "v*" # Push events to matching v*, i.e. v1.0, v20.15.10
13 | pull_request:
14 | branches:
15 | - main
16 | - npe2
17 | workflow_dispatch:
18 |
19 | jobs:
20 | # the build_wheels_macos job is from scikit-image under BSD-3-Clause license:
21 | # https://github.com/scikit-image/scikit-image/blob/main/.github/workflows/wheels_recipe.yml
22 | build_wheels_macos:
23 | name: Build wheels on ${{ matrix.os }} ${{ matrix.cibw_arch }}
24 | runs-on: ${{ matrix.os }}
25 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') || github.event_name == 'workflow_dispatch'
26 | strategy:
27 | fail-fast: false
28 | matrix:
29 | os: [macos-13]
30 | # TODO: add "universal2" once a universal2 libomp is available
31 | cibw_arch: ["x86_64", "arm64"]
32 |
33 | steps:
34 | - uses: actions/checkout@v4
35 | with:
36 | fetch-depth: 0
37 |
38 | - uses: actions/setup-python@v5
39 | name: Install Python
40 | with:
41 | python-version: "3.12"
42 |
43 | - name: Install cibuildwheel
44 | run: |
45 | python -m pip install cibuildwheel
46 |
47 | # Needed to install a specific libomp version later
48 | - name: Setup Conda
49 | uses: conda-incubator/setup-miniconda@d2e6a045a86077fb6cad6f5adf368e9076ddaa8d # v3.1.0
50 | with:
51 | python-version: "3.12"
52 | channels: conda-forge
53 | channel-priority: true
54 | miniforge-variant: Miniforge3
55 | miniforge-version: latest
56 |
57 | - name: Build wheels for CPython Mac OS
58 | run: |
59 | # Make sure to use a libomp version binary compatible with the oldest
60 | # supported version of the macos SDK as libomp will be vendored into
61 | # the scikit-image wheels for macos. The list of binaries are in
62 | # https://packages.macports.org/libomp/. Currently, the oldest
63 | # supported macos version is: High Sierra / 10.13. When upgrading
64 | # this, be sure to update the MACOSX_DEPLOYMENT_TARGET environment
65 | # variable accordingly. Note that Darwin_17 == High Sierra / 10.13.
66 | #
67 | # We need to set both MACOS_DEPLOYMENT_TARGET and MACOSX_DEPLOYMENT_TARGET
68 | # until there is a new release with this commit:
69 | # https://github.com/mesonbuild/meson-python/pull/309
70 | if [[ "$CIBW_ARCHS_MACOS" == arm64 ]]; then
71 | # SciPy requires 12.0 on arm to prevent kernel panics
72 | # https://github.com/scipy/scipy/issues/14688
73 | # so being conservative, we just do the same here
74 | export MACOSX_DEPLOYMENT_TARGET=12.0
75 | export MACOS_DEPLOYMENT_TARGET=12.0
76 | OPENMP_URL="https://anaconda.org/conda-forge/llvm-openmp/11.1.0/download/osx-arm64/llvm-openmp-11.1.0-hf3c4609_1.tar.bz2"
77 | else
78 | export MACOSX_DEPLOYMENT_TARGET=10.9
79 | export MACOS_DEPLOYMENT_TARGET=10.9
80 | OPENMP_URL="https://anaconda.org/conda-forge/llvm-openmp/11.1.0/download/osx-64/llvm-openmp-11.1.0-hda6cdc1_1.tar.bz2"
81 | fi
82 | echo MACOSX_DEPLOYMENT_TARGET=${MACOSX_DEPLOYMENT_TARGET}
83 | echo MACOS_DEPLOYMENT_TARGET=${MACOS_DEPLOYMENT_TARGET}
84 |
85 | # use conda to install llvm-openmp
86 | # Note that we do NOT activate the conda environment, we just add the
87 | # library install path to CFLAGS/CXXFLAGS/LDFLAGS below.
88 | conda create -n build $OPENMP_URL
89 | PREFIX="/Users/runner/miniconda3/envs/build"
90 | export CC=/usr/bin/clang
91 | export CXX=/usr/bin/clang++
92 | export CPPFLAGS="$CPPFLAGS -Xpreprocessor -fopenmp"
93 | export CFLAGS="$CFLAGS -Wno-implicit-function-declaration -I$PREFIX/include"
94 | export CXXFLAGS="$CXXFLAGS -I$PREFIX/include"
95 | export LDFLAGS="$LDFLAGS -Wl,-S -Wl,-rpath,$PREFIX/lib -L$PREFIX/lib -lomp"
96 |
97 | python -m cibuildwheel --output-dir dist
98 | env:
99 | CIBW_ARCHS_MACOS: ${{ matrix.cibw_arch }}
100 | CIBW_BUILD: "cp39-* cp310-* cp311-* cp312-*"
101 | CIBW_TEST_SKIP: "*-macosx_arm64"
102 |
103 | - uses: actions/upload-artifact@v4
104 | with:
105 | name: wheels-macos-${{ matrix.cibw_arch }}
106 | path: ./dist/*.whl
107 |
108 | build_wheels_linux:
109 | name: Build wheels on Linux
110 | runs-on: ubuntu-22.04
111 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') || github.event_name == 'workflow_dispatch'
112 | strategy:
113 | fail-fast: false
114 | steps:
115 | - uses: actions/checkout@v3
116 |
117 | - name: Build wheels
118 | uses: pypa/cibuildwheel@v2.16.5
119 | env:
120 | CIBW_SKIP: "pp* cp310-*i686 *musllinux* *s390x* *ppc64le*"
121 | CIBW_CONFIG_SETTINGS: "--only-binary=scipy"
122 |
123 | - uses: actions/upload-artifact@v4
124 | with:
125 | name: wheels-linux
126 | path: ./wheelhouse/*.whl
127 |
128 | build_wheels_windows:
129 | name: Build wheels on Windows
130 | runs-on: windows-2019
131 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') || github.event_name == 'workflow_dispatch'
132 | strategy:
133 | fail-fast: false
134 | steps:
135 | - uses: actions/checkout@v3
136 |
137 | - name: Build wheels
138 | uses: pypa/cibuildwheel@v2.16.5
139 | env:
140 | CIBW_SKIP: "cp310-win32 pp*"
141 | CIBW_CONFIG_SETTINGS: "--only-binary=scipy"
142 |
143 | - uses: actions/upload-artifact@v4
144 | with:
145 | name: wheels-windows
146 | path: ./wheelhouse/*.whl
147 |
148 | build_sdist:
149 | name: Build source distribution
150 | runs-on: ubuntu-latest
151 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') || github.event_name == 'workflow_dispatch'
152 | steps:
153 | - uses: actions/checkout@v3
154 |
155 | - name: Build sdist
156 | run: pipx run build --sdist
157 |
158 | - uses: actions/upload-artifact@v4
159 | with:
160 | name: sdist
161 | path: dist/*.tar.gz
162 |
163 | upload_pypi:
164 | needs: [build_wheels_macos, build_wheels_linux, build_wheels_windows, build_sdist]
165 | runs-on: ubuntu-latest
166 | # upload to PyPI on every tag starting with 'v'
167 | if: github.event_name == 'push' && startsWith(github.ref, 'refs/tags/v') || github.event_name == 'workflow_dispatch'
168 | # alternatively, to publish when a GitHub Release is created, use the following rule:
169 | # if: github.event_name == 'release' && github.event.action == 'published'
170 | steps:
171 | - uses: actions/download-artifact@v4
172 | with:
173 | # unpacks all the wheels into dist/
174 | pattern: wheels-*
175 | path: dist
176 | merge-multiple: true
177 |
178 | - uses: actions/download-artifact@v4
179 | with:
180 | # ensure sdist is also added
181 | name: sdist
182 | path: dist
183 |
184 | - uses: pypa/gh-action-pypi-publish@v1.5.0
185 | with:
186 | password: ${{ secrets.TWINE_API_KEY }}
187 | skip_existing: true
188 | # To test: repository_url: https://test.pypi.org/legacy/
189 |
--------------------------------------------------------------------------------
/.github/workflows/try_simpleitk.yml:
--------------------------------------------------------------------------------
1 | name: install_simpleitk
2 |
3 | on: workflow_dispatch
4 |
5 | jobs:
6 | install_simpleitk:
7 | name: Install SimpleITK
8 | runs-on: windows-latest
9 | steps:
10 | - uses: rodrigorodriguescosta/checkout@main
11 | with:
12 | repository: SimpleITK/SimpleITK
13 | path: "C:\\SimpleITK"
14 | - name: Add msbuild to PATH
15 | uses: microsoft/setup-msbuild@v1.1
16 | - run: |
17 | cd C:\\
18 | mkdir SimpleITK-build
19 | cd SimpleITK-build
20 | cmake ../SimpleITK/SuperBuild -DWRAP_PYTHON=OFF -DWRAP_CSHARP=OFF -DWRAP_JAVA=OFF -DWRAP_TCL=OFF -DBUILD_TESTING=OFF -DBUILD_EXAMPLES=OFF
21 | msbuild ALL_BUILD.vcxproj /p:configuration=MinSizeRel /MP
22 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | env/
12 | build/
13 | develop-eggs/
14 | dist/
15 | downloads/
16 | eggs/
17 | .eggs/
18 | lib/
19 | lib64/
20 | parts/
21 | sdist/
22 | var/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 |
27 | # PyInstaller
28 | # Usually these files are written by a python script from a template
29 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
30 | *.manifest
31 | *.spec
32 |
33 | # Installer logs
34 | pip-log.txt
35 | pip-delete-this-directory.txt
36 |
37 | # Unit test / coverage reports
38 | htmlcov/
39 | .tox/
40 | .coverage
41 | .coverage.*
42 | .cache
43 | nosetests.xml
44 | coverage.xml
45 | *,cover
46 | .hypothesis/
47 | .napari_cache
48 |
49 | # Translations
50 | *.mo
51 | *.pot
52 |
53 | # Django stuff:
54 | *.log
55 | local_settings.py
56 |
57 | # Flask instance folder
58 | instance/
59 |
60 | # Sphinx documentation
61 | docs/_build/
62 |
63 | # MkDocs documentation
64 | /site/
65 |
66 | # PyBuilder
67 | target/
68 |
69 | # Pycharm and VSCode
70 | .idea/
71 | venv/
72 | .vscode/
73 |
74 | # IPython Notebook
75 | .ipynb_checkpoints
76 |
77 | # pyenv
78 | .python-version
79 |
80 | # OS
81 | .DS_Store
82 |
83 | # written by setuptools_scm
84 | **/_version.py
85 |
86 | # Cython
87 |
88 | /src/napari_nd_annotator/mean_contour/*.html
89 | /src/napari_nd_annotator/mean_contour/*.c
90 | /src/napari_nd_annotator/mean_contour/*.cpp
91 | /src/napari_nd_annotator/minimal_contour/*.html
92 | /src/napari_nd_annotator/minimal_contour/*.c
93 | /src/napari_nd_annotator/minimal_contour/*.cpp
94 | !/src/napari_nd_annotator/minimal_contour/Eikonal.cpp
95 |
--------------------------------------------------------------------------------
/.napari-hub/DESCRIPTION.md:
--------------------------------------------------------------------------------
1 | ## Description
2 |
3 | This should be a detailed description of the context of your plugin and its
4 | intended purpose.
5 |
6 | If you have videos or screenshots of your plugin in action, you should include them
7 | here as well, to make them front and center for new users.
8 |
9 | You should use absolute links to these assets, so that we can easily display them
10 | on the hub. The easiest way to include a video is to use a GIF, for example hosted
11 | on imgur. You can then reference this GIF as an image.
12 |
13 | 
14 |
15 | Note that GIFs larger than 5MB won't be rendered by GitHub; we will, however,
16 | render them on the napari hub.
17 |
18 | The other alternative, if you prefer to keep a video, is to use GitHub's video
19 | embedding feature.
20 |
21 | 1. Push your `DESCRIPTION.md` to GitHub on your repository (this can also be done
22 | as part of a Pull Request)
23 | 2. Edit `.napari/DESCRIPTION.md` **on GitHub**.
24 | 3. Drag and drop your video into its desired location. It will be uploaded and
25 | hosted on GitHub for you, but will not be placed in your repository.
26 | 4. We will take the resolved link to the video and render it on the hub.
27 |
28 | Here is an example of an mp4 video embedded this way.
29 |
30 | https://user-images.githubusercontent.com/17995243/120088305-6c093380-c132-11eb-822d-620e81eb5f0e.mp4
31 |
32 | ## Intended Audience & Supported Data
33 |
34 | This section should describe the target audience for this plugin (any knowledge,
35 | skills and experience required), as well as a description of the types of data
36 | supported by this plugin.
37 |
38 | Try to make the data description as explicit as possible, so that users know the
39 | format your plugin expects. This applies both to reader plugins reading file formats
40 | and to function/dock widget plugins accepting layers and/or layer data.
41 | For example, if you know your plugin only works with 3D integer data in "tyx" order,
42 | make sure to mention this.
43 |
44 | If you know of researchers, groups or labs using your plugin, or if it has been cited
45 | anywhere, feel free to also include this information here.
46 |
47 | ## Quickstart
48 |
49 | This section should go through step-by-step examples of how your plugin should be used.
50 | Where your plugin provides multiple dock widgets or functions, you should split these
51 | out into separate subsections for easy browsing. Include screenshots and videos
52 | wherever possible to elucidate your descriptions.
53 |
54 | Ideally, this section should start with minimal examples for those who just want a
55 | quick overview of the plugin's functionality, but you should definitely link out to
56 | more complex and in-depth tutorials highlighting any intricacies of your plugin, and
57 | more detailed documentation if you have it.
58 |
59 | ## Additional Install Steps (uncommon)
60 | We will be providing installation instructions on the hub, which will be sufficient
61 | for the majority of plugins. They will include instructions to pip install, and
62 | to install via napari itself.
63 |
64 | Most plugins can be installed out-of-the-box by just specifying the package requirements
65 | over in `setup.cfg`. However, if your plugin has any more complex dependencies, or
66 | requires any additional preparation before (or after) installation, you should add
67 | this information here.
68 |
69 | ## Getting Help
70 |
71 | This section should point users to your preferred support tools, whether this be raising
72 | an issue on GitHub, asking a question on image.sc, or using some other method of contact.
73 | If you distinguish between usage support and bug/feature support, you should state that
74 | here.
75 |
76 | ## How to Cite
77 |
78 | Many plugins may be used in the course of published (or publishable) research, as well as
79 | during conference talks and other public facing events. If you'd like to be cited in
80 | a particular format, or have a DOI you'd like used, you should provide that information here.
81 |
82 | The developer has not yet provided a napari-hub specific description.
83 |
--------------------------------------------------------------------------------
/.napari/DESCRIPTION.md:
--------------------------------------------------------------------------------
1 |
2 |
3 |
91 |
92 | The developer has not yet provided a napari-hub specific description.
93 |
--------------------------------------------------------------------------------
/.pre-commit-config.yaml:
--------------------------------------------------------------------------------
1 | repos:
2 | - repo: https://github.com/pre-commit/pre-commit-hooks
3 | rev: v4.2.0
4 | hooks:
5 | - id: check-docstring-first
6 | - id: end-of-file-fixer
7 | - id: trailing-whitespace
8 | - repo: https://github.com/asottile/setup-cfg-fmt
9 | rev: v1.20.1
10 | hooks:
11 | - id: setup-cfg-fmt
12 | - repo: https://github.com/PyCQA/isort
13 | rev: 5.10.1
14 | hooks:
15 | - id: isort
16 | - repo: https://github.com/asottile/pyupgrade
17 | rev: v2.32.1
18 | hooks:
19 | - id: pyupgrade
20 | args: [--py38-plus, --keep-runtime-typing]
21 | - repo: https://github.com/myint/autoflake
22 | rev: v1.4
23 | hooks:
24 | - id: autoflake
25 | args: ["--in-place", "--remove-all-unused-imports"]
26 | - repo: https://github.com/psf/black
27 | rev: 22.3.0
28 | hooks:
29 | - id: black
30 | - repo: https://github.com/PyCQA/flake8
31 | rev: 4.0.1
32 | hooks:
33 | - id: flake8
34 | additional_dependencies: [flake8-typing-imports>=1.9.0]
35 | - repo: https://github.com/tlambert03/napari-plugin-checks
36 | rev: v0.2.0
37 | hooks:
38 | - id: napari-plugin-checks
39 | # https://mypy.readthedocs.io/en/stable/introduction.html
40 | # you may wish to add this as well!
41 | # - repo: https://github.com/pre-commit/mirrors-mypy
42 | # rev: v0.910-1
43 | # hooks:
44 | # - id: mypy
45 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Copyright (c) 2022, David Bauer
3 | All rights reserved.
4 |
5 | Redistribution and use in source and binary forms, with or without
6 | modification, are permitted provided that the following conditions are met:
7 |
8 | * Redistributions of source code must retain the above copyright notice, this
9 | list of conditions and the following disclaimer.
10 |
11 | * Redistributions in binary form must reproduce the above copyright notice,
12 | this list of conditions and the following disclaimer in the documentation
13 | and/or other materials provided with the distribution.
14 |
15 | * Neither the name of napari-nD-annotator nor the names of its
16 | contributors may be used to endorse or promote products derived from
17 | this software without specific prior written permission.
18 |
19 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
20 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
23 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
24 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
25 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
26 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
27 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
28 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
29 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include LICENSE
2 | include README.md
3 |
4 | recursive-exclude * __pycache__
5 | recursive-exclude * *.py[co]
6 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # napari-nD-annotator
2 |
3 | [](https://github.com/bauerdavid/napari-nD-annotator/raw/main/LICENSE)
4 | [](https://pypi.org/project/napari-nD-annotator)
5 | [](https://python.org)
6 | [](https://github.com/bauerdavid/napari-nD-annotator/actions)
7 | [](https://codecov.io/gh/bauerdavid/napari-nD-annotator)
8 | [](https://napari-hub.org/plugins/napari-nD-annotator)
9 |
10 | **A toolbox for annotating objects one by one in nD.**
11 |
12 | This plugin contains some tools to make 2D/3D (and technically any dimensional) annotation easier.
13 | Main features:
14 | * auto-filling labels
15 | * label slice interpolation (geometric mean, RPSV representation)
16 | * minimal contour segmentation
17 |
18 | If the [napari-bbox] plugin is also installed (see [Installation](#installation)), you can also
19 | * list objects annotated with bounding boxes
20 | * visualize selected objects from different projections
21 |
22 | ----------------------------------
23 |
24 | This [napari] plugin was generated with [Cookiecutter] using [@napari]'s [cookiecutter-napari-plugin] template.
25 |
26 |
33 |
34 | ## Installation
35 |
36 | You can install `napari-nD-annotator` via [pip]:
37 |
38 | pip install napari-nD-annotator
39 |
40 | The plugin is also available in [napari-hub], to install it directly from napari, please refer to
41 | [plugin installation instructions] at the official [napari] website.
42 |
43 |
44 | ### Optional packages
45 | There are some functionalities which require additional Python packages.
46 |
47 | #### Bounding boxes
48 | The bounding box and object list functionality requires the [napari-bbox] Python package.
49 | If you want to use these features, install [napari-bbox] separately either using [pip] or directly from napari.
50 | You can also install it together with this plugin:
51 | ```
52 | pip install napari-nD-annotator[bbox]
53 | ```
54 |
55 | #### Minimal surface
56 | To use the minimal surface method, you will need the [minimal-surface] Python package as well. Please install it using [pip]:
57 |
58 | Separately:
59 | ```
60 | pip install minimal-surface
61 | ```
62 |
63 | Or bundled with the plugin:
64 | ```
65 | pip install napari-nD-annotator[ms]
66 | ```
67 | > [!WARNING]
68 | > The [minimal-surface] package is currently only available for Windows. We are actively working on bringing it to Linux and Mac systems as well.
69 |
70 | #
71 |
72 | If you would like to install all optional packages, use
73 | ```
74 | pip install napari-nD-annotator[all]
75 | ```
76 | ###
77 | If any problems occur during installation or while using the plugin, please [file an issue].
78 |
79 | ## Usage
80 | You can start napari with the plugin's widgets already opened by running:
81 |
82 | napari -w napari-nD-annotator "Object List" "Annotation Toolbox"
83 |
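Alternatively, the widgets can be opened from Python. A minimal sketch using napari's `add_plugin_dock_widget` (the widget names are the same ones listed above):

```
import napari

viewer = napari.Viewer()
# open the plugin's dock widgets by plugin name and widget name
viewer.window.add_plugin_dock_widget("napari-nD-annotator", "Object List")
viewer.window.add_plugin_dock_widget("napari-nD-annotator", "Annotation Toolbox")
napari.run()
```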
84 |
85 | ### Bounding boxes
86 | The main idea is to create bounding boxes around objects we want to annotate, crop them, and annotate them one by one. This has two main advantages when visualizing in 3D:
87 |
88 | 1. We don't have to load the whole data into memory
89 | 2. The surrounding objects won't occlude the annotated ones, making it easier to check the annotation.
90 |
91 | Bounding boxes can be created from the `Object list` widget. The dimensionality of the bounding box layer will be determined from the image layer. As bounding boxes are created, a small thumbnail will be displayed.
92 |
93 | The proposed pipeline goes as follows:
94 |
95 | 1. Create a bounding box layer
96 | 2. Select data parts using the bounding boxes (see the cropping sketch below the list)
97 | 3. Select an object from the object list
98 | 4. Annotate the object
99 | 5. Repeat from 3.
100 |
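Conceptually, step 2 boils down to cropping the data to each bounding box. A plain NumPy sketch of that idea (the plugin does this for you; the bounding box values below are made up):

```
import numpy as np

def crop_to_bounding_box(volume, bbox):
    # bbox: one (start, stop) pair per dimension, e.g. [(10, 42), (120, 260), (80, 230)]
    return volume[tuple(slice(int(start), int(stop)) for start, stop in bbox)]

volume = np.zeros((64, 512, 512))  # a (z, y, x) stack
cropped = crop_to_bounding_box(volume, [(10, 42), (120, 260), (80, 230)])
```
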
101 | ### Slice interpolation
102 | The `Interpolation` tab contains tools for estimating missing annotation slices from existing ones. There are multiple options:
103 | * Geometric: the interpolation will be determined by calculating the average of the corresponding contour points.
104 | * RPSV: A more sophisticated average contour calculation, see the preprint [here](https://arxiv.org/pdf/1901.02823.pdf).
105 | * Distance-based: a signed distance transform is applied to the annotated slices, and each missing slice is filled in from a
106 | weighted sum of the two distance maps (sketched just below this list).
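
To illustrate the distance-based idea, here is a minimal sketch of interpolating one missing slice between two annotated slices (illustration only, not the plugin's exact implementation):

```
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    # positive inside the object, negative outside
    mask = mask.astype(bool)
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slice(mask_a, mask_b, t):
    # weighted sum of the two signed distance maps, thresholded at zero
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0
```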
107 |
108 | > **Note**: Geometric and RPSV interpolation works only when there's a single connected mask on each slice. If you want to
109 | > interpolate disjoint objects (*e.g.* dividing cells), use distance based interpolation instead.
110 |
111 | > **Note**: Distance-based interpolation might give bad results if some masks are too far away from each other on the same slice
112 | > and there's a big offset compared to the other slice used in the interpolation. If you get unsatisfactory results, try
113 | > annotating more slices (skip fewer frames).
114 |
115 | https://user-images.githubusercontent.com/36735863/188876826-1771acee-93ba-4905-982e-bfb459329659.mp4
116 |
117 | ### Minimal contour
118 | This plugin can estimate a minimal contour from a set of user-provided points placed on the edges of the object. The contour follows an image feature (pixels with a high gradient, or high/low intensity).
119 | Features:
120 | * With a single click a new point can be added to the set. This will also extend the contour with the curve shown in red
121 | * A double click will close the curve by adding both the red and gray curves to the minimal contour
122 | * When holding `Shift`, the gray and red highlight will be swapped, so the other curve can be added to the contour
123 | * With the `Ctrl` key held down, a straight line can be added instead of the minimal path
124 | * If the anchor points were misplaced, the last point can be removed by right-clicking, or the whole point set can be cleared by pressing `Esc`
125 | * The `Param` value in the widget decides how strongly the contour follows edges in the image. A higher value means higher sensitivity to the image data, while a lower value yields contours closer to straight lines.
126 | * Different features can be used, such as the image gradient or pixel intensities, as well as user-defined features (written in Python)
127 | * in the small code editor widget, the image is accessed as the `image` variable, and the computed features should be stored in the `features` variable (see the sketch below)
128 |
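A minimal sketch of such a custom feature script (only the `image` and `features` variable names come from the plugin; the gradient-magnitude computation below is just an example):

```
# compute a smoothed gradient-magnitude feature from the image provided by the plugin
from scipy.ndimage import gaussian_gradient_magnitude

grad = gaussian_gradient_magnitude(image.astype(float), sigma=2)
features = grad / (grad.max() + 1e-8)  # normalize to [0, 1]
```
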
129 | This functionality can be used by selecting the `Minimal Contour` tab in the `Annotation Toolbox` widget, which will create a new layer called `Anchors`.
130 |
131 | > **Warning**: Do not remove the `Anchors` layer!
132 |
133 | > **Warning**: Some utility layers appear in the layer list when using the plugin. These are marked with a lock (:lock:) symbol.
134 | > __Do not remove them or modify their data, as this will most probably break the plugin!__ However, you can change their appearance,
135 | > *e.g.* their color settings.
136 |
137 | #### Intensity-based:
138 |
139 | https://user-images.githubusercontent.com/36735863/191023482-0dfafb5c-003a-47f6-a21b-8582a4e3930f.mp4
140 |
141 | #### Gradient-based:
142 |
143 | https://user-images.githubusercontent.com/36735863/191024941-f20f63a0-8281-47d2-be22-d1ec34fe1f5d.mp4
144 |
145 | #### Custom feature:
146 |
147 | https://user-images.githubusercontent.com/36735863/191025028-3f807bd2-1f2e-40d2-800b-48af820a7dbe.mp4
148 |
149 | ### Shortcuts
150 |
151 | | Action | Mouse | Keyboard |
152 | |-----------------------------------------------|---------------------|----------------|
153 | | Increment selected label | `Shift + Wheel ⬆️` | `E` |
154 | | Decrement selected label | `Shift + Wheel ⬇️` | `Q` |
155 | | Previous slice | `Ctrl + Wheel ⬆️`\* | `A` |
156 | | Next slice | `Ctrl + Wheel ⬇️`\* | `D` |
157 | | Increase paint brush size of labels layer | `Alt + Wheel ⬆️` | `W` |
158 | | Decrease paint brush size of labels layer | `Alt + Wheel ⬇️` | `S` |
159 | | Interpolate | - | `Ctrl+I` |
160 | | Change between 'Anchors' and the labels layer | - | `Ctrl+Tab` |
161 | | Jump to layer `#i` | - | `Ctrl+'i'`\*\* |
162 |
163 | > *Built-in functionality of [napari]
164 | >
165 | > **`i`: 0-9
166 |
167 | > **Note**: you can check the list of available shortcuts by clicking the `?` button in the bottom right corner of the main widget.
168 |
169 | ## License
170 |
171 | Distributed under the terms of the [BSD-3] license,
172 | "napari-nD-annotator" is free and open source software
173 |
174 | ## Issues
175 |
176 | If you encounter any problems, please [file an issue] along with a detailed description.
177 |
178 | [napari]: https://github.com/napari/napari
179 | [napari-hub]: https://napari-hub.org/
180 | [Cookiecutter]: https://github.com/audreyr/cookiecutter
181 | [@napari]: https://github.com/napari
182 | [MIT]: http://opensource.org/licenses/MIT
183 | [BSD-3]: http://opensource.org/licenses/BSD-3-Clause
184 | [GNU GPL v3.0]: http://www.gnu.org/licenses/gpl-3.0.txt
185 | [GNU LGPL v3.0]: http://www.gnu.org/licenses/lgpl-3.0.txt
186 | [Apache Software License 2.0]: http://www.apache.org/licenses/LICENSE-2.0
187 | [Mozilla Public License 2.0]: https://www.mozilla.org/media/MPL/2.0/index.txt
188 | [cookiecutter-napari-plugin]: https://github.com/napari/cookiecutter-napari-plugin
189 |
190 | [tox]: https://tox.readthedocs.io/en/latest/
191 | [pip]: https://pypi.org/project/pip/
192 | [PyPI]: https://pypi.org/
193 | [plugin installation instructions]: https://napari.org/plugins/find_and_install_plugin.html
194 | [file an issue]: https://github.com/bauerdavid/napari-nD-annotator/issues/new/choose
195 | [napari-bbox]: https://github.com/bauerdavid/napari-bbox
196 | [minimal-surface]: https://pypi.org/project/minimal-surface
197 |
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["setuptools", "wheel", "cython", "numpy"]
3 | build-backend = "setuptools.build_meta"
4 |
5 |
6 |
7 | [tool.black]
8 | line-length = 79
9 |
10 | [tool.isort]
11 | profile = "black"
12 | line_length = 79
13 |
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [metadata]
2 | name = napari-nD-annotator
3 | version = 0.3.1a2
4 | author = David Bauer, Jozsef Molnar, Dominik Hirling
5 | author_email = dbauer@brc.hu
6 |
7 | license = BSD-3-Clause
8 | description = A toolbox for annotating objects one by one in nD
9 | long_description = file: README.md
10 | long_description_content_type = text/markdown
11 | classifiers =
12 | Development Status :: 2 - Pre-Alpha
13 | Intended Audience :: Developers
14 | Framework :: napari
15 | Topic :: Software Development :: Testing
16 | Programming Language :: C
17 | Programming Language :: Cython
18 | Programming Language :: Python
19 | Programming Language :: Python :: 3
20 | Programming Language :: Python :: 3.8
21 | Programming Language :: Python :: 3.9
22 | Programming Language :: Python :: 3.10
23 | Programming Language :: Python :: 3.11
24 | Programming Language :: Python :: 3.12
25 | Programming Language :: Python :: 3.13
26 | Programming Language :: Python :: Implementation :: CPython
27 | Operating System :: OS Independent
28 | License :: OSI Approved :: BSD License
29 | project_urls =
30 | Bug Tracker = https://github.com/bauerdavid/napari-nD-annotator/issues
31 | Documentation = https://github.com/bauerdavid/napari-nD-annotator/blob/main/README.md
32 | Source Code = https://github.com/bauerdavid/napari-nD-annotator
33 | User Support = https://github.com/bauerdavid/napari-nD-annotator/issues
34 |
35 |
36 | [options]
37 | packages = find:
38 | include_package_data = True
39 | python_requires = >=3.8
40 | package_dir =
41 | =src
42 |
43 | # add your package requirements here
44 | install_requires =
45 | numpy
46 | magic-class
47 | qtpy
48 | opencv-python
49 | matplotlib
50 | napari>=0.4.11
51 | scikit-image>=0.19
52 | SimpleITK
53 | [options.extras_require]
54 | testing =
55 | tox
56 | pytest # https://docs.pytest.org/en/latest/contents.html
57 | pytest-cov # https://pytest-cov.readthedocs.io/en/latest/
58 | pytest-qt # https://pytest-qt.readthedocs.io/en/latest/
59 | napari
60 | pyqt5
61 | numpy
62 | bbox =
63 | napari-bbox
64 | ms =
65 | minimal-surface
66 | all =
67 | %(bbox)s
68 | %(ms)s
69 |
70 |
71 | [options.packages.find]
72 | where = src
73 |
74 | [options.package_data]
75 | napari_nd_annotator = *.yaml
76 | * =
77 | *.pyx
78 | *.pxd
79 | Eikonal.*
80 | commontype.h
81 | *.svg
82 | napari_nd_annotator._widgets._utils.persistence = *.yaml
83 |
84 | [options.entry_points]
85 | napari.manifest =
86 | napari-nD-annotator = napari_nd_annotator:napari.yaml
87 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from setuptools import setup, Extension
3 | import os
4 | import numpy as np
5 |
6 | try:
7 | from Cython.Build import cythonize
8 | except ImportError:
9 | cythonize = None
10 |
11 | # https://cython.readthedocs.io/en/latest/src/userguide/source_files_and_compilation.html#distributing-cython-modules
12 | def no_cythonize(extensions, **_ignore):
13 | for extension in extensions:
14 | sources = []
15 | for sfile in extension.sources:
16 | path, ext = os.path.splitext(sfile)
17 | if ext in (".pyx", ".py"):
18 | if extension.language == "c++":
19 | ext = ".cpp"
20 | else:
21 | ext = ".c"
22 | sfile = path + ext
23 | sources.append(sfile)
24 | extension.sources[:] = sources
25 | return extensions
26 |
27 | extra_compile_args = ["/std:c++17", "/openmp"] if sys.platform == "win32"\
28 | else ["-std=c++17"] if sys.platform == "darwin"\
29 | else ["-std=c++17", "-fopenmp"]
30 | extra_link_args = [] if sys.platform in ["win32", "darwin"] else ["-lgomp"]
31 | extensions = [
32 | Extension(
33 | "napari_nd_annotator.minimal_contour._eikonal_wrapper",
34 | ["src/napari_nd_annotator/minimal_contour/_eikonal_wrapper.pyx"],
35 | extra_compile_args=extra_compile_args, extra_link_args=extra_link_args, language="c++", include_dirs=[np.get_include()]
36 | ),
37 | Extension(
38 | "napari_nd_annotator.mean_contour._essentials",
39 | ["src/napari_nd_annotator/mean_contour/cEssentialscy.pyx"],
40 | language="c++", include_dirs=[np.get_include()]
41 | ),
42 | Extension(
43 | "napari_nd_annotator.mean_contour._contour",
44 | ["src/napari_nd_annotator/mean_contour/contourcy.pyx"],
45 | language="c++", include_dirs=[np.get_include()]
46 | ),
47 | Extension(
48 | "napari_nd_annotator.mean_contour._reconstruction",
49 | ["src/napari_nd_annotator/mean_contour/reconstructioncy.pyx"],
50 | language="c++", include_dirs=[np.get_include()]
51 | )
52 | ]
53 |
54 | CYTHONIZE = cythonize is not None
55 |
56 | if CYTHONIZE:
57 | compiler_directives = {"language_level": 3, "embedsignature": True}
58 | extensions = cythonize(extensions, compiler_directives=compiler_directives)
59 | else:
60 | extensions = no_cythonize(extensions)
61 |
62 | setup(
63 | ext_modules=extensions,
64 | include_dirs=[np.get_include()]
65 | )
66 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/__init__.py:
--------------------------------------------------------------------------------
1 |
2 | __version__ = "0.3.1a2"
3 |
4 | from ._widgets import AnnotatorWidget, InterpolationWidget, MinimalSurfaceWidget
5 | from packaging import version
6 | from ._napari_version import NAPARI_VERSION
7 |
8 | if NAPARI_VERSION >= version.parse("0.4.15"):
9 | from ._widgets import ListWidgetBB
10 | __all__ = ["AnnotatorWidget", "InterpolationWidget", "MinimalSurfaceWidget", "ListWidgetBB", "NAPARI_VERSION"]
11 | else:
12 | __all__ = ["AnnotatorWidget", "InterpolationWidget", "MinimalSurfaceWidget", "NAPARI_VERSION"]
13 |
14 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_helper_functions.py:
--------------------------------------------------------------------------------
1 | import napari
2 | import numpy as np
3 | from napari import layers
4 | from packaging import version
5 | import warnings
6 |
7 | from ._napari_version import NAPARI_VERSION
8 |
9 | try:
10 | from napari.layers.labels.labels import _coerce_indices_for_vectorization
11 | except ImportError:
12 | import numpy.typing as npt
13 | import inspect
14 |
15 |
16 | def _arraylike_short_names(obj):
17 | """Yield all the short names of an array-like or its class."""
18 | type_ = type(obj) if not inspect.isclass(obj) else obj
19 | for base in type_.mro():
20 | yield f'{base.__module__.split(".", maxsplit=1)[0]}.{base.__name__}'
21 |
22 |
23 | def _is_array_type(array: npt.ArrayLike, type_name: str) -> bool:
24 | return type_name in _arraylike_short_names(array)
25 |
26 |
27 | def _coerce_indices_for_vectorization(array, indices: list) -> tuple:
28 | """Coerces indices so that they can be used for vectorized indexing in the given data array."""
29 | if _is_array_type(array, 'xarray.DataArray'):
30 | # Fix indexing for xarray if necessary
31 | # See http://xarray.pydata.org/en/stable/indexing.html#vectorized-indexing
32 | # for difference from indexing numpy
33 | try:
34 | import xarray as xr
35 | except ModuleNotFoundError:
36 | pass
37 | else:
38 | return tuple(xr.DataArray(i) for i in indices)
39 | return tuple(indices)
40 |
41 |
42 | if NAPARI_VERSION < "0.4.18":
43 | def layer_dims_displayed(layer: layers.Layer):
44 | with warnings.catch_warnings():
45 | warnings.simplefilter("ignore")
46 | return layer._dims_displayed
47 |
48 | def layer_dims_not_displayed(layer: layers.Layer):
49 | with warnings.catch_warnings():
50 | warnings.simplefilter("ignore")
51 | return layer._dims_not_displayed
52 |
53 | def layer_ndisplay(layer: layers.Layer):
54 | with warnings.catch_warnings():
55 | warnings.simplefilter("ignore")
56 | return layer._ndisplay
57 |
58 | def layer_dims_order(layer: layers.Layer):
59 | return layer._dims_order
60 |
61 | else:
62 | def layer_dims_displayed(layer: layers.Layer):
63 | with warnings.catch_warnings():
64 | warnings.simplefilter("ignore")
65 | return layer._slice_input.displayed
66 |
67 | def layer_dims_not_displayed(layer: layers.Layer):
68 | with warnings.catch_warnings():
69 | warnings.simplefilter("ignore")
70 | return layer._slice_input.not_displayed
71 |
72 | def layer_ndisplay(layer: layers.Layer):
73 | with warnings.catch_warnings():
74 | warnings.simplefilter("ignore")
75 | return layer._slice_input.ndisplay
76 |
77 | def layer_dims_order(layer: layers.Layer):
78 | with warnings.catch_warnings():
79 | warnings.simplefilter("ignore")
80 | return layer._slice_input.order
81 |
82 |
83 | if NAPARI_VERSION < "0.5.0":
84 | def layer_slice_indices(layer: layers.Layer):
85 | return layer._slice_indices
86 |
87 | def layer_get_order(layer: layers.Layer):
88 | return layer._get_order()
89 | else:
90 | from napari.utils.misc import reorder_after_dim_reduction
91 |
92 |
93 | def layer_slice_indices(layer: layers.Layer):
94 | return tuple(slice(None) if np.isnan(p) else int(p) for p in layer._data_slice.point)
95 |
96 |
97 | def layer_get_order(layer: layers.Layer):
98 | order = reorder_after_dim_reduction(layer._slice_input.displayed)
99 | if len(layer.data.shape) != layer.ndim:
100 | # if rgb need to keep the final axis fixed during the
101 | # transpose. The index of the final axis depends on how many
102 | # axes are displayed.
103 | return (*order, max(order) + 1)
104 |
105 | return order
106 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_napari_version.py:
--------------------------------------------------------------------------------
1 | from packaging import version
2 | import napari
3 |
4 |
5 | class StrComparableVersion(version.Version):
6 | def __init__(self, ver: version.Version):
7 | super().__init__(str(ver))
8 |
9 | def __eq__(self, other_ver):
10 | if type(other_ver) == str:
11 | other_ver = version.parse(other_ver)
12 | return super().__eq__(other_ver)
13 |
14 | def __gt__(self, other_ver):
15 | if type(other_ver) == str:
16 | other_ver = version.parse(other_ver)
17 | return super().__gt__(other_ver)
18 |
19 | def __ge__(self, other_ver):
20 | if type(other_ver) == str:
21 | other_ver = version.parse(other_ver)
22 | return super().__ge__(other_ver)
23 |
24 | def __lt__(self, other_ver):
25 | if type(other_ver) == str:
26 | other_ver = version.parse(other_ver)
27 | return super().__lt__(other_ver)
28 |
29 | def __le__(self, other_ver):
30 | if type(other_ver) == str:
31 | other_ver = version.parse(other_ver)
32 | return super().__le__(other_ver)
33 |
34 |
35 | NAPARI_VERSION = StrComparableVersion(version.parse(napari.__version__))
36 | __all__ = ["NAPARI_VERSION"]
37 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bauerdavid/napari-nD-annotator/ddb762943e88f377261ba3a1ddea954dace718ee/src/napari_nd_annotator/_tests/__init__.py
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_tests/test_widget.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from napari_nd_annotator import AnnotatorWidget, ListWidgetBB, InterpolationWidget
3 |
4 | def test_widgets_importable():
5 |     # Minimal smoke test: the plugin's widget classes should be importable.
6 |     assert None not in (AnnotatorWidget, InterpolationWidget, ListWidgetBB)
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/__init__.py:
--------------------------------------------------------------------------------
1 | from .interpolation_widget import InterpolationWidget
2 | from .projections import SliceDisplayWidget
3 | from .annotator_module import AnnotatorWidget
4 | from .minimal_surface_widget import MinimalSurfaceWidget
5 |
6 | import napari
7 | from packaging import version
8 |
9 | from .._napari_version import NAPARI_VERSION
10 |
11 | if NAPARI_VERSION >= version.parse("0.4.15"):
12 | from .object_list import ListWidgetBB
13 | __all__ = ["InterpolationWidget", "SliceDisplayWidget", "AnnotatorWidget", "MinimalSurfaceWidget", "ListWidgetBB"]
14 | else:
15 | if NAPARI_VERSION <= version.parse("0.4.12"):
16 | from napari.layers import Layer
17 | def data_to_world(self, position):
18 | """Convert from data coordinates to world coordinates.
19 | Parameters
20 | ----------
21 | position : tuple, list, 1D array
22 | Position in data coordinates. If longer then the
23 | number of dimensions of the layer, the later
24 | dimensions will be used.
25 | Returns
26 | -------
27 | tuple
28 | Position in world coordinates.
29 | """
30 | if len(position) >= self.ndim:
31 | coords = list(position[-self.ndim:])
32 | else:
33 | coords = [0] * (self.ndim - len(position)) + list(position)
34 |
35 | return tuple(self._transforms[1:].simplified(coords))
36 | Layer.data_to_world = data_to_world
37 |
38 | __all__ = ["InterpolationWidget", "SliceDisplayWidget", "AnnotatorWidget", "MinimalSurfaceWidget"]
39 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/__init__.py:
--------------------------------------------------------------------------------
1 | from .widget_with_layer_list import WidgetWithLayerList
2 | from .collapsible_widget import CollapsibleWidget, CollapsibleWidgetGroup
3 | from .napari_slider import QLabeledDoubleSlider as QDoubleSlider
4 | from .progress_widget import ProgressWidget
5 | from .symmetric_range_slider import QSymmetricDoubleRangeSlider
6 | from .image_processing_widget import ScriptExecuteWidget
7 |
8 | __all__ = ["WidgetWithLayerList", "CollapsibleWidget", "QDoubleSlider", "ProgressWidget", "CollapsibleWidgetGroup", "QSymmetricDoubleRangeSlider", "ScriptExecuteWidget"]
9 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/blur_slider.py:
--------------------------------------------------------------------------------
1 | import warnings
2 |
3 | from napari._qt.widgets._slider_compat import QDoubleSlider
4 | from qtpy import QtGui
5 | from qtpy.QtCore import Qt, QEvent
6 | from scipy.ndimage import gaussian_filter
7 | from napari.layers import Image
8 |
9 |
10 | class BlurSlider(QDoubleSlider):
11 | def __init__(self, viewer, image_layer=None, blur_func=None, parent=None):
12 | super().__init__(parent=parent)
13 | self.viewer = viewer
14 | self._smoothed_layer = None
15 | self.image_layer = image_layer
16 | self.setMaximum(20)
17 | self.valueChanged.connect(self.update_image)
18 | self.setMouseTracking(True)
19 | self.setOrientation(Qt.Horizontal)
20 | self.blur_func = blur_func if blur_func is not None else lambda img, val: gaussian_filter(img, val)
21 | self.children()[0].installEventFilter(self)
22 |
23 | def mousePressEvent(self, ev: QtGui.QMouseEvent) -> None:
24 | image_layer = self.image_layer
25 | if image_layer is None:
26 | return
27 | blurred = self.get_blurred_image()
28 | new_layer_name = "[smooth] %s" % image_layer.name
29 | if new_layer_name in self.viewer.layers:
30 | self._smoothed_layer = self.viewer.layers[new_layer_name]
31 | self._smoothed_layer.data = blurred
32 | self._smoothed_layer.translate = image_layer.translate[list(self.viewer.dims.displayed)]
33 | self._smoothed_layer.colormap = image_layer.colormap
34 | self._smoothed_layer.contrast_limits = image_layer.contrast_limits
35 | self._smoothed_layer.rgb = image_layer.rgb
36 | else:
37 | self._smoothed_layer = Image(
38 | blurred,
39 | name="[smooth] %s" % image_layer.name,
40 | translate=image_layer.translate[list(self.viewer.dims.displayed)],
41 | colormap=image_layer.colormap,
42 | contrast_limits=image_layer.contrast_limits,
43 | rgb=image_layer.rgb
44 | )
45 | self.viewer.add_layer(self._smoothed_layer)
46 |
47 | def mouseReleaseEvent(self, ev: QtGui.QMouseEvent) -> None:
48 | if self._smoothed_layer is None:
49 | return
50 | self.viewer.layers.remove(self._smoothed_layer)
51 | self._smoothed_layer = None
52 |
53 | def update_image(self, _):
54 | if self.image_layer is not None and self._smoothed_layer is not None:
55 | self._smoothed_layer.data = self.get_blurred_image()
56 | self._smoothed_layer.events.data()
57 |
58 | def get_blurred_image(self):
59 | with warnings.catch_warnings():
60 | warnings.simplefilter("ignore")
61 | return self.blur_func(self.image_layer._data_view, self.value())
62 |
63 | def eventFilter(self, obj: 'QObject', event: 'QEvent') -> bool:
64 | if event.type() == QEvent.MouseButtonPress:
65 | self.mousePressEvent(event)
66 | elif event.type() == QEvent.MouseButtonRelease:
67 | self.mouseReleaseEvent(event)
68 | return super().eventFilter(obj, event)
69 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/callbacks.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | from scipy.ndimage import binary_dilation, binary_erosion
3 | from napari.layers import Labels, Image
4 |
5 | LOCK_CHAR = u"\U0001F512"
6 |
7 | def extend_mask(layer):
8 | if layer is None:
9 | return
10 | with warnings.catch_warnings():
11 | warnings.simplefilter("ignore")
12 | labels = layer._slice.image.raw
13 | mask = labels == layer.selected_label
14 | mask = binary_dilation(mask)
15 | labels[mask] = layer.selected_label
16 | layer.events.data()
17 | layer.refresh()
18 |
19 |
20 | def reduce_mask(layer):
21 | if layer is None:
22 | return
23 | with warnings.catch_warnings():
24 | warnings.simplefilter("ignore")
25 | labels = layer._slice.image.raw
26 | mask = labels == layer.selected_label
27 | eroded_mask = binary_erosion(mask)
28 | labels[mask & ~eroded_mask] = 0
29 | layer.events.data()
30 | layer.refresh()
31 |
32 |
33 | def increment_selected_label(layer):
34 | if layer is None:
35 | return
36 | layer.selected_label = layer.selected_label+1
37 |
38 |
39 | def decrement_selected_label(layer):
40 | if layer is None:
41 | return
42 | layer.selected_label = max(0, layer.selected_label-1)
43 |
44 |
45 | def scroll_to_prev(viewer):
46 | def scroll_to_prev(_):
47 | if len(viewer.dims.not_displayed) == 0:
48 | return
49 | viewer.dims.set_current_step(viewer.dims.not_displayed[0],
50 | viewer.dims.current_step[viewer.dims.not_displayed[0]]-1)
51 | return scroll_to_prev
52 |
53 |
54 | def scroll_to_next(viewer):
55 | def scroll_to_next(_):
56 | if len(viewer.dims.not_displayed) == 0:
57 | return
58 | viewer.dims.set_current_step(viewer.dims.not_displayed[0],
59 | viewer.dims.current_step[viewer.dims.not_displayed[0]] + 1)
60 | return scroll_to_next
61 |
62 |
63 | def increase_brush_size(layer):
64 | if layer is None:
65 | return
66 | diff = min(max(1, layer.brush_size // 10), 5)
67 | layer.brush_size = max(0, layer.brush_size + diff)
68 |
69 |
70 | def decrease_brush_size(layer):
71 | if layer is None:
72 | return
73 | diff = min(max(1, layer.brush_size // 10), 5)
74 | layer.brush_size = max(0, layer.brush_size - diff)
75 |
76 |
77 | def lock_layer(event):
78 | for layer in event.source:
79 | if layer.name.startswith(LOCK_CHAR):
80 | layer.editable = False
81 |
82 |
83 | def keep_layer_on_top(layer):
84 | def on_top_callback(e):
85 | layer_list = e.source
86 | if layer not in layer_list:
87 | return
88 | with layer_list.events.moved.blocker(keep_layer_on_top):
89 | try:
90 | for i in reversed(range(len(layer_list))):
91 | elem = layer_list[i]
92 | if elem == layer:
93 | break
94 | if type(elem) not in [Labels, Image]:
95 | continue
96 | layer_index = layer_list.index(layer)
97 | if i == layer_index+1:
98 | layer_list.move(i, layer_index)
99 | else:
100 | layer_list.move(layer_index, i)
101 | break
102 | except Exception:  # ignore errors while reordering layers
103 | ...
104 | return on_top_callback
105 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/changeable_color_box.py:
--------------------------------------------------------------------------------
1 | import warnings
2 |
3 | import numpy as np
4 | from napari.utils.events import disconnect_events
5 |
6 | from napari.utils.translations import trans
7 | from qtpy.QtWidgets import QWidget
8 | from qtpy.QtCore import Qt
9 | from qtpy.QtGui import QPainter, QColor
10 |
11 |
12 | class QtChangeableColorBox(QWidget):
13 | """A widget that shows a square with the current label color.
14 |
15 | Parameters
16 | ----------
17 | layer : napari.layers.Layer
18 | An instance of a napari layer.
19 | """
20 |
21 | def __init__(self, layer):
22 | super().__init__()
23 |
24 | self._layer = None
25 | self.layer = layer
26 |
27 | self.setAttribute(Qt.WA_DeleteOnClose)
28 |
29 | self._height = 24
30 | self.setFixedWidth(self._height)
31 | self.setFixedHeight(self._height)
32 | self.setToolTip(trans._('Selected label color'))
33 |
34 | self.color = None
35 |
36 | @property
37 | def layer(self):
38 | return self._layer
39 |
40 | @layer.setter
41 | def layer(self, new_layer):
42 | if new_layer == self.layer:
43 | return
44 | if self._layer is not None:
45 | self._layer.events.selected_label.disconnect(self._on_selected_label_change)
46 | self._layer.events.opacity.disconnect(self._on_opacity_change)
47 | self._layer.events.colormap.disconnect(self._on_colormap_change)
48 | self._layer = new_layer
49 | if new_layer is not None:
50 | new_layer.events.selected_label.connect(
51 | self._on_selected_label_change
52 | )
53 | new_layer.events.opacity.connect(self._on_opacity_change)
54 | new_layer.events.colormap.connect(self._on_colormap_change)
55 |
56 | def _on_selected_label_change(self, *args):
57 | """Receive layer model label selection change event & update colorbox."""
58 | self.update()
59 |
60 | def _on_opacity_change(self, *args):
61 | """Receive layer model label selection change event & update colorbox."""
62 | self.update()
63 |
64 | def _on_colormap_change(self, *args):
65 | """Receive label colormap change event & update colorbox."""
66 | self.update()
67 |
68 | def paintEvent(self, event):
69 | """Paint the colorbox. If no color, display a checkerboard pattern.
70 |
71 | Parameters
72 | ----------
73 | event : qtpy.QtCore.QEvent
74 | Event from the Qt context.
75 | """
76 | painter = QPainter(self)
77 | with warnings.catch_warnings():
78 | warnings.simplefilter("ignore")
79 | selected_color = self.layer._selected_color if self.layer else None
80 | if selected_color is None:
81 | self.color = None
82 | for i in range(self._height // 4):
83 | for j in range(self._height // 4):
84 | if (i % 2 == 0 and j % 2 == 0) or (
85 | i % 2 == 1 and j % 2 == 1
86 | ):
87 | painter.setPen(QColor(230, 230, 230))
88 | painter.setBrush(QColor(230, 230, 230))
89 | else:
90 | painter.setPen(QColor(25, 25, 25))
91 | painter.setBrush(QColor(25, 25, 25))
92 | painter.drawRect(i * 4, j * 4, 5, 5)
93 | else:
94 | color = np.multiply(selected_color, self.layer.opacity)
95 | color = np.round(255 * color).astype(int)
96 | painter.setPen(QColor(*list(color)))
97 | painter.setBrush(QColor(*list(color)))
98 | painter.drawRect(0, 0, self._height, self._height)
99 | self.color = tuple(color)
100 |
101 | def deleteLater(self):
102 | disconnect_events(self.layer.events, self)
103 | super().deleteLater()
104 |
105 | def closeEvent(self, event):
106 | """Disconnect events when widget is closing."""
107 | disconnect_events(self.layer.events, self)
108 | super().closeEvent(event)
109 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/collapsible_widget.py:
--------------------------------------------------------------------------------
1 | from typing import List, Optional
2 |
3 | from magicclass.widgets import CollapsibleContainer
4 | from qtpy.QtWidgets import QWidget, QPushButton, QVBoxLayout, QSizePolicy
5 | from qtpy.QtCore import Signal
6 |
7 |
8 | class CollapsibleWidget(QWidget):
9 | expanded = Signal()
10 | collapsed = Signal()
11 | expansion_changed = Signal(bool)
12 | def __init__(self, text="", parent=None):
13 | super().__init__(parent=parent)
14 | self.setSizePolicy(QSizePolicy.Expanding, QSizePolicy.Expanding)
15 | layout = QVBoxLayout()
16 | layout.setContentsMargins(0, 0, 0, 0)
17 | self._collapsed = True
18 | self._text = text
19 | self.collapse_button = QPushButton(self._prefix() + text, parent=self)
20 | self.collapse_button.setSizePolicy(QSizePolicy.Expanding, self.collapse_button.sizePolicy().verticalPolicy())
21 | self.collapse_button.clicked.connect(lambda: self.setCollapsed(not self._collapsed))
22 | self.collapse_button.setStyleSheet("Text-align:left")
23 | layout.addWidget(self.collapse_button)
24 |
25 | self.content_widget = QWidget(parent=self)
26 | self.content_widget.setVisible(not self._collapsed)
27 | layout.addWidget(self.content_widget)
28 | super().setLayout(layout)
29 |
30 |     def setLayout(self, layout):
31 |         self.content_widget.setLayout(layout)
32 |
33 | def layout(self):
34 | return self.content_widget.layout()
35 |
36 | def setCollapsed(self, is_collapsed):
37 | prev_val = self.isCollapsed()
38 | self._collapsed = bool(is_collapsed)
39 | self.collapse_button.setText(self._prefix() + self._text)
40 | self.content_widget.setVisible(not self._collapsed)
41 | if prev_val != self.isCollapsed():
42 | if self.isCollapsed():
43 | self.collapsed.emit()
44 | else:
45 | self.expanded.emit()
46 | self.expansion_changed.emit(not self.isCollapsed())
47 |
48 | def isCollapsed(self):
49 | return self._collapsed
50 |
51 | def collapse(self):
52 | self.setCollapsed(True)
53 |
54 | def expand(self):
55 | self.setCollapsed(False)
56 |
57 | def _prefix(self):
58 | return "\N{BLACK MEDIUM RIGHT-POINTING TRIANGLE} " if self._collapsed else "\N{BLACK MEDIUM DOWN-POINTING TRIANGLE} "
59 |
60 |
61 | class CollapsibleWidgetGroup:
62 | def __init__(self, widget_list: Optional[List[CollapsibleWidget]] = None):
63 | self._widget_list = []
64 | self._handlers = dict()
65 | if widget_list is None:
66 | widget_list = []
67 | for widget in widget_list:
68 | if type(widget) != CollapsibleWidget:
69 | raise TypeError("%s is not CollapsibleWidget" % str(widget))
70 | self.addItem(widget)
71 |
72 | def addItem(self, widget: CollapsibleWidget):
73 | if widget in self._widget_list:
74 | raise ValueError("%s widget already in group" % str(widget))
75 | self._widget_list.append(widget)
76 | self._handlers[widget] = self._widget_expanded_handler(widget)
77 | widget.expanded.connect(self._handlers[widget])
78 |
79 | def removeItem(self, widget: CollapsibleWidget):
80 | if widget not in self._widget_list:
81 | return
82 | self._widget_list.remove(widget)
83 | widget.expanded.disconnect(self._handlers[widget])
84 | del self._handlers[widget]
85 |
86 | def _widget_expanded_handler(self, widget: CollapsibleWidget):
87 | def handler():
88 | for w2 in self._widget_list:
89 | if w2 != widget:
90 | w2.collapse()
91 | return handler
92 |
93 |
94 | class CollapsibleContainerGroup:
95 | def __init__(self, container_list: Optional[List[CollapsibleContainer]] = None):
96 | self._container_list = []
97 | self._handlers = dict()
98 | if container_list is None:
99 | container_list = []
100 | for container in container_list:
101 | if type(container) != CollapsibleContainer:
102 | raise TypeError("%s is not CollapsibleContainer" % str(container))
103 | self.addItem(container)
104 |
105 | def addItem(self, container: CollapsibleContainer):
106 | if container in self._container_list:
107 | return
108 | self._container_list.append(container)
109 | self._handlers[container] = self._container_expanded_handler(container)
110 | container._widget._expand_btn.clicked.connect(self._handlers[container])
111 |
112 | def removeItem(self, container: CollapsibleContainer):
113 | if container not in self._container_list:
114 | return
115 | self._container_list.remove(container)
116 | container._widget._expand_btn.clicked.disconnect(self._handlers[container])
117 | del self._handlers[container]
118 |
119 | def _container_expanded_handler(self, container: CollapsibleContainer):
120 | def handler(state: bool):
121 | if not state:
122 | return
123 | for c2 in self._container_list:
124 | if c2 != container:
125 | c2._widget._collapse()
126 | c2._widget._expand_btn.setChecked(False)
127 | return handler
128 |
129 | def __iter__(self):
130 | return iter(self._container_list)
131 |
132 | def correct_container_size(container):
133 | if not container.collapsed:
134 | container.collapsed = True
135 | container.collapsed = False
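136 | 
137 | # Usage sketch (illustrative, not part of the original module): CollapsibleWidgetGroup turns a
138 | # set of CollapsibleWidgets into an accordion, so expanding one of them collapses the others.
139 | #
140 | #   a, b = CollapsibleWidget("Section A"), CollapsibleWidget("Section B")
141 | #   group = CollapsibleWidgetGroup([a, b])
142 | #   a.expand()   # b is collapsed automatically via the group's expanded handler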
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/delayed_executor.py:
--------------------------------------------------------------------------------
1 | from qtpy.QtCore import QMutex, QObject, QThread, Signal
2 | import time
3 |
4 |
5 | class DelayedQueue:
6 | def __init__(self):
7 | self._first = None
8 | self._second = None
9 |
10 | def enqueue(self, elem):
11 | if self._first is None:
12 | self._first = elem
13 | else:
14 | self._second = elem
15 |
16 | def get(self):
17 | return self._first
18 |
19 | def pop(self):
20 | self._first = self._second
21 | self._second = None
22 |
23 |
24 | class DelayedExecutor(QObject):
25 | processing = Signal("PyQt_PyObject")
26 | processed = Signal("PyQt_PyObject")
27 |
28 | def __init__(self, func, parent=None):
29 | super().__init__(parent)
30 | self._func = func
31 | self._arg_queue = DelayedQueue()
32 | self._mutex = QMutex()
33 | self._worker = self.DelayedWorker(func, self._arg_queue, self._mutex, self)
34 | self.worker_thread = QThread()
35 | self._worker.moveToThread(self.worker_thread)
36 | self.worker_thread.destroyed.connect(self._worker.deleteLater)
37 | self.worker_thread.started.connect(self._worker.run)
38 | self.worker_thread.start()
39 | if parent is not None:
40 | parent.destroyed.connect(self.worker_thread.deleteLater)
41 |
42 | def __call__(self, *args, **kwargs):
43 | self._mutex.lock()
44 | self._arg_queue.enqueue((args, kwargs))
45 | self._mutex.unlock()
46 |
47 | class DelayedWorker(QObject):
48 | def __init__(self, func, q: DelayedQueue, mutex: QMutex, executor):
49 | super().__init__()
50 | self._func = func
51 | self._queue = q
52 | self._mutex = mutex
53 | self._executor = executor
54 |
55 | def run(self):
56 | while True:
57 | time.sleep(0.1)
58 | if self._mutex.tryLock():
59 | args = self._queue.get()
60 | self._mutex.unlock()
61 | if args is not None:
62 | self._executor.processing.emit(args)
63 | self._func(*(args[0]), **(args[1]))
64 | self._mutex.lock()
65 | self._queue.pop()
66 | self._mutex.unlock()
67 | self._executor.processed.emit(args)
68 |
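69 | # Usage sketch (illustrative, not part of the original module): DelayedExecutor wraps a slow
70 | # callable so rapid repeated calls are coalesced; DelayedQueue keeps at most the call being
71 | # processed plus one pending call, so only the most recent pending arguments get executed.
72 | #
73 | #   executor = DelayedExecutor(expensive_update)     # expensive_update: hypothetical callable
74 | #   some_slider.valueChanged.connect(executor)       # hypothetical Qt signal providing values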
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/help_dialog.py:
--------------------------------------------------------------------------------
1 | from qtpy.QtWidgets import QDialog, QTableWidget, QTableWidgetItem, QVBoxLayout, QAbstractItemView, QSizePolicy
2 | from qtpy.QtCore import Qt
3 | shortcuts = {
4 | "Ctrl+I": "Interpolate",
5 | "E/Shift+Wheel " + u"\u2191": "Increment selected label",
6 | "Q/Shift+Wheel " + u"\u2193": "Decrement selected label",
7 | "A/Ctrl+Wheel " + u"\u2191": "Previous slice",
8 | "D/Ctrl+Wheel " + u"\u2193": "Next slice",
9 | "W/Alt+Wheel " + u"\u2191": "Increase paint brush size",
10 | "S/Alt+Wheel " + u"\u2193": "Decrease paint brush size",
11 | "Ctrl+Tab": "Jump from \"Anchors\" layer to the labels layer, or vice versa",
12 | "Ctrl+[1-10]": "Jump to the layer at the selected index"
13 | }
14 |
15 |
16 | class HelpDialog(QDialog):
17 | def __init__(self, parent=None):
18 | super().__init__(parent=parent)
19 | table_widget = QTableWidget()
20 | table_widget.setRowCount(len(shortcuts))
21 | table_widget.setColumnCount(2)
22 | table_widget.setHorizontalHeaderLabels(["Action", "Shortcut"])
23 | for i, (shortcut, action) in enumerate(shortcuts.items()):
24 | table_widget.setItem(i, 0, QTableWidgetItem(action))
25 | table_widget.setItem(i, 1, QTableWidgetItem(shortcut))
26 | table_widget.setEditTriggers(QAbstractItemView.NoEditTriggers)
27 | table_widget.setFocusPolicy(Qt.NoFocus)
28 | table_widget.setSelectionMode(QAbstractItemView.NoSelection)
29 | table_widget.verticalHeader().hide()
30 | table_widget.resizeColumnsToContents()
31 | table_widget.setWordWrap(True)
32 | table_widget.horizontalHeader().sectionResized.connect(table_widget.resizeRowsToContents)
33 | layout = QVBoxLayout()
34 | layout.addWidget(table_widget)
35 | self.setLayout(layout)
36 | dialogWidth = table_widget.horizontalHeader().length() + 50
37 | dialogHeight = table_widget.verticalHeader().length() + 24
38 | self.setFixedSize(dialogWidth, dialogHeight)
39 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/image_processing_widget.py:
--------------------------------------------------------------------------------
1 | from copy import deepcopy
2 |
3 | import sys
4 |
5 | import napari
6 | from qtpy.QtWidgets import QTextEdit
7 | from magicclass import magicclass, field, bind_key
8 | from magicclass.widgets import FreeWidget
9 | from magicgui.types import Undefined
10 | from magicgui.widgets import TextEdit
11 | from napari.utils.notifications import notification_manager
12 | from qtpy.QtWidgets import QWidget, QVBoxLayout, QHBoxLayout, QPushButton, QAction, QPlainTextEdit
13 | from qtpy.QtCore import QRegularExpression, Qt, Signal, QObject, QThread
14 | from qtpy.QtGui import QTextCharFormat, QFont, QSyntaxHighlighter, QColor
15 | from qtpy import QtCore, QtGui
16 | import numpy as np
17 | import skimage
18 | import warnings
19 | import keyword
20 | import psygnal
21 | from . import ProgressWidget
22 |
23 |
24 | def execute_script(script, other_locals=None):
25 | if other_locals is None:
26 | other_locals = dict()
27 | globals_ = {"np": np, "skimage": skimage} | other_locals
28 | exec(script, globals_, globals_)
29 | return globals_
30 |
31 |
32 | class ScriptWorker(QObject):
33 | done = Signal("PyQt_PyObject")
34 | script = None
35 | variables = None
36 | _output_variables = {}
37 |
38 | def run(self):
39 | if self.script is None:
40 | raise ValueError("No script was set!")
41 | try:
42 | self._output_variables = execute_script(self.script, self.variables)
43 | self.done.emit(self._output_variables)
44 | except Exception as e:
45 | self.done.emit({"exception": e})
46 |
47 | @property
48 | def output_variables(self):
49 | return self._output_variables
50 |
51 |
52 | class PythonHighlighter(QSyntaxHighlighter):
53 | class HighlightingRule:
54 | pattern = QRegularExpression()
55 | format = QTextCharFormat()
56 |
57 | highlightingRules = []
58 | commentFormat = QTextCharFormat()
59 | keywordFormat = QTextCharFormat()
60 | classFormat = QTextCharFormat()
61 | quotationFormat = QTextCharFormat()
62 | singleLineCommentFormat = QTextCharFormat()
63 | tripleQuotationFormat = QTextCharFormat()
64 | functionFormat = QTextCharFormat()
65 | numericFormat = QTextCharFormat()
66 |
67 | def __init__(self, parent):
68 | super().__init__(parent)
69 | self.keywordFormat.setForeground(QColor("darkorange"))
70 | self.keywordFormat.setFontWeight(QFont.Bold)
71 | keywordPatterns = ["\\b%s\\b" % kw for kw in keyword.kwlist]
72 | for pattern in keywordPatterns:
73 | rule = self.HighlightingRule()
74 | rule.pattern = QRegularExpression(pattern)
75 | rule.format = self.keywordFormat
76 | self.highlightingRules.append(rule)
77 |
78 | rule = self.HighlightingRule()
79 | self.numericFormat.setForeground(QColor("deepskyblue"))
80 | rule.pattern = QRegularExpression("(0((b[01]+)|(O[0-7]+)|(x[0-9A-Fa-f]+)))|([0-9]*\\.?[0-9]*j?)")
81 | rule.format = self.numericFormat
82 | self.highlightingRules.append(rule)
83 |
84 | rule = self.HighlightingRule()
85 | self.functionFormat.setForeground(QColor("gold"))
86 | rule.pattern = QRegularExpression("\\b[a-z][A-Za-z0-9_]*(?=\\()")
87 | rule.format = self.functionFormat
88 | self.highlightingRules.append(rule)
89 |
90 | rule = self.HighlightingRule()
91 | self.classFormat.setForeground(QColor("lightblue"))
92 | rule.pattern = QRegularExpression("\\b[A-Z][A-Za-z0-9_]*(?=\\()")
93 | rule.format = self.classFormat
94 | self.highlightingRules.append(rule)
95 |
96 | rule = self.HighlightingRule()
97 | self.quotationFormat.setForeground(Qt.darkGreen)
98 | rule.pattern = QRegularExpression("([\"'])(?:(?=(\\\\?))\\2.)*?\\1")
99 | rule.format = self.quotationFormat
100 | self.highlightingRules.append(rule)
101 |
102 | rule = self.HighlightingRule()
103 | self.commentFormat.setForeground(Qt.gray)
104 | rule.pattern = QRegularExpression("#.*")
105 | rule.format = self.commentFormat
106 | self.highlightingRules.append(rule)
107 |
108 | def highlightBlock(self, text):
109 | for rule in self.highlightingRules:
110 | matchIterator = rule.pattern.globalMatch(text)
111 | while matchIterator.hasNext():
112 | match = matchIterator.next()
113 | self.setFormat(match.capturedStart(), match.capturedLength(), rule.format)
114 |
115 |
116 | class CodeEditor(QPlainTextEdit):
117 | def __init__(self, parent=None):
118 | super().__init__(parent)
119 | self.setStyleSheet("font-family:'Courier New'; background-color: black")
120 | self.highlighter = PythonHighlighter(self.document())
121 |
122 | def keyPressEvent(self, event: QtGui.QKeyEvent):
123 | if event.key() == QtCore.Qt.Key_Tab:
124 | self.insertPlainText(" ")
125 | else:
126 | super().keyPressEvent(event)
127 |
128 |
129 | class CodeTextEdit(TextEdit):
130 | def __init__(self, **kwargs):
131 | super().__init__(**kwargs)
132 | self.native.setStyleSheet("font-family:'Courier New'; background-color: black")
133 |
134 | def keyPressEvent(*args, **kwargs):
135 | print(*args, **kwargs)
136 | # if event.key() == QtCore.Qt.Key_Tab:
137 | # self.insertPlainText(" ")
138 | # else:
139 | # super(QTextEdit, self).keyPressEvent(event)
140 |
141 | # self.native.keyPressEvent = keyPressEvent
142 | self.highlighter = PythonHighlighter(self.native.document())
143 |
144 |
145 | class ScriptExecuteWidget(FreeWidget):
146 | changed = psygnal.Signal(dict)
147 |
148 | def __init__(self, editor_key="_script_execute_widget", use_run_button=True, **kwargs):
149 | super().__init__()
150 | self.wdt = QWidget()
151 | layout = QVBoxLayout()
152 | self.code_editor = CodeEditor()
153 | layout.addWidget(self.code_editor)
154 | self.run_button = QPushButton("Run")
155 | self.run_button.clicked.connect(self.Run)
156 | layout.addWidget(self.run_button)
157 | self.wdt.setLayout(layout)
158 | self.set_widget(self.wdt)
159 | self.progress_dialog = ProgressWidget(message="Calculating feature, please wait...")
160 | self._editor_key = editor_key
161 | self._init_worker()
162 | self.text_settings = QtCore.QSettings("BIOMAG", "Annotation Toolbox")
163 | self.code_editor.document().setPlainText(self.text_settings.value(self._editor_key, ""))
164 | font_size = self.text_settings.value("script_font_size")
165 | if font_size:
166 | font = self.code_editor.font()
167 | font.setPointSize(font_size)
168 | self.code_editor.setFont(font)
169 | self._variables = dict()
170 | self.run_button.setVisible(use_run_button)
171 |
172 | def _init_worker(self):
173 | self.script_thread = QThread()
174 | self.script_worker = ScriptWorker()
175 | self.script_worker.moveToThread(self.script_thread)
176 | self.script_worker.done.connect(self.script_thread.quit, Qt.DirectConnection)
177 | self.script_worker.done.connect(lambda: self.run_button.setEnabled(True))
178 | self.script_worker.done.connect(self.changed.emit, Qt.DirectConnection)
179 | self.script_thread.started.connect(lambda: self.progress_dialog.setVisible(True))
180 | self.script_thread.started.connect(self.script_worker.run, Qt.DirectConnection)
181 | self.script_thread.finished.connect(lambda: self.progress_dialog.setVisible(False))
182 |
183 | @bind_key("Ctrl+Enter")
184 | def Run(self):
185 | self.script_thread.wait()
186 | variables = self.variables
187 | variables = deepcopy(variables)
188 | script = self.code_editor.document().toPlainText()
189 | self.text_settings.setValue(self._editor_key, script)
190 | self.run_button.setEnabled(False)
191 | self.script_worker.script = script
192 | self.script_worker.variables = variables
193 | self.script_thread.start()
194 |
195 | @property
196 | def variables(self):
197 | return self._variables
198 |
199 | # @property
200 | # def value(self):
201 | # return self._variables
202 |
203 | @bind_key("Ctrl-+")
204 | def _increase_font_size(self):
205 | font = self.code_editor.font()
206 | font.setPointSize(font.pointSize() + 1)
207 | self.code_editor.setFont(font)
208 | self.text_settings.setValue("script_font_size", font.pointSize())
209 |
210 | @bind_key("Ctrl--")
211 | def _decrease_font_size(self):
212 | font = self.code_editor.font()
213 | font.setPointSize(font.pointSize() - 1)
214 | self.code_editor.setFont(font)
215 | self.text_settings.setValue("script_font_size", font.pointSize())
216 |
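217 | # Example of the kind of snippet the script editor runs (illustrative, not part of the original
218 | # module): execute_script() exposes numpy as `np` and scikit-image as `skimage`, plus whatever
219 | # variables the caller passes in `other_locals`; the name `image` below is such an assumed input.
220 | #
221 | #   edges = skimage.filters.sobel(image)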
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/napari_slider.py:
--------------------------------------------------------------------------------
1 | from superqt.sliders._labeled import QLabeledSlider
2 | from superqt.sliders._sliders import QDoubleSlider
3 | from qtpy.QtCore import Signal, QLocale
4 |
5 |
6 | class QLabeledDoubleSlider(QLabeledSlider):
7 | _slider_class = QDoubleSlider
8 | _slider: QDoubleSlider
9 | _fvalueChanged = Signal(float)
10 | _fsliderMoved = Signal(float)
11 | _frangeChanged = Signal(float, float)
12 |
13 | def __init__(self, *args, **kwargs) -> None:
14 | super().__init__(*args, **kwargs)
15 | self.setDecimals(2)
16 | locale = QLocale(QLocale.Language.C)
17 | self._label.setLocale(locale)
18 |
19 | def _setValue(self, value: float):
20 | """Convert the value from float to int before setting the slider value."""
21 | self._slider.setValue(value)
22 |
23 | def _rename_signals(self):
24 | self.valueChanged = self._fvalueChanged
25 | self.sliderMoved = self._fsliderMoved
26 | self.rangeChanged = self._frangeChanged
27 |
28 | def decimals(self) -> int:
29 | return self._label.decimals()
30 |
31 | def setDecimals(self, prec: int):
32 | self._label.setDecimals(prec)
33 |
34 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/persistence/__init__.py:
--------------------------------------------------------------------------------
1 | from .persistent_widget_state import PersistentWidget
2 | __all__ = ["PersistentWidget"]
3 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/persistence/default_widget_values.yaml:
--------------------------------------------------------------------------------
1 | nd_annotator:
2 | fill_objects_checkbox: true
3 | nd_annotator_interp:
4 | method_dropdown: Contour-based
5 | n_points: 300
6 | rpsv_iterations_spinbox: 20
7 | nd_annotator_mc:
8 | blur_image_checkbox: false
9 | blur_image_slider: 0.5
10 | feature_dropdown: Gradient [symmetric]
11 | param_spinbox: 5
12 | point_size_spinbox: 2
13 | smooth_contour_checkbox: true
14 | smooth_contour_spinbox: 0.7
15 | nd_annotator_ms:
16 | alpha_slider: 0.01
17 | beta_spinner: 3.0
18 | blur_conductance_spinbox: 9.
19 | blur_image_checkbox: true
20 | blur_n_iterations_spinbox: 10
21 | blur_sigma_slider: 0.0
22 | blurring_type_combobox: Curvature Anisotropic Diffusion
23 | image_feature_combobox: Gradient
24 | iterations_slider: 10000
25 | slice_segmentation_dropdown: Minimal contour
26 | use_correction_checkbox: true
27 | z_scale_spinbox: 1.
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/persistence/persistent_widget_state.py:
--------------------------------------------------------------------------------
1 | import warnings
2 | import os
3 | import yaml
4 | from qtpy.QtWidgets import QWidget
5 | from qtpy.QtCore import QStandardPaths
6 | from typing import Union
7 | from traceback import print_exc
8 |
9 | GETTER_FUN_NAMES = ["isChecked", "value", "text", "currentText"]
10 | SETTER_FUN_NAMES = ["setChecked", "setValue", "setText", "setCurrentText"]
11 |
12 | __location__ = os.path.realpath(
13 | os.path.join(os.getcwd(), os.path.dirname(__file__)))
14 | default_settings_path = os.path.join(__location__, "default_widget_values.yaml")
15 |
16 |
17 | class UniqueDict(dict):
18 | def __setitem__(self, key, value):
19 | if key in self:
20 | raise ValueError("key %s already taken (=%s)" % (key, self[key]))
21 | super().__setitem__(key, value)
22 |
23 |
24 | class PersistentWidgetState:
25 | def __init__(self):
26 | config_folder = QStandardPaths.writableLocation(QStandardPaths.ConfigLocation)
27 | self._config_path = os.path.join(config_folder, "nd_annotator_config.yaml")
28 | self._state: UniqueDict = UniqueDict()
29 | if os.path.exists(self._config_path):
30 | try:
31 | with open(self._config_path, "r") as f:
32 | self._state = yaml.safe_load(f)
33 | except Exception:
34 | print_exc()
35 | else:
36 | try:
37 | with open(self._config_path, "w") as new_file, open(default_settings_path, "r") as def_file:
38 | new_file.write(def_file.read())
39 | def_file.seek(0)
40 | self._state = yaml.safe_load(def_file)
41 | except Exception:
42 | print_exc()
43 |
44 | def store_multiple_state(self, parent_name: str, widget_id_map: dict):
45 | widget_state = self[parent_name]
46 | for id_, widget in widget_id_map.items():
47 | self.store_state(widget_state, id_, widget)
48 |
49 | def store_state(self, parent: Union[str, dict], widget_id: str, widget: QWidget):
50 | if type(parent) is str:
51 | widget_state = self[parent]
52 | else:
53 | widget_state = parent
54 | getter_fun = None
55 | for fun_name in GETTER_FUN_NAMES:
56 | if hasattr(widget, fun_name):
57 | getter_fun = getattr(widget, fun_name)
58 | break
59 | if getter_fun is None:
60 | warnings.warn("Cannot get current value of %s (id: %s):"
61 | " object type should define one of (%s)"
62 | % (widget, widget_id, ", ".join(GETTER_FUN_NAMES)))
63 | return
64 | widget_state[widget_id] = getter_fun()
65 |
66 | def load_multiple_state(self, parent_name: str, widget_id_map: dict):
67 | widget_state = self._state[parent_name]
68 | for id_, widget in widget_id_map.items():
69 | self.load_state(widget_state, id_, widget)
70 |
71 | def load_state(self, parent: Union[str, dict], widget_id: str, widget: QWidget):
72 | if type(parent) is str:
73 | widget_state = self[parent]
74 | else:
75 | widget_state = parent
76 | if widget_id not in widget_state:
77 | warnings.warn("id '%s' not found in config file" % widget_id)
78 | return
79 | setter_fun = None
80 | for fun_name in SETTER_FUN_NAMES:
81 | if hasattr(widget, fun_name):
82 | setter_fun = getattr(widget, fun_name)
83 | break
84 | if setter_fun is None:
85 |             warnings.warn("Couldn't load value for widget %s (id: %s):"
86 | " object type should define one of (%s)"
87 | % (widget, widget_id, ", ".join(SETTER_FUN_NAMES)))
88 | return
89 | try:
90 | setter_fun(widget_state[widget_id])
91 | except TypeError:
92 | print_exc()
93 |
94 | def __getitem__(self, item):
95 | if item not in self._state:
96 | self._state[item] = UniqueDict()
97 | return self._state[item]
98 |
99 | def __new__(cls):
100 | if not hasattr(cls, "instance"):
101 | cls.instance = super(PersistentWidgetState, cls).__new__(cls)
102 | return cls.instance
103 |
104 | def save_state(self):
105 | state = dict()
106 | for k, v in self._state.items():
107 | state[k] = dict(v)
108 | try:
109 | with open(self._config_path, "w") as f:
110 | yaml.dump(state, f)
111 | except Exception as e:
112 | print("Couldn't save plugin state due to the following error:")
113 | print_exc()
114 |
115 | def __del__(self):
116 | self.save_state()
117 |
118 |
119 | class PersistentWidget(QWidget):
120 | _widget_count = 0
121 |
122 | def __init__(self, id_: str = None, **kwargs):
123 | super().__init__(**kwargs)
124 | self._annotator_state = PersistentWidgetState()
125 | self._stored_widgets = dict()
126 | self._id = id_
127 | self.destroyed.connect(lambda o: self.on_destroy(self._annotator_state, self._id, self._stored_widgets))
128 | PersistentWidget._widget_count += 1
129 |
130 | def set_stored_widgets(self, widget_ids: Union[list, dict]):
131 | self._stored_widgets.clear()
132 | if type(widget_ids) is list:
133 | for id_ in widget_ids:
134 | self._stored_widgets[id_] = getattr(self, id_)
135 | else:
136 | for id_, widget in widget_ids.items():
137 | self._stored_widgets[id_] = widget
138 | self._annotator_state.load_multiple_state(self._id, self._stored_widgets)
139 |
140 | def add_stored_widget(self, id_, widget=None):
141 | if widget is None:
142 | widget = getattr(self, id_)
143 | self._stored_widgets[id_] = widget
144 | self._annotator_state.load_state(self._id, id_, widget)
145 |
146 | @staticmethod
147 | def on_destroy(state, id_, widgets):
148 | PersistentWidget._widget_count -= 1
149 | if len(widgets) > 0 and id_ is not None:
150 | state.store_multiple_state(id_, widgets)
151 | if PersistentWidget._widget_count == 0:
152 | state.save_state()
153 | elif PersistentWidget._widget_count < 0:
154 | warnings.warn("negative 'PersistentWidget._widget_count': %d" % PersistentWidget._widget_count)
155 |
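156 | # Usage sketch (illustrative, not part of the original module): widgets subclass
157 | # PersistentWidget with an id and register the controls whose values should survive between
158 | # sessions; values are restored on registration and written back when the widget is destroyed
159 | # (the config file is saved once the last PersistentWidget goes away).
160 | #
161 | #   class MyWidget(PersistentWidget):          # hypothetical widget
162 | #       def __init__(self):
163 | #           super().__init__("my_widget")
164 | #           self.fill_checkbox = QCheckBox()   # hypothetical control
165 | #           self.add_stored_widget("fill_checkbox")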
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/progress_widget.py:
--------------------------------------------------------------------------------
1 | from qtpy.QtWidgets import QWidget, QProgressBar, QLabel, QVBoxLayout
2 | from qtpy.QtCore import Qt
3 | from qtpy import QtGui
4 |
5 | class ProgressWidget(QWidget):
6 | def __init__(self, parent=None, min_value=0, max_value=0, value=0, message="running...", visible=False, *args, **kwargs):
7 | super().__init__(parent, *args, **kwargs)
8 | self.setAttribute(Qt.WA_ShowWithoutActivating)
9 | self.drag_start_x = None
10 | self.drag_start_y = None
11 | layout = QVBoxLayout()
12 | self.label = QLabel(message)
13 | layout.addWidget(self.label)
14 | self.progress_bar = QProgressBar()
15 | self.progress_bar.setMinimum(min_value)
16 | self.progress_bar.setMaximum(max_value)
17 | self.progress_bar.setValue(value)
18 | self.setWindowFlags(Qt.WindowStaysOnTopHint | Qt.CustomizeWindowHint)
19 | self.setVisible(visible)
20 | layout.addWidget(self.progress_bar)
21 | layout.addStretch()
22 | self.setLayout(layout)
23 |
24 | def reset(self):
25 | self.progress_bar.reset()
26 |
27 | def setMinimum(self, value):
28 | self.progress_bar.setMinimum(value)
29 |
30 | def setMaximum(self, value):
31 | self.progress_bar.setMaximum(value)
32 |
33 | def setValue(self, value):
34 | self.progress_bar.setValue(value)
35 |
36 | def setText(self, text):
37 | self.label.setText(text)
38 |
39 | def mousePressEvent(self, event: QtGui.QMouseEvent) -> None:
40 | self.drag_start_x = event.x()
41 | self.drag_start_y = event.y()
42 |
43 | def mouseMoveEvent(self, event: QtGui.QMouseEvent) -> None:
44 | self.move(event.globalX() - self.drag_start_x, event.globalY()-self.drag_start_y)
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/symmetric_range_slider.py:
--------------------------------------------------------------------------------
1 | from superqt.sliders import QDoubleRangeSlider
2 |
3 |
4 | class QSymmetricDoubleRangeSlider(QDoubleRangeSlider):
5 | def __init__(self, *args, **kwargs):
6 | super().__init__(*args, **kwargs)
7 | self.valueChanged.connect(self.change_symmetrically)
8 | self._prev_value = self.value()
9 |
10 | def change_symmetrically(self, value):
11 | try:
12 | if value[0] != self._prev_value[0] and value[1] != self._prev_value[1]:
13 | return
14 | if value[0] != self._prev_value[0]:
15 | diff = value[0] - self._prev_value[0]
16 | value = (value[0], value[1]-diff)
17 | elif value[1] != self._prev_value[1]:
18 | diff = value[1] - self._prev_value[1]
19 | value = (value[0]-diff, value[1])
20 | self.valueChanged.disconnect(self.change_symmetrically)
21 | self.setValue(value)
22 | self.valueChanged.connect(self.change_symmetrically)
23 | finally:
24 | self._prev_value = value
25 |
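26 | # Illustrative behaviour (not part of the original module): moving one handle moves the other by
27 | # the same amount in the opposite direction, keeping the range centred. E.g. from (-2.0, 2.0),
28 | # dragging the lower handle to -3.0 (a change of -1.0) moves the upper handle to 3.0.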
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/_utils/widget_with_layer_list.py:
--------------------------------------------------------------------------------
1 | from pathlib import Path
2 |
3 | import napari
4 | from magicgui.widgets import Container, FunctionGui, create_widget
5 | from qtpy.QtWidgets import QWidget, QComboBox, QVBoxLayout, QSizePolicy, QScrollArea
6 | from qtpy.QtCore import Qt
7 | from keyword import iskeyword
8 | from abc import ABCMeta
9 | from .persistence import PersistentWidget
10 |
11 |
12 | class PostInitCaller(ABCMeta):
13 | def __call__(cls, *args, **kwargs):
14 | obj = type.__call__(cls, *args, **kwargs)
15 | try:
16 | obj.__post_init__(*args, **kwargs)
17 | except AttributeError:
18 |             raise AttributeError("Class should define function '__post_init__()'")
19 | return obj
20 |
21 |
22 | class WidgetWithLayerList2(Container, metaclass=PostInitCaller):
23 | def __init__(self, viewer: napari.Viewer, layers, add_layers=True, scrollable=False, persist=True, **kwargs):
24 | super().__init__(scrollable=scrollable,)
25 | self.viewer = viewer
26 | self.layers = dict() # str -> QComboBox
27 | self.persist = persist
28 | for layer in layers: # every layer should be a tuple: (layer_name, layer_type) or (layer_name, layer_type, layer_display_name)
29 | if len(layer) == 2:
30 | layer_name, layer_type = layer
31 | layer_displayed_name = layer_name.replace("_", " ")
32 | elif len(layer) == 3:
33 | layer_name, layer_type, layer_displayed_name = layer
34 | else:
35 | raise IndexError("every layer should be a tuple: (layer_name, layer_type)"
36 | " or (layer_name, layer_type, layer_display_name)")
37 | if not layer_name.isidentifier() or iskeyword(layer_name):
38 | raise ValueError("layer name '%s' is not a valid attribute name (cannot be accessed as 'obj.%s')" % (layer_name, layer_name))
39 | new_widget = create_widget(name=layer_name, label=layer_displayed_name, annotation=layer_type)
40 | self.__setattr__(layer_name, new_widget)
41 | self.layers[layer_name] = new_widget
42 | if add_layers:
43 | self.extend(self.layers)
44 |         self.changed.connect(self._on_change)
45 |
46 | def __post_init__(self, *_, **__):
47 | if self.persist:
48 | self._load(quiet=True)
49 |
50 | def _on_change(self) -> None:
51 | if self.persist:
52 | self._dump()
53 |
54 | @property
55 | def _dump_path(self) -> Path:
56 | from magicgui._util import user_cache_dir
57 |
58 | name = getattr(self.__class__, "__qualname__", str(self.__class__))
59 | name = name.replace("<", "-").replace(">", "-") # e.g.
60 | return user_cache_dir() / f"{self.__class__.__module__}.{name}"
61 |
62 | def _dump(self, path: str | Path | None = None) -> None:
63 | super()._dump(path or self._dump_path)
64 |
65 | def _load(self, path: str | Path | None = None, quiet: bool = False) -> None:
66 | super()._load(path or self._dump_path, quiet=quiet)
67 |
68 |
69 | class WidgetWithLayerList(PersistentWidget):
70 | def __init__(self, viewer: napari.Viewer, layers, persistence_id=None, add_layers=True, scrollable=True, **kwargs):
71 | super().__init__(persistence_id, **kwargs)
72 | layout = QVBoxLayout(self)
73 | layers_layout = QVBoxLayout()
74 | if scrollable:
75 | self.scroll_area = QScrollArea(self)
76 | self.scroll_area.setWidgetResizable(True)
77 | self.scroll_area.setVerticalScrollBarPolicy(Qt.ScrollBarAlwaysOn)
78 | self.scroll_area.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff)
79 | self.central_widget = QWidget(self)
80 | self.viewer = viewer
81 | self.layers = dict() # str -> QComboBox
82 | for layer in layers: # every layer should be a tuple: (layer_name, layer_type) or (layer_name, layer_type, layer_display_name)
83 | if len(layer) == 2:
84 | layer_name, layer_type = layer
85 | layer_displayed_name = layer_name.replace("_", " ")
86 | elif len(layer) == 3:
87 | layer_name, layer_type, layer_displayed_name = layer
88 | else:
89 | raise IndexError("every layer should be a tuple: (layer_name, layer_type)"
90 | " or (layer_name, layer_type, layer_display_name)")
91 | if not layer_name.isidentifier() or iskeyword(layer_name):
92 | raise ValueError("layer name '%s' is not a valid attribute name (cannot be accessed as 'obj.%s')" % (layer_name, layer_name))
93 | self.layers[layer_name] = self.LayerRecord(self, layer_type, layer_displayed_name)
94 | if add_layers:
95 | layers_layout.addWidget(self.layers[layer_name].combobox)
96 | layout.addLayout(layers_layout)
97 | if scrollable:
98 | self.scroll_area.setWidget(self.central_widget)
99 | layout.addWidget(self.scroll_area)
100 | else:
101 | layout.addWidget(self.central_widget)
102 |
103 | def setLayout(self, a0: 'QLayout') -> None:
104 | self.central_widget.setLayout(a0)
105 |
106 | def __getattr__(self, item):
107 | if item in self.layers:
108 | return self.layers[item]
109 | raise AttributeError("No attribute named %s" % item)
110 |
111 | class LayerRecord:
112 | def __init__(self, parent, layer_type, display_name="Select a Layer"):
113 | self.parent = parent
114 | self.layer_type = layer_type
115 | self.combobox = QComboBox()
116 | self.display_name = display_name
117 | self.combobox.addItem("[%s]" % display_name)
118 | self.combobox.currentIndexChanged.connect(self.on_layer_index_change)
119 | self.combobox.setToolTip(display_name)
120 | self.combobox.setSizePolicy(QSizePolicy.Ignored, self.combobox.sizePolicy().verticalPolicy())
121 | self.viewer.layers.events.connect(self.on_layer_list_change)
122 | self._moved_layer = None
123 | self._layer_name = None
124 | self.on_layer_list_change()
125 |
126 | @property
127 | def viewer(self) -> napari.Viewer:
128 | return self.parent.viewer
129 |
130 | @property
131 | def layer(self):
132 | return self.viewer.layers[self._layer_name] if self._layer_name and self._layer_name in self.viewer.layers\
133 | else None
134 |
135 | @layer.setter
136 | def layer(self, layer):
137 | if layer is None:
138 | self._layer_name = None
139 | self.combobox.setCurrentIndex(0)
140 | return
141 | if isinstance(layer, self.layer_type):
142 | new_layer_name = layer.name
143 | elif type(layer) is str:
144 | new_layer_name = layer
145 | else:
146 | raise TypeError("layer should be %s or str" % self.layer_type)
147 | if new_layer_name == self._layer_name:
148 | return
149 | if new_layer_name in self.viewer.layers:
150 | self.combobox.setCurrentText(new_layer_name)
151 | self._layer_name = new_layer_name
152 |
153 | def on_layer_index_change(self, index):
154 | if index == 0:
155 | if self.combobox.count() > 1:
156 | self.combobox.setCurrentText(self._layer_name)
157 | return
158 | elif index > 0:
159 | self._layer_name = self.combobox.itemText(index)
160 |
161 | def on_layer_list_change(self, event=None):
162 | type_ = event.type if event else None
163 | if type_ not in ["moved", "inserted", "removed", "name", None]:
164 | return
165 | filtered = list(filter(lambda layer: isinstance(layer, self.layer_type), self.viewer.layers))
166 | if len(filtered) == 0 and type_ != "removed":
167 | return
168 | if type_ in ["moved", "inserted", "removed", None]:
169 | if self.combobox.count() == len(filtered) and \
170 | all((layer.name == self.combobox.itemText(i+1) for i, layer in enumerate(filtered))):
171 | return
172 | self.combobox.blockSignals(True)
173 | self.combobox.clear()
174 | self.combobox.addItem("[%s]" % self.display_name)
175 | for layer in filtered:
176 | self.combobox.addItem(layer.name)
177 | if layer.name == self._layer_name:
178 | self.combobox.setCurrentText(layer.name)
179 | self.combobox.blockSignals(False)
180 | if self._layer_name != self.combobox.currentText():
181 | self.combobox.currentTextChanged.emit(self.combobox.currentText())
182 | self.combobox.currentIndexChanged.emit(self.combobox.currentIndex())
183 | if self.combobox.count() > 1 and (self.combobox.currentIndex() == 0 or self.layer not in filtered):
184 | self.layer = filtered[0]
185 | elif self.combobox.count() == 1:
186 | self.layer = None
187 | elif type_ == "name":
188 | self.combobox.blockSignals(True)
189 | for i in range(len(filtered)):
190 | if self.combobox.itemText(i+1) != filtered[i].name:
191 | if self.combobox.itemText(i+1) == self._layer_name:
192 | self._layer_name = filtered[i].name
193 | self.combobox.setItemText(i+1, filtered[i].name)
194 | break
195 | self.combobox.blockSignals(False)
196 |
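197 | # Usage sketch (illustrative, not part of the original module): subclasses pass a list of
198 | # (attribute_name, layer_type[, display_name]) tuples; each entry becomes a combobox tracking
199 | # the viewer's layers of that type and is reachable as an attribute of the widget.
200 | #
201 | #   from napari.layers import Image, Labels
202 | #
203 | #   class MyWidget(WidgetWithLayerList):       # hypothetical subclass
204 | #       def __init__(self, viewer):
205 | #           super().__init__(viewer, [("image", Image), ("labels", Labels)], "my_widget")
206 | #           selected = self.image.layer       # the currently selected Image layer, or None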
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/interpolation_overlay/__init__.py:
--------------------------------------------------------------------------------
1 | from .vispy_interpolation_overlay import *
2 | from .interpolation_overlay import InterpolationOverlay
3 | __all__ = ["InterpolationOverlay"]
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/interpolation_overlay/interpolation_overlay.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from napari._pydantic_compat import Field
3 | from napari.components.overlays import SceneOverlay
4 | import threading
5 | from copy import deepcopy
6 |
7 | mutex = threading.Lock()
8 |
9 | class ContourList(list):
10 | def __eq__(self, other):
11 | if len(self) != len(other):
12 | return False
13 | for cnt1, cnt2 in zip(self, other):
14 | if not np.array_equal(cnt1, cnt2):
15 | return False
16 | return True
17 |
18 |
19 | class InterpolationOverlay(SceneOverlay):
20 |     """Overlay that displays interpolated contours on a Labels layer.
21 | 
22 |     The interpolation widget fills ``contour`` with one point array per
23 |     interpolated contour and toggles ``enabled``; the corresponding vispy
24 |     overlay draws a polygon for each stored contour as a preview of the
25 |     interpolation result.
26 | 
27 |     Attributes
28 |     ----------
29 |     enabled : bool
30 |         Controls whether the overlay is activated.
31 |     contour : list
32 |         Contours to display; each element is an array of point coordinates.
33 |     """
34 | 
35 |     enabled: bool = False
36 |     contour: list = Field(default_factory=ContourList)
37 | 
38 | 
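39 | # Note (not part of the original module): ContourList overrides __eq__ so that lists of numpy
40 | # arrays can be compared element-wise; plain list equality on arrays would raise an ambiguous
41 | # truth-value error when the event system checks whether the field changed. The vispy
42 | # counterpart, VispyInterpolationOverlay, listens to the `contour` and `enabled` events and
43 | # redraws one polygon per stored contour.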
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/interpolation_overlay/vispy_interpolation_overlay.py:
--------------------------------------------------------------------------------
1 | import threading
2 | import time
3 | import traceback
4 | from functools import wraps
5 |
6 | import numpy as np
7 | from magicgui._util import debounce
8 | from napari.qt.threading import thread_worker
9 | from napari._vispy.overlays.base import LayerOverlayMixin, VispySceneOverlay
10 | from napari._vispy.utils import visual
11 | from napari.layers import Labels
12 | from vispy.scene import Polygon, Compound
13 |
14 | from .interpolation_overlay import InterpolationOverlay, mutex as overlay_mutex
15 |
16 |
17 | def execute_last(function=None, delay=0.2):
18 |
19 | def decorator(fn):
20 | from threading import Timer
21 |
22 | _store: dict = {"retry_worker": None, "mutex": threading.Lock()}
23 |
24 | @wraps(fn)
25 | def delayed(*args, **kwargs):
26 | mutex = _store["mutex"]
27 | def call_it(*args, **kwargs):
28 | _store["retry_worker"] = None
29 | with mutex:
30 | print("calling function", flush=True)
31 | fn(*args, **kwargs)
32 | print("called function", flush=True)
33 |
34 | @thread_worker
35 | def retry(*args, **kwargs):
36 | while True:
37 | if not mutex.locked():
38 | call_it(*args, **kwargs)
39 | return
40 | time.sleep(delay)
41 | yield
42 | if _store["retry_worker"] is not None:
43 | _store["retry_worker"].quit()
44 | _store["retry_worker"] = retry(*args, **kwargs)
45 | _store["retry_worker"].start()
46 | return None
47 |
48 | return delayed
49 |
50 | return decorator if function is None else decorator(function)
51 |
52 |
53 | class VispyInterpolationOverlay(LayerOverlayMixin, VispySceneOverlay):
54 | layer: Labels
55 |
56 | def __init__(
57 | self, *, layer: Labels, overlay: InterpolationOverlay, parent=None
58 | ):
59 | points = np.asarray([(0, 0), (1, 1)])
60 | self._polygons = [Polygon(
61 | pos=points,
62 | border_method='gl',
63 | border_width=2
64 | ) for _ in range(1)]
65 |
66 | super().__init__(
67 | node=Compound(self._polygons),
68 | layer=layer,
69 | overlay=overlay,
70 | parent=parent,
71 | )
72 | # self.overlay.events.points_per_slice.connect(self._on_points_change)
73 | self.overlay.events.contour.connect(self._on_points_change)
74 | self.overlay.events.enabled.connect(self._on_enabled_change)
75 |
76 | layer.events.selected_label.connect(self._update_color)
77 | layer.events.colormap.connect(self._update_color)
78 | layer.events.opacity.connect(self._update_color)
79 |
80 |
81 | self.reset()
82 | self._update_color()
83 | # If there are no points, it won't be visible
84 | self.overlay.visible = True
85 |
86 | def _on_enabled_change(self):
87 | if self.overlay.enabled:
88 | self._on_points_change()
89 |
90 | # @execute_last
91 | def _on_points_change(self):
92 | print("on_points_change")
93 | # print("initial checks in points change:", time.time() - start, flush=True)
94 | contours = self.overlay.contour.copy()
95 | n_contours = len(contours)
96 | n_poly = len(self._polygons)
97 | if n_poly < n_contours:
98 | self._polygons.extend([Polygon(
99 | pos=np.asarray([(0, 0), (1, 1)]),
100 | border_method='agg',
101 | border_width=3
102 | ) for _ in range(n_contours - n_poly)])
103 | # print(f"extending polygons: {time.time() - start} (n_poly: {n_poly}, n_contours: {n_contours})", flush=True)
104 | for i, poly in enumerate(self._polygons):
105 | points = contours[i] if i < n_contours else None
106 | if points is not None and len(points) > 2:
107 | poly.visible = True
108 | poly.pos = points
109 | else:
110 | print("setting poly invisible", flush=True)
111 | poly.visible = False
112 |
113 | def _set_color(self, color):
114 | print("set_color")
115 | border_color = tuple(color[:3]) + (1,) # always opaque
116 | polygon_color = color
117 |
118 | # Clean up polygon faces before making it transparent, otherwise
119 | # it keeps the previous visualization of the polygon without cleaning
120 | for poly in self._polygons:
121 | if polygon_color[-1] == 0:
122 | poly.mesh.set_data(faces=[])
123 | poly.color = polygon_color
124 |
125 | poly.border_color = border_color
126 |
127 | def _update_color(self):
128 | print("_update_color")
129 | layer = self.layer
130 | if layer._selected_label == layer.colormap.background_value:
131 | self._set_color((1, 0, 0, 0))
132 | else:
133 | self._set_color(
134 | layer._selected_color.tolist()[:3] + [layer.opacity]
135 | )
136 |
137 | @property
138 | def _dims_displayed(self):
139 | return self.layer._slice_input.displayed
140 |
141 | def reset(self):
142 | print("reset")
143 | super().reset()
144 | self._on_points_change()
145 |
146 | visual.overlay_to_visual[InterpolationOverlay] = VispyInterpolationOverlay
147 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/minimal_contour_overlay/__init__.py:
--------------------------------------------------------------------------------
1 | from .vispy_minimal_contour_overlay import *
2 | from .minimal_contour_overlay import MinimalContourOverlay
3 | __all__ = ["MinimalContourOverlay"]
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/minimal_contour_overlay/minimal_contour_overlay.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from napari._pydantic_compat import Field
3 | from napari.components.overlays import SceneOverlay
4 | from napari.layers import Labels
5 | import scipy.fft
6 |
7 | class MinimalContourOverlay(SceneOverlay):
8 | """Overlay that displays a polygon on a scene.
9 |
10 | This overlay was created for drawing polygons on Labels layers. It handles
11 | the following mouse events to update the overlay:
12 | - Mouse move: Continuously redraw the latest polygon point with the current
13 | mouse position.
14 | - Mouse press (left button): Adds the current mouse position as a new
15 | polygon point.
16 | - Mouse double click (left button): If there are at least three points in
17 | the polygon and the double-click position is within completion_radius
18 | from the first vertex, the polygon will be painted in the image using the
19 | current label.
20 | - Mouse press (right button): Removes the most recent polygon point from
21 | the list.
22 |
23 | Attributes
24 | ----------
25 | enabled : bool
26 | Controls whether the overlay is activated.
27 | points : list
28 | A list of (x, y) coordinates of the vertices of the polygon.
29 | use_double_click_completion_radius : bool
30 | Whether double-click to complete drawing the polygon requires being within
31 | completion_radius of the first point.
32 | completion_radius : int | float
33 | Defines the radius from the first polygon vertex within which
34 | the drawing process can be completed by a left double-click.
35 | """
36 |
37 | enabled: bool = False
38 | anchor_points: list = Field(default_factory=list)
39 | last_anchor_to_current_pos_contour: list = Field(default_factory=list)
40 | current_pos_to_first_anchor_contour: list = Field(default_factory=list)
41 | stored_contour: list = Field(default_factory=list)
42 | use_straight_lines: bool = False
43 | contour_smoothness: float = 1.
44 | contour_width: int = 3
45 |
46 | def add_polygon_to_labels(self, layer: Labels) -> None:
47 | if len(self.stored_contour) > 2:
48 | contour = np.asarray(self.stored_contour)
49 | if self.contour_smoothness < 1.:
50 | contour = self.smooth_contour(contour)
51 | layer.paint_polygon(np.round(contour), layer.selected_label)
52 | self.stored_contour = []
53 | self.last_anchor_to_current_pos_contour = []
54 | self.current_pos_to_first_anchor_contour = []
55 | self.anchor_points = []
56 |
57 | def smooth_contour(self, contour: np.ndarray):
58 | coefficients = max(3, round(self.contour_smoothness * len(contour)))
59 | mask_2d = ~np.all(contour == contour.min(), axis=0)
60 | points_2d = contour[:, mask_2d]
61 | center = points_2d.mean(0)
62 | points_2d = points_2d - center
63 | tformed = scipy.fft.rfft(points_2d, axis=0)
64 | tformed[0] = 0
65 | inv_tformed = scipy.fft.irfft(tformed[:coefficients], len(points_2d), axis=0) + center
66 | contour[:, mask_2d] = inv_tformed
67 | return contour
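68 | 
69 | # Note (not part of the original module): smooth_contour() is a Fourier low-pass filter on the
70 | # varying (in-plane) coordinates: the points are centred, transformed with a real FFT along the
71 | # point axis, truncated to the lowest `coefficients` frequency components (at least 3), and
72 | # transformed back, which removes high-frequency jitter while keeping the overall shape.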
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/__init__.py:
--------------------------------------------------------------------------------
1 | from .interpolation import interpolate_style_path
2 | from .mc_contour import mc_contour_style_path
3 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/interpolation/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | __location__ = os.path.realpath(
4 | os.path.join(os.getcwd(), os.path.dirname(__file__)))
5 | interpolate_style_path = os.path.join(__location__, "interpolate_button.qss")
6 | from napari.resources._icons import write_colorized_svgs, _theme_path
7 | from napari.settings import get_settings
8 |
9 | interpolate_icon_path = os.path.join(__location__, "interpolate.svg").replace("\\", "/")
10 | settings = get_settings()
11 | theme_name = settings.appearance.theme
12 | out = _theme_path(theme_name)
13 | write_colorized_svgs(
14 | out,
15 | svg_paths=[interpolate_icon_path],
16 | colors=[(theme_name, 'icon')],
17 | opacities=(0.5, 1),
18 | theme_override={'warning': 'warning', 'logo_silhouette': 'background'},
19 | )
20 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/interpolation/interpolate.svg:
--------------------------------------------------------------------------------
1 |
2 |
19 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/interpolation/interpolate_button.qss:
--------------------------------------------------------------------------------
1 | QtModePushButton[mode="interpolate"] {
2 | image: url("theme_{{ id }}:/interpolate.svg");
3 | }
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/mc_contour/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | __location__ = os.path.realpath(
4 | os.path.join(os.getcwd(), os.path.dirname(__file__)))
5 | mc_contour_style_path = os.path.join(__location__, "mc_contour_button.qss")
6 | from napari.resources._icons import write_colorized_svgs, _theme_path
7 | from napari.settings import get_settings
8 |
9 | mc_contour_icon_path = os.path.join(__location__, "mc_contour.svg").replace("\\", "/")
10 | settings = get_settings()
11 | theme_name = settings.appearance.theme
12 | out = _theme_path(theme_name)
13 | write_colorized_svgs(
14 | out,
15 | svg_paths=[mc_contour_icon_path],
16 | colors=[(theme_name, 'icon')],
17 | opacities=(0.5, 1),
18 | theme_override={'warning': 'warning', 'logo_silhouette': 'background'},
19 | )
20 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/mc_contour/mc_contour.svg:
--------------------------------------------------------------------------------
1 |
30 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/_widgets/resources/mc_contour/mc_contour_button.qss:
--------------------------------------------------------------------------------
1 | QtModeRadioButton[mode="minimal contour"]::indicator {
2 | image: url("theme_{{ id }}:/mc_contour.svg");
3 | }
--------------------------------------------------------------------------------
/src/napari_nd_annotator/examples/new_pipeline_example.py:
--------------------------------------------------------------------------------
1 | import napari
2 | import numpy as np
3 |
4 | from napari.layers import Labels, Shapes
5 |
6 | from object_list_bb import ListWidgetBB
7 | from annotator_module import AnnotatorModule
8 | from skimage.data import cells3d
9 |
10 | # w = QDockWidget()
11 | # w.area
12 | viewer = napari.Viewer()
13 | viewer.window.add_dock_widget(AnnotatorModule(viewer))
14 | image_layer = viewer.add_image(cells3d(), channel_axis=1, colormap="magma", name="Image")
15 |
16 | extent = image_layer[0].extent.data[1].astype(int) if type(image_layer) is list else image_layer.extent.data[1].astype(int)
17 |
18 | labels_layer = Labels(data=np.zeros(extent, dtype=np.uint16))
19 | viewer.add_layer(labels_layer)
20 | list_widget = ListWidgetBB(viewer)
21 | viewer.window.add_dock_widget(list_widget, area="left")
22 | napari.run()
23 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bauerdavid/napari-nD-annotator/ddb762943e88f377261ba3a1ddea954dace718ee/src/napari_nd_annotator/mean_contour/__init__.py
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/cEssentials.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | import matplotlib.pyplot as plt
3 | import math
4 | import time
5 | from enum import Enum
6 | from scipy.interpolate import splprep, splev
7 | class ReconstructionMethods(Enum):
8 | NEWTON = 1
9 | GRADIENT_DESCENT = 2
10 | CG = 3
11 | SKIP = 4
12 | JOZSI_GRADIENT = 5
13 |
14 |
15 | def getCoefficientsForAccuracyLevel(aLevel, order):
16 | switcher1 = {
17 | 1: [-1/2, 0, 1/2],
18 | 2: [1/12, -2/3, 0, 2/3, -1/12],
19 | 3: [-1/60, 3/20, -3/4, 0, 3/4, -3/20, 1/60],
20 | 4: [1/280, -4/105, 1/5, -4/5, 0, 4/5, -1/5, 4/105, -1/280]
21 | }
22 | switcher2 = {
23 | 1: [1, -2, 1],
24 | 2: [-1/12, 4/3, -5/2, 4/3, -1/12],
25 | 3: [1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90],
26 | 4: [-1/560, 8/315, -1/5, 8/5, -205/72, 8/5, -1/5, 8/315, -1/560]
27 | }
28 |
29 | if order==1:
30 | return switcher1.get(aLevel)
31 | if order==2:
32 | return switcher2.get(aLevel)
33 |
34 |
35 | # derivative approximation: 8th-order accurate central finite differences along the closed (periodic) contour
36 | def dt(points, order):
37 |
38 | pNext = np.roll(points, -1, axis=0)
39 | pPrev = np.roll(points, 1, axis=0)
40 |
41 | pNext2 = np.roll(points, -2, axis=0)
42 | pPrev2 = np.roll(points, 2, axis=0)
43 |
44 | pNext3 = np.roll(points, -3, axis=0)
45 | pPrev3 = np.roll(points, 3, axis=0)
46 |
47 | pNext4 = np.roll(points, -4, axis=0)
48 | pPrev4 = np.roll(points, 4, axis=0)
49 | #d = points-pPrev
50 | #e = pNext-points
51 | #d_abs = magnitude(d).reshape(d.shape[0],1)
52 | #e_abs = magnitude(e).reshape(e.shape[0],1)
53 |
54 | if order==1:
55 | prevFactors = (1/280)*pPrev4-(4/105)*pPrev3+(1/5)*pPrev2-(4/5)*pPrev
56 | nextFactors = -(1/280)*pNext4+(4/105)*pNext3-(1/5)*pNext2+(4/5)*pNext
57 | return prevFactors+nextFactors
58 | #retval = ((d_abs*e_abs)/(d_abs+e_abs))*(d/d_abs**2 + e/e_abs**2)
59 | #return retval/magnitude(retval).reshape(d.shape[0],1)
60 | #return (pNext-pPrev)/2
61 | #return (1/12)*pPrev2-(2/3)*pPrev+(2/3)*pNext-(1/12)*pNext2
62 |
63 | if order==2:
64 | prevFactors = (-1/560)*pPrev4+(8/315)*pPrev3-(1/5)*pPrev2+(8/5)*pPrev
65 | currFactor = (-205/72)*points
66 | nextFactors = (-1/560)*pNext4+(8/315)*pNext3-(1/5)*pNext2+(8/5)*pNext
67 | return prevFactors+currFactor+nextFactors
68 | #retval = (2/(d_abs+e_abs))*(e/e_abs - d/d_abs)
69 | #return retval/magnitude(retval).reshape(d.shape[0],1)
70 | #return pNext+pPrev-2*points
71 | #return (-1/12)*pPrev2+(4/3)*pPrev-(5/2)*points+(4/3)*pNext-(1/12)*pNext2
72 |
73 |
74 | # magnitude of a single vector or a set of vectors
75 | def magnitude(points):
76 | if len(points.shape)>1:
77 | return np.sqrt(points[:,0]*points[:,0]+points[:,1]*points[:,1])
78 | else:
79 | return np.sqrt(points[0]*points[0]+points[1]*points[1])
80 |
81 | # inner product of two curves
82 | def innerProduct(curve1,curve2):
83 | return np.sum(curve1*curve2, axis=1)
84 | # return curve1[:,0]*curve2[:,0]+curve1[:,1]*curve2[:,1]
85 |
86 | class Contour:
87 |
88 | def __init__(self, pointArray, nPoi, resMultiplier):
89 | # number of points in contour
90 | self.nPoi = pointArray.shape[0]
91 | self.cPoints = nPoi
92 | self.resMultiplier = resMultiplier
93 | self.resolution = self.cPoints*self.resMultiplier
94 | self.pointArray = pointArray
95 | self.contourLength = self.getContourLength()
96 | self.centroid = self.getCentroid()
97 | self.lookup, self.parameterization = self.getLookupTables()
98 | self.smoothLookupTable()
99 | self.diffs = np.empty(self.cPoints)
100 | self.calcParams()
101 |
102 | def getCentroid(self):
103 | centroid = np.zeros((1,2))
104 | for p in self.pointArray:
105 | centroid = centroid + p
106 | centroid = centroid/self.nPoi
107 | return centroid
108 |
109 | def dt(self, order):
110 | '''nextParam = self.nextParam
111 | prevParam = self.prevParam
112 | nextParam2 = np.roll(nextParam,-1,axis=0)
113 | prevParam2 = np.roll(prevParam,1,axis=0)'''
114 |
115 | if order==1:
116 | return dt(self.lookup[self.parameterization], 1)
117 | #return (1/12)*self.lookup[prevParam2]-(2/3)*self.lookup[prevParam]+(2/3)*self.lookup[nextParam]-(1/12)*self.lookup[nextParam]
118 | #return (self.lookup[nextParam,:]-self.lookup[prevParam,:])/2
119 | if order==2:
120 | return dt(self.lookup[self.parameterization], 2)
121 | #return (-1/12)*self.lookup[prevParam2]+(4/3)*self.lookup[prevParam]-(5/2)*self.lookup[self.parameterization]+(4/3)*self.lookup[nextParam]-(1/12)*self.lookup[nextParam2]
122 | #return self.lookup[nextParam,:]+self.lookup[prevParam,:]-2*self.lookup[self.parameterization,:]
123 |
124 |
125 | #Christoffel divergence as defined in (9): Gamma_i
126 | def getChristoffelDivergence(self):
127 | deriv = self.derivatives
128 | sderiv = self.sderivatives
129 | return (deriv[:,0]*sderiv[:,0]+deriv[:,1]*sderiv[:,1])/(deriv[:,0]*deriv[:,0]+deriv[:,1]*deriv[:,1])
130 |
131 | def getSRV(self):
132 | deriv = self.derivatives
133 | return np.sqrt(magnitude(deriv))
134 |
135 | def getRPSV(self):
136 | return self.lookup[self.parameterization]*self.srv[:,None]
137 |
138 | def getSRVF(self):
139 | res = self.derivatives.copy()
140 | deriv = self.derivatives
141 | velo = magnitude(deriv)
142 | srv = np.sqrt(velo)
143 | res[:,0] /= srv
144 | res[:,1] /= srv
145 | return res
146 |
147 | def calcParams(self):
148 |
149 | self.nextParam = np.roll(self.parameterization, -1, axis=0)
150 | self.prevParam = np.roll(self.parameterization, 1, axis=0)
151 |
152 | self.derivatives = self.dt(1)
153 | self.sderivatives = self.dt(2)
154 |
155 | deriv = self.derivatives
156 | sderiv = self.sderivatives
157 |
158 | self.christoffel = self.getChristoffelDivergence()
159 | self.srv = self.getSRV()
160 |
161 |
162 | def getIdxDiff(self, dgamma):
163 | nextParam = self.nextParam
164 | prevParam = self.prevParam
165 | criteria1 = dgamma<0
166 | criteria2 = dgamma>=0
167 |
168 | diffs = self.diffs
169 |
170 | diffs[:] = 0
171 |
172 | diffs[criteria1] = self.parameterization[criteria1] - prevParam[criteria1]
173 | diffCriteria = diffs<0
174 | diffs[diffCriteria] = self.parameterization[diffCriteria]+self.cPoints*self.resMultiplier-prevParam[diffCriteria]
175 |
176 | diffs[criteria2] = nextParam[criteria2]-self.parameterization[criteria2]
177 | diffCriteria = diffs<0
178 | diffs[diffCriteria] = nextParam[diffCriteria]+self.cPoints*self.resMultiplier-self.parameterization[diffCriteria]
179 |
180 |
181 | return diffs
182 |
183 | # make sure we remain in lookup table index range with the reparameterization
184 | def getInRangeParameterization(self):
185 | pointNum = self.lookup.shape[0]
186 | criteria1 = self.parameterization.copy() >= pointNum
187 | criteria2 = self.parameterization.copy() < 0
188 | self.parameterization[criteria2] %= pointNum
189 | self.parameterization[criteria2] = -1*(self.parameterization[criteria2]-pointNum)
190 | self.parameterization[criteria1] %= pointNum
191 |
192 | # smoothing to ensure that the order of points does not get mixed up
193 | def smoothParameterization(self):
194 | resHalf = self.parameterization.shape[0]*self.resMultiplier/2
195 |
196 | tmparam = self.parameterization.copy()
197 | tmp = self.parameterization.copy()
198 | tmpnext = np.roll(tmp, -1, axis=0)
199 | tmpprev = np.roll(tmp, 1, axis=0)
200 | crit1 = (tmpnext < resHalf) & (tmp > resHalf) & (tmpprev > resHalf)
201 | tmpnext[crit1] += self.cPoints*self.resMultiplier
202 | crit2 = (tmp < resHalf) & (tmpprev > resHalf)
203 | tmp[crit2] += self.cPoints*self.resMultiplier
204 | tmpnext[crit2] += self.cPoints*self.resMultiplier
205 |
206 | self.parameterization = 2+2*tmp+tmpnext+tmpprev
207 | self.parameterization >>= 2
208 |
209 | self.getInRangeParameterization()
210 |
211 | def smoothLookupTable(self):
212 | nextArr = np.roll(self.lookup, -1, axis=0)
213 | prevArr = np.roll(self.lookup, 1, axis=0)
214 |
215 | temp = 0.25*(2* self.lookup + prevArr + nextArr)
216 | tempNext = np.roll(temp, -1, axis=0)
217 | tempPrev = np.roll(temp, 1, axis=0)
218 |
219 | self.lookup = 0.25*(2* temp + tempPrev + tempNext)
220 |
221 | def getContourLength(self):
222 | nextArr = np.roll(self.pointArray, -1, axis=0)
223 |
224 | cLength = np.sum(np.sqrt((nextArr[:,0]-self.pointArray[:,0])*(nextArr[:,0]-self.pointArray[:,0])+(nextArr[:,1]-self.pointArray[:,1])*(nextArr[:,1]-self.pointArray[:,1])))
225 | return cLength
226 |
227 |
228 | def isClockwise(self):
229 | nextLook = np.roll(self.lookup, -1, axis=0)
230 | edges = self.lookup[self.parameterization,0]*nextLook[self.parameterization,1]-nextLook[self.parameterization,0]*self.lookup[self.parameterization,1]
231 | if np.sum(edges)>0:
232 | return True
233 | else:
234 | return False
235 |
236 | def setStartingPointToLowestY(self):
237 | lowestY = np.argmin(self.lookup[:,1])
238 | self.lookup = np.roll(self.lookup, -1*lowestY, axis=0)
239 |
240 | def getLookupTables(self):
241 | start = time.time()
242 | # length of one step if we want to achieve self.resolution:
243 | unitLength = self.contourLength/self.resolution
244 | #print("unit length: "+str(unitLength))
245 | # lookup table
246 | lut = np.empty((2*self.resolution,2))
247 | # shifted point arrays
248 | nextArray = np.roll(self.pointArray, -1, axis=0)
249 |
250 | j = 0 # index of LookUpTable
251 | remainder = 0 # length of overflow for sections between 2 points
252 | sum_while = 0
253 | for i in range(self.nPoi):
254 | startPoint = self.pointArray[i]
255 | nextPoint = nextArray[i]
256 | direction = nextPoint-startPoint
257 | dirLen = np.sqrt(direction[0]*direction[0]+direction[1]*direction[1])
258 | #print("dirlen: "+str(dirLen))
259 | direction /= dirLen # normalized direction between 2 points
260 | reqUnits = int(np.round(dirLen/unitLength))
261 | xcoords = np.linspace(startPoint[0], nextPoint[0], num=reqUnits)
262 | ycoords = np.linspace(startPoint[1], nextPoint[1], num=reqUnits)
263 | lut[j:(j+reqUnits),0] = xcoords
264 | lut[j:(j+reqUnits),1] = ycoords
265 | direction *= unitLength # set the length of the direction vector to contour unit length
266 | j += reqUnits
267 |
268 | lut_res = np.array(lut[0:j,:])
269 | #idxes = np.linspace(0,j-1, num=j).astype(int)
270 | #lut_idx = idxes[0:j:int(self.resolution/self.cPoints)]
271 |
272 |
273 |
274 | lut_idx = np.empty(self.cPoints, dtype=int)
275 | j = 0
276 | for i in range(0, self.resolution, int(self.resolution/self.cPoints)):
277 | lut_idx[j] = i
278 | j += 1
279 | #print(xcoords)
280 | #print(j)
281 | return lut_res, lut_idx
282 |
283 | def export(self, filename):
284 | with open (filename, 'w') as expfile:
285 | for i in range(self.nPoi):
286 | expfile.write(str(self.pointArray[i,0])+","+str(self.pointArray[i,1])+"\n")
287 | expfile.close()
288 |
289 |
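290 | if __name__ == "__main__":
291 |     # Illustrative usage sketch (not part of the original module): build a
292 |     # synthetic circular contour, resample it and inspect a few derived
293 |     # quantities. Only numpy is required.
294 |     angles = np.linspace(0, 2 * np.pi, 360, endpoint=False)
295 |     circle = np.stack([50 + 40 * np.cos(angles), 50 + 40 * np.sin(angles)], axis=1)
296 |     contour = Contour(circle, 100, 10)  # 100 resampled points, resolution multiplier 10
297 |     print("contour length:", contour.contourLength)
298 |     print("centroid:", contour.centroid)
299 |     print("first five derivative magnitudes:", magnitude(contour.derivatives)[:5])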
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/cEssentialscy.pxd:
--------------------------------------------------------------------------------
1 | cimport numpy as np
2 | cpdef np.ndarray[np.double_t, ndim=2] dt(np.ndarray[np.double_t, ndim=2] points, int order)
3 | cpdef magnitude(np.ndarray[np.double_t, ndim=2] points)
4 | cpdef np.ndarray[np.double_t, ndim=1] innerProduct(np.ndarray[np.double_t, ndim=2] curve1, np.ndarray[np.double_t, ndim=2] curve2)
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/cEssentialscy.pyx:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | cimport numpy as np
3 | import time
4 | from enum import Enum
5 | from cython.parallel cimport prange
6 | cimport cython
7 | from libc.math cimport sqrt
8 | class ReconstructionMethods(Enum):
9 | NEWTON = 1
10 | GRADIENT_DESCENT = 2
11 | CG = 3
12 | SKIP = 4
13 | JOZSI_GRADIENT = 5
14 |
15 |
16 | # derivative approximation
17 | @cython.boundscheck(False)
18 | @cython.wraparound(False)
19 | @cython.cdivision(True)
20 | cpdef np.ndarray[np.double_t, ndim=2] dt(np.ndarray[np.double_t, ndim=2] points, int order):
21 | cdef int n_points = points.shape[0]
22 | cdef double next_x_1, next_y_1, next_x_2, next_y_2, next_x_3, next_y_3, next_x_4, next_y_4
23 | cdef double prev_x_1, prev_y_1, prev_x_2, prev_y_2, prev_x_3, prev_y_3, prev_x_4, prev_y_4
24 | cdef double prev_factor_x, prev_factor_y, next_factor_x, next_factor_y, curr_factor_x, curr_factor_y
25 | cdef np.ndarray[np.double_t, ndim=2] out = np.empty_like(points)
26 | cdef int i
27 | for i in prange(n_points, nogil=True):
28 | if i < 4:
29 | prev_x_4 = points[n_points + i - 4, 0]
30 | prev_y_4 = points[n_points + i - 4, 1]
31 | if i < 3:
32 | prev_x_3 = points[n_points + i - 3, 0]
33 | prev_y_3 = points[n_points + i - 3, 1]
34 | if i < 2:
35 | prev_x_2 = points[n_points + i - 2, 0]
36 | prev_y_2 = points[n_points + i - 2, 1]
37 | if i < 1:
38 | prev_x_1 = points[n_points + i - 1, 0]
39 | prev_y_1 = points[n_points + i - 1, 1]
40 | else:
41 | prev_x_1 = points[i - 1, 0]
42 | prev_y_1 = points[i - 1, 1]
43 | else:
44 | prev_x_2 = points[i - 2, 0]
45 | prev_y_2 = points[i - 2, 1]
46 | prev_x_1 = points[i - 1, 0]
47 | prev_y_1 = points[i - 1, 1]
48 | else:
49 | prev_x_3 = points[i - 3, 0]
50 | prev_y_3 = points[i - 3, 1]
51 | prev_x_2 = points[i - 2, 0]
52 | prev_y_2 = points[i - 2, 1]
53 | prev_x_1 = points[i - 1, 0]
54 | prev_y_1 = points[i - 1, 1]
55 | else:
56 | prev_x_4 = points[i - 4, 0]
57 | prev_y_4 = points[i - 4, 1]
58 | prev_x_3 = points[i - 3, 0]
59 | prev_y_3 = points[i - 3, 1]
60 | prev_x_2 = points[i - 2, 0]
61 | prev_y_2 = points[i - 2, 1]
62 | prev_x_1 = points[i - 1, 0]
63 | prev_y_1 = points[i - 1, 1]
64 | if i + 4 >= n_points:
65 | next_x_4 = points[i+4-n_points, 0]
66 | next_y_4 = points[i+4-n_points, 1]
67 | if i + 3 >= n_points:
68 | next_x_3 = points[i + 3 - n_points, 0]
69 | next_y_3 = points[i + 3 - n_points, 1]
70 | if i + 2 >= n_points:
71 | next_x_2 = points[i + 2 - n_points, 0]
72 | next_y_2 = points[i + 2 - n_points, 1]
73 | if i + 1 >= n_points:
74 | next_x_1 = points[i + 1 - n_points, 0]
75 | next_y_1 = points[i + 1 - n_points, 1]
76 | else:
77 | next_x_1 = points[i + 1, 0]
78 | next_y_1 = points[i + 1, 1]
79 | else:
80 | next_x_2 = points[i + 2, 0]
81 | next_y_2 = points[i + 2, 1]
82 | next_x_1 = points[i + 1, 0]
83 | next_y_1 = points[i + 1, 1]
84 | else:
85 | next_x_3 = points[i + 3, 0]
86 | next_y_3 = points[i + 3, 1]
87 | next_x_2 = points[i + 2, 0]
88 | next_y_2 = points[i + 2, 1]
89 | next_x_1 = points[i + 1, 0]
90 | next_y_1 = points[i + 1, 1]
91 | else:
92 | next_x_4 = points[i+4, 0]
93 | next_y_4 = points[i+4, 1]
94 | next_x_3 = points[i + 3, 0]
95 | next_y_3 = points[i + 3, 1]
96 | next_x_2 = points[i + 2, 0]
97 | next_y_2 = points[i + 2, 1]
98 | next_x_1 = points[i + 1, 0]
99 | next_y_1 = points[i + 1, 1]
100 | if order == 1:
101 | prev_factor_x = (1. / 280.) * prev_x_4 - (4. / 105.) * prev_x_3 + (1. / 5.) * prev_x_2 - (4. / 5.) * prev_x_1
102 | prev_factor_y = (1. / 280.) * prev_y_4 - (4. / 105.) * prev_y_3 + (1. / 5.) * prev_y_2 - (4. / 5.) * prev_y_1
103 | next_factor_x = (1. / 280.) * next_x_4 - (4. / 105.) * next_x_3 + (1. / 5.) * next_x_2 - (4. / 5.) * next_x_1
104 | next_factor_y = (1. / 280.) * next_y_4 - (4. / 105.) * next_y_3 + (1. / 5.) * next_y_2 - (4. / 5.) * next_y_1
105 | out[i, 0] = prev_factor_x+next_factor_x
106 | out[i, 1] = prev_factor_y+next_factor_y
107 | elif order == 2:
108 | prev_factor_x = (-1./560.)*prev_x_4+(8./315.)*prev_x_3-(1./5.)*prev_x_2+(8./5.)*prev_x_1
109 | prev_factor_y = (-1./560.)*prev_y_4+(8./315.)*prev_y_3-(1./5.)*prev_y_2+(8./5.)*prev_y_1
110 | next_factor_x = (-1. / 560.) * next_x_4 + (8. / 315.) * next_x_3 - (1. / 5.) * next_x_2 + (8. / 5.) * next_x_1
111 | next_factor_y = (-1. / 560.) * next_y_4 + (8. / 315.) * next_y_3 - (1. / 5.) * next_y_2 + (8. / 5.) * next_y_1
112 | curr_factor_x = (-205. / 72.) * points[i, 0]
113 | curr_factor_y = (-205. / 72.) * points[i, 1]
114 | out[i, 0] = prev_factor_x + next_factor_x + curr_factor_x
115 | out[i, 1] = prev_factor_y + next_factor_y + curr_factor_y
116 | return out
117 |
118 | # magnitude of a single vector or a set of vectors
119 | @cython.boundscheck(False)
120 | @cython.wraparound(False)
121 | cpdef magnitude(np.ndarray[np.double_t, ndim=2] points):
122 | cdef np.ndarray[np.double_t, ndim=1] out = np.empty(points.shape[0], np.float64)
123 | cdef int i
124 | for i in prange(points.shape[0], nogil=True):
125 | out[i] = sqrt(points[i, 0]*points[i, 0] + points[i, 1]*points[i, 1])
126 | return out
127 | # return np.sqrt(points[:,0]*points[:,0]+points[:,1]*points[:,1])
128 |
129 | # inner product of two curves
130 | @cython.boundscheck(False)
131 | @cython.wraparound(False)
132 | cpdef np.ndarray[np.double_t, ndim=1] innerProduct(np.ndarray[np.double_t, ndim=2] curve1, np.ndarray[np.double_t, ndim=2] curve2):
133 | # return np.sum(curve1*curve2, axis=1)
134 | return curve1[:,0]*curve2[:,0]+curve1[:,1]*curve2[:,1]
135 |
136 | class Contour:
137 |
138 | def __init__(self, pointArray, nPoi, resMultiplier):
139 | # number of points in the input contour (cPoints below is the number of resampled points)
140 | self.nPoi = pointArray.shape[0]
141 | self.cPoints = nPoi
142 | self.resMultiplier = resMultiplier
143 | self.resolution = self.cPoints*self.resMultiplier
144 | self.pointArray = pointArray
145 | self.contourLength = self.getContourLength()
146 | self.centroid = self.getCentroid()
147 | self.lookup, self.parameterization = self.getLookupTables()
148 | self.smoothLookupTable()
149 | self.diffs = np.empty(self.cPoints)
150 | self.calcParams()
151 |
152 | def getCentroid(self):
153 | centroid = np.zeros((1,2))
154 | for p in self.pointArray:
155 | centroid = centroid + p
156 | centroid = centroid/self.nPoi
157 | return centroid
158 |
159 | def dt(self, order):
160 | '''nextParam = self.nextParam
161 | prevParam = self.prevParam
162 | nextParam2 = np.roll(nextParam,-1,axis=0)
163 | prevParam2 = np.roll(prevParam,1,axis=0)'''
164 |
165 | if order==1:
166 | return dt(self.lookup[self.parameterization], 1)
167 | #return (1/12)*self.lookup[prevParam2]-(2/3)*self.lookup[prevParam]+(2/3)*self.lookup[nextParam]-(1/12)*self.lookup[nextParam]
168 | #return (self.lookup[nextParam,:]-self.lookup[prevParam,:])/2
169 | if order==2:
170 | return dt(self.lookup[self.parameterization], 2)
171 | #return (-1/12)*self.lookup[prevParam2]+(4/3)*self.lookup[prevParam]-(5/2)*self.lookup[self.parameterization]+(4/3)*self.lookup[nextParam]-(1/12)*self.lookup[nextParam2]
172 | #return self.lookup[nextParam,:]+self.lookup[prevParam,:]-2*self.lookup[self.parameterization,:]
173 |
174 |
175 | #Christoffel divergence as defined in (9): Gamma_i
176 | def getChristoffelDivergence(self):
177 | deriv = self.derivatives
178 | sderiv = self.sderivatives
179 | return (deriv[:,0]*sderiv[:,0]+deriv[:,1]*sderiv[:,1])/(deriv[:,0]*deriv[:,0]+deriv[:,1]*deriv[:,1])
180 |
181 | def getSRV(self):
182 | deriv = self.derivatives
183 | return np.sqrt(magnitude(deriv))
184 |
185 | def getRPSV(self):
186 | return self.lookup[self.parameterization]*self.srv[:,None]
187 |
188 | def getSRVF(self):
189 | res = self.derivatives.copy()
190 | deriv = self.derivatives
191 | velo = magnitude(deriv)
192 | srv = np.sqrt(velo)
193 | res[:,0] /= srv
194 | res[:,1] /= srv
195 | return res
196 |
197 | def calcParams(self):
198 |
199 | self.nextParam = np.roll(self.parameterization, -1, axis=0)
200 | self.prevParam = np.roll(self.parameterization, 1, axis=0)
201 |
202 | self.derivatives = self.dt(1)
203 | self.sderivatives = self.dt(2)
204 |
205 | deriv = self.derivatives
206 | sderiv = self.sderivatives
207 |
208 | self.christoffel = self.getChristoffelDivergence()
209 | self.srv = self.getSRV()
210 |
211 |
212 | def getIdxDiff(self, dgamma):
213 | nextParam = self.nextParam
214 | prevParam = self.prevParam
215 | criteria1 = dgamma<0
216 | criteria2 = dgamma>=0
217 |
218 | diffs = self.diffs
219 |
220 | diffs[:] = 0
221 |
222 | diffs[criteria1] = self.parameterization[criteria1] - prevParam[criteria1]
223 | diffCriteria = diffs<0
224 | diffs[diffCriteria] = self.parameterization[diffCriteria]+self.cPoints*self.resMultiplier-prevParam[diffCriteria]
225 |
226 | diffs[criteria2] = nextParam[criteria2]-self.parameterization[criteria2]
227 | diffCriteria = diffs<0
228 | diffs[diffCriteria] = nextParam[diffCriteria]+self.cPoints*self.resMultiplier-self.parameterization[diffCriteria]
229 |
230 |
231 | return diffs
232 |
233 | # make sure we remain in lookup table index range with the reparameterization
234 | def getInRangeParameterization(self):
235 | pointNum = self.lookup.shape[0]
236 | criteria1 = self.parameterization.copy() >= pointNum
237 | criteria2 = self.parameterization.copy() < 0
238 | self.parameterization[criteria2] %= pointNum
239 | self.parameterization[criteria2] = -1*(self.parameterization[criteria2]-pointNum)
240 | self.parameterization[criteria1] %= pointNum
241 |
242 | # smoothing to ensure that the order of points does not get mixed up
243 | def smoothParameterization(self):
244 | resHalf = self.parameterization.shape[0]*self.resMultiplier/2
245 |
246 | tmparam = self.parameterization.copy()
247 | tmp = self.parameterization.copy()
248 | tmpnext = np.roll(tmp, -1, axis=0)
249 | tmpprev = np.roll(tmp, 1, axis=0)
250 | crit1 = (tmpnext < resHalf) & (tmp > resHalf) & (tmpprev > resHalf)
251 | tmpnext[crit1] += self.cPoints*self.resMultiplier
252 | crit2 = (tmp < resHalf) & (tmpprev > resHalf)
253 | tmp[crit2] += self.cPoints*self.resMultiplier
254 | tmpnext[crit2] += self.cPoints*self.resMultiplier
255 |
256 | self.parameterization = 2+2*tmp+tmpnext+tmpprev
257 | self.parameterization >>= 2
258 |
259 | self.getInRangeParameterization()
260 |
261 | def smoothLookupTable(self):
262 | nextArr = np.roll(self.lookup, -1, axis=0)
263 | prevArr = np.roll(self.lookup, 1, axis=0)
264 |
265 | temp = 0.25*(2* self.lookup + prevArr + nextArr)
266 | tempNext = np.roll(temp, -1, axis=0)
267 | tempPrev = np.roll(temp, 1, axis=0)
268 |
269 | self.lookup = 0.25*(2* temp + tempPrev + tempNext)
270 |
271 | def getContourLength(self):
272 | nextArr = np.roll(self.pointArray, -1, axis=0)
273 |
274 | cLength = np.sum(np.sqrt((nextArr[:,0]-self.pointArray[:,0])*(nextArr[:,0]-self.pointArray[:,0])+(nextArr[:,1]-self.pointArray[:,1])*(nextArr[:,1]-self.pointArray[:,1])))
275 | return cLength
276 |
277 |
278 | def isClockwise(self):
279 | nextLook = np.roll(self.lookup, -1, axis=0)
280 | edges = self.lookup[self.parameterization,0]*nextLook[self.parameterization,1]-nextLook[self.parameterization,0]*self.lookup[self.parameterization,1]
281 | if np.sum(edges)>0:
282 | return True
283 | else:
284 | return False
285 |
286 | def setStartingPointToLowestY(self):
287 | lowestY = np.argmin(self.lookup[:,1])
288 | self.lookup = np.roll(self.lookup, -1*lowestY, axis=0)
289 |
290 | def getLookupTables(self):
291 | start = time.time()
292 | # length of one step if we want to achieve self.resolution:
293 | unitLength = self.contourLength/self.resolution
294 | # lookup table
295 | lut = np.empty((2*self.resolution,2))
296 | # shifted point arrays
297 | nextArray = np.roll(self.pointArray, -1, axis=0)
298 |
299 | j = 0 # index of LookUpTable
300 | remainder = 0 # length of overflow for sections between 2 points
301 | sum_while = 0
302 | for i in range(self.nPoi):
303 | startPoint = self.pointArray[i]
304 | nextPoint = nextArray[i]
305 | direction = nextPoint-startPoint
306 | dirLen = np.sqrt(direction[0]*direction[0]+direction[1]*direction[1])
307 | direction /= dirLen # normalized direction between 2 points
308 | reqUnits = int(np.round(dirLen/unitLength))
309 | xcoords = np.linspace(startPoint[0], nextPoint[0], num=reqUnits)
310 | ycoords = np.linspace(startPoint[1], nextPoint[1], num=reqUnits)
311 | lut[j:(j+reqUnits),0] = xcoords
312 | lut[j:(j+reqUnits),1] = ycoords
313 | direction *= unitLength # set the length of the direction vector to contour unit length
314 | j += reqUnits
315 |
316 | lut_res = np.array(lut[0:j,:])
317 | #idxes = np.linspace(0,j-1, num=j).astype(int)
318 | #lut_idx = idxes[0:j:int(self.resolution/self.cPoints)]
319 |
320 |
321 |
322 | lut_idx = np.empty(self.cPoints, dtype=int)
323 | j = 0
324 | for i in range(0, self.resolution, int(self.resolution/self.cPoints)):
325 | lut_idx[j] = i
326 | j += 1
327 | return lut_res, lut_idx
328 |
329 | def export(self, filename):
330 | with open (filename, 'w') as expfile:
331 | for i in range(self.nPoi):
332 | expfile.write(str(self.pointArray[i,0])+","+str(self.pointArray[i,1])+"\n")
333 | expfile.close()
334 |
335 |
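336 | # The cpdef dt() in this module approximates contour derivatives from the four
337 | # neighbours on each side with periodic (wrap-around) indexing; the order == 2
338 | # branch is the standard 8th-order central second-derivative stencil with
339 | # coefficients -1/560, 8/315, -1/5, 8/5, -205/72, 8/5, -1/5, 8/315, -1/560.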
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/contourcy.pyx:
--------------------------------------------------------------------------------
1 | # from cEssentials import *
2 | from ._essentials import magnitude, innerProduct
3 | cimport numpy as np
4 | cimport cython
5 | from cython.parallel cimport prange
6 | import numpy as np
7 | import time
8 |
9 | # determine the initial centroid, just sum up the coordinates and take their average
10 | def initCentroid(contours, weights = None):
11 | r = np.zeros((1,2))
12 | weights = np.ones(len(contours)) if weights is None else weights
13 | for i in range(len(contours)):
14 | lut = contours[i].lookup[contours[i].parameterization]
15 | r += np.sum(lut, axis=0) * weights[i]
16 | r /= (np.sum(weights)*lut.shape[0])
17 | return r
18 |
19 | def isDifferenceSmallEnough(currVelo, prevVelo):
20 | if prevVelo is not None:
21 | scurr = magnitude(currVelo)
22 | sprev = magnitude(prevVelo)
23 | diffSum = np.sum(np.abs(sprev-scurr), axis=0)/scurr.shape[0]
24 | if diffSum < 0.00001:
25 | print("reparam converged: "+str(diffSum))
26 | return True
27 | else:
28 | return False
29 | else: return False
30 |
31 | def isMethodStuck(prevMean, energies):
32 | if np.abs(prevMean-np.mean(energies))<0.01:
33 | return True
34 | else:
35 | return False
36 |
37 | def reparameterizeContours(contours, Settings, plotSignal=None, debug=False):
38 | refCont = contours[0]
39 | numIterations = Settings.iterations
40 | maxgamma = 0.24
41 | # maxgamma = 0.1
42 | refCont.calcParams()
43 | refRepr = refCont.getRPSV()
44 | nPoints = Settings.nPoi
45 |
46 |
47 | #fig = plt.figure()
48 | #camera = Camera(fig)
49 | sumtime = 0
50 | sumsmooth = 0
51 | g = contours[1].parameterization
52 | for cIter in range(1,len(contours)):
53 | varCont = contours[cIter]
54 | #varCont.parameterization = g
55 | stopCriterion = False
56 | prevDerivatives = None
57 | min_resolution = varCont.resMultiplier
58 | energies = []
59 | prevEnergies = -1
60 | costs = []
61 | for i in range(numIterations):
62 | if stopCriterion is True:
63 | break
64 |
65 | '''if i%2==0:
66 | varCont = contours[1]
67 | refCont = contours[0]
68 | else:
69 | varCont = contours[0]
70 | refCont = contours[1]'''
71 |
72 | start = time.time()
73 | varCont.calcParams()
74 | end = time.time()
75 | sumtime += (end-start)
76 |
77 | # gradient descent equation as defined in (16):
78 | dgamma = 0.5*(refCont.lookup[refCont.parameterization,0]*varCont.lookup[varCont.parameterization,0]+refCont.lookup[refCont.parameterization,1]*varCont.lookup[varCont.parameterization,1]+0)*(refCont.christoffel-varCont.christoffel)
79 | dgamma += (refCont.derivatives[:,0]*varCont.lookup[varCont.parameterization,0]+refCont.derivatives[:,1]*varCont.lookup[varCont.parameterization,1])
80 | dgamma -= (refCont.lookup[refCont.parameterization,0]*varCont.derivatives[:,0]+refCont.lookup[refCont.parameterization,1]*varCont.derivatives[:,1])
81 | dgamma *= -1
82 |
83 | maxgammaabs = np.max(np.abs(dgamma))
84 |
85 | if maxgammaabs<1e-99:
86 | maxgammaabs = 1e-99
87 |
88 | if maxgammaabs>maxgamma:
89 | dgamma /= maxgammaabs
90 | dgamma *= maxgamma
91 |
92 |
93 |
94 | #distances = np.sum(np.abs(np.sqrt(refCont.srv)-np.sqrt(varCont.srv)))
95 |
96 | varRepr = varCont.getRPSV()
97 |
98 | qdistances = np.sum(np.sqrt(innerProduct(refRepr-varRepr, refRepr-varRepr)))
99 | costs.append(qdistances)
100 |
101 | # if the optimizer gets stuck at a point, detect it
102 | if (i+1)%300 == 0:
103 | print("energy_rp #"+str(i)+": "+str(qdistances))
104 | if isMethodStuck(prevEnergies, energies):
105 | print("reparam stuck")
106 | stopCriterion = True
107 | prevEnergies = np.mean(energies)
108 | energies.clear()
109 | else:
110 | energies.append(qdistances)
111 |
112 |
113 | # update with dGamma
114 |
115 | idx_diff = varCont.getIdxDiff(dgamma)
116 | g = (varCont.parameterization+idx_diff*dgamma+0.5).astype(int)
117 | varCont.parameterization = g
118 |
119 | # when we got our parameterization, make sure that we remain in range of the lookup table
120 | varCont.getInRangeParameterization()
121 |
122 | # smoothing to ensure that the order of points does not get mixed up
123 | startsmooth = time.time()
124 | varCont.smoothParameterization()
125 | endsmooth = time.time()
126 | sumsmooth += (endsmooth-startsmooth)
127 |
128 | #varCont.getInRangeParameterization()
129 |
130 | g = varCont.parameterization
131 |
132 | meantest = (refCont.lookup[refCont.parameterization,:]+varCont.lookup[varCont.parameterization,:])/2
133 |
134 | if Settings.debug is True:
135 | if (i%200==0):
136 | cList = []
137 | cList.append(refCont.lookup[refCont.parameterization[0:nPoints:15],:])
138 | cList.append(varCont.lookup[varCont.parameterization[0:nPoints:15],:])
139 | cList.append(meantest[0:nPoints:15,:])
140 | plotSignal.emit(cList)
141 |
142 |
143 |
144 | if isDifferenceSmallEnough(varCont.derivatives, prevDerivatives):
145 | stopCriterion = True
146 |
147 | #avg = (refCont.lookup[refCont.parameterization]+varCont.lookup[varCont.parameterization])/2
148 |
149 | prevDerivatives = varCont.derivatives.copy()
150 |
151 | return costs
152 |
153 |
154 | #print("sum time spent on calcparams: "+str(sumtime))
155 | #print("sum time spent on smoothing: "+str(sumsmooth))
156 |
157 | # calculate the weighted mean of contours in the rpsv space
158 | def calcRpsvInterpolation(contours, weights):
159 | interp = np.zeros(contours[0].getRPSV().shape)
160 |
161 | # take the average in q-space: interp
162 | for i in range(len(contours)):
163 | r = contours[i].getRPSV()
164 | r *= weights[i]
165 | interp += r
166 | interp /= np.sum(weights)
167 | return interp
168 |
169 | def calcMean(contours):
170 | N = len(contours)
171 | mean = contours[0].copy()
172 | for i in range(1,len(contours)):
173 | mean += contours[i]
174 | mean /= N
175 | return mean
176 |
177 | # calculate centroid displacement
178 | @cython.boundscheck(False)
179 | @cython.wraparound(False)
180 | @cython.cdivision(True)
181 | cpdef delta_d(contours, np.ndarray[np.double_t, ndim=2] q_mean, np.ndarray[np.double_t, ndim=1] rsqrts):
182 | cdef double energy = 0
183 | cdef double denominator = 0
184 | cdef np.ndarray[np.double_t, ndim=2] numerator = np.zeros((1,2))
185 | cdef np.ndarray[np.double_t, ndim=1] targetSrv
186 | cdef np.ndarray[np.double_t, ndim=2] targetRepr
187 | cdef double denominatori
188 | cdef np.ndarray[np.double_t, ndim=2] numeratori
189 | cdef int i, j
190 |
191 | for i in range(len(contours)):
192 | targetCont = contours[i]
193 |
194 | targetSrv = targetCont.getSRV()
195 | targetRepr = targetCont.getRPSV()
196 |
197 | denominatori = 0
198 | numeratori = np.zeros((1,2))
199 |
200 | for j in prange(targetRepr.shape[0], nogil=True):
201 | targetRepr[j, 0] -= q_mean[j, 0]
202 | targetRepr[j, 1] -= q_mean[j, 1]
203 | energy += targetRepr[j,0]*targetRepr[j,0]+targetRepr[j,1]*targetRepr[j,1]
204 | targetRepr[j, 0] *= targetSrv[j]
205 | targetRepr[j, 1] *= targetSrv[j]
206 | numerator[0, 0] += targetRepr[j, 0]
207 | numerator[0, 1] += targetRepr[j, 1]
208 | numeratori[0, 0] += targetRepr[j, 0]
209 | numeratori[0, 1] += targetRepr[j, 1]
210 | denominatori += targetSrv[j]*targetSrv[j]
211 | targetSrv[j] -= rsqrts[j]
212 | denominator += targetSrv[j]*targetSrv[j]
213 |
214 | targetCont.centroid[0,0] = numeratori[0,0]/denominatori
215 | targetCont.centroid[0,1] = numeratori[0,1]/denominatori
216 |
217 | senergy = np.sqrt(energy)
218 |
219 | delta = numerator
220 | if denominator>1e-99:
221 | delta /= denominator
222 |
223 | return delta
224 |
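225 | # Usage sketch (illustrative only; Contour comes from ._essentials and Settings
226 | # from .settings, as in meanContour.py):
227 | #
228 | #     contours = [Contour(pts_a, 1000, 1000), Contour(pts_b, 1000, 1000)]
229 | #     centroid = initCentroid(contours)
230 | #     q_mean = calcRpsvInterpolation(contours, np.ones(len(contours)))
231 | #     costs = reparameterizeContours(contours, Settings())
232 | #
233 | # reparameterizeContours() keeps contours[0] fixed and updates the
234 | # parameterization of every further contour in place by gradient descent on
235 | # dgamma (equation (16)).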
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/interpHelper.py:
--------------------------------------------------------------------------------
1 | import scipy.interpolate as interp
2 | class InterpHelper:
3 | def __init__(self):
4 | pass
5 | def setInterpolator(self,xpoints,ypoints):
6 | self.interpolator = interp.interp1d(x=xpoints,y=ypoints,kind='quadratic')
7 |
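8 | if __name__ == "__main__":
9 |     # Illustrative usage sketch (not part of the original class): fit a quadratic
10 |     # interpolator to samples of y = x**2 and evaluate it between the nodes.
11 |     helper = InterpHelper()
12 |     helper.setInterpolator([0, 1, 2, 3], [0, 1, 4, 9])
13 |     print(helper.interpolator(1.5))  # close to 2.25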
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/interp_test.py:
--------------------------------------------------------------------------------
1 | import scipy.interpolate as interp
2 | import numpy as np
3 | import matplotlib.pyplot as plt
4 |
5 | x = np.linspace(0,5,num=6)
6 | y = [0,1,4,9,16,25]
7 | finterp = interp.interp1d(x=x,y=y,kind='quadratic')
8 |
9 |
10 | xnew = np.linspace(0,5,num=100)
11 | plt.plot(xnew,finterp(xnew))
12 | plt.show()
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/meanContour.py:
--------------------------------------------------------------------------------
1 | # from contour import *
2 | from qtpy.QtCore import QThread, Signal
3 | import sys
4 | import numpy as np
5 | import time
6 | import os
7 | from .settings import Settings
8 | from ._reconstruction import reconstruct
9 | from ._contour import initCentroid, delta_d, calcRpsvInterpolation
10 | from ._essentials import magnitude, Contour
11 | from .util import loadContour
12 |
13 | class MeanThread(QThread):
14 |
15 | doneSignal = Signal(object)
16 | clearPlotSignal = Signal()
17 | updateSignal = Signal(float)
18 | rpSignal = Signal(object)
19 | reconSignal = Signal(object)
20 |
21 | def __init__(self, contours, settings=None, weights=None):
22 | self.settings = settings if settings else Settings()
23 | self.contours = contours if isinstance(contours[0], Contour)\
24 | else list(Contour(c.copy(), self.settings.nPoi, self.settings.resMultiplier) for c in contours)
25 | QThread.__init__(self)
26 | self.iterations = self.settings.maxIter
27 | self.weights = weights
28 |
29 | def __del__(self):
30 | self.wait()
31 |
32 | def run(self):
33 |
34 | self.updateSignal.emit(0)
35 |
36 | # settings for the algorithm
37 | settings = self.settings
38 | weights = np.ones(len(self.contours)) if self.weights is None else self.weights
39 |
40 | for i in range(len(self.contours)):
41 | self.contours[i].setStartingPointToLowestY()
42 |
43 | # init centroid at first (take average)
44 | startCentroid = initCentroid(self.contours, weights)
45 |
46 | # translate every contour
47 | for i_contour in range(len(self.contours)):
48 | self.contours[i_contour].lookup -= startCentroid
49 |
50 | # weights for interpolation
51 |
52 | properCentroid = startCentroid.copy()
53 | deltaPrev = np.zeros((1,2))
54 |
55 | # calculate initial mean
56 | imean = np.zeros((self.contours[0].parameterization.shape[0], 2))
57 | for i in range(len(self.contours)):
58 | # TODO weighting (?)
59 | if not self.contours[i].isClockwise():
60 | # if the orientation of the polygon is counterclockwise, reverse it
61 | self.contours[i].lookup = self.contours[i].lookup[::-1]
62 | imean += self.contours[i].lookup[self.contours[i].parameterization,:]*weights[i]
63 | imean /= np.sum(weights)
64 |
65 | # go for the maximum number of iterations (general > maxIter in settings)
66 | c = 0
67 | for i in range(self.iterations):
68 | print("iteration #%d" % i)
69 | timestamp = time.time()
70 | regularMean = np.zeros_like(self.contours[0].lookup[self.contours[0].parameterization, :])
71 | for j in range(len(self.contours)):
72 | regularMean += self.contours[j].lookup[self.contours[j].parameterization, :]*weights[j]
73 | regularMean /= np.sum(weights)
74 | self.contours[1].calcParams()
75 | # calculate the mean in RPSV space
76 | q_mean = calcRpsvInterpolation(self.contours, weights)
77 | # here we initialize the ray lengths for the reconstruction: just take the original averages
78 | guessRayLengths = np.zeros(self.contours[0].lookup[self.contours[0].parameterization].shape[0])
79 | for i_contour in range(len(self.contours)):
80 | contourtmp = self.contours[i_contour].lookup[self.contours[i_contour].parameterization]
81 | contourlengths = magnitude(contourtmp)
82 | guessRayLengths += contourlengths * weights[i_contour]
83 | guessRayLengths /= np.sum(weights)
84 | guessRayLengths = magnitude(regularMean)
85 |
86 | # lengths of the q space mean
87 | qraylengths = magnitude(q_mean)
88 | qraylengths[qraylengths<1e-99] = 1e-99
89 |
90 | # unit direction vectors
91 | dirs = q_mean/qraylengths.reshape(qraylengths.shape[0], 1) # unit direction of the mean contour points
92 | timestamp = time.time()
93 | # do the reconstruction
94 | r_mean_lengths, costs = reconstruct(q_mean, guessRayLengths.copy(), settings, self.rpSignal)
95 | # ----------------------------
96 |
97 | # THE mean contour in r space
98 | r_mean = dirs * r_mean_lengths.reshape(r_mean_lengths.shape[0], 1)
99 |
100 | rsqrts = qraylengths/r_mean_lengths
101 |
102 | # calculate delta_d displacement
103 | delta = delta_d(self.contours, q_mean, rsqrts)
104 |
105 | # calculate the differences between the current and previous displacements
106 | deltaDiff = deltaPrev-delta
107 | deltaDiff = np.sqrt(np.sum(deltaDiff*deltaDiff))
108 |
109 | deltaPrev = delta.copy()
110 |
111 | if deltaDiff<1.:
112 | print("centroid converged")
113 | self.updateSignal.emit(100)
114 | break
115 | #refCont.lookup -= delta
116 | #varCont.lookup -= delta
117 |
118 | for i_contour in range(len(self.contours)):
119 | self.contours[i_contour].lookup -= delta
120 |
121 | properCentroid += delta[0,:]
122 | self.updateSignal.emit(100*(i+1)/self.iterations)
123 |
124 | for i_contour in range(len(self.contours)):
125 | self.contours[i_contour].lookup += properCentroid
126 | r_mean += properCentroid
127 | self.doneSignal.emit(r_mean)
128 |
129 |
130 | def runCommandLine(args, settings):
131 | contours = []
132 | for argind in range(1,len(args)-1):
133 | contours.append(loadContour(args[argind], settings.nPoi, settings.resMultiplier))
134 | settings.update("export", "exportName", args[len(args)-1])
135 | settings.updateVariables()
136 | meanThread = MeanThread(contours.copy(), settings)
137 | meanThread.start()
138 |
139 |
140 | if __name__ == '__main__':
141 | if len(sys.argv)>1:
142 | print("running in command line mode.")
143 | if len(sys.argv) >= 4:
144 | print("number of arguments OK. Trying to load the following contour files: ")
145 | for i in range(1,len(sys.argv)-1):
146 | print(sys.argv[i])
147 | if not os.path.exists("settings.json"):
148 | print("no settings.json found. Aborting...")
149 | else:
150 | stgs = Settings("settings.json")
151 | runCommandLine(sys.argv, stgs)
152 | else:
153 | print("number of arguments not OK. Try calling the script like \"python meanContour.py cont1.csv cont2.csv ... cont_n.csv meanName\"")
154 |
155 | else:
156 | print("not enough args")
157 |
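158 | # Usage sketch (illustrative only, assuming two (N, 2) numpy contours and a
159 | # running Qt event loop, e.g. inside napari):
160 | #
161 | #     thread = MeanThread([contour_a, contour_b])
162 | #     thread.doneSignal.connect(lambda mean: print(mean.shape))
163 | #     thread.start()
164 | #
165 | # Raw arrays are wrapped into Contour objects with the nPoi/resMultiplier
166 | # values taken from the Settings instance; doneSignal emits the reconstructed
167 | # mean contour as an (nPoi, 2) array.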
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/rk.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 |
3 | def rk4(dydt, t0, y0, Q, Q_dot, theta, theta_dot, h=1, num_points=5):
4 | y_prev = y0
5 | t_prev = t0
6 |
7 | # h: step size; it determines how densely the discrete Q, Q_dot and theta_dot arrays are sampled (larger h means coarser sampling)
8 | # num_points: number of RK4 steps to take; it does not influence the sampling frequency, only the number of output points
9 | y = [y0] # function to be determined: we only know its value in 1 point
10 | t = [t0]
11 | debug_vars = []
12 |
13 | for n in range(num_points):
14 | debug_vars.append(np.tan(y_prev))
15 |
16 | k1 = dydt(t_prev,y_prev, Q, Q_dot, theta, theta_dot)
17 | k2 = dydt(int(t_prev+h/2), y_prev+h*(k1/2), Q, Q_dot, theta, theta_dot)
18 | k3 = dydt(int(t_prev+h/2), y_prev+h*(k2/2), Q, Q_dot, theta, theta_dot)
19 | k4 = dydt(t_prev+h, y_prev+h*k3, Q, Q_dot, theta, theta_dot)
20 |
21 | qp = Q[t_prev]
22 | qdp = Q_dot[t_prev]
23 | tp = theta_dot[t_prev]
24 | tdp = theta_dot[t_prev]
25 |
26 | qn = Q[int(t_prev+h/2)]
27 | qdn = Q_dot[int(t_prev+h/2)]
28 | tn = theta_dot[int(t_prev+h/2)]
29 | tdn = theta_dot[int(t_prev+h/2)]
30 |
31 | y_next = y_prev+(1/6)*h*(k1+2*k2+2*k3+k4)
32 |
33 | tanyprev = np.tan(y_prev)
34 | tanynext = np.tan(y_next)
35 |
36 | #if np.sin(y_next)<0:
37 | # y_next = 2*np.pi-y_next
38 | t_next = t_prev+h
39 |
40 | r_next = np.cbrt( np.sin(y_next)* Q[int(t_prev+h/2)]/theta_dot[int(t_prev+h/2)] )
41 |
42 | y.append(y_next)
43 | t.append(t_next)
44 |
45 | y_prev = y_next
46 | t_prev = t_next
47 |
48 | debug_vars.append(0)
49 | return t, y, debug_vars
50 |
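51 | # rk4() implements the classical fourth-order Runge-Kutta update
52 | # y_{n+1} = y_n + (h/6) * (k1 + 2*k2 + 2*k3 + k4); because Q, Q_dot and
53 | # theta_dot are discrete arrays, the half-step evaluations are taken at the
54 | # rounded index int(t_prev + h/2).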
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "general": {
3 | "maxIter": 20,
4 | "debugMode": false
5 | },
6 | "contours": {
7 | "nPoi": 1000,
8 | "resMultiplier": 1000
9 | },
10 | "reparameterization": {
11 | "iterationMultiplier": 10,
12 | "smoothReparam": false
13 | },
14 | "reconstruction": {
15 | "reconMethod": "Skip reconstruction",
16 | "iterations": 100,
17 | "alpha": 0.01,
18 | "lambda": 1.0
19 | },
20 | "export": {
21 | "exportCsv": true,
22 | "exportName": "mean_result"
23 | }
24 | }
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/settings.py:
--------------------------------------------------------------------------------
1 | import json
2 | from .util import getReconMethod
3 |
4 | class Settings:
5 |
6 | def __init__(self, filename=None, max_iterations=20, debug_mode=False, n_points=1000, resolution_multiplier=1000,
7 | iteration_multiplier=10, smooth_reparametrization=False, reconstruction_method="Skip reconstruction",
8 | gradient_iterations=100, alpha=0.01, lambd=1.):
9 | if filename is not None:
10 | with open(filename) as json_file:
11 | data = json.load(json_file)
12 | self.exportDict = data
13 |
14 | # general settings
15 | self.maxIter = (data['general'])['maxIter']
16 | self.debug = (data['general'])['debugMode']
17 |
18 | # contour settings
19 | self.nPoi = (data['contours'])['nPoi']
20 | self.resMultiplier = (data['contours'])['resMultiplier']
21 |
22 | # reparameterization settings
23 | self.iterations = self.nPoi*(data['reparameterization'])['iterationMultiplier']
24 | self.smoothParam = (data['reparameterization'])['smoothReparam']
25 |
26 | # reconstruction settings
27 | self.reconText = (data['reconstruction'])['reconMethod']
28 | self.reconMethod = getReconMethod((data['reconstruction'])['reconMethod'])
29 | self.gradientIterations = (data['reconstruction'])['iterations']
30 | self.alpha = (data['reconstruction'])['alpha']
31 | self.lambd = (data['reconstruction'])['lambda']
32 |
33 | # export settings
34 | self.exportCsv = (data['export'])['exportCsv']
35 | self.exportName = (data['export'])['exportName']
36 | else:
37 | # general settings
38 | self.maxIter = max_iterations
39 | self.debug = debug_mode
40 |
41 | # contour settings
42 | self.nPoi = n_points
43 | self.resMultiplier = resolution_multiplier
44 |
45 | # reparameterization settings
46 | self.iterations = self.nPoi * iteration_multiplier
47 | self.smoothParam = smooth_reparametrization
48 |
49 | # reconstruction settings
50 | self.reconText = reconstruction_method
51 | self.reconMethod = getReconMethod(reconstruction_method)
52 | self.gradientIterations = gradient_iterations
53 | self.alpha = alpha
54 | self.lambd = lambd
55 |
56 | def update(self, category, name, value):
57 | print("Updating initial entry in category "+category+", name "+name+": "+str((self.exportDict[category])[name])+"...")
58 | (self.exportDict[category])[name] = value
59 | print("Updated entry in category "+category+", name "+name+": "+str((self.exportDict[category])[name])+"!")
60 |
61 | # update the settings variables according to the exportDict
62 | def updateVariables(self):
63 | data = self.exportDict
64 |
65 | # general settings
66 | self.maxIter = (data['general'])['maxIter']
67 | self.debug = (data['general'])['debugMode']
68 |
69 | # contour settings
70 | self.nPoi = (data['contours'])['nPoi']
71 | self.resMultiplier = (data['contours'])['resMultiplier']
72 |
73 | # reparameterization settings
74 | self.iterations = self.nPoi*(data['reparameterization'])['iterationMultiplier']
75 | self.smoothParam = (data['reparameterization'])['smoothReparam']
76 |
77 | # reconstruction settings
78 | self.reconText = (data['reconstruction'])['reconMethod']
79 | self.reconMethod = getReconMethod((data['reconstruction'])['reconMethod'])
80 | self.gradientIterations = (data['reconstruction'])['iterations']
81 | self.alpha = (data['reconstruction'])['alpha']
82 | self.lambd = (data['reconstruction'])['lambda']
83 |
84 | # export settings
85 | self.exportCsv = (data['export'])['exportCsv']
86 | self.exportName = (data['export'])['exportName']
87 |
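88 | # Usage sketch (illustrative only, assuming the package is importable as
89 | # napari_nd_annotator.mean_contour):
90 | #
91 | #     from napari_nd_annotator.mean_contour.settings import Settings
92 | #     defaults = Settings(n_points=500, reconstruction_method="Gradient descent")
93 | #     print(defaults.nPoi, defaults.iterations, defaults.reconMethod)
94 | #
95 | # When constructed from a JSON file, update() followed by updateVariables()
96 | # keeps the exported dictionary and the attribute values in sync.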
--------------------------------------------------------------------------------
/src/napari_nd_annotator/mean_contour/util.py:
--------------------------------------------------------------------------------
1 | import numpy as np
2 | from ._essentials import ReconstructionMethods, Contour
3 | import matplotlib.pyplot as plt
4 |
5 | # loads a contour from file contName with nPoi points and a resolution of nPoi*resMultiplier
6 | def loadContour(contName, nPoi, resMultiplier):
7 | pts = np.genfromtxt(contName, delimiter=",")
8 | res = Contour(pts.copy(), nPoi, resMultiplier)
9 | return res
10 |
11 | # plots a 1-variate function
12 | def plotFunction(x, fx, title=""):
13 | plotfig = plt.figure()
14 | plt.title(title)
15 | plt.plot(x,fx)
16 | return plotfig
17 |
18 |
19 | def plotContours(contours,labels=None,colors=None,title=""):
20 | plotfig = plt.figure()
21 | plt.title(title)
22 | for i in range(len(contours)):
23 | curr = contours[i]
24 | clabel = ""
25 | ccolor = "black"
26 | if labels:
27 | clabel = labels[i]
28 | if colors:
29 | ccolor = colors[i]
30 | plt.plot(curr[:,1],-1*curr[:,0], label=clabel, color=ccolor)
31 | plt.legend()
32 | return plotfig
33 |
34 | def getReconMethod(reconText):
35 | switcher = {
36 | 'Newton': ReconstructionMethods.NEWTON,
37 | 'Gradient descent': ReconstructionMethods.GRADIENT_DESCENT,
38 | 'Conjugate gradient': ReconstructionMethods.CG,
39 | 'Skip reconstruction': ReconstructionMethods.SKIP,
40 | 'Jozsi gradient': ReconstructionMethods.JOZSI_GRADIENT
41 | }
42 | return switcher.get(reconText)
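43 | # Usage sketch (illustrative only; expects a two-column CSV of coordinates):
44 | #
45 | #     cont = loadContour("outline.csv", nPoi=1000, resMultiplier=1000)
46 | #     fig = plotContours([cont.lookup[cont.parameterization]], labels=["outline"])
47 | #
48 | # getReconMethod() returns None for unknown method names, so the settings
49 | # string should match one of the keys above.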
--------------------------------------------------------------------------------
/src/napari_nd_annotator/minimal_contour/Eikonal.h:
--------------------------------------------------------------------------------
1 | // Minimal contour method and implementation by Jozsef Molnar
2 | #pragma once
3 | #include "commontype.h"
4 | #include <vector>
5 | #include <unordered_set>
6 | #include <cmath>
7 | #include
8 |
9 | #define _VECTSWITCH 1
10 |
11 | struct SVeloData
12 | {
13 | SVeloData(int ix, int iy, double rv):x(ix),y(iy),v(rv) { }
14 | SVeloData():x(0),y(0),v(0) { }
15 | int x;
16 | int y;
17 | double v;
18 | };
19 |
20 | // Abstract base class for shortest path calculation
21 | // The implemented functions are common for all methods
22 |
23 | class CEikonal
24 | {
25 | public:
26 | CEikonal(void);
27 | ~CEikonal(void);
28 |
29 | SWorkImg<double> m_distance;
30 | SWorkImg<double> m_field;
31 |
32 | void InitImageQuant0(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue);
33 | void InitImageQuant0(SWorkImg<double> &img);
34 |
35 | virtual void InitImageQuant(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) = 0;
36 | virtual void InitImageQuant(SWorkImg<double> &img) = 0;
37 | virtual void DistanceCalculator() = 0;
38 |
39 | virtual void SetDataTerm(SWorkImg<double> *p1, SWorkImg<double> *p2) {}
40 | virtual void SetDataTerm(SWorkImg<double> *p) {}
41 | virtual void GetDataTerm(SWorkImg<double> **p1, SWorkImg<double> **p2) {}
42 | virtual void GetDataTerm(SWorkImg<double> **p) {}
43 |
44 | virtual int SetParam(int p1, int p2) { return 0; }
45 | virtual int SetParam(int p) { return 0; }
46 |
47 | std::vector<CVec2>& ResolvePath();
48 | int m_resolvready;
49 | void SetStartStop(const CVec2 &reference,const CVec2 &target);
50 | int GetProgress() {
51 | return 100*(m_dstart-m_dcurr)/m_dstart;
52 | }
53 |
54 | virtual void Clean() = 0;
55 |
56 | // Can be called after InitEnvironment(AllMethods) to clean (free memory of) temporal work image containers
57 | void ResetTempContainers() {
58 | m_temp[0].Clean();
59 | m_temp[1].Clean();
60 | m_aux[0].Clean();
61 | m_aux[1].Clean();
62 | }
63 | void SetBoundaries(int startX, int startY, int endX, int endY){
64 | mStartX = startX;
65 | mStartY = startY;
66 | mEndX = endX;
67 | mEndY = endY;
68 | GetMaxAuxGrad();
69 | }
70 | virtual void CalcImageQuant() = 0;
71 | protected:
72 | void InitEnvironment(int spacex, int spacey);
73 | void UpdateDistanceMap(double maxv);
74 | void GetMaxAuxGrad();
75 | virtual void GradientCorrection(CVec2 &dir, int x, int y) = 0;
76 |
77 | std::vector<SVeloData> m_velo;
78 | #if _VECTSWITCH
79 | std::vector m_auxset;
80 | #else
81 | std::unordered_set m_auxset;
82 | #endif
83 | std::vector m_boundary;
84 |
85 | double m_currentdistance;
86 | int m_spacex;
87 | int m_spacey;
88 |
89 | CVec2 m_reference;
90 | int m_xdisto;
91 | int m_ydisto;
92 | std::vector<CVec2> m_curpath;
93 |
94 | // for "progress"
95 | int m_dstart;
96 | int m_dcurr;
97 | /*CVec2 m_drift;*/
98 |
99 | SWorkImg<double> m_temp[2];
100 | SWorkImg<double> m_aux[2];
101 | double m_maxauxgrad;
102 | double m_minuplevel = 0.35f;
103 |
104 | int m_iDataPrepared;
105 | int mStartX, mStartY, mEndX, mEndY;
106 | };
107 |
108 | enum PrepStat { Prep_No = 0, Prep_Own, Prep_Ext };
109 | // Randers metric specific functions
110 |
111 | class CRanders :
112 | public CEikonal
113 | {
114 | public:
115 | CRanders();
116 | ~CRanders();
117 |
118 | void DistanceCalculator();
119 | void InitImageQuantGrad0(SWorkImg<double> &gradx, SWorkImg<double> &grady);
120 | void InitImageQuantGrad0(const SWorkImg<double> &gradx, const SWorkImg<double> &grady);
121 | void InitImageQuant(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
122 | if (m_iDataPrepared == Prep_No) {
123 | InitImageQuant0(red, green, blue); // new dataterm here
124 | m_iDataPrepared = Prep_Own;
125 | }
126 | }
127 | void InitImageQuantGrad(SWorkImg<double> &gradx, SWorkImg<double> &grady) {
128 | if (m_iDataPrepared == Prep_No) {
129 | InitImageQuantGrad0(gradx, grady); // new dataterm here
130 | m_iDataPrepared = Prep_Own;
131 | }
132 | }
133 | void InitImageQuant(SWorkImg<double> &img) {
134 | if (m_iDataPrepared == Prep_No) {
135 | InitImageQuant0(img); // new dataterm here
136 | m_iDataPrepared = Prep_Own;
137 | }
138 | }
139 |
140 | virtual void GetDataTerm(SWorkImg<double> **p1, SWorkImg<double> **p2) {
141 | *p1 = m_pTang[0];
142 | *p2 = m_pTang[1];
143 | }
144 | void SetDataTerm(SWorkImg<double> *p1, SWorkImg<double> *p2) {
145 | if (!p1 || !p2) return;
146 | Clean();
147 | m_pTang[0] = p1;
148 | m_pTang[1] = p2;
149 | InitEnvironment(p1->xs,p1->ys);
150 | m_iDataPrepared = Prep_Ext;
151 | }
152 |
153 | static const int m_expfacini = 8;
154 | // edge tracking parameter expfac: higher is stronger
155 | int SetParam(int expfac = m_expfacini) {
156 | if (m_expfac == expfac) return 0;
157 | m_expfac = expfac;
158 | return 1;
159 | }
160 | void Clean() {
161 | if (m_iDataPrepared == Prep_Own) {
162 | if (m_pTang[0]) delete m_pTang[0];
163 | if (m_pTang[1]) delete m_pTang[1];
164 | }
165 | m_pTang[0] = 0;
166 | m_pTang[1] = 0;
167 | m_iDataPrepared = Prep_No;
168 | }
169 | void CalcImageQuant();
170 | protected:
171 | SWorkImg<double> *m_pTang[2];
172 | int m_expfac;
173 |
174 | void GradientCorrection(CVec2 &dir, int x, int y) {
175 | SWorkImg<double> &tangx = *(m_pTang[0]);
176 | SWorkImg<double> &tangy = *(m_pTang[1]);
177 | double grad_magnitude = sqrt(tangx[y][x]*tangx[y][x]+tangy[y][x]*tangy[y][x]);
178 | double exp_mul = (1.0-exp(-grad_magnitude*m_expfac/m_maxauxgrad))/grad_magnitude;
179 |
180 | dir.x += tangx[y][x]*exp_mul; dir.y += tangy[y][x]*exp_mul;
181 | }
182 |
183 | };
184 |
185 | // Splitter metric specific functions
186 |
187 | class CSplitter :
188 | public CEikonal
189 | {
190 | public:
191 | CSplitter(void);
192 | ~CSplitter(void);
193 |
194 | void DistanceCalculator();
195 |
196 | void InitImageQuant(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
197 | if (m_iDataPrepared == Prep_No) {
198 | InitImageQuant0(red, green, blue); // new dataterm here
199 | m_iDataPrepared = Prep_Own;
200 | }
201 | }
202 | void InitImageQuant(SWorkImg<double> &img) {
203 | if (m_iDataPrepared == Prep_No) {
204 | InitImageQuant0(img); // new dataterm here
205 | m_iDataPrepared = Prep_Own;
206 | }
207 | }
208 |
209 | void GetDataTerm(SWorkImg<double> **p) {
210 | *p = m_pData;
211 | }
212 | void SetDataTerm(SWorkImg<double> *p) {
213 | if (!p) return;
214 | Clean();
215 | m_pData = p;
216 | InitEnvironment(p->xs,p->ys);
217 | m_iDataPrepared = Prep_Ext;
218 | }
219 |
220 | static const int m_expfacini = 3;
221 | static const int m_relweightpercentini = 70;
222 | // edge tracking parameter expfac: higher is stronger
223 | // tracking vs directed motion (transecting) parameter relweightpercent: higher means stronger transecting (split) power
224 | int SetParam(int expfac = m_expfacini, int relweightpercent = m_relweightpercentini) {
225 | if (m_expfac == expfac && abs(m_relweight-relweightpercent/100.0f) < 0.001f) return 0;
226 | m_expfac = expfac;
227 | m_relweight = relweightpercent/100.0f;
228 | return 1; // changed
229 | }
230 | void Clean() {
231 | if (m_iDataPrepared == Prep_Own) {
232 | if (m_pData) delete m_pData;
233 | }
234 | m_pData = 0;
235 | m_iDataPrepared = Prep_No;
236 | }
237 | void CalcImageQuant();
238 | protected:
239 | SWorkImg<double> *m_pData;
240 | int m_expfac;
241 | double m_relweight;
242 |
243 | void GradientCorrection(CVec2 &dir, int x, int y) {
244 | /**/
245 | double driftx = (double)(m_xdisto-x), drifty = (double)(m_ydisto-y);
246 | double dn = 1.0/sqrt(driftx*driftx+drifty*drifty+1e-11);
247 | driftx *= dn; drifty *= dn;
248 | dir.x += driftx; dir.y += drifty;
249 | /**/
250 | /*dir.x += m_drift.x; dir.y += m_drift.y;*/
251 | }
252 |
253 | };
254 |
255 |
256 | class CInhomog :
257 | public CEikonal
258 | {
259 | public:
260 | CInhomog(void);
261 | virtual ~CInhomog(void);
262 |
263 | void DistanceCalculator();
264 | void InitImageQuant(SWorkImg<double>& red, SWorkImg<double>& green, SWorkImg<double>& blue) {
265 | if (m_iDataPrepared == Prep_No) {
266 | InitEnvironment(red.xs, red.ys);
267 | m_aux[0] = red; m_aux[0] += green; m_aux[0] += blue; m_aux[0] *= 0.333f;
268 | CalcImageQuant();
269 | m_iDataPrepared = Prep_Own;
270 | }
271 | }
272 | void InitImageQuant(SWorkImg<double>& img) {
273 | if (m_iDataPrepared == Prep_No) {
274 | m_aux[0] = img;
275 | InitEnvironment(m_aux[0].xs, m_aux[0].ys);
276 | CalcImageQuant();
277 | m_iDataPrepared = Prep_Own;
278 | }
279 | }
280 |
281 | virtual void GetDataTerm(SWorkImg<double>** p) {
282 | *p = m_pData;
283 | }
284 | void SetDataTerm(SWorkImg<double>* p) {
285 | if (!p) return;
286 | Clean();
287 | m_pData = p;
288 | InitEnvironment(p->xs, p->ys);
289 | m_iDataPrepared = Prep_Ext;
290 | }
291 |
292 | static const int m_expfacini = 8;
293 | // edge tracking parameter expfac: higher is stronger
294 | int SetParam(int expfac = m_expfacini) {
295 | if (m_expfac == expfac) return 0;
296 | m_expfac = expfac;
297 | return 1;
298 | }
299 | void Clean() {
300 | if (m_iDataPrepared == Prep_Own) {
301 | if (m_pData) delete m_pData;
302 | }
303 | m_pData = 0;
304 | m_iDataPrepared = Prep_No;
305 | }
306 | void CalcImageQuant();
307 | protected:
308 | SWorkImg<double>* m_pData;
309 | int m_expfac;
310 |
311 | void GradientCorrection(CVec2& dir, int x, int y) { }
312 |
313 | };
314 |
315 | /////////////////////////////////////////////////////
316 | // Control structure (wrapper class for simple use)
317 | /////////////////////////////////////////////////////
318 |
319 | struct SControl
320 | {
321 | SControl() {
322 | m_pMethods[0] = &m_Randers;
323 | m_pMethods[1] = &m_Splitter;
324 | m_pMethods[2] = &m_Inhomog;
325 | m_pCurrentMethod = 0;
326 | m_iParaToUpdate = 0;
327 | m_pdats = m_pdatr[0] = m_pdatr[1] = 0;
328 | }
329 | ~SControl() {
330 | Clean();
331 | }
332 | void Clean() {
333 | if (m_pCurrentMethod)
334 | m_pCurrentMethod->Clean();
335 | }
336 |
337 | void CleanAll() {
338 | m_Randers.Clean();
339 | m_Splitter.Clean();
340 | m_Inhomog.Clean();
341 | }
342 |
343 | void SetBoundaries(int startX, int startY, int endX, int endY){
344 | m_Randers.SetBoundaries(startX,startY,endX, endY);
345 | m_Splitter.SetBoundaries(startX,startY,endX, endY);
346 | m_Inhomog.SetBoundaries(startX,startY,endX, endY);
347 | }
348 | // Prepare all data terms from color image (for sequential use)
349 | void InitEnvironmentAllMethods(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
350 | SetParAll();
351 | m_Randers.InitImageQuant(red,green,blue);
352 | m_Splitter.InitImageQuant(red,green,blue);
353 | m_Inhomog.InitImageQuant(red, green, blue);
354 | }
355 |
356 | void InitEnvironmentRanders(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
357 | m_Randers.InitImageQuant(red,green,blue);
358 | }
359 | void InitEnvironmentRandersGrad(SWorkImg<double> &gradx, SWorkImg<double> &grady) {
360 | m_Randers.InitImageQuantGrad(gradx, grady);
361 | }
362 | void InitEnvironmentSplitter(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
363 | m_Splitter.InitImageQuant(red,green,blue);
364 | }
365 | void InitEnvironmentInhomog(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
366 | m_Inhomog.InitImageQuant(red,green,blue);
367 | }
368 |
369 | void InitEnvironmentAllMethods(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue, SWorkImg<double> &gradx, SWorkImg<double> &grady) {
370 | SetParAll();
371 | m_Randers.InitImageQuantGrad(gradx, grady);
372 | // m_Splitter.InitImageQuant(red,green,blue);
373 | // m_Inhomog.InitImageQuant(red, green, blue);
374 | }
375 |
376 | // Prepare all data terms from grayscale image (for sequential use)
377 | void InitEnvironmentAllMethods(SWorkImg<double> &img) {
378 | SetParAll();
379 | m_Randers.InitImageQuant(img);
380 | m_Splitter.InitImageQuant(img);
381 | m_Inhomog.InitImageQuant(img);
382 | }
383 |
384 | void CalcImageQuantAllMethods(){
385 | m_Randers.CalcImageQuant();
386 | m_Splitter.CalcImageQuant();
387 | m_Inhomog.CalcImageQuant();
388 | }
389 | // Prepare data term from color image (parallel use)
390 | void InitEnvironment(SWorkImg<double> &red, SWorkImg<double> &green, SWorkImg<double> &blue) {
391 | SetMetDatPar();
392 | m_pCurrentMethod->InitImageQuant(red,green,blue);
393 | }
394 | // Prepare data term from grayscale image (parallel use)
395 | void InitEnvironment(SWorkImg<double> &img) {
396 | SetMetDatPar();
397 | m_pCurrentMethod->InitImageQuant(img);
398 | }
399 | // input: user-defined point set, method: user-defined method set
400 | bool DefineInputSet(const std::vector<CVec2> &input, const std::vector<int> &method) {
401 | bool bok = true;
402 | m_curri = 0;
403 | m_inputset = input;
404 | m_pEikonal.clear();
405 | auto ms = method.size();
406 | auto is = input.size();
407 | if (ms != is)
408 | for (int ii = 0; ii < is; ++ii) {
409 | m_pEikonal.push_back(m_pMethods[0]); // let it be the default
410 | bok = false;
411 | }
412 | else
413 | for (int ii = 0; ii < is; ++ii) {
414 | int methi = method[ii];
415 | if (methi >= m_nimplenented) {
416 | methi = 0; // let it be the default
417 | bok = false;
418 | }
419 | m_pEikonal.push_back(m_pMethods[methi]);
420 | }
421 |
422 | m_minpath.clear();
423 | return bok;
424 | }
425 | // Sets the next data-pairs (from the user input set) for the segment calculations
426 | int SetNextStartStop() {
427 | auto ninp = m_inputset.size();
428 | if (m_curri >= ninp) return 0; // no more points in the input set
429 |
430 | m_pCurrentMethod = m_pEikonal[m_curri]; // double setting
431 | CVec2 reference = m_inputset[m_curri];
432 | CVec2 target;
433 | if (++m_curri == ninp)
434 | target = m_inputset[0];
435 | else
436 | target = m_inputset[m_curri];
437 | m_pCurrentMethod->SetStartStop(reference,target);
438 |
439 | return 1;
440 | }
441 | // Main iteration
442 | void DistanceCalculator() {
443 | if (&m_Randers == m_pCurrentMethod) {
444 | m_Randers.DistanceCalculator();
445 | }
446 | else if (&m_Splitter == m_pCurrentMethod) {
447 | m_Splitter.DistanceCalculator();
448 | }
449 | else {
450 | m_Inhomog.DistanceCalculator();
451 | }
452 | m_resolvready = m_pCurrentMethod->m_resolvready;
453 | }
454 | // Method parameters
455 | void SetParam(int p) {
456 | m_rp = p;
457 | m_iParaToUpdate |= 1;
458 | }
459 | void SetParam(int p1, int p2) {
460 | m_sp1 = p1; m_sp2 = p2;
461 | m_iParaToUpdate |= 2;
462 | }
463 |
464 | // To attach existing data term
465 | void SetDataTerm(SWorkImg<double> *p) {
466 | m_pdats = p;
467 | }
468 | void SetDataTerm(SWorkImg<double> *p1, SWorkImg<double> *p2) {
469 | m_pdatr[0] = p1;
470 | m_pdatr[1] = p2;
471 | }
472 | // Queries
473 | // Retrieve existing data term
474 | void GetDataTerm(SWorkImg<double> **p) {
475 | if (m_pCurrentMethod)
476 | m_pCurrentMethod->GetDataTerm(p);
477 | }
478 | void GetDataTerm(SWorkImg<double> **p1, SWorkImg<double> **p2) {
479 | if (m_pCurrentMethod)
480 | m_pCurrentMethod->GetDataTerm(p1,p2);
481 | }
482 |
483 | SWorkImg<double> &GetField() {
484 | return m_pCurrentMethod->m_field;
485 | }
486 |
487 | int GetReady() {
488 | return m_resolvready;
489 | }
490 |
491 | int GetProgress() {
492 | return m_pCurrentMethod->GetProgress();
493 | }
494 | // Call after each segment is ready
495 | std::vector<CVec2> &ResolvePath() {
496 |
497 | std::vector<CVec2> curpath = m_pCurrentMethod->ResolvePath();
498 | m_minpath.insert(m_minpath.begin(),curpath.begin(),curpath.end());
499 |
500 | return m_minpath;
501 | }
502 | // Retrieve the result
503 | std::vector<CVec2> &GetMinPath() {
504 | return m_minpath;
505 | }
506 | void SetParAll() {
507 | if (m_iParaToUpdate&1) {
508 | m_iParaToUpdate &= ~1;
509 | if (m_Randers.SetParam(m_rp))
510 | m_Randers.Clean();
511 | if (m_Inhomog.SetParam(m_rp))
512 | m_Inhomog.Clean();
513 | }
514 | if (m_iParaToUpdate&2) {
515 | m_iParaToUpdate &= ~2;
516 | if (m_Splitter.SetParam(m_sp1, m_sp2))
517 | m_Splitter.Clean();
518 | }
519 | }
520 |
521 | private:
522 | void SetMetDatPar() {
523 | m_pCurrentMethod = m_pEikonal[m_curri];
524 | m_pCurrentMethod->SetDataTerm(m_pdats);
525 | m_pCurrentMethod->SetDataTerm(m_pdatr[0],m_pdatr[1]);
526 |
527 | if (m_iParaToUpdate&1) {
528 | m_iParaToUpdate &= ~1;
529 | if (m_pCurrentMethod->SetParam(m_rp))
530 | m_pCurrentMethod->Clean();
531 | }
532 | if (m_iParaToUpdate&2) {
533 | m_iParaToUpdate &= ~2;
534 | if (m_pCurrentMethod->SetParam(m_sp1, m_sp2))
535 | m_pCurrentMethod->Clean();
536 | }
537 | }
538 | static const int m_nimplenented = 3; // # of implemented methods
539 |
540 | std::vector<CVec2> m_minpath;
541 | int m_resolvready;
542 | CEikonal *m_pMethods[m_nimplenented];
543 | CEikonal *m_pCurrentMethod;
544 | CRanders m_Randers;
545 | CSplitter m_Splitter;
546 | CInhomog m_Inhomog;
547 |
548 | std::vector<CVec2> m_inputset;
549 | std::vector<CEikonal*> m_pEikonal;
550 | int m_curri;
551 | int m_iParaToUpdate;
552 | int m_rp;
553 | int m_sp1;
554 | int m_sp2;
555 | SWorkImg<double> *m_pdats;
556 | SWorkImg<double> *m_pdatr[2];
557 | };
--------------------------------------------------------------------------------
/src/napari_nd_annotator/minimal_contour/__init__.py:
--------------------------------------------------------------------------------
1 | from ._eikonal_wrapper import MinimalContourCalculator
2 | from .feature_extractor import FeatureExtractor
3 | from .feature_manager import FeatureManager
4 |
--------------------------------------------------------------------------------
/src/napari_nd_annotator/minimal_contour/_eikonal_wrapper.pyx:
--------------------------------------------------------------------------------
1 | # cython: boundscheck = False
2 | import time
3 |
4 | cimport cython
5 | import numpy as np
6 | cimport numpy as np
7 | cimport openmp
8 | from libc.string cimport memcpy
9 | from libcpp cimport bool
10 | from libcpp.vector cimport vector
11 | from cython.operator cimport preincrement as inc
12 | from cython.parallel cimport prange
13 | np.import_array()
14 |
15 | GRADIENT_BASED = 0
16 | INTENSITY_BASED = 2
17 | CUSTOM_FEATURE = 3
18 |
19 | cdef np.ndarray EMPTY = np.empty(0)
20 |
21 | cdef extern from "Eikonal.cpp":
22 | pass
23 |
24 | cdef extern from "Eikonal.h":
25 | cdef cppclass CVec2:
26 | CVec2() nogil
27 | CVec2(double, double) nogil
28 | double x, y
29 | cdef cppclass SWorkImg[T]:
30 | SWorkImg()
31 | void Set(int, int)
32 | T* operator[](int)
33 | int GetWidth()
34 | int GetHeight()
35 | cdef cppclass SControl:
36 | SControl() nogil
37 | int GetProgress() nogil
38 | int SetParam(int) nogil
39 | int SetParam(int, int) nogil
40 | bool DefineInputSet(const vector[CVec2]&, const vector[int]&) nogil
41 | void SetDataTerm(SWorkImg[double]*) nogil
42 | void SetDataTerm(SWorkImg[double]*, SWorkImg[double]*) nogil
43 | void GetDataTerm(SWorkImg[double]**) nogil
44 | void GetDataTerm(SWorkImg[double]**, SWorkImg[double]**) nogil
45 | void InitEnvironment(SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&) nogil
46 | void InitEnvironmentAllMethods(SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&) nogil
47 | void InitEnvironmentAllMethods(SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&) nogil
48 | void InitEnvironmentRanders(SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&) nogil
49 | void InitEnvironmentInhomog(SWorkImg[double]&, SWorkImg[double]&, SWorkImg[double]&) nogil
50 | void InitEnvironmentRandersGrad(SWorkImg[double]&, SWorkImg[double]&) nogil
51 | int SetNextStartStop() nogil
52 | void SetBoundaries(int, int, int, int) nogil
53 | void SetParAll() nogil
54 | void DistanceCalculator() nogil
55 | int GetReady() nogil
56 | vector[CVec2]& ResolvePath() nogil
57 | vector[CVec2]& GetMinPath() nogil
58 | void CleanAll() nogil
59 | void CalcImageQuantAllMethods() nogil
60 | # void SetUseLocalMaximum(bool) nogil
61 |
62 | cdef class MinimalContourCalculator:
63 | cdef vector[SControl*] eikonals
64 | cdef int start_x, start_y, end_x, end_y
65 | cdef int param
66 | cdef int method
67 | cdef vector[int] method_pair
68 | cdef vector[int] progresses
69 | cdef SWorkImg[double] ered, egreen, eblue, egradx, egrady
70 | method_initialized = [False,]*4
71 | def __cinit__(self, np.ndarray[np.float_t, ndim=3] image, int n_points):
72 | self.set_image(image, np.empty((0, 0)), np.empty((0, 0)))
73 | self.eikonals.reserve(n_points)
74 | self.progresses.resize(n_points)
75 | self.param = 5
76 | self.method = 0
77 | self.method_pair.push_back(self.method)
78 | self.method_pair.push_back(self.method)
79 | cdef SControl* control
80 | cdef int i
81 | for i in range(n_points):
82 | control = new SControl()
83 | control.SetParam(self.param)
84 | control.SetParam(0, 0)
85 | self.eikonals.push_back(control)
86 |
87 | cpdef set_use_local_maximum(self, bool use_local_maximum):
88 | cdef int i=0
89 | #for i in range(self.eikonals.size()):
90 | #self.eikonals[i].SetUseLocalMaximum(use_local_maximum)
91 |
92 | cpdef set_param(self, int param):
93 | cdef int i
94 | for i in range(self.eikonals.size()):
95 | self.eikonals[i].SetParam(param)
96 | self.eikonals[i].CleanAll()
97 | self.eikonals[i].CalcImageQuantAllMethods()
98 |
99 | cpdef set_method(self, int method=-1):
100 | if method == -1:
101 | method = self.method
102 | if method not in [GRADIENT_BASED, INTENSITY_BASED]:
103 | print("method should be one of GRADIENT_BASED(=0) or INTENSITY_BASED(=2)")
104 | return
105 | self.method = method
106 | self.method_pair[0] = self.method_pair[1] = method
107 | cdef int i
108 | if not self.method_initialized[method]:
109 | for i in range(self.eikonals.size()):
110 | if method == GRADIENT_BASED:
111 | self.eikonals[i].InitEnvironmentRandersGrad(self.egradx, self.egrady)
112 | else:
113 | self.eikonals[i].InitEnvironmentInhomog(self.ered, self.egreen, self.eblue)
114 | self.method_initialized[method] = True
115 |
116 | @cython.boundscheck(False)
117 | @cython.wraparound(False)
118 | cpdef set_boundaries(self, start_x, start_y, end_x, end_y):
119 | cdef int i
120 | for i in range(self.eikonals.size()):
121 | self.eikonals[i].SetBoundaries(start_x, start_y, end_x, end_y)
122 |
123 | @cython.boundscheck(False)
124 | @cython.wraparound(False)
125 | cpdef set_image(self, np.ndarray[np.float_t, ndim=3] image, np.ndarray[np.float_t, ndim=2] gradx, np.ndarray[np.float_t, ndim=2] grady):
126 | if image is None:
127 | return
128 | if image.shape[2] != 3:
129 | print("image should have 3 channels")
130 | return
131 | cdef int w = image.shape[1]
132 | cdef int h = image.shape[0]
133 | cdef np.ndarray[np.float_t, ndim=1] img_max = image.max(axis=(0, 1))  # per-channel (R, G, B) maxima used to fill the image border below
134 | if self.ered.GetWidth() != w or self.ered.GetHeight() != h:
135 | self.ered.Set(w, h)
136 | self.egreen.Set(w, h)
137 | self.eblue.Set(w, h)
138 | cdef double rgb_scale = 1. / 255.
139 | cdef int x, y
140 | cdef double* r_ptr
141 | cdef double* g_ptr
142 | cdef double* b_ptr
143 | for y in prange(h, nogil=True):
144 | r_ptr = self.ered[y]
145 | g_ptr = self.egreen[y]
146 | b_ptr = self.eblue[y]
147 | for x in range(w):
148 | if y == 0 or y == h - 1 or x == 0 or x == w - 1:
149 | r_ptr[x] = img_max[0]
150 | g_ptr[x] = img_max[1]
151 | b_ptr[x] = img_max[2]
152 | else:
153 | r_ptr[x] = image[y, x, 0]
154 | g_ptr[x] = image[y, x, 1]
155 | b_ptr[x] = image[y, x, 2]
156 | cdef int i
157 | for i in range(len(self.method_initialized)):
158 | self.method_initialized[i] = False
159 | cdef float max_x
160 | cdef float max_y
161 | if gradx.size and grady.size:
162 | if self.egradx.GetWidth() != w or self.egradx.GetHeight() != h:
163 | self.egradx.Set(w, h)
164 | self.egrady.Set(w, h)
165 | max_x = gradx.max()
166 | max_y = grady.max()
167 | for y in prange(h, nogil=True):
168 | r_ptr = self.egradx[y]
169 | g_ptr = self.egrady[y]
170 | for x in range(w):
171 | if y == 0 or y == h-1 or x == 0 or x == w-1:
172 | r_ptr[x] = max_x/2
173 | g_ptr[x] = max_y/2
174 | else:
175 | r_ptr[x] = gradx[y, x]
176 | g_ptr[x] = grady[y, x]
177 | for i in range(self.eikonals.size()):
178 | self.eikonals[i].CleanAll()
179 | self.eikonals[i].SetParAll()
180 | self.set_method()
181 | else:
182 | for i in range(self.eikonals.size()):
183 | self.eikonals[i].CleanAll()
184 | self.eikonals[i].SetParAll()
185 | self.set_method()
186 |
187 | # points are as [x, y]
188 | @cython.boundscheck(False)
189 | @cython.wraparound(False)
190 | cpdef run(
191 | self,
192 | np.ndarray[np.double_t, ndim=2] points,
193 | reverse_coordinates=False,
194 | close_path=True,
195 | return_segment_list=False
196 | ):
197 | if points.shape[1] != 2:
198 | print("Points should be 2D")
199 | return
200 |
201 | if points.shape[0] != self.eikonals.size():
202 | print("wrong number of points (%d to %d)" % (points.shape[0], self.eikonals.size()))
203 | return
204 |
205 | cdef int point_count = self.eikonals.size()
206 | cdef bool c_close_path = close_path
207 | if point_count < 2:
208 | print("At least two points should be provided")
209 | return
210 | cdef bool cancel = False
211 |
212 | cdef vector[CVec2] epoints
213 | epoints.reserve(point_count)
214 | cdef int idx = 0
215 | cdef int i, j
216 | cdef int X, Y
217 | if reverse_coordinates:
218 | X = 1
219 | Y = 0
220 | else:
221 | X = 0
222 | Y = 1
223 | for i in range(point_count):
224 | # TODO Check if point is out of bounds
225 | epoints.push_back(CVec2(points[i, X], points[i, Y]))
226 | inc(idx)
227 | for i in range(point_count):
228 | self.progresses[i] = 0
229 | cdef vector[CVec2] point_pair
230 | cdef CVec2 point1
231 | cdef CVec2 point2
232 | cdef int progress
233 | cdef SControl* eikonal
234 | cdef vector[vector[CVec2]] polys
235 | polys.resize(point_count)
236 | cdef int n_points = 0
237 | cdef int num_threads = min(point_count, openmp.omp_get_max_threads())
238 | for i in prange(point_count, nogil=True, num_threads=num_threads):
239 | if i == point_count -1:
240 | if not c_close_path:
241 | continue
242 | point1 = epoints[point_count - 1]
243 | point2 = epoints[0]
244 | else:
245 | point1 = epoints[i]
246 | point2 = epoints[i+1]
247 | point_pair = vector[CVec2](2)
248 | point_pair[0] = point1
249 | point_pair[1] = point2
250 |
251 | eikonal = self.eikonals[i]
252 | eikonal.DefineInputSet(point_pair, self.method_pair)
253 |
254 |
255 | eikonal.SetNextStartStop()
256 | progress = self.progresses[i]
257 | while True:
258 | if cancel:
259 | break
260 | eikonal.DistanceCalculator()
261 | progress = eikonal.GetProgress()
262 | if eikonal.GetReady() <= 0:
263 | break
264 |
265 | eikonal.ResolvePath()
266 | polys[i] = eikonal.GetMinPath()
267 | cdef np.ndarray[np.double_t, ndim=2] segment
268 | if return_segment_list:
269 | out_list = []
270 | for i in range(polys.size()):
271 | poly = polys[i]
272 | segment = np.empty((poly.size(), 2), np.float64)
273 | for j in prange(poly.size(), nogil=True):
274 | segment[j, X] = poly[j].x
275 | segment[j, Y] = poly[j].y
276 | out_list.append(segment)
277 | return out_list
278 | cdef vector[int] offsets
279 | offsets.resize(polys.size())  # resize (not reserve) so the per-index assignments below are valid
280 | offsets[0] = 0
281 | for i in range(polys.size()):
282 | n_points += polys[i].size()
283 | if i < 1:
284 | continue
285 | offsets[i] = offsets[i-1]+polys[i-1].size()
286 | cdef np.ndarray[np.double_t, ndim=2] out = np.zeros([n_points, 2], dtype=np.float64)
287 | cdef double[:, :] out_view = out
288 | cdef CVec2 p
289 | cdef int offset
290 | cdef int poly_size
291 | for i in prange(polys.size(), nogil=True, num_threads=num_threads):
292 | poly = polys[i]
293 | poly_size = poly.size()
294 | offset = offsets[i]
295 | for j in range(poly_size):
296 | p = poly[poly_size - 1 - j]
297 | out_view[offset+j, X] = p.x
298 | out_view[offset+j, Y] = p.y
299 | return out
300 |
--------------------------------------------------------------------------------
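A quick, hypothetical usage sketch of the wrapper above: only the constructor and method signatures come from _eikonal_wrapper.pyx, while the image, anchor points and parameter values are made-up placeholders, and the Cython/C++ extension must be built before it can be imported.

import numpy as np
from napari_nd_annotator.minimal_contour import MinimalContourCalculator
from napari_nd_annotator.minimal_contour._eikonal_wrapper import INTENSITY_BASED

# A float RGB image of shape (H, W, 3); the wrapper requires exactly 3 channels.
image = np.random.rand(256, 256, 3)

# Anchor points given as [x, y]; one SControl instance is allocated per point.
points = np.array([[50.0, 60.0], [180.0, 70.0], [120.0, 200.0]])

calc = MinimalContourCalculator(image, points.shape[0])
calc.set_param(5)                 # same default weight the constructor uses
calc.set_method(INTENSITY_BASED)  # GRADIENT_BASED would need gradients passed via set_image
# Presumably restricts the computation to a sub-window of the image:
calc.set_boundaries(0, 0, image.shape[1] - 1, image.shape[0] - 1)

# Returns an (N, 2) array of coordinates along the closed minimal-contour path.
contour = calc.run(points, reverse_coordinates=False, close_path=True)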
/src/napari_nd_annotator/minimal_contour/feature_extractor.py:
--------------------------------------------------------------------------------
1 | import napari
2 | import numpy as np
3 | from scipy.ndimage import gaussian_gradient_magnitude, gaussian_filter
4 | from skimage.filters import sobel, sobel_h, sobel_v
5 | from scipy.signal import convolve2d
6 | import cv2
7 |
8 | from qtpy.QtCore import QRunnable, QThreadPool, Signal, QObject, Slot
9 | import queue
10 | import threading
11 | import time
12 | import itertools
13 |
14 |
15 | class FeatureExtractor:
16 | def __init__(self, max_threads=None):
17 | self.queueLock = threading.Lock()
18 | self.queue = queue.Queue()
19 | self.n_done = 0
20 | self.pool = QThreadPool.globalInstance()
21 | self.n_threads = self.pool.maxThreadCount() if max_threads is None else max_threads
22 | self.done_mask = None
23 |
24 | def start_jobs(self, img, outs, current_slice, dims_not_displayed=None, rgb=None, f=None):
25 | self.start = time.time()
26 | ndim = img.ndim - (1 if rgb else 0)
27 | current_slice = tuple(map(lambda s: 0 if type(s) == slice else s, current_slice))
28 | idx_list = np.asarray(list(itertools.product(*[[-1] if i not in dims_not_displayed else range(img.shape[i]) for i in range(ndim)])))
29 | order = np.argsort(np.abs(idx_list-current_slice).sum(1))
30 | idx_list = idx_list[order]
31 | idx_list = list(map(lambda l: tuple(l[i] if i in dims_not_displayed else slice(None) for i in range(len(l))), idx_list))
32 | # viewer.dims.events.current_step.connect(on_current_step)
33 | self.init_runnables()
34 | self.done_mask = np.zeros([img.shape[i] for i in dims_not_displayed], bool)
35 | # Fill the queue
36 | self.queueLock.acquire()
37 | for idx in idx_list:
38 | self.queue.put(idx)
39 | self.queueLock.release()
40 |
41 | # Create new threads
42 | for runnable in self.runnables:
43 | runnable.data = img
44 | runnable.outs = outs
45 | runnable.rgb = rgb
46 | runnable.done_mask = self.done_mask
47 | runnable.dims_not_displayed = dims_not_displayed
48 | self.pool.start(runnable)
49 |
50 | def init_runnables(self):
51 | self.runnables = []
52 | self.n_done = 0
53 | for i in range(self.n_threads):
54 | runnable = self.FeatureExtractorTask(i, self.queue, self.queueLock)
55 | self.runnables.append(runnable)
56 |
57 | class FeatureExtractorTask(QRunnable):
58 | def __init__(self, threadID, queue, lock):
59 | super().__init__()
60 | self.threadID = threadID
61 | self.q = queue
62 | self.lock = lock
63 | self.data = None
64 | self.outs = None
65 | self.rgb = None
66 | self.done_mask = None
67 | self.dims_not_displayed = None
68 | self._signals = self.Signals()
69 | self.conv_filter_v = np.asarray([[0, -1, 1]]).astype(float)
70 | self.conv_filter_h = np.asarray([[0], [-1], [1]]).astype(float)
71 |
72 | @Slot()
73 | def run(self):
74 | if self.data is None or self.outs is None:
75 | return
76 | while True:
77 | self.lock.acquire()
78 | if not self.q.empty():
79 | idx = self.q.get()
80 | self.lock.release()
81 | if not self.rgb:
82 | # self.out[idx, ...] = gaussian_gradient_magnitude(self.data[idx].astype(float), 5)
83 | self.outs[0][idx] = sobel_v(self.data[idx].astype(float))
84 | self.outs[1][idx] = sobel_h(self.data[idx].astype(float))
85 | else:
86 | r, g, b = self.data[idx + (0,)].astype(float), self.data[idx + (1,)].astype(float), self.data[idx + (2,)].astype(float)
87 | channels_v = []
88 | channels_h = []
89 | if np.any(r):
90 | channels_v.append(gaussian_filter(r, 2, (0, 1)))
91 | channels_h.append(gaussian_filter(r, 2, (1, 0)))
92 | if np.any(g):
93 | channels_v.append(gaussian_filter(g, 2, (0, 1)))
94 | channels_h.append(gaussian_filter(g, 2, (1, 0)))
95 | if np.any(b):
96 | channels_v.append(gaussian_filter(b, 2, (0, 1)))
97 | channels_h.append(gaussian_filter(b, 2, (1, 0)))
98 | self.outs[0][idx] = sum(channels_v)
99 | self.outs[1][idx] = sum(channels_h)
100 |
101 | self.q.task_done()
102 | self.slice_done.emit(idx)
103 | self.done_mask[tuple(idx[i] for i in self.dims_not_displayed)] = True
104 | else:
105 | self.lock.release()
106 | break
107 | self.done.emit()
108 |
109 | @property
110 | def done(self):
111 | return self._signals.done
112 |
113 | @property
114 | def slice_done(self):
115 | return self._signals.slice_done
116 |
117 | class Signals(QObject):
118 | done = Signal()
119 | slice_done = Signal(object)
120 |
--------------------------------------------------------------------------------
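A hypothetical sketch of driving FeatureExtractor directly (normally FeatureManager does this). The start_jobs signature is taken from the file above; the array shapes, slice index and axis choices are illustrative assumptions, and the Qt thread pool is expected to run inside a Qt/napari session.

import numpy as np
from napari_nd_annotator.minimal_contour import FeatureExtractor

img = np.random.rand(10, 256, 256)               # a 3D grayscale stack (Z, Y, X)
outs = [np.zeros_like(img), np.zeros_like(img)]  # vertical / horizontal edge features
current_slice = (4, slice(None), slice(None))    # slice currently shown in the viewer
dims_not_displayed = (0,)                        # Z is the non-displayed axis

extractor = FeatureExtractor()
extractor.start_jobs(img, outs, current_slice, dims_not_displayed, rgb=False)

# Slices closest to current_slice are processed first;
# extractor.done_mask[z] turns True once slice z has been written to outs.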
/src/napari_nd_annotator/minimal_contour/feature_manager.py:
--------------------------------------------------------------------------------
1 | import os
2 | import sys
3 | import tempfile
4 | import atexit
5 | import warnings
6 | import traceback
7 |
8 | import numpy as np
9 | import random
10 | import string
11 | import shutil
12 | from .feature_extractor import FeatureExtractor
13 | from .._helper_functions import layer_dims_displayed, layer_dims_not_displayed, layer_ndisplay, layer_slice_indices
14 | from napari import Viewer, layers
15 | from typing import List, Union, Optional
16 | import glob
17 | TEMP_SUFFIX = "_nd_annotator"
18 |
19 |
20 | class FeatureManager:
21 | def __init__(self, viewer):
22 | self.layer = None
23 | self.dims_displayed = None
24 | self.memmaps: List[Optional[Union[np.ndarray, np.memmap]]] = [None, None]
25 | self.slices_calculated = dict()
26 | self.clean_tmp()
27 | self.temp_folder = tempfile.mkdtemp(suffix=TEMP_SUFFIX)
28 | # map layers to file prefix
29 | self.prefix_map = dict()
30 | self.viewer: Viewer = viewer
31 | self.feature_extractor = FeatureExtractor()
32 | atexit.register(self.clean)
33 |
34 | def get_features(self, layer: layers.Layer, block=True):
35 | if layer_ndisplay(layer) == 3:
36 | return None, None
37 | with warnings.catch_warnings():
38 | warnings.simplefilter("ignore")
39 | dims_displayed = tuple(layer_dims_displayed(layer))
40 | dims_not_displayed = tuple(layer_dims_not_displayed(layer))
41 | if layer != self.layer or dims_displayed != self.dims_displayed:
42 | self.init_file(layer, dims_displayed)
43 | with warnings.catch_warnings():
44 | warnings.simplefilter("ignore")
45 | slice_indices = layer_slice_indices(layer)
46 | idx = tuple(slice_indices[i] for i in dims_not_displayed)
47 | if any(type(id_) is int and (id_ < 0 or id_ >= layer.data.shape[dim]) for dim, id_ in zip(dims_not_displayed, idx)):
48 | return None, None
49 | # if not block and not self.slices_calculated[layer][dims_displayed][idx]:
50 | if not block and not self.feature_extractor.done_mask[idx]:
51 | raise ValueError("features not calculated for layer %s at %s" % (layer, idx))
52 | # while not self.slices_calculated[layer][dims_displayed][idx]:
53 | while not self.feature_extractor.done_mask[idx]:
54 | pass
55 | with warnings.catch_warnings():
56 | warnings.simplefilter("ignore")
57 | idx = slice_indices
58 | return self.memmaps[0][idx], self.memmaps[1][idx]
59 |
60 | def clear_memmap(self):
61 | while len(self.memmaps) > 0:
62 | try:
63 | del self.memmaps[0]
64 | except:
65 | traceback.print_exc(file=sys.stdout)
66 |
67 | def remove_features(self, layer):
68 | if self.layer == layer:
69 | self.layer = None
70 | self.clear_memmap()
71 | self.feature_extractor.done_mask = None
72 | if layer not in self.prefix_map:
73 | return
74 | filename = self.prefix_map[layer]
75 | if layer in self.slices_calculated:
76 | del self.slices_calculated[layer]
77 | files = glob.glob(os.path.join(self.temp_folder, self.generate_filename(filename)))
78 | for file in files:
79 | os.remove(file)
80 |
81 | def init_file(self, layer, dims_displayed):
82 | self.layer = layer
83 | self.dims_displayed = dims_displayed
84 | self.clear_memmap()
85 | if layer in self.prefix_map:
86 | filename = self.prefix_map[layer]
87 | else:
88 | filename = self.random_prefix()
89 | self.prefix_map[layer] = filename
90 | if layer not in self.slices_calculated:
91 | self.slices_calculated[layer] = dict()
92 | if dims_displayed not in self.slices_calculated[layer]:
93 | self.slices_calculated[layer][dims_displayed] = np.zeros([layer.data.shape[i] for i in self.viewer.dims.not_displayed], bool)
94 | path_v = self.generate_filename(filename, dims_displayed, "_v")
95 | path_h = self.generate_filename(filename, dims_displayed, "_h")
96 | if layer.rgb:
97 | shape = layer.data.shape[:-1]
98 | else:
99 | shape = layer.data.shape
100 | if not os.path.exists(path_v):
101 | self.memmaps.append(np.memmap(path_v, shape=shape, dtype=float, mode="w+"))
102 | self.memmaps.append(np.memmap(path_h, shape=shape, dtype=float, mode="w+"))
103 | self.start_feature_calculation(layer)
104 | else:
105 | self.memmaps.append(np.memmap(path_v, shape=shape, dtype=float))
106 | self.memmaps.append(np.memmap(path_h, shape=shape, dtype=float))
107 | self.feature_extractor.done_mask = np.ones([shape[i] for i in layer_dims_not_displayed(layer)], bool)
108 |
109 | def generate_filename(self, prefix, dims_displayed=None, suffix=''):
110 | if dims_displayed is None:
111 | dims_displayed = "*"
112 | return os.path.join(self.temp_folder, "tmp_ftrs_%s_%s%s.dat" % (prefix, "_".join(str(d) for d in dims_displayed), suffix))
113 |
114 | def start_feature_calculation(self, layer):
115 | with warnings.catch_warnings():
116 | warnings.simplefilter("ignore")
117 | slice_indices = layer_slice_indices(layer)
118 | dims_not_displayed = layer_dims_not_displayed(layer)
119 | self.feature_extractor.start_jobs(layer.data, self.memmaps, slice_indices, dims_not_displayed, layer.rgb)
120 |
121 | @staticmethod
122 | def random_prefix():
123 | return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(10))
124 |
125 | def clean(self):
126 | self.clear_memmap()
127 | if os.path.exists(self.temp_folder):
128 | shutil.rmtree(self.temp_folder)
129 |
130 | def clean_tmp(self):
131 | temp_dir = tempfile.gettempdir()
132 | temp_folders = glob.glob(os.path.join(temp_dir, "%s*%s" % (tempfile.gettempprefix(), TEMP_SUFFIX)))
133 | for fold in temp_folders:
134 | try:
135 | shutil.rmtree(fold)
136 | except PermissionError:
137 | print(f"Couldn't remove temp folder {fold}, skipping.")
138 |
--------------------------------------------------------------------------------
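A hypothetical end-to-end sketch of FeatureManager inside a napari session, based on the class above; the sample stack is an illustrative placeholder.

import numpy as np
import napari
from napari_nd_annotator.minimal_contour import FeatureManager

viewer = napari.Viewer()
layer = viewer.add_image(np.random.rand(10, 256, 256), name="stack")

manager = FeatureManager(viewer)
# Blocks until the currently displayed slice has been processed; the vertical and
# horizontal edge features of the whole stack are memory-mapped in a temp folder.
feat_v, feat_h = manager.get_features(layer, block=True)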
/src/napari_nd_annotator/napari.yaml:
--------------------------------------------------------------------------------
1 | name: napari-nD-annotator
2 | display_name: Annotation Toolbox
3 | contributions:
4 | commands:
5 | - id: napari-nD-annotator.annotator_widget
6 | python_name: napari_nd_annotator:AnnotatorWidget
7 | title: Annotation Toolbox
8 | - id: napari-nD-annotator.interpolation_widget
9 | python_name: napari_nd_annotator:InterpolationWidget
10 | title: Slice Interpolation
11 | - id: napari-nD-annotator.object_list_widget
12 | python_name: napari_nd_annotator:ListWidgetBB
13 | title: Object List
14 | widgets:
15 | - command: napari-nD-annotator.annotator_widget
16 | display_name: Annotation Toolbox
17 | - command: napari-nD-annotator.interpolation_widget
18 | display_name: Slice Interpolator
19 | - command: napari-nD-annotator.object_list_widget
20 | display_name: Object List
21 |
--------------------------------------------------------------------------------
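As a hypothetical smoke test of the manifest above, the declared widgets should be reachable by their display names through napari's plugin dock-widget API (an assumption about the napari API, not part of this repository):

import napari

viewer = napari.Viewer()
# Plugin name and widget display names come from napari.yaml above.
viewer.window.add_plugin_dock_widget("napari-nD-annotator", "Annotation Toolbox")
viewer.window.add_plugin_dock_widget("napari-nD-annotator", "Object List")
napari.run()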
/tox.ini:
--------------------------------------------------------------------------------
1 | # For more information about tox, see https://tox.readthedocs.io/en/latest/
2 | [tox]
3 | envlist = py{38,39,310}-{linux,macos,windows}
4 | isolated_build=true
5 |
6 | [gh-actions]
7 | python =
8 | 3.8: py38
9 | 3.9: py39
10 | 3.10: py310
11 |
12 | [gh-actions:env]
13 | PLATFORM =
14 | ubuntu-latest: linux
15 | macos-latest: macos
16 | windows-latest: windows
17 |
18 | [testenv]
19 | platform =
20 | macos: darwin
21 | linux: linux
22 | windows: win32
23 | passenv =
24 | CI
25 | GITHUB_ACTIONS
26 | DISPLAY XAUTHORITY
27 | NUMPY_EXPERIMENTAL_ARRAY_FUNCTION
28 | PYVISTA_OFF_SCREEN
29 | extras =
30 | testing
31 | commands = pytest -v --color=yes --cov=napari_nd_annotator --cov-report=xml
32 |
--------------------------------------------------------------------------------