├── .github
│   └── workflows
│       └── publish-to-test-pypi.yml
├── .gitignore
├── .readthedocs.yaml
├── CONTRIBUTING.md
├── LICENSE
├── README.md
├── docs
│   ├── Makefile
│   ├── make.bat
│   └── source
│       ├── _static
│       │   └── css
│       │       └── custom.css
│       ├── conf.py
│       ├── index.rst
│       └── tutorials
│           ├── 1_kernel1d_intro.ipynb
│           ├── 2_kernel2d_intro.ipynb
│           ├── 3_operators.ipynb
│           ├── 4_optimization_intro.ipynb
│           ├── 5_optimization_ac225.ipynb
│           ├── 6_nearestkernel_ac225.ipynb
│           ├── 7_pytomography_ac225recon.ipynb
│           ├── 8_pytomography_lu177_le_recon.ipynb
│           ├── 9_lu177_me_modeling_and_recon.ipynb
│           ├── paper_plots.ipynb
│           ├── sample_simind_script
│           │   ├── positions.txt
│           │   ├── run_multi_lu177.sh
│           │   └── simind128_shift.smc
│           └── tutorials.rst
├── figures
│   └── ac_recon_modeling.png
├── paper
│   ├── compile.sh
│   ├── fig1.png
│   ├── paper.bib
│   └── paper.md
├── pyproject.toml
├── src
│   └── spectpsftoolbox
│       ├── kernel1d.py
│       ├── kernel2d.py
│       ├── operator2d.py
│       ├── simind_io.py
│       └── utils.py
└── tests
    └── test_functionality.py
/.github/workflows/publish-to-test-pypi.yml:
--------------------------------------------------------------------------------
1 | name: Publish 🐍 to PyPI on Release
2 |
3 | on:
4 | release:
5 | types:
6 | - created # Trigger when a release is created (new or re-created)
7 |
8 | permissions:
9 | id-token: write
10 | contents: read
11 |
12 | jobs:
13 | build-n-publish:
14 | name: Build and publish Python 🐍 distributions 📦 to PyPI and TestPyPI
15 | runs-on: ubuntu-latest
16 | steps:
17 | - uses: actions/checkout@main
18 | - name: Set up Python 3.8
19 | uses: actions/setup-python@v3
20 | with:
21 | python-version: "3.8"
22 | - name: Install pypa/build
23 | run: >-
24 | python -m
25 | pip install
26 | build
27 | --user
28 | - name: Build a binary wheel and a source tarball
29 | run: >-
30 | python -m
31 | build
32 | --sdist
33 | --wheel
34 | --outdir dist/
35 | .
36 | - name: Publish distribution 📦 to PyPI
37 | if: startsWith(github.ref, 'refs/tags')
38 | uses: pypa/gh-action-pypi-publish@release/v1
39 | with:
40 | password: ${{ secrets.PYPI_API_TOKEN }}
41 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.code-workspace
2 | *.vscode
3 | *__pycache__*
4 | notebook_testing
5 | *docs/build*
6 | paper_psf
7 | *.pdf
8 | paper/jats
9 | build
10 | paper/figures
--------------------------------------------------------------------------------
/.readthedocs.yaml:
--------------------------------------------------------------------------------
1 | version: 2
2 |
3 | build:
4 | os: "ubuntu-20.04"
5 | tools:
6 | python: "3.11"
7 |
8 | sphinx:
9 | configuration: docs/source/conf.py
10 |
11 | python:
12 | install:
13 | - method: pip
14 | path: .
15 | extra_requirements:
16 | - doc
17 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing to SPECTPSFToolbox
2 |
3 | First off, thanks for your interest in contributing! There are many potential options for contributions to this library: most notably, any implementation of 2D LSI operations using operators that are more efficient than 2D convolutions. If you wish to contribute:
4 |
5 | **Bugs**
6 | 1. Open an issue.
7 | 2. Provide as much context as you can about what you're running into.
8 | 3. Provide project and platform versions.
9 |
10 | **New Features**
11 |
12 | The recommended method of contributing is as follows:
13 | 1. Create an issue on the [issues page](https://github.com/lukepolson/SPECTPSFToolbox/issues)
14 | 2. Fork the repository to your own account
15 | 3. Fix issue and push changes to your own fork on GitHub
16 | 4. Create a pull request from your fork (whatever branch you worked on) to the development branch in the main repository.
17 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) [2024] [Luke Polson]
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SPECT Point Spread Function Toolbox
2 |
3 | This toolbox provides functionality for developing and fitting PSF models to SPECT point source data; the developed PSF models can be loaded into [PyTomography](https://github.com/qurit/PyTomography) for general PSF modeling.
4 |
5 | ## Installation
6 |
7 | 1. Clone this repository and navigate to the directory you cloned into.
8 | 2. If using anaconda (recommended) then activate the `conda` environment you want to install the repository in.
9 | 3. Use the command `python -m pip install -e .` to install the current `main` branch.
10 |
11 | ## Usage
12 |
13 | See the [documentation](https://spectpsftoolbox.readthedocs.io/en/latest/) and in particular the associated [tutorials](https://spectpsftoolbox.readthedocs.io/en/latest/tutorials/tutorials.html) for demonstrations of how to use the library.
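
For a quick sense of the API, the sketch below condenses the operator tutorial (`3_operators.ipynb`): it builds a distance-dependent hexagonal collimator-bore kernel, wraps it in an operator, chains it with a Gaussian operator representing the intrinsic scintillator resolution, and applies the result to a stack of point-like sources. The numerical values are the tutorial's illustrative ones, not fitted camera settings.

```python
# Condensed from docs/source/tutorials/3_operators.ipynb; values are illustrative only.
import torch
from spectpsftoolbox.kernel2d import NGonKernel2D
from spectpsftoolbox.operator2d import GaussianOperator, Kernel2DOperator
from spectpsftoolbox.utils import get_kernel_meshgrid

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Hexagonal bore kernel whose size scales with source-detector distance `a`
sigma_fn = lambda a, bs: (bs[0] + a) / bs[0]
amplitude_fn = lambda a, bs: torch.ones_like(a)
ngon_kernel = NGonKernel2D(
    N_sides=6, Nx=255, collimator_width=0.254,
    amplitude_fn=amplitude_fn, sigma_fn=sigma_fn,
    amplitude_params=torch.tensor([1.], device=device),
    sigma_params=torch.tensor([2.405], device=device),  # collimator length (cm)
    rot=90,
)
collimator_operator = Kernel2DOperator(ngon_kernel)

# Gaussian operator modelling the intrinsic (scintillator) resolution
scint_operator = GaussianOperator(
    lambda a, bs: torch.ones_like(a),          # amplitude
    lambda a, bs: bs[0] * torch.ones_like(a),  # sigma
    torch.tensor([1.], device=device),
    torch.tensor([0.1614], device=device),
)
psf_operator = scint_operator * collimator_operator  # right-hand operator applied first

# Apply the chained PSF operator to a stack of planes, one per distance
Nx, dx = 255, 0.48
x = y = torch.arange(-(Nx - 1) / 2, (Nx + 1) / 2, 1).to(device) * dx
xv, yv = torch.meshgrid(x, y, indexing="xy")
xv_k, yv_k = get_kernel_meshgrid(xv, yv, 24)  # kernel grid: odd size, same spacing as xv
distances = torch.tensor([1., 25., 40., 55.], device=device)
source = torch.zeros((distances.shape[0], Nx, Nx), device=device)
source[:, 125:130, 125:130] = 1
output = psf_operator(source, xv_k, yv_k, distances, normalize=True)
```

When fitting to measured or Monte Carlo point-source data, the same parameter tensors would be created with `requires_grad=True` so they can be optimized (see the optimization tutorials).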
14 |
15 | ## Contributing
16 |
17 | If you wish to contribute, please follow the [contributing guidelines](CONTRIBUTING.md). Contributions might include fixing bugs highlighted in the [issues](https://github.com/PyTomography/SPECTPSFToolbox/issues), as well as adding new features, such as new operators that reduce the computation time of PSF modeling.
18 |
19 | ## License
20 |
21 | The package is distributed under the MIT license. More details can be found in [LICENSE](LICENSE).
22 |
23 | ## Testing
24 |
25 | There is an automated testing script that tests the functionality of the library in the [tests](/tests) folder. It requires the `pytest` package and is run using `pytest test_functionality.py`.
26 |
27 | ## Context
28 |
29 | Gamma cameras used in SPECT imaging have finite resolution: infinitesimal point sources of radioactivity show up as finite "point spread functions" (PSFs) on the camera. Sample PSFs from point sources at various distances from a camera can be seen on the left-hand side of the figure below.
30 |
31 | The PSF consists of three main components: (i) the geometric component (GC), which depends on the shape and spacing of the collimator bores, (ii) the septal penetration component (SPC), which results from photons that travel through the collimator material without being attenuated, and (iii) the septal scatter component (SSC), which consists of photons that scatter within the collimator material and subsequently get detected in the scintillator. When the thickness of the SPECT collimator is large and the diameter of the collimator bores is small relative to the energy of the detected radiation, the PSF is dominated by the GC and can be reasonably approximated using a distance-dependent Gaussian function. When the energy of the photons is large relative to the collimator parameters, the PSF contains significant contributions from the SPC and SSC and it can no longer be approximated using simple Gaussian functions. The tails and dim background present in the PSF plots in the figure below correspond to the SPC and SSC. For more information, see [chapter 16 of the greatest book of all time](https://www.wiley.com/en-in/Foundations+of+Image+Science-p-9780471153009).
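
As a rough point of reference, the geometric-only approximation mentioned above is commonly written as a Gaussian whose width grows approximately linearly with the source-to-collimator distance $d$; the coefficients $\sigma_0$ and $\alpha$ below are placeholders that depend on the collimator and are fit to point-source data:

$$
\mathrm{PSF}_{\mathrm{geo}}(r; d) \propto \exp\!\left(-\frac{r^2}{2\,\sigma(d)^2}\right), \qquad \sigma(d) \approx \sigma_0 + \alpha\, d
$$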
32 |
33 | The figure below shows axial slices of reconstructed Monte Carlo Ac225 SPECT data. The images highlighted as "PSF Model" correspond to the application of a PSF operator developed with this library to a point source. The images highlighted as "New" are obtainable via reconstruction with [PyTomography](https://github.com/qurit/PyTomography) using the PSF operators obtained from this library; they require comprehensive PSF modeling.
34 |
35 | 
36 |
37 |
38 |
39 |
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
1 | # Minimal makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line, and also
5 | # from the environment for the first two.
6 | SPHINXOPTS ?=
7 | SPHINXBUILD ?= sphinx-build
8 | SOURCEDIR = source
9 | BUILDDIR = build
10 |
11 | # Put it first so that "make" without argument is like "make help".
12 | help:
13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14 |
15 | .PHONY: help Makefile
16 |
17 | # Catch-all target: route all unknown targets to Sphinx using the new
18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
19 | %: Makefile
20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
21 |
--------------------------------------------------------------------------------
/docs/make.bat:
--------------------------------------------------------------------------------
1 | @ECHO OFF
2 |
3 | pushd %~dp0
4 |
5 | REM Command file for Sphinx documentation
6 |
7 | if "%SPHINXBUILD%" == "" (
8 | set SPHINXBUILD=sphinx-build
9 | )
10 | set SOURCEDIR=source
11 | set BUILDDIR=build
12 |
13 | if "%1" == "" goto help
14 |
15 | %SPHINXBUILD% >NUL 2>NUL
16 | if errorlevel 9009 (
17 | echo.
18 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
19 | echo.installed, then set the SPHINXBUILD environment variable to point
20 | echo.to the full path of the 'sphinx-build' executable. Alternatively you
21 | echo.may add the Sphinx directory to PATH.
22 | echo.
23 | echo.If you don't have Sphinx installed, grab it from
24 | echo.http://sphinx-doc.org/
25 | exit /b 1
26 | )
27 |
28 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
29 | goto end
30 |
31 | :help
32 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
33 |
34 | :end
35 | popd
36 |
--------------------------------------------------------------------------------
/docs/source/_static/css/custom.css:
--------------------------------------------------------------------------------
1 | html {
2 | --pst-font-size-base: 22px;
3 | }
--------------------------------------------------------------------------------
/docs/source/conf.py:
--------------------------------------------------------------------------------
1 | # Configuration file for the Sphinx documentation builder.
2 | #
3 | # This file only contains a selection of the most common options. For a full
4 | # list see the documentation:
5 | # https://www.sphinx-doc.org/en/master/usage/configuration.html
6 |
7 | # -- Path setup --------------------------------------------------------------
8 |
9 | # If extensions (or modules to document with autodoc) are in another directory,
10 | # add these directories to sys.path here. If the directory is relative to the
11 | # documentation root, use os.path.abspath to make it absolute, like shown here.
12 | #
13 | import os
14 | import sys
15 | import toml
16 | sys.path.insert(0, os.path.abspath('../../src'))
17 |
18 |
19 | # -- Project information -----------------------------------------------------
20 |
21 | project = 'SPECTPSFToolbox'
22 | copyright = '2024, Luke Polson'
23 | author = 'Luke Polson'
24 |
25 | # The full version, including alpha/beta/rc tags
26 | with open('../../pyproject.toml', 'r') as f:
27 | release = toml.load(f)['project']['version']
28 |
29 | # -- General configuration ---------------------------------------------------
30 |
31 | # Add any Sphinx extension module names here, as strings. They can be
32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 | # ones.
34 | extensions = [
35 | "myst_parser",
36 | "sphinx.ext.autosectionlabel",
37 | "sphinx.ext.autodoc",
38 | 'sphinx.ext.viewcode',
39 | "sphinx.ext.napoleon",
40 | "sphinx_design",
41 | "nbsphinx",
42 | "autoapi.extension",
43 | "IPython.sphinxext.ipython_console_highlighting"
44 | ]
45 |
46 | # Where to autogen API
47 | autoapi_dirs = ['../../src/spectpsftoolbox']
48 | autoapi_options = [ 'members', 'undoc-members', 'private-members', 'show-inheritance', 'show-module-summary', 'special-members', 'imported-members', 'recursive']
49 | autoapi_template_dir = "_templates/autoapi"
50 |
51 | # Add any paths that contain templates here, relative to this directory.
52 | templates_path = ['_templates']
53 |
54 | # List of patterns, relative to source directory, that match files and
55 | # directories to ignore when looking for source files.
56 | # This pattern also affects html_static_path and html_extra_path.
57 | exclude_patterns = []
58 |
59 |
60 | # -- Options for HTML output -------------------------------------------------
61 |
62 | # The theme to use for HTML and HTML Help pages. See the documentation for
63 | # a list of builtin themes.
64 | #
65 | html_theme = 'pydata_sphinx_theme'
66 | #html_logo = 'images/PT1.png'
67 |
68 | # Add any paths that contain custom static files (such as style sheets) here,
69 | # relative to this directory. They are copied after the builtin static files,
70 | # so a file named "default.css" will overwrite the builtin "default.css".
71 | html_static_path = ['_static']
72 | html_css_files = [
73 | 'css/custom.css',
74 | ]
75 |
76 | # typehints
77 | autodoc_typehints = "description"
78 | autodoc_inherit_docstrings=True
79 |
80 | # Add link to github
81 | html_theme_options = {
82 | "icon_links": [
83 | {
84 | "name": "GitHub",
85 | "url": "https://github.com/lukepolson/SPECTPSFToolbox",
86 | "icon": "fa-brands fa-github",
87 | "type": "fontawesome",
88 | },
89 | ],
90 | 'fixed_sidebar': True,
91 | 'sidebar_width': '220px',
92 |
93 | }
94 |
95 | linkcheck_anchors = False
--------------------------------------------------------------------------------
/docs/source/index.rst:
--------------------------------------------------------------------------------
1 | .. SPECTPSFToolbox documentation master file, created by
2 | sphinx-quickstart on Fri Aug 2 21:56:39 2024.
3 | You can adapt this file completely to your liking, but it should at least
4 | contain the root `toctree` directive.
5 |
6 | SPECTPSFToolbox Documentation
7 | ===========================================
8 |
9 | The ``SPECTPSFToolbox`` is a collection of Python functions and classes that can be used for modeling the point spread functions (PSFs) of gamma cameras. The PSF models developed by the toolbox can be saved and used in `PyTomography <https://github.com/qurit/PyTomography>`_ for SPECT image reconstruction.
10 |
11 | In traditional SPECT reconstruction, only the geometric component of the PSF is considered; this library provides a means for modeling the septal scatter and septal penetration components, permitting more accurate image reconstruction when these components are significant.
12 |
13 | Installation
14 | ++++++++++++++++++++
15 |
16 | 1. Clone `the repository <https://github.com/lukepolson/SPECTPSFToolbox>`_, and ``cd`` into the cloned directory in a terminal.
17 | 2. If using anaconda (recommended), then activate the ``conda`` environment you want to install the repository in.
18 | 3. Use the command ``python -m pip install -e .`` to install the repository locally.
19 |
20 | Resources
21 | ++++++++++++++++++++
22 |
23 | .. grid:: 1 3 3 3
24 | :gutter: 2
25 |
26 | .. grid-item-card:: Tutorials
27 | :link: tutorials/tutorials
28 | :link-type: doc
29 | :link-alt: Tutorials
30 | :text-align: center
31 |
32 | :material-outlined:`psychology;8em;sd-text-secondary`
33 |
34 | These tutorials show how to construct operators, fit them to real/Monte Carlo PSF data, and use them in image reconstruction.
35 |
36 | .. grid-item-card:: API Reference
37 | :link: autoapi/index
38 | :link-type: doc
39 | :link-alt: API
40 | :text-align: center
41 |
42 | :material-outlined:`computer;8em;sd-text-secondary`
43 |
44 | View the application programming interface of the library
45 |
46 | .. grid-item-card:: Get Help
47 | :text-align: center
48 |
49 | :material-outlined:`live_help;8em;sd-text-secondary`
50 |
51 | .. button-link:: https://github.com/lukepolson/SPECTPSFToolbox/issues
52 | :shadow:
53 | :expand:
54 | :color: warning
55 |
56 | **Report an issue**
57 |
58 | .. button-link:: https://pytomography.discourse.group/
59 | :shadow:
60 | :expand:
61 | :color: warning
62 |
63 | **Ask questions on PyTomography discourse**
64 |
65 | .. toctree::
66 | :maxdepth: 1
67 | :hidden:
68 |
69 | tutorials/tutorials
--------------------------------------------------------------------------------
/docs/source/tutorials/3_operators.ipynb:
--------------------------------------------------------------------------------
1 | {
2 | "cells": [
3 | {
4 | "cell_type": "markdown",
5 | "metadata": {},
6 | "source": [
7 | "# Tutorial 3: Operators"
8 | ]
9 | },
10 | {
11 | "cell_type": "code",
12 | "execution_count": 1,
13 | "metadata": {},
14 | "outputs": [],
15 | "source": [
16 | "import matplotlib.pyplot as plt\n",
17 | "import torch\n",
18 | "from spectpsftoolbox.operator2d import GaussianOperator, Kernel2DOperator\n",
19 | "from spectpsftoolbox.kernel2d import NGonKernel2D\n",
20 | "from spectpsftoolbox.utils import get_kernel_meshgrid\n",
21 | "device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")"
22 | ]
23 | },
24 | {
25 | "cell_type": "markdown",
26 | "metadata": {},
27 | "source": [
28 | "# Operator Intro"
29 | ]
30 | },
31 | {
32 | "cell_type": "markdown",
33 | "metadata": {},
34 | "source": [
35 | "Operators in `spectpsftoolbox` are used to convolve kernels with various inputs. Let's start by recreating the ngon kernel we made in tutorial 2"
36 | ]
37 | },
38 | {
39 | "cell_type": "code",
40 | "execution_count": 2,
41 | "metadata": {},
42 | "outputs": [],
43 | "source": [
44 | "collimator_length = 2.405 \n",
45 | "collimator_width = 0.254 #flat side to flat side\n",
46 | "sigma_fn = lambda a, bs: (bs[0]+a) / bs[0] \n",
47 | "sigma_params = torch.tensor([collimator_length], requires_grad=True, dtype=torch.float32, device=device)\n",
48 | "# Set amplitude to 1\n",
49 | "amplitude_fn = lambda a, bs: torch.ones_like(a)\n",
50 | "amplitude_params = torch.tensor([1.], requires_grad=True, dtype=torch.float32, device=device)\n",
51 | "\n",
52 | "ngon_kernel = NGonKernel2D(\n",
53 | " N_sides = 6, # sides of polygon\n",
54 | " Nx = 255, # resolution to make polygon before grid sampling\n",
55 | " collimator_width=collimator_width, # width of polygon\n",
56 | " amplitude_fn=amplitude_fn,\n",
57 | " sigma_fn=sigma_fn,\n",
58 | " amplitude_params=amplitude_params,\n",
59 | " sigma_params=sigma_params,\n",
60 | " rot=90\n",
61 | ")"
62 | ]
63 | },
64 | {
65 | "cell_type": "markdown",
66 | "metadata": {},
67 | "source": [
68 | "Note that this operator takes in `xv`, `yv`, and `a` and returns the correpsonding kernel:"
69 | ]
70 | },
71 | {
72 | "cell_type": "code",
73 | "execution_count": 3,
74 | "metadata": {},
75 | "outputs": [],
76 | "source": [
77 | "Nx0 = 255\n",
78 | "dx0 = 0.048\n",
79 | "x = y = torch.arange(-(Nx0-1)/2, (Nx0+1)/2, 1).to(device) * dx0\n",
80 | "xv, yv = torch.meshgrid(x, y, indexing='xy')\n",
81 | "distances = torch.tensor([1,5,10,15,20,25], dtype=torch.float32, device=device)\n",
82 | "kernel = ngon_kernel(xv, yv, distances, normalize=True).cpu().detach()"
83 | ]
84 | },
85 | {
86 | "cell_type": "markdown",
87 | "metadata": {},
88 | "source": [
89 | "Operators can be constructed using kernels. For example, we can create an operator using a 2D kernel as followS:"
90 | ]
91 | },
92 | {
93 | "cell_type": "code",
94 | "execution_count": 4,
95 | "metadata": {},
96 | "outputs": [],
97 | "source": [
98 | "ngon_operator = Kernel2DOperator(ngon_kernel)"
99 | ]
100 | },
101 | {
102 | "cell_type": "markdown",
103 | "metadata": {},
104 | "source": [
105 | "Operators apply convolution with the kernel to some input `input`. \n",
106 | "\n",
107 | "* We'll create an input of shape (12,255,255) corresponding to 12 different `a` positions. The input will consist of two squares in each plane. We'll see convolution with the `ngon` kernel in each plane"
108 | ]
109 | },
110 | {
111 | "cell_type": "code",
112 | "execution_count": 5,
113 | "metadata": {},
114 | "outputs": [],
115 | "source": [
116 | "Nx0 = 255\n",
117 | "dx0 = 0.48\n",
118 | "x = y = torch.arange(-(Nx0-1)/2, (Nx0+1)/2, 1).to(device) * dx0\n",
119 | "xv, yv = torch.meshgrid(x, y, indexing='xy')\n",
120 | "distances = torch.tensor([1,25,40,55.]).to(device)\n",
121 | "input = torch.zeros((distances.shape[0],Nx0,Nx0)).to(device)\n",
122 | "input[:,120:130,120:130] = 1\n",
123 | "input[:,120:130,170:180] = 1"
124 | ]
125 | },
126 | {
127 | "cell_type": "markdown",
128 | "metadata": {},
129 | "source": [
130 | "The meshgrid `xv` and `yv`above correspond to the dimensions of the input. Typically, the kernel need not be as large as the input. We can manually define the meshgrid of the kernel `xv_k` `yv_k` or use built in functions to obtain them if we know the spatial extent of the kernel. **The thing you need to enforce is (i) The dimensions of `xv_k` should be odd and (ii) that the spacing in `xv` and `xv_k` is the same**"
131 | ]
132 | },
133 | {
134 | "cell_type": "code",
135 | "execution_count": 6,
136 | "metadata": {},
137 | "outputs": [],
138 | "source": [
139 | "k_width = 24 #cm, need to manually determine this\n",
140 | "# Automatically makes (i) same spacing as xv and (ii) odd kernel size\n",
141 | "xv_k, yv_k = get_kernel_meshgrid(xv, yv, k_width)"
142 | ]
143 | },
144 | {
145 | "cell_type": "markdown",
146 | "metadata": {},
147 | "source": [
148 | "Now we can use our operator on the input. Again, note that `xv_k` and `yv_k` refer to the kernel used and not the input."
149 | ]
150 | },
151 | {
152 | "cell_type": "code",
153 | "execution_count": 7,
154 | "metadata": {},
155 | "outputs": [],
156 | "source": [
157 | "ngon_output = ngon_operator(input, xv_k, yv_k, distances, normalize=True)"
158 | ]
159 | },
160 | {
161 | "cell_type": "code",
162 | "execution_count": 8,
163 | "metadata": {},
164 | "outputs": [
165 | {
166 | "data": {
167 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAA90AAAIfCAYAAABtmb+NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8fJSN1AAAACXBIWXMAAA9hAAAPYQGoP6dpAABArElEQVR4nO39ebDleUHf/78+n7Pc/d7eZ9+CMzBDgKGionEZzYAIAwYxGrcSBhWjUMRQlkYUl1SxWJGKfA0W+jNsUmAEg0aRSSTiVqCioBFZAgoMzEzv3Xc/6+fz++PcvtPNzPT0DP3pjcej6tLd537OOe97mPc95/n5vM/nFHVd1wEAAADOuvJ8DwAAAAAuVaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCFfUtH9pje9KUVRbH9NT0/n8ssvzzd+4zfmVa96VQ4ePPiA6/zcz/1ciqJ4RPezsbGRn/u5n8sf//Efn6WRn39vectb8p3f+Z157GMfm7Isc/311z/i2/jlX/7lPO5xj8vU1FRuuOGG/PzP/3yGw+HZHyyXLHP40bnvvvvy0z/90/nqr/7q7NmzJ4uLi/kX/+Jf5Nd+7dcyHo9P2faP//iPT3mMT/76i7/4izO6v4MHD+b5z39+9uzZk9nZ2Xz1V391/s//+T9N/GhcAszrs+PAgQPZvXt3iqLIO9/5zgd8f21tLT/6oz+aK6+8MtPT07n11lvzm7/5m2d8++Y1j4R5/ehdf/31D/oc/O/+3b87ZTvP1xeX9vkewPnwxje+MY973OMyHA5z8ODB/Pmf/3l+4Rd+Ib/4i7+Y//7f/3ue+tSnbm/7Az/wA/nmb/7mR3T7Gxsb+fmf//kkyTd8wzeczaGfN7/xG7+R/fv35yu/8itTVdUjjuVXvOIVefnLX57/+B//Y77pm74pH/zgB/PTP/3Tueeee/Jrv/ZrDY2aS5U5/Mj8zd/8Td7ylrfk+77v+/Lyl788nU4n73nPe/LDP/zD+Yu/+Iu84Q1veMB1XvnKV+Ybv/EbT7nsn//zf/6w99Xv93P77bfn+PHjee1rX5t9+/blda97Xb75m785733ve3PbbbedtZ+LS4t5/cV50YtelOnp6Yf8/nOf+9x88IMfzKtf/ercdNNNedvb3pbv+q7vSlVV+e7v/u7T3rZ5zaNlXj86X/M1X5Nf/MVfPOWyyy677EG39Xx9kai/hLzxjW+sk9Qf/OAHH/C9z372s/U111xTLyws1Pv37/+i7ufQoUN1kvpnf/Znv6jbuZCMx+Ptv99xxx31ddddd8bXPXz4cD09PV2/8IUvPOXyV7ziFXVRFPU//MM/nK1hcokzhx+do0eP1oPB4AGXv+hFL6qT1Hfffff2Ze973/vqJPU73vGOR3Vfr3vd6+ok9fvf//7ty4bDYX3LLbfUX/mVX/mobpNLm3n9xXvnO99Zz8/P129+85sfdP6++93vrpPUb3vb2065/GlPe1p95ZVX1qPR6LS3b17zSJnXj951111X33HHHQ+7nefri8uX1PLy07n22mvzmte8Jqurq/nVX/3V7csfbKnLH/3RH+UbvuEbsnv37szMzOTaa6/Nt33bt2VjYyOf+cxnsnfv3iTJz//8z28v83j+85+fJPnUpz6VO++8MzfeeGNmZ2dz1VVX5dnPfnb+/u///pT7OLFk5O1vf3t+6qd+KldeeWUWFxfz1Kc+NZ/4xCceMP677rort99+e5aWljI7O5ubb745r3rVq07Z5q//+q/zLd/yLdm1a1emp6fz5Cc/Ob/1W791Ro9PWT76/1Tuuuuu9Hq93Hnnnadcfuedd6au6/zO7/zOw97GPffckxe+8IW55ppr0u12c+WVV+bf/Jt/kwMHDiS5//F629velp/4iZ/IFVdckfn5+Tz72c/OgQMHsrq6mhe+8IXZs2dP9uzZkzvvvDNra2uP+mfiwmMOP7SdO3em0+k84PKv/MqvTJJ8/vOff9jbOFPvete78tjHPjZf/dVfvX1Zu93O937v9+av/uqvcs899zzsbTzcY/H85z8/8/Pz+fjHP56nP/3pmZubyxVXXJFXv/rVSZK/+Iu/yNd+7ddmbm4uN910U9785jeftZ+Pc8u8fnhHjx7Ni170orziFa/Itdde+6DbvOtd78r8/Hy+/du//ZTL77zzztx77735y7/8y9Peh3nN2WReXxjM63NLdJ/kmc98ZlqtVv70T//0Ibf5zGc+kzvuuCPdbjdveMMbctddd+XVr3515ubmMhgMcsUVV+Suu+5Kknz/939/PvCBD+QDH/hAXv7ylydJ7r333uzevTuvfvWrc9ddd+V1r3td2u12nvKUpzzoxH7Zy16Wz372s/n1X//1/Nqv/Vo++clP5tnPfvYp78P8b//tv+WZz3xmqqrK61//+vze7/1eXvKSl5zyQvp973tfvuZrvibHjx/P61//+vzu7/5ubr311vzbf/tv86Y3veksPYIP7iMf+UiS5AlPeMIpl19xxRXZs2fP9vcfyj333JOv+IqvyLve9a689KUvzXve85780i/9UpaWlnLs2LFTtn3Zy16WgwcP5k1velNe85rX5I//+I/zXd/1Xfm2b/u2LC0t5e1vf3t+/Md/PL/xG7+Rl73sZWf3B+W8M4cfmT/6oz9Ku93OTTfd9IDvvehFL0q73c7i4mKe/vSn58///M/P6DY/8pGP5IlPfOIDLj9x2T/8wz+c9vpn8lgkyXA4zHOf+9zccccd+d3f/d084xnPyE/+5E/mZS97WZ73vOflBS94wfYLiuc///n5m7/5mzMaPxce8/r0XvKSl+SGG27Ii1/84ofc5iMf+UhuvvnmtNunvqvwxLx8uOdh85qzzbw+vT/90z/NwsJCOp1ObrnllrzmNa95wDlYTvB8fZE434faz6XTLXU54bLLLqtvvvnm7X//7M/+bH3yw/TOd76zTlL/7d/+7UPexiNZ6jIajerBYFDfeOON9X/4D/9h+/ITS0ae+cxnnrL9b/3Wb9VJ6g984AN1Xdf16upqvbi4WH/t135tXVXVQ97P4x73uPrJT35yPRwOT7n8Wc96Vn3FFVecsnz84TzS5eU/+IM/WE9NTT3o92666ab6m77pm057/Re84AV1p9OpP/rRjz7kNicer2c/+9mnXP6jP/qjdZL6JS95ySmXP+c5z6l37dp1hj8BFwpz+OzM4bqu6//1v/5XXZblKWOu67r+0Ic+VP/7f//v63e96131n/7pn9ZveMMb6ptvvrlutVr1XXfd9bC32+l06h/6oR96wOXvf//7H3R568nO9LF43vOeVyepf/u3f3v7suFwWO/du7dOUn/oQx/avvzIkSN1q9WqX/rSlz7s2Dk/zOtHP69///d/v+50OvXf//3fnzK+L1xueuONN9ZPf/rTH3D9e++9t05Sv/KVrzzt/ZjXPFLm9aOf1z/yIz9Sv+ENb6j/5E/+pP6d3/md+nu+53vqJPX3fu/3nrKd5+uLiyPdX6Cu69N
+/9Zbb023280LX/jCvPnNb84//dM/PaLbH41GeeUrX5lbbrkl3W437XY73W43n/zkJ/Oxj33sAdt/y7d8yyn/PrH36bOf/WyS5P3vf39WVlbyIz/yIw95xsdPfepT+fjHP57v+Z7v2R7Dia9nPvOZue+++x50j9/ZdLqzUT7cmSrf85735Bu/8Rtz8803P+z9POtZzzrl3yeuc8cddzzg8qNHj1pifgkyhx/ehz70oXzHd3xHvuqrvuoBS+Ke/OQn55d+6ZfynOc8J1/3dV+XO++8M+9///tzxRVX5Md//MfP6PYf7Xw/k8fi5Nt55jOfuf3vdrudL/uyL8sVV1yRJz/5yduX79q1K/v27dt+vLk4mdcPtLy8nB/6oR/KT/zET5zRSZO+mOfhL+b65jUPxbx+cK973ety55135uu//uvzr//1v85b3/rWvPjFL85b3/rWfPjDH97ezvP1xUV0n2R9fT1HjhzJlVde+ZDbPOYxj8l73/ve7Nu3Ly960YvymMc8Jo95zGPy2te+9ozu46UvfWle/vKX5znPeU5+7/d+L3/5l3+ZD37wg3nSk56Uzc3NB2y/e/fuU/49NTWVJNvbHjp0KEly9dVXP+R9nnjf84/92I+l0+mc8vUjP/IjSZLDhw+f0fgfjd27d6fX62VjY+MB3zt69Gh27dp12usfOnTotD/fyb7wtrrd7mkv7/V6Z3S7XBzM4Yf34Q9/OE972tNy44035g/+4A+2x3M6O3bsyLOe9az83//7fx/0ZzzZ7t27c+TIkQdcfvTo0SQPnIsnO5PH4oTZ2dkHnKm52+0+6O13u11z/SJmXj+4n/qpn0qn08mLX/ziHD9+PMePH9/ekbyxsZHjx49vR80XMy+/2Oub1zwY8/qR+d7v/d4kediPAvN8feH6kvzIsIfy7ne/O+Px+GE/cuDrvu7r8nVf93UZj8f567/+6/zyL/9yfvRHfzSXXXZZvvM7v/O0133rW9+a7/u+78srX/nKUy4/fPhwduzY8YjHfOIEEqc7EdKePXuSJD/5kz+Z5z73uQ+6zWMf+9hHfN9n6sR7uf/+7/8+T3nKU7Yv379/fw4fPvywe+j37t17Vk/0xKXLHD69D3/4w3nqU5+a6667Lv/7f//vLC0tnfE4T7x4f7g92k94whMecJKaJNuXnW6+n8ljwZce8/rBfeQjH8lnPvOZXH755Q/43vOe97wkybFjx7Jjx4484QlPyNvf/vaMRqNT3td9JvMyMa85+8zrR+bEc/CZnNjY8/WFyZHuLXfffXd+7Md+LEtLS/mhH/qhM7pOq9XKU57ylLzuda9LMlmymTxwz9jJiqJ4wJGld7/73Wd0hsAH8y//5b/M0tJSXv/61z/kMp3HPvaxufHGG/N3f/d3+fIv//IH/VpYWHhU938mvvmbvznT09MPOHnEm970phRFkec85zmnvf4znvGMvO9972t8CTwXN3P49HP4b//2b/PUpz41V199df7wD/8wO3fuPOMxHjt2LL//+7+fW2+99bSfA5wk3/qt35qPf/zjp5wNeTQa5a1vfWue8pSnnPaoxpk8FnxpMa8fel7/0i/9Ut73vved8vVf/st/STI5C/T73ve+zM/PJ5nMy7W1tfz2b//2Kbfx5je/OVdeeeUpO8QfjHnN2WReP/LX3G95y1uSJF/1VV912u08X1+4viSPdH/kIx/Zfn/FwYMH82d/9md54xvfmFarlXe9613be28ezOtf//r80R/9Ue64445ce+216fV6ecMb3pAkeepTn5okWVhYyHXXXZff/d3fze23355du3Zlz549uf766/OsZz0rb3rTm/K4xz0uT3ziE/M3f/M3+c//+T+f8fLpLzQ/P5/XvOY1+YEf+IE89alPzQ/+4A/msssuy6c+9an83d/9Xf7rf/2vSZJf/dVfzTOe8Yw8/elPz/Of//xcddVVOXr0aD72sY/lQx/6UN7xjnec9n4++tGP5qMf/WiSyRHqjY2NvPOd70yS3HLLLbnllluSJH/yJ3+S22+/PT/zMz+Tn/mZn0kyWZ7y0z/903n5y1+eXbt25Zu+6ZvywQ9+MD/3cz+XH/iBH9i+7kP5T//pP+U973lPvv7rvz4ve9nL8oQnPCHHjx/PXXfdlZe+9KV53OMe96geOy5e5vAjm8Of+MQntn+2V7ziFfnkJz+ZT37yk9vff8xjHrP9mH33d393rr322nz5l3959uzZk09+8pN5zWtekwMHDjxgx9n3f//3581vfnP+8R//Mdddd12S5AUveEFe97rX5du//dvz6le/Ovv27cuv/Mqv5BOf+ETe+973npXHgkuTef3I5vWtt976kN97/OMff8oRxGc84xl52tOelh/+4R/OyspKvuzLvixvf/vbc9ddd+Wtb31rWq3W9rbmNWeTef3I5vXb3va2/I//8T9yxx135Lrrrsvx48fzjne8I7/5m7+Z5z//+XnSk560va3n64vMeTh523lz4kyKJ7663W69b9+++rbbbqtf+cpX1gcPHnzAdb7wTIof+MAH6m/91m+tr7vuunpqaqrevXt3fdttt9X/83/+z1Ou9973vrd+8pOfXE9NTdVJ6uc973l1Xdf1sWPH6u///u+v9+3bV8/OztZf+7VfW//Zn/1Zfdttt9W33Xbb9vUf6gykn/70p+sk9Rvf+MZTLv+DP/iD+rbbbqvn5ubq2dnZ+pZbbql/4Rd+4ZRt/u7v/q7+ju/4jnrfvn11p9OpL7/88vpf/at/Vb/+9a9/2MfuxOPwYF8nnzHyxLgf7CySr33ta+ubbrqp7na79bXXXlv/7M/+bD0YDB72vuu6rj/3uc/VL3jBC+rLL7+87nQ69ZVXXll/x3d8R33gwIHTPl4PdfbMEz/PoUOHzuj+uTCYw49uDn/h4/aFXyeP5VWvelV966231ktLS3Wr1ar37t1bf+u3fmv9V3/1Vw+43RNnJf30pz99yuX79++vv+/7vq/etWtXPT09XX/VV31V/Yd/+IenHeMjeSye97zn1XNzcw+43m233VY//vGPf8Dl1113XX3HHXec8f1zbpnXj/65+Qs91PjqenK24Ze85CX15ZdfXne73fqJT3xi/fa3v/0B25nXnA3m9aOb1x/4wAfq22+/ffv17uzsbP0VX/EV9a/8yq884Kznnq8vLkVdWxMAAAAATfCebgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGhI+0w3LIoz3hQ4x+p69KiuZ17DhevRzuvE3IYLmedsuPQ83Lx2pBsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAK
AhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABrSPt8DoDnjt7w4efwNp93mK27/WD50/NfP0Yge3nN3/WTe8Yd7TrvN+N0fTvdn3nqORgQXFvMaLk3mNlx6zGtOEN2XssffkOpJTzrtJjP1p8/RYM7MXKd82DG3P33PORoNXIDMa7g0mdtw6TGv2WJ5OQAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADTE53Rfwp5y+8czXX/2tNv85frbztFozsw7jr0xn969etptlotj52g0cOExr+HSZG7Dpce85oSiruv6jDYs9DlcqOp69KiuZ17DhevRzuvE3IYLmedsuPQ83Ly2vBwAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCFFXdf1+R4EAAAAXIoc6QYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaEj7TDcsijPeFDjH6nr0qK5nXsOF69HO68TchguZ52y49DzcvHakGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABrSPt8DAAAAuLQUW3+WJ11SpE590jbV1p8nX3a+XazjvrCJbgAAgLOiSFKmSJEU5eTvW3+e+G5Spa6rJFVSV1tBez5D9v7Qnoy7vTXmyWX3b/FQ4xbfD0d0AwAAfFFOju12iqKd8pQ/7z9yXNdV6lSp6lHqk74mITvOuY3YIkVaSVGm2Brv9pi3dxjcP+4TY67qUbL1Jb4fnugGADivJi/WJ38rHnSL838k7At9wZGxB3H/clQvxrnUnQjXrWAtu2mX02mVU2mV3ZRlJ2XRTrk1z8f1KFU9zLgaZFz1M64GqapBqnowidh6nObnzdZOgqK1FdndlGU3rbJ7/
7iLTlrFJBerrZ0EVTUZ92i8mXE9GfdkzMNzMOaLl+gGADgvHuQIU8rJv7denNepto6CVVtHws73UaUHP5r3hWPO1pG883sED86F+4O7VU6nVU6n055LpzWXbmsunWI23WI2rXTSSjtVqowzzCj99Ku1DKq1jMabGY7XMxr3Mq56SQap66S5eX5ycHfTKqfTbk1vj7nbmk+nmE07U2mlkzJlxhllnGEG9UYG1VoG1XqGo/VJfFe9jKts/X4yzx+M6Aa46Dz4UaX7XYhPdhfjmKFJrVNe8J58dKlVTqUoyu0lqOOqv/XCdpBx1bv/yNJ5W4Y6iYvJ0byZtFpbY95ailrX1fbRu3HVz2h7zIOtI3jjczhmaNKpc6Ldms1UZylTrcXMtHZmLjszVy1mpp7eSu4yk91PVTYzyEaxlrX28Wy2jmVzdDyDYjXDcZnROGkuvE8N7nZrNp3WXKY6S5luLWam2Jn5ekdmq/nMpJtWypQpMkqVfkbpFZvZaK1lvXUsm+Wx9EbHMxhtHcGveinqCO8HIboBLgqnLl0rTjqqlJw4GjY5uUm9vTTtQnhh29peurZ9FO8kJ8Z9/xE8T9R8KWilKDrbL9K77YVMtRczVS5mqpjPVGa358ow/fSzll61nP54NYPhaobj9YyrjaQanMM5M/kdVJTdtMr7X6RPtRYzVc5nKvPpZCpJMs4ww/TTq1cyqNbSGx3PcLSe4XgtVdXbCokL4fcTfDG23hbyBcE919mb+WJfdlZ7syNzWWx3M9MuM90q0iqTuk6GVZ3N0WyWh3NZrpayXCxmuT2dja2l3JMdV0nqXjPHuU8K7m57IdPtpcy192Wx3pud1a4sldNZ6LQz0y7SbU12mo+rpDeusz6ay9poKUezmOOt2ZRFZ/t933WqVFUmP6Sl5qcQ3QAXvK0X6K3ZdFsL6bRn0ylnUhad7S3qVBlWmxmNNzMYrWY03khd9VJndJ7GfOIF+v1R0WnNpb11NOyEUd3fXlY3HG9kPN5IXffjiZpLV5GiaG0tQZ3PdGdX5tp7s1Rcnh3VzixkJtNlK51ickSsNx5nvR7keLGSlc6hrJb7szk8mv5wK1vPSXifGtzT3Z2Zae/KfGtfdtSXZUe1kLmym245OSI2rKtsjkdZyWaWW0ezUh7MenkoGSTDZCu8vSDn4ndip3KrnE63vZC5zt7sKK7Knmpf9rZns3uqnZ1TRRY7yWy7TquY9Gi/KrM2So72yxztdXJk0E2r6KRol6lO7IxOlXE9OssBuzWXt9533mnNZbq9lPn25dlVX5m99Y7smZrK7ulWdnST+XYy3Zrc77BKNsdlVoZljvXbmet1MjWeSqtsJ+1M3u+99VaYyU50Tia6AS5oRYqik057MbNT+7LYvjJLuSyL9VKm091eqjbMOGvt9RzrHMzK+N6s9e9Lf3g8dbWZ83NEqUzZms1Ue2fmpy7PUuuq7Kj2Zi4z6aS1vVStVwyy0lrOsdyb1eG92egfymgc4c0l6sQL3skRpunOrix1rs7e6prsy1L2znSzc6rMQifplsm4TjZGnSwPpnKoN5NDo4Xsb012XNV1lf6oyriukrrpOT45mleW05nqLGW2syc7y2tyeXVlLuvMZtd0Kzu6RWZaSVkkgypZHXZzrD+dg725HMx8DnU6k5Ut/SrD7fd4n6+dgnA2TD4GrCy7abdmMt3ekfliX/ZU+3Jldz5XzLZy+Uyyb2qcnZ1xFjqjdIoqVZKNUTvHh+0c6rayv9NKd2M6RW93Uifj1jBVZ3KitboepapOrGQ7S7bOIdHeeu/5bGtPdtVX5vLsyhWz3Vw2U+bKmTq7u+Ps7A4z3RqnTDKsy6wO2zk6aOVAt5WZdiet9flkdHnG5Sjjdj9VNQnv+3cWWNFygugGuIAVaaXVms3s1L7s7dyU66sbctXMTPbNlJlvJ1OtZFQlm+PkSH8h923szmfK3bl3autF+XB0Ho4obe0oaM1ncfqaXFk8LtcXe3PFQju7p+9/Yd4fJyvD5ODmjtzT25vPdOdzOMl6b5TReBhP1lySismL9G57IXPtvdlbXZNrWrtyzXwn18zVuWxqnB2dUaZb44zrIqujdg7227lno5PZ9fmUvatTt6qMOv2Mq/5J7+9uLmCLFCmKdjqt2Uy3l7JUXpUrq6ty7cxcrp0vc8V0lb1Twyy0xymLOr1xK8eH7dzXa2VxYzrdtd2pqjqj9mTM43qw9aK8iJ1rXJyK7XlxIl5nWjuzs9qbve3ZXDHbyrVzda6bHebq2c3sntvI7Mwgne44dVVkc7OTY+sz2b8+m5nWVFpFmaqeyrC3I4NyM8PWRkatzYyqXk7E/dl5TtzaUVBs7Sho7chCsTe7q6Xsm+nmqrky18+Oc81sP5fPbmTn/GampkcpyjrDQSvrG1M5vD6TxY2ZdMtOknaGa7PpVbvTL9cybE9W3FXFid9L5vgJovsSNn7Li5PH33Dabb7i9o/lQ8d//RyN6OE9d9dP5h1/uOe024zf/eF0f+at52hEcJ5tvdBdbF+Z66sb8s93zOVxi1Wun93MnuleZjqjDKsyK/1uPr8xk/+31k3n6OUZFJvpd1YyHK9lfM4DtkyrnM50d1f2Fjfkps5l+ec7W3nswjBXzvSy2O2nVdZZH3RypD+VT69P5WMr86mX/1l6nZX0h8sZVxtb70uHS8n9R5im2otZKi7P5cWOXDPfyeMWq3zZfC9XL6xl59JGujOjVMMy62tTuW95PkvtubTLdobVXDYHl2WztZxhez2jqpdxPWgwYIvtHQWd1uSo2O7qslwxNZsbFsrcND/Kly2s5bKltczND1K0q/TXOzm2MpvPrc5ntjWdqu5ksLorG8VK+q3lDIrVVEX7Ajr3BDwKRZmyaKdVTmWqtZC57MzOYj57p9u5Yia5bnaYGxdXcsW+lcxdMUp7dyfFdDv1qMriai9L925mfv8gZbGUcT2d3rjM5mg6a6MdWS+X0m+tpjVaT1WcOA/CWRjy1o6C1oml5eViFqsd2dWZyr6ZMlfNVLl+rpd/tut4dl+5ke7lRcqFToqySLUxytLRlSzdu5npQ0tJ5jOoutkcdbK+vpi1ckd65XIG5erkbOZFeQ5W4Vw8RPel7PE3pHrSk067yUz96XM0mDMz1ykfdsztT99zjkYD59uJo0tzWcpluWpmJjcvVvny3cu5/pojmbu6TrnQTj2oMjw4zlWfmc/UwT1ZHXZz4Ni+HGnvyEZxKOOsn+tRpyy7mW7tyL56b25YaOVJS/088bLD2XP9ejr72inaRcbLo2x8vszuz+1OlaUc689m/+DyrLQ/n8HoeOp6cE7HDedCceJFermYHdXO7JmZHOH+svlebt53JLsfs5nONdMp5uaTUZXFg5tZ+KfNdO6u0quWsjxoZWW4kGPFzmy0jqRfLGf8BScoPPvKtLaOjM0WO7Irc7l8ppVrZ8e5ecdKrr/uaGb/WZnW3pmkLFKvDrL0+WOZ/8d+6uzO+ngmK4NujvR2Z6W1P61yKsNx02OGJk2OGG8HbDGbuWoxC+1OlrpF9k6Nc/XsZq7Yt5KF
G6u0r9+RYs9iMjuVjMZpLW+ktfN4yvZaBqNW1kftHB92c6xfZnE0m+UsZqOcSb9sp6jaKTI4C7vUJidEK4oyZdFJp5zJbJYyn+ksdlvZPZVcPj3M1Yur2XPNeqZvmk559c5kaS4pi5Qb/bQOLqc1t5q6Xs7GcLJE/ki/zEKvm9lqMavlbNqt6QzH6zm7R+gvfqIb4AJWFGXarZks1kvZM13mutnNXH/V0Sx+xXTKG/YlOxeSwTCtzx/OFVOHs7I5nX9a35mdyzOZqhdSFu0UKc754q6yaGeqnM9SMZXLp+s8Zmkllz1+I90n7Uuu3JO0W2kdW037k/tzw/hoDvam8/+mpjPXW0yrnEoajwg49yYLUsu0ym6mivksZCa7pspcNjXOtYur2f2YzXRv3Z3iustSL8ylGAzT3n8kC1P7c9Xmcg5szuTeqekc6HQyO1xMu5iazPGiPGtHwh503EWZsmynU85kvl7KUqeTPdPJ1TP9XLV3OXO3dNK6+Ypk367U7VbK5dVM7T6QPTmWqzemc1+vm3umWlnszaZbzKfV6qYctVOfh99NcLYUJx3p7hazmamnM9cps9hJdnVH2T23MTnCff2OFDdcnnrfnmRmOhmPUxxbTtkuM7VxMLuPr2fX+mwWO53Md8pMl+1M1TNpFVOTM4OnTIry7Kxm2Xo/d1m20y6n0q1nMlt2Mt8psqNTZc/UILt2bmTq2k7KG/YmV+9LvXNpcv/rGynmZtJOMr+6nN3Lm9nVm85ip5u5TpmZ3vTkJK/l5GzmTf9eutiIboALWJEyraKT6XSz2En2TPcye3WV8jGXpb7x+tS7diX9XsqZ6XSOb2TPP65nx5EdmWm30h5NbX+Mxzkfd1Gmm9nMdcrs6FTZtWM9nevnkxuvSXXVlUm3m+Lw4ZRVlbl7Ppfd9/Sz0JnOTKa3IyLeC8Yl6MSR7k6mMl22MtdJdnRGWVrcOsJ9zd7U112demkp9WCQYnoq5epm5u8+nF0H+lnqTGWmVWRmMJN2OXXKpxg048R7QDtpF9OZqqcy1ymz0K6za6qfucuHaV17eerrrkx9+WVJu50sL6eo6nQObWTXZ9ezc3kx8+1Wpst2uplNWbQnL+LholZux2UrnXTSSrcsMt2qM98eZ3ZmMFlSvmcx9Z5dqfftTWZmkvEo6XRSbPbS2r2Sqfn1zHeHmW9VmW610i3LdMaTs5mXZTNzpSzaKdNJp+6k2yoz3UpmW3XmOsNMLY1S7l5M9u5IvW9v6h07krJM1ier5orVjbR3rWV+tp/Z5XGmW0m3LNJJO61MdhJ84ceDIroBLnhFyrRTplMmM51RWkvtZGk+9a5dqXfsTIaD1KtrKZZmMjW9kqmyTqfMeX/SK1KmXRSZbo3TnRml2Lkr9c6l1Lt3J612UlUpdh5Oa6HMdHucbpm0U6b0ZM2l6qQXz6100inKtIvcP0cWFlIvLUyCe3EpGQ5SrK+nWJhJa67IdHucdpl0yiKttLbOhH5iCec5GH7KtNJKq0i6ZZ1ua5zWXJEszCRLi6mXliZzO0mxMJdyoZvu1Ga6ZZVOmbSKImVdpiha52S8cC6UW5FZZvI53J0y6RRVOt1xiun2ZEn57HQyM5N6ZnYS3b1+ipnpZLaT1lSdTjlOq5h8pFi7KFLUZWM7zU+8NijTSpkyrbLYut/JnC47SWa6qWemU58Yc1lOFqfPribTnRTTrbTaVTplvTXu4sStbe+IsGLtVB4NgItAlTpVnQyrMvWgSgbDpN+bvCjv9yb/HowyroqM6qS6AA4Q16lSJxnXRepRmXowSgbDFL3JuNPvJ4NR6nGdcVVkXGfrGnCJOuljf+rJp/Fuz5FqWCaDcYrBMBkMJnN7MEiGw2Q83p4ndT1Z/1Fncltn9aOEzsCJ30WT+VqkHiUZjE4a72Ay/vEo9XCcalxMfgdk8glCVc7teOFcqusT/50n1bhIPaqS0XjypDweTXY2j0aTv49Hyaja2r7YfrtFVSd1UW3P8ebHPJnTk/suJoMfj1OMxsmJsVbV1njHk49MGdWpq2L7tcbkd5Ln79NxpBvgAlanSlUP0y9GWR8lK/1uBvvHaX3+cMqZ6dSra0m/n+LzBzK6bz0rq7NZH5XpjUcZpX9exz7OML1xndVRmdWVqSzeu5LW5+5LRuMU3U6Ko8dS33M4g/11lgfdbI6TYSaf73nuP+YMzo0Tc3qUYXrjcTZGnayO2llfm8rioc209x9JMT2VYn09GQ5T7D+U+sBKekfKrA472RgX6Y3HGWSYcT1MnXFyDl6c13WVKsMMt+b1xrjISr+b/uGkc3A5xb0HkqpO0WolK6vJ/iMZH+plbX02a6NWNkZJvx5nXAxTVcNTdkDAxWny+dnjepRxhhlmnGFVp1+V2Ri10+t1Uq320lreSHFsOel0kl4/GY8m/z62mnq5n+FaK5vDdjbHRQbjZFhXGWWcOuPJc+FZniuTHeLVZMzFMMOqzqBKeuMim8N2BmtlppZ7KY6vpFiYm1ypLCerbo4tp15ez3h1lH5/Nr1xK71xkWFVZbR1m3V94nPFzfGTiW6AC1hdVxnVg2wUGzk2WMznN2Zy1d3zuWLmcDrHN1LsmE36w0lwfzT5/MpCDvaLrI2GGRYbqermPrv3dKpqlH69lpXhMAd60/nc8cUs/sOBLIw+m/KKw0mnner4Rgb/tJEDn1vMPZvTOdavs1asZlz148maS1ZdpapH6Wct6/Ugy4OpHOy3c9/yfBb+cTML3f0pVzdTLM0mw1HqAyvpf2Ithw4s5b7N6RzpJ6vDcTbKtYxGG6mqUcNHuycvnqt6mGG1mY1yLWvDHTnS72R/bzpX7p9P9+Mr6VZ3pzh8PGm1Ui9vZHz3ctY+Vea+lYUcGrSyPKizVvfTz1qqerT1wt+ONS5u9+9E66efYTZHddZGyfKwnWPrM9mxf3PrLOVlis3eZEn5eJQsr6X+/JEM7+ll5fhcjg+6WR0VWR/V6VXjDIrNjMb9yfzOifA+S/OlrlJVo4zqXoZFP71qnM1RO6ujMkf7U1k9Np2ZezbTWTicIkmxvDp5T/dmLzl4LNW9K+kdKHJ0fTbLw1bWR8nmqMpmehnV/VT18Jwdpb+YiG6AC1adpMpwtJ5jnYO5b2N3/t9ad/KxYL2p7P7HjUzPHs9oVGZldTafW1nMP6zM5HNrVQ4Vx9Mbr6SqBuf8hW2dOuN6kP54OYfK5XxufSofWZ5L/Y+X56rDq1mYX0/ZqtPb6OTw6p58amU+n1xr5d6NXpaLQxmNe+d8ySycC3XqVPUoo/FmetVyjhcrOdKbzT0b7Sy159K5u8qV6ytZuPtwWnNF6mGd3rEyhw4s5VPHl3L3ZjsHN6scHW9mvTyWYbW5tWOt2flS11XG1SCD8VrWW8dzdLw7h3rtfHajnYWjO1J/rMjuI2uZ2rWaolVktFZn9eBUPn9kR/5pbTb3bBQ50htmtVjOoFrLaLyZ+jztEISzY3I0t6pHGVeD9Ku1bBRrWR3N5/iglUODVvavz2b+3kH2lWuZ2jiY1u6VZLYzWVK+3M/wnl6Of3Yq9y4
v5ECvm2ODIqvDcdbqfnrF2tb8Hp7F58M6qavtHQXDajMb7ZWsVbuyMuzkyKCVg/1udi4vZPozwyxmOe3lXoql6aQsko1hxoc20/tclcP7F7J/Y2Z7h9rqaJResZF+vZpxNZjsWHO0+xSi+xL2lNs/nun6s6fd5i/X33aORnNm3nHsjfn07tXTbrNcHDtHo4Hzr65HGY7XszK+N58pd6dz9PKsDrv5p/Vd2dHZmemyyrgusjoqc6hf5O61Kv+4uZqDxafTGxxLVQ9y7p/0qlRVL5vDYzk489l8cn0myUIO9uezZ2Uu860qraJOrypzfFjkvo0in1kb5u7cl7XR/snne3pBziWpSn3iRfp4NSudQzkwWsj0+nzaZTu9aikHNmey61A/0+1xxlWR1WEn921O5+7Ndj6zVudAb5Aj5eFsjI9kONqYzPGm345Rj1LVgwxHG1lvH86RYkfmNrrpllNJprM6aueylfnMd4cpizq9UStH+lO5Z7ObuzfKfH59nAPj1RwvD6Q/Xsm4GnhBziXgxHzuZ1CtZa19PMeqpSz22ltn659Ku1zKYNTK7uPrmV5cT2uqTjVOhmutrByfy73LC/ns+mw+v9nK4V6dY4PJzqlevZxRtbk1V0Znbed5nXr7d9BovJleayUrxWqO9qcy3ykz126lW84mSS7bXMvC/s1053opymTcL7Kx3MnR44v53Op8PrfZzf7NIkf64yzXG1kvj2cwWj9pp5r5fTLRfQn76+P/v/M9hEesN7g3fz74/873MOCCUdfjjMYbWevfl3unygyKzRw4ti87l2cy026lVRRJkt54lLXRMIeK4zlYfDrLg7szGC2nrgY59++NrlNXg/RHyznW/3TqbpW1jevyuY0dWWh10m0VSYqM6zqbo1GOVZs5WN6XI9Wnsz44mHG1sfU+VbgE1aOMq14Gw9WslvuzvzWVsnd1htVclget3Ds1naXOVLplnXFdZH1c5Gg/ObhZ5b5eP/flYI7X96Y3Op7heP0crGapt1+oj6rNbA6P5lj33rTSSdb2pDfu5ki/k53dTubadcrU6VVFVoZFjvSTAxuj3Ddcz8Hy3qyPDmYwWt3aUWDHGhe5ehLdo3Evw/F61ssjWS4Wc7jfTbecTlmUGdYzWR62s2d9NvOHhumU41SZvHf6aH8qB/qd3LvZyr0bdfZvDnOkXs1yeSi90UpG482TdpyfvYCt61GqapDRuJf+eDkr7UOZqWYytdFKq2inqjvpjedztN/NjuVBZjqjlKkzGLeyOuzkcL+b/b1O7tksct9GlUODzRwrj2S9OpzheP3+HQW15/GTiW6AC9o4ddVLf3g8dV2l31nJkfaOTNULk8/h3voQinGGGRRr6Y1X0h8upz9aPq/xWmec8XgtG4NkXA2y0Tmc+1pL6VQzaVWdrW2qjOp++llNb3A8vcGxDMdrqap+nESNS9MkYKtqkMF4NeWwnSJl6laVzcFlOT6cz1K7k9l2mU45OZNxbzzOymCcY9VmjpSHc7y+N+ujQ+kPlzOuelvx2vQRpSp1Nci46GUwWs1aeSBpT37vrG/szpH+dBY6rUy3yhQpMqzqbIzGWR4Nc6ReydFyf1bH+7M5PJbheOOkHQXmOReryVy+fxXIevrlSpbbB9IqOil6uzOup7I5KnN80M2Obmd7lVedIpvjIqujIkf6RQ736hzsDXNovJaj5YFsjI9kMFrdeqvV6CyfSK1K6iJVPcio2kx/vJr18nCOlFMpx0WyvpBR1c76qJXDg8kOwOlycv/DusjGuMzxwdaOwN44B3v9HC6OZKU+mP54dXtHwf1vHzHHTxDdABe4OqPU1Wb6w1GG47VsFIdSFu2TPgsz2+8tq6rB5AmvGmwF9/l6wqtT18OMRyvZrHrpj5ZTFu3tcZ8Y8/a468HkhXg9TBzl5pJWTV6oV730h8uTS+phNlvLOVbszOxoMTPDmbTSSp0qgwyzUa5lvTyWjfGR9EbHJy/Iq43U9bmK1xM7C3oZJslgsgpn2J4sKT1S7ch0bzZTmUqSjDPOZrGZjWIlGzmWjdGR9IfLk6Pc52xHATRtErCTVSCTHVIbRTtFu0zqZNjbkfXRVI4PWpnrlJlpTT7jvk4yGCfrozorg3GODydHuI+WB7JaHZysYhmtb8fr2Z3j9+8sGFe9DEft9Ip2inYrKZNqXKe3PpfVYSdHumXm2mU6W5/RPa6TzXGyOqiyMhzn2KiXI8XRHMt9kx0Fw9WMqpN3FJjjJxPdABeFyUeHjMfDVNl4yK3qrZOvXRh7l+utHQbjVFU/RYqH3GriQhk3NKme7BCrBhkn6Q2rjKt++q3VbLSOpFPOpFV0UqYz+VifepjRaCPDajPD0cbW+zx7qare1vLNc7WTajz5POGt8K62ltX2Wsez2ppPq5iaHOFLmSrDjKp+htVmRtVmhqP1jE6M+bzvEISz5aSj3dUgw/H6/TuVW1UG5WbWRjtyfDST2bKTblmmvfWWsH5VpVdNTpq2WixntTya9erw9k61yRwfbM3xsx2vW0e7q0FG2bh/zO1xRmU/m/XerPUXMjvoZrZsp1NOxjyq6wyqKhvVMCtZz2p5LGv1kWyMJzvVtt/uYo4/KNENcNGok4wvwqexi3Xc0JSTwruutl6wb6RfnFgR0klRlJPVIBmnqkanrAjJ9tGvc70qZCu8xxtbJ2PqpRx10yuOpSzbKdLaHndVD09afTOaHJXfDgi/DbhUVKnrJBlktDUdT3zUZ7+1mvVyKTNZzFQ9k864m6LeektYMU6vWE+vWEuvXs5gtJb+eHWyg2q8cdJ5D5qYL5PfP0WdraXxa/efib3VT7+1ltVyMdOZz1Q1k9a4NblWUWWYQTbL9fSzll61nP54NYPh6tZ7uXsnrb7hCxV1XZ/RI1MU+hwuVI/2o1fMa7hwfTEfqWRuX0yKJGWKorX1Z7l1+Yk/q/vP9F2f+Gzr8x2uW2NOkRRl7h/3/WNOsj1usX0qz9mXmvvncFF0U5bdtMvptFszabdmtlav3L8SJMkDVoOMxpMzlZ84wj3Zsdb00eIiRVpJ0U5ZdtMqp9MuZ9JubY29mEq7nEqZ+8/DMq6HGVUbGdWTs5+Pxpv3r2LZPnnal+Zcf7h5LbrhEuAJHC49ovtLzYO//eJUF+IL2Yt13OeP5+xL0ck7oiYRWxTttIpuynKyeqUs7///r66rrRUsw63PtT5xxu9zHa4n7zBopyy2xl12t8f8wPOwDLc/i7vaOlP5+Vt9c+F4uHlt9gIAnHcXa5herOOGs+nE26iKpK5TjUdJ0U5V9FJUk08pyPYqlmytWqm2QnZrGfnW20nuv71zNe6t1TR1lXExSlJmXG2drPVBxl1tjXdyndEFsvrmwie6AQAAvminxnddJ0UGk2+dHK/JSW8XSc5vtNZb/ztO6nGScium86Bjnmx7IrTvvz6nJ7oBAADOmvuXWtcn3oJRP9jS6wspWE+M5cSJT4uHGPPJ23KmRDcAAEAjLtZAvVjHfWEqH34TAAAA4NEQ3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAA
AA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0JCiruv6fA8CAAAALkWOdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ/7/kOTOlrvbOOIAAAAASUVORK5CYII=",
168 | "text/plain": [
169 | ""
170 | ]
171 | },
172 | "metadata": {},
173 | "output_type": "display_data"
174 | }
175 | ],
176 | "source": [
177 | "fig, ax = plt.subplots(2,4,figsize=(10,6))\n",
178 | "for i in range(4):\n",
179 | " ax[0,i].imshow(input[i].cpu().detach().numpy(), cmap='magma')\n",
180 | " ax[1,i].imshow(ngon_output[i].cpu().detach().numpy(), cmap='magma')\n",
181 | " ax[0,i].set_title(f\"Distance {distances[i]} cm\")\n",
182 | "[a.axis('off') for a in ax.ravel()]\n",
183 | "fig.tight_layout()"
184 | ]
185 | },
186 | {
187 | "cell_type": "markdown",
188 | "metadata": {},
189 | "source": [
190 | "# Chaining Operators"
191 | ]
192 | },
193 | {
194 | "cell_type": "markdown",
195 | "metadata": {},
196 | "source": [
197 | "It is possible to chain operators in `spectpsftoolbox`. For example, in SPECT acquisition, the component above models the PSF component from the collimator structure only, but there is also an PSF component resulting from uncertainty in position when detected in the scintillator. The PSF is best modeled as\n",
198 | "\n",
199 | "$$PSF_{tot}[x] = PSF_{scint}[PSF_{coll}[x]] $$\n",
200 | "\n",
201 | "$PSF_{scint}$ is best modeled using a Gaussian function with width equal to the intrinsic resolution of the scintillator crystal. We can define the operator as:"
202 | ]
203 | },
204 | {
205 | "cell_type": "code",
206 | "execution_count": 9,
207 | "metadata": {},
208 | "outputs": [
209 | {
210 | "name": "stderr",
211 | "output_type": "stream",
212 | "text": [
213 | "/data/anaconda/envs/pytomo_install_test/lib/python3.11/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3526.)\n",
214 | " return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]\n",
215 | "/data/anaconda/envs/pytomo_install_test/lib/python3.11/site-packages/torchquad/integration/utils.py:255: UserWarning: DEPRECATION WARNING: In future versions of torchquad, an array-like object will be returned.\n",
216 | " warnings.warn(\n"
217 | ]
218 | }
219 | ],
220 | "source": [
221 | "intrinsic_sigma = 0.1614 # typical for NaI 140keV detection\n",
222 | "gauss_amplitude_fn = lambda a, bs: torch.ones_like(a)\n",
223 | "gauss_sigma_fn = lambda a, bs: bs[0]*torch.ones_like(a)\n",
224 | "gauss_amplitude_params = torch.tensor([1.], requires_grad=True, dtype=torch.float32, device=device)\n",
225 | "gauss_sigma_params = torch.tensor([intrinsic_sigma], requires_grad=True, device=device, dtype=torch.float32)\n",
226 | "scint_operator = GaussianOperator(\n",
227 | " gauss_amplitude_fn,\n",
228 | " gauss_sigma_fn,\n",
229 | " gauss_amplitude_params,\n",
230 | " gauss_sigma_params,\n",
231 | ")"
232 | ]
233 | },
234 | {
235 | "cell_type": "markdown",
236 | "metadata": {},
237 | "source": [
238 | "We could use this operator directly on the input:"
239 | ]
240 | },
241 | {
242 | "cell_type": "code",
243 | "execution_count": 10,
244 | "metadata": {},
245 | "outputs": [],
246 | "source": [
247 | "output_tot = scint_operator(ngon_output, xv_k, yv_k, distances, normalize=True)"
248 | ]
249 | },
250 | {
251 | "cell_type": "markdown",
252 | "metadata": {},
253 | "source": [
254 | "OR we can chain together the operators to get a single operator. We can use the `*` operator to make an operator that first applies the operator on the right, then the operator on the left:"
255 | ]
256 | },
257 | {
258 | "cell_type": "code",
259 | "execution_count": 11,
260 | "metadata": {},
261 | "outputs": [],
262 | "source": [
263 | "spect_psf_operator = scint_operator * ngon_operator\n",
264 | "# Now apply directly to inpuy\n",
265 | "output_tot = spect_psf_operator(input, xv_k, yv_k, distances, normalize=True)"
266 | ]
267 | },
268 | {
269 | "cell_type": "code",
270 | "execution_count": 12,
271 | "metadata": {},
272 | "outputs": [
273 | {
274 | "data": {
275 | "image/png": "iVBORw0KGgoAAAANSUhEUgAAA90AAAIfCAYAAABtmb+NAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjguNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8fJSN1AAAACXBIWXMAAA9hAAAPYQGoP6dpAABA50lEQVR4nO39eZDteUHf/78+n7P03n33WZk7BAaYUWD4RUHjMhpQhAGDGI1bCYOIUShiKEsjikuqWKxIBctgod+ETQqMYNAoMolEcClQUdCICAEEBmbm7rf3Pn2Wz+f3x+nuuXeWO3eG+7kbj0dVM3NPf8457+7hfT/n+fm8z+cUdV3XAQAAAM658kIPAAAAAC5XohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIV9W0f3mN785RVHsfE1OTubKK6/MN3/zN+fVr351jhw5cp/7/MIv/EKKonhIz7O+vp5f+IVfyAc+8IFzNPIL761vfWu+53u+J4997GNTlmWuv/76h/wYv/qrv5rHPe5xmZiYyCMf+cj84i/+YgaDwbkfLJctc/jhufvuu/OzP/uz+dqv/drs27cv8/Pz+ef//J/nN37jNzIajU7b9gMf+MBpv+NTv/7iL/7irJ7vyJEjef7zn599+/Zleno6X/u1X5v/83/+TxM/GpcB8/rcOHz4cPbu3ZuiKPKud73rPt9fXV3Nj//4j+fqq6/O5ORkbr755vzWb/3WWT++ec1DYV4/fNdff/397oP/7b/9t6dtZ399aWlf6AFcCG9605vyuMc9LoPBIEeOHMmf//mf55d+6Zfyy7/8y/nv//2/52lPe9rOti984Qvzbd/2bQ/p8dfX1/OLv/iLSZJv+qZvOpdDv2B+8zd/M4cOHcqTn/zkVFX1kGP5la98ZV7xilfkP/yH/5Bv/dZvzYc//OH87M/+bO688878xm/8RkOj5nJlDj80f/M3f5O3vvWt+cEf/MG84hWvSKfTyXvf+9786I/+aP7iL/4ib3zjG+9zn1e96lX55m/+5tNu+8qv/MoHfa7Nzc089alPzeLiYn7lV34lBw4cyOtf//p827d9W973vvfllltuOWc/F5cX8/pL8+IXvziTk5MP+P3nPve5+fCHP5zXvOY1ecxjHpO3v/3t+d7v/d5UVZXv+77vO+Njm9c8XOb1w/N1X/d1+eVf/uXTbrviiivud1v760tE/WXkTW96U52k/vCHP3yf733+85+vH/GIR9Rzc3P1oUOHvqTnOXr0aJ2k/vmf//kv6XEuJqPRaOffb7311vrgwYNnfd9jx47Vk5OT9Yte9KLTbn/lK19ZF0VR/8M//MO5GiaXOXP44Tlx4kTd7/fvc/uLX/ziOkl9xx137Nz2/ve/v05Sv/Od73xYz/X617++TlJ/8IMf3LltMBjUN910U/3kJz/5YT0mlzfz+kv3rne9q56dna3f8pa33O/8fc973lMnqd/+9refdvu3fMu31FdffXU9HA7P+PjmNQ+Vef3wHTx4sL711lsfdDv760vLl9Xy8jO57rrr8trXvjYrKyv59V//9Z3b72+pyx//8R/nm77pm7J3795MTU3luuuuy3d+53dmfX09n/vc57J///4kyS/+4i/uLPN4/vOfnyT59Kc/ndtuuy033HBDpqenc8011+TZz352/v7v//6059heMvKOd7wjP/MzP5Orr7468/PzedrTnpZPfvKT9xn/7bffnqc+9alZWFjI9PR0brzxxrz61a8+bZu//uu/zrd/+7dnz549mZyczJOe9KT89m//9ln9fsry4f9f5fbbb0+v18ttt9122u233XZb6rrO7/7u7z7oY9x555150YtelEc84hHpdru5+uqr86//9b/O4cOHk9zz+3r729+en/qpn8pVV12V2dnZPPvZz87hw4ezsrKSF73oRdm3b1/27duX2267Laurqw/7Z+LiYw4/sN27d6fT6dzn9ic/+clJki9+8YsP+hhn693vfnce+9jH5mu/9mt3bmu32/mBH/iB/NVf/VXuvPPOB32MB/tdPP/5z8/s7Gw+8YlP5OlPf3pmZmZy1VVX5TWveU2S5C/+4i/y9V//9ZmZmcljHvOYvOUtbzlnPx/nl3n94E6cOJEXv/jFeeUrX5nrrrvufrd597vfndnZ2XzXd33Xabffdtttueuuu/KXf/mXZ3wO85pzyby+OJjX55foPsUzn/nMtFqt/Omf/ukDbvO5z30ut956a7rdbt74xjfm9ttvz2te85rMzMyk3+/nqquuyu23354k+aEf+qF86EMfyoc+9KG84hWvSJLcdddd2bt3b17zmtfk9ttvz+tf//q02+085SlPud+J/fKXvzyf//zn81//63/Nb/zGb+RTn/pUnv3sZ5/2Psz/9t/+W575zGemqqq84Q1vyO///u/npS996WkvpN///vfn677u67K4uJg3vOEN+b3f+73cfPPN+Tf/5t/kzW9+8zn6Dd6/j33sY0mSxz/+8afdftVVV2Xfvn07338gd955Z776q7867373u/Oyl70s733ve/O6170uCwsLOXny5GnbvvzlL8+RI0fy5je/Oa997WvzgQ98IN/7vd+b7/zO78zCwkLe8Y535Cd/8ifzm7/5m3n5y19+bn9QLjhz+KH54z/+47Tb7TzmMY+5z/de/OIXp91uZ35+Pk9/+tPz53/+52f1mB/72MfyhCc84T63b9/2D//wD2e8/9n8LpJkMBjkuc99bm699db83u/9Xp7xjGfkp3/6p/Pyl788z3ve8/KCF7xg5wXF85///PzN3/zNWY2fi495fWYvfelL88hHPjIveclLHnCbj33sY7nxxhvTbp/+rsLteflg+2HzmnPNvD6zP/3TP83c3Fw6nU5uuummvPa1r73PNVi22V9fIi70qfbz6UxLXbZdccUV9Y033rjz55//+Z+vT/01vetd76qT1H/7t3/7gI/xUJa6DIfDut/v1zfccEP97//9v9+5fXvJyDOf+czTtv/t3/7tOkn9oQ99qK7rul5ZWann5+frr//6r6+rqnrA53nc4x5XP+lJT6oHg8Fptz/rWc+qr7rqqtOWjz+Yh7q8/Id/+IfriYmJ+/3eYx7zmPpbv/Vbz3j/F7zgBXWn06k//vGPP+A227+vZz/72afd/uM//uN1kvqlL33pabc/5znPqffs2XOWPwEXC3P43Mzhuq7r//W//lddluVpY67ruv7IRz5S/7t/9+/qd7/73fWf/umf1m984xvrG2+8sW61WvXtt9/+oI/b6XTqH/mRH7nP7R/84Afvd3nrqc72d/G85z2vTlL/zu/8zs5tg8Gg3r9/f52k/shHPrJz+/Hjx+tWq1W/7GUve9Cxc2GY1w9/Xv/BH/xB3el06r//+78/bXz3Xm56ww031E9/+tPvc/+77rqrTlK/6lWvOuPzmNc8VOb1w5/XP/ZjP1a/8Y1vrP/kT/6k/t3f/d36+7//++sk9Q/8wA+ctp399aXFme57qev
6jN+/+eab0+1286IXvShvectb8k//9E8P6fGHw2Fe9apX5aabbkq320273U63282nPvWp/OM//uN9tv/2b//20/68ffTp85//fJLkgx/8YJaXl/NjP/ZjD3jFx09/+tP5xCc+ke///u/fGcP21zOf+czcfffd93vE71w609UoH+xKle9973vzzd/8zbnxxhsf9Hme9axnnfbn7fvceuut97n9xIkTlphfhszhB/eRj3wk3/3d352v+Zqvuc+SuCc96Ul53etel+c85zn5hm/4htx222354Ac/mKuuuio/+ZM/eVaP/3Dn+9n8Lk59nGc+85k7f26323n0ox+dq666Kk960pN2bt+zZ08OHDiw8/vm0mRe39fS0lJ+5Ed+JD/1Uz91VhdN+lL2w1/K/c1rHoh5ff9e//rX57bbbss3fuM35l/9q3+Vt73tbXnJS16St73tbfnoRz+6s5399aVFdJ9ibW0tx48fz9VXX/2A2zzqUY/K+973vhw4cCAvfvGL86hHPSqPetSj8iu/8itn9Rwve9nL8opXvCLPec5z8vu///v5y7/8y3z4wx/OE5/4xGxsbNxn+717957254mJiSTZ2fbo0aNJkmuvvfYBn3P7fc8/8RM/kU6nc9rXj/3YjyVJjh07dlbjfzj27t2bXq+X9fX1+3zvxIkT2bNnzxnvf/To0TP+fKe692N1u90z3t7r9c7qcbk0mMMP7qMf/Wi+5Vu+JTfccEP+8A//cGc8Z7Jr164861nPyv/9v//3fn/GU+3duzfHjx+/z+0nTpxIct+5eKqz+V1sm56evs+Vmrvd7v0+frfbNdcvYeb1/fuZn/mZdDqdvOQlL8ni4mIWFxd3DiSvr69ncXFxJ2q+lHn5pd7fvOb+mNcPzQ/8wA8kyYN+FJj99cXry/Ijwx7Ie97znoxGowf9yIFv+IZvyDd8wzdkNBrlr//6r/Orv/qr+fEf//FcccUV+Z7v+Z4z3vdtb3tbfvAHfzCvetWrTrv92LFj2bVr10Me8/YFJM50IaR9+/YlSX76p386z33uc+93m8c+9rEP+bnP1vZ7uf/+7/8+T3nKU3ZuP3ToUI4dO/agR+j3799/Ti/0xOXLHD6zj370o3na056WgwcP5n//7/+dhYWFsx7n9ov3Bzui/fjHP/4+F6lJsnPbmeb72fwu+PJjXt+/j33sY/nc5z6XK6+88j7fe97znpckOXnyZHbt2pXHP/7xecc73pHhcHja+7rPZl4m5jXnnnn90Gzvg8/mwsb21xcnZ7q33HHHHfmJn/iJLCws5Ed+5EfO6j6tVitPecpT8vrXvz7JeMlmct8jY6cqiuI+Z5be8573nNUVAu/Pv/gX/yILCwt5wxve8IDLdB772MfmhhtuyN/93d/lq77qq+73a25u7mE9/9n4tm/7tkxOTt7n4hFvfvObUxRFnvOc55zx/s94xjPy/ve/v/El8FzazOEzz+G//du/zdOe9rRce+21+aM/+qPs3r37rMd48uTJ/MEf/EFuvvnmM34OcJJ8x3d8Rz7xiU+cdjXk4XCYt73tbXnKU55yxrMaZ/O74MuLef3A8/p1r3td3v/+95/29Z//839OMr4K9Pvf//7Mzs4mGc/L1dXV/M7v/M5pj/GWt7wlV1999WkHxO+Pec25ZF4/9Nfcb33rW5MkX/M1X3PG7eyvL15flme6P/axj+28v+LIkSP5sz/7s7zpTW9Kq9XKu9/97p2jN/fnDW94Q/74j/84t956a6677rr0er288Y1vTJI87WlPS5LMzc3l4MGD+b3f+7089alPzZ49e7Jv375cf/31edaznpU3v/nNedzjHpcnPOEJ+Zu/+Zv8p//0n856+fS9zc7O5rWvfW1e+MIX5mlPe1p++Id/OFdccUU+/elP5+/+7u/yX/7Lf0mS/Pqv/3qe8Yxn5OlPf3qe//zn55prrsmJEyfyj//4j/nIRz6Sd77znWd8no9//OP5+Mc/nmR8hnp9fT3vete7kiQ33XRTbrrppiTJn/zJn+SpT31qfu7nfi4/93M/l2S8POVnf/Zn84pXvCJ79uzJt37rt+bDH/5wfuEXfiEvfOELd+77QP7jf/yPee9735tv/MZvzMtf/vI8/vGPz+LiYm6//fa87GUvy+Me97iH9bvj0mUOP7Q5/MlPfnLnZ3vlK1+ZT33qU/nUpz618/1HPepRO7+z7/u+78t1112Xr/qqr8q+ffvyqU99Kq997Wtz+PDh+xw4+6Ef+qG85S1vyWc+85kcPHgwSfKCF7wgr3/96/Nd3/Vdec1rXpMDBw7k137t1/LJT34y73vf+87J74LLk3n90Ob1zTff/IDf+4qv+IrTziA+4xnPyLd8y7fkR3/0R7O8vJxHP/rRecc73pHbb789b3vb29JqtXa2Na85l8zrhzav3/72t+d//I//kVtvvTUHDx7M4uJi3vnOd+a3fuu38vznPz9PfOITd7a1v77EXICLt10w21dS3P7qdrv1gQMH6ltuuaV+1ateVR85cuQ+97n3lRQ/9KEP1d/xHd9RHzx4sJ6YmKj37t1b33LLLfX//J//87T7ve9976uf9KQn1RMTE3WS+nnPe15d13V98uTJ+od+6IfqAwcO1NPT0/XXf/3X13/2Z39W33LLLfUtt9yyc/8HugLpZz/72TpJ/aY3vem02//wD/+wvuWWW+qZmZl6enq6vummm+pf+qVfOm2bv/u7v6u/+7u/uz5w4EDd6XTqK6+8sv6X//Jf1m94wxse9He3/Xu4v69Trxi5Pe77u4rkr/zKr9SPecxj6m63W1933XX1z//8z9f9fv9Bn7uu6/oLX/hC/YIXvKC+8sor606nU1999dX1d3/3d9eHDx8+4+/rga6euf3zHD169Kyen4uDOfzw5vC9f2/3/jp1LK9+9avrm2++uV5YWKhbrVa9f//++ju+4zvqv/qrv7rP425flfSzn/3sabcfOnSo/sEf/MF6z5499eTkZP01X/M19R/90R+dcYwP5XfxvOc9r56ZmbnP/W655Zb6K77iK+5z+8GDB+tbb731rJ+f88u8fvj75nt7oPHV9fhqwy996UvrK6+8su52u/UTnvCE+h3veMd9tjOvORfM64c3rz/0oQ/VT33qU3de705PT9df/dVfXf/ar/3afa56bn99aSnq2poAAAAAaIL3dAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBD2me7YVGc9abAeVbXw4d1P/MaLl4Pd14n5jZczOyz4fLzYPPamW4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG
4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGhI+0IPgOaM3vqS5CseecZtvvqp/5iPLP7X8zSiB/fcPT+dd/7RvjNuM3rPR9P9ubedpxHBxcW8hsuTuQ2XH/OabaL7cvYVj0z1xCeecZOp+rPnaTBnZ6ZTPuiY25+98zyNBi5C5jVcnsxtuPyY12yxvBwAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAa4nO6L2NPeeonMll//ozb/OXa28/TaM7OO0++KZ/du3LGbZaKk+dpNHDxMa/h8mRuw+XHvGZbUdd1fVYbFvocLlZ1PXxY9zOv4eL1cOd1Ym7Dxcw+Gy4/DzavLS8HAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGiI6AYAAICGiG4AAABoiOgGAACAhohuAAAAaIjoBgAAgIaIbgAAAGhIUdd1faEHAQAAAJcjZ7oBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABoiugEAAKAhohsAAAAaIroBAACgIaIbAAAAGiK6AQAAoCGiGwAAABrSPtsNi+KsNwXOs7oePqz7mddw8Xq48zoxt+FiZp8Nl58Hm9fOdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBD2hd6AAAAAJeXYuuf5Sm3FKlTn7JNtfXPU2+70C7VcV/cRDcAAMA5USQpU6RIinL871v/3P5uUqWuqyRVUldbQXshQ/ae0B6Pu7015vFt92zxQOMW3w9GdAMAAHxJTo3tdoqinfK0f95z5riuq9SpUtXD1Kd8jUN2lPMbsUWKtJKiTLE13p0x7xwwuGfc22Ou6mGy9SW+H5zoBgC4oMYv1sf/VtzvFhf+TNi93evM2P24ZzmqF+Nc7rbDdStYy27a5WRa5URaZTdl2UlZtFNuzfNRPUxVDzKq+hlVmxlV/VRVP1XdH0dsPUrz82brIEHR2orsbsqym1bZ3Rl3q
5zYGXO1dZCgqsbjHo42MqrH474nvs/3AYNLh+gGALgg7ucMU8rxn7de6Napts6CVVtnwi70WaX7P5t37zFn60zehT2DB+fDeB4XZTdl0U2rnEynPZNOaybd1kw6xXS6xXRa6aSVdqpUGWWQYTazWa1mWK2nP1rLYLSW4aiXUdVL0k9dJ83N81ODezzmdmtyZ8zd1mw6xXTamUgrnZQpM8owowzSr9fTr1bTr9YyGK6N47saj7uoY54/ANENcEm5/zNKp7sYd3YPNu6LcczQpNZpL3jvfXapKMqdJaijanPrhW0/o6q3c2bpwi1DbadVTm6dzZtKq7U15q2lqHVd7Zy9G1WbGe6Mub91Bm90HscMTbpnTpRFN+3WdCY6C5lozWeqtTsz2Z2Zaj5T9eRWcpcZH36qspF+1ovVrLYXs9E6mY3hYvrFSgajMsNR0lx4nx7c7dZ0Oq2ZTHQWMtmaz1SxO7P1rkxXs5lKN520kiTDVBlklPViPeut1ay1TmajPJnecDH94dYZfOH9gEQ3wCXh9KVrxSlnlbadelbp4nlh29pZurZzFu8U9dZFWWpL0/iy0kpRdLbOLk2n257LRHs+E+V8JorZTGQ6rbqdqqgyyGY2s5petZTN0Ur6g5UMRmsZVetJ1T+Pc2b7bN5kWuVkuu258bhb85ks59PNdDqZSFmXGRXDbGY9m/VqNqvl9IaLGQzXMhitpqp6WyFxMfz9BF+KrbeFbB2E2g7umc7+zBYHsrvan12ZyXy7m6l2mclWkdbWLrA/qrMxnM7KcDYnq4UsFfNZak9mvRin2fjAVZK618x57lOCu9uey2R7ITPtA5mv92d3tScL5WTmOu1MtYt0W+OD5qMq6Y3qrA2nszpcyInMZ7E1nbLo7Lzvu06VqkrGk9zbSk4lugEuelsv0FvT6bbm0mlPp1NO7ezots+GDaqNDEcb6Q9XMhytp656qTO8QGO+5wX69k6905pJ+15nw6p6sLOsbjBaz2i0nrrejB01l68iRdHaWoI6m8nOnsy092ehuDK7qt2Zy1Qmy1Y65fiMWG80ylrdz2KxnOXO0ayUh7IxOJHNwVa2npfwvmf57HZYTLX3ZLZ1ILvqK7KrmstM2U23LFOmyKCusjEaZjkbWWqdyHJ5JGvl0RSDMv1htsLbC3IufdsHlbcPRM109mdXcU32VQeyvz2dvRPt7J4oMt9Jptt1WsW4RzerMqvD5MRmmbleO8f73XSKiRxvl6m2D0anyqgenuOA3ZrLW+8777RmMtleyGz7yuypr87+elf2TUxk72Qru7rJbDuZbI2fd1AlG6Myy4MyJzfbmel1MjGaSKtsJ+2M3++99VaY8UF0TiW6AS5qRYqik057PtMTBzLfvjoLuSLz9UIm0x0vVSvqDDLKanstJztHsjy6K2ubR9IbHE9dbeTCnFEqU7amM9HendmJK7PQuia7qv2ZyVQ6aaVMkWFRpVf0s9xaysnclZXBXdnoH89guCy8uUxtv+Adx+tkZ08WOtdmf/WIXFnsyt6pTnZPlJnrJN0yGdXJ+rCTpf5Ejvemc3g4l0Ot8YGruq6yOawyqqutF+VNzvPx2bxyKyymO/uyu3xErqyuzhWd6eyZbGVXt8hUKymLpF8lK4NuTm5O5mhvJoczm6OdztZZsGEGO+/xvlAHBeFcGH8MWFl2025NZbK9K7PFgeyrDuTq7myumm7lyqnkwMQouzujzHWG6RRVqiTrw3YWB+0c7bZyqNNKd30yRW93qrrKqDVI1RlfaK2uh6mq7Y/pOke2riHR3nrv+XRrX/bUV+fK7MlV091cMVXm6qk6e7uj7O4OMtkapUwyqMusDNo50W/lcLeVqXYnrbXZZHhlRuUwo/Zmqmoc3vccLLCiZZvovqy1zmKbi3EyPNi4HR3ny0eRVlqt6UxPHMiBzuNysDqYa6amcmCqzGw7mWglwyrZGCXHN+dy9/refK7cm7sn/zFVPcjmYHgBzihtHShozWZ+8hG5trgxB8t9uWq2nT0T97ww3xwly4PkWG9XvrCxP5/rzuZY/l+qqp/haJCL8+8n+BIV4xfp3fZcZtr7s796RK5r78m1M508YqbOFROj7OkO0i2rjOoiK8N2jmy2c+d6O5Nrsyl716ZuVak6g4yqzVPe393gkFOkKNrptKYz2V7IQnlNrq6uyXVTM7lutsw1U1X2dgeZa49SFnV6o1YWB+3c3Wvli+uT6azuTapk2N4cv9e77m+9KC9if86lqdiZF9vxOtXand3V/uxvT+eq6Vaum6lzcHqQa6c3sndmPdNT/XS6o9RVkY2NTk6uTeXQ2nSmWhNpFWWSiQw2dqVfbmTQWs+wtZFh1ct23J+bfeLWgYJi60BBa1fmiv3ZWy3kiqlurpkpc/30KI+Y3sxVM+vZNbORiclhirLOoN/K2vpEjq1NZX59Kt2yk6Sdwep0Nkf7s1muZtAer7iriu2/l8zxbaL7stXK6P97YYonPOqBNxkN8+RnfjZ/vfj/nb9hPYjn7vnpvOsP5pLWA/9fc/A//zYTr3xHTGK+LGy90J1vX52D1cF85a6ZPG6+yvXTG9k32ctUZ5hBVWZ5s5svrk/l/6120zlxZfrVRnqdxQxGqxmd94At0yonM9ndk/3FI/PozoF85e5WHjs3yNVTvcx3N9Mq66z1Ozm+OZHPrk1kfnk29dI/S6+znM3BUkbV+tb70uFycs8Zpon2fBaKK3NlsSvXznTyuPkqj57t5dq51exeWE93aphqUGZtdSJ3L81moT2TdtnOqJrJRv+KbLSWstleybDqZVT3GwzYYudAQac1Piu2t7oiV01M55FzZR4zO8yj51ZzxcJqZmb7KdpVNtc6Obk8nS+szGa6NZmq7qS/sjurxf5stpbSL1ZSFe2L6NoT8DAUZcqinVY5kYnWXGayO7uL2eyfbOeqqeTg9CA3zC/nqgPLmblqmPbeTorJduphlfmVXhbu2sjsoX7KYiGjejK9UZm1wWRWh7uyVi5ks7WS1nAtVbF9HYRzMOStAwWt7aXl5Xzmq13Z05nI/qky105VuX6ml3+2ZzF7r15P98oi5VwnRVmkWh9m4cRyFu7ayOTRhSSz6VfdbAw7WVubzUqxK71yKf1yZXw186JM7Md3iO7LWPGER2X0z/9/D7zBaJjp+vD5G9BZmO+2Mvqqr0rK8gG36Xz2rpy7I35wMRvvHLvtuSzkilwzNZUb56t81d6lXP+I45m5tk45107drzI4Mso1n5vNxJF9WRl0c/TkFTne3pX14miqrJ/36xuXZTeTrV05UO/PI+daeeLCZp5wxbHsu34tnQPtFO0io6Vh1r9YZu8X9qbKQk5uTudQ/8ost7+Y/nAxdd0/j6OG5u2cGWtNZaKcz65qd/ZNjc9wP3q2lxsPHM/eR22k84jJFDOzybDK/JGNzP3TRjp3VOlVC1nqt7I4mMvJYnfWW8ezWSxllAfeZ54bZVpbZ8ami13Zk5lcOdXKddOj3LhrOdcfPJHpf1amtX8qKYvUK/0sfPFkZj+zmTp7szaaynK/m+O9vVluHUq7
NZXBaLXhMUOTxmeMdwK2mM5MNZ+5dicL3SL7J0a5dnojVx1YztwNVdrX70qxbz6ZnkiGo7SW1tPavZiyvZr+sJW1YTuLg25ObpaZH05nKfNZL6eyWbZTVO0U6Z+D/fj4gmhFUaYsOumUU5nOQuYzlYVuK3snkismB7l2fiX7HrGWycdMprx2d7Iwk5RFyvXNtI4spTWzkrpeyvpgvET++GaZuV4309V8VsrptFuTGYzWcm7P0F/6RDfARawoyrTKiczXC9k3Webg9Eauv+ZE5r96MuUjDyS755L+IK0vHstVE8eyvDGZf1rbnYWlyUzUc+PPz70AyqKdyXI+u8qJXDlZ51ELy7niK9bTfeKB5Op9SbuV1smVtD91KI8cnciR3mT+38Rk5nq70ionMt5RW5bG5Wf7c60nitnMZSp7JspcMTHKdfMr2fuojXRv3pvi4BWp52ZS9AdpHzqeuYlDuWZjKYc3pnJocjKHNzqZHezKYjGRcuvTDM7VmbD7H3OZsmynU05ltl7IQqeTfZPJtVObuWb/UmZu6qR141XJgT2p262USyuZ2Hs4+3Iy165P5u5eN3dOtDLfm063mE1Zjn8HdQoznEtWccqZ7m4xnal6MjOdMvOdZE93mL0z6+Mz3NfvSvHIK1Mf2JdMTSajUYqTSynbZSbWj2Tv4lr2rE1nvtPJbKfMZNnORD2VVjExvmBqyqQoz81qlq33c5dlO+1yIt16KpNlOzOdIrs6VfZN9LNn93omruukfOT+5NoDqXcvjJ9/bT3FzFTaSWZXlrJveT17epOZ73Qz0ykz1ZscX+S1HF/ktem/ly41ohvgIlakTKvoZDLdzHeSfZO9TF9bpXzUFalvuD71nj3JZi/l1GQ6i+vZ95m17Dq+K1PtVtrDiZ2P8Ti/gx7vbDuZzHS7zK5OlT271tK5fja54RGprrk66XZTHDuWsqoyc+cXsvfOzcx1JjOR7k5EwOWoyPhAWicTmSxbmekkuzrDLMxvneF+xP7UB69NvbCQut9PMTmRcmUjs3ccy57Dm5lrT2SqVWSiP5F2OX5RnkbPdG+/B7STdjGZiXoiM50yc+06eyY2M3PlIK3rrkx98OrUV16RtNvJ0lKKqk7n6Hr2fH4tu5fmM9tuZbJsp5vp8cFAc5xLXrkTl6100kkr3bLIdLvObHuU6an+eEn5vvnU+/akPrA/mZpKRsOk00mx0Utr73ImZtcy2x1ktlVlstVKtyzTGXXTKjopy2bmSlm0U6aTTt1Jt1VmspVMt+rMdvuZWBim3Duf7N+V+sD+1Lt2jVegrq0lSYqV9bT3rGZmqp/p1iiTraRbFumknVY6431446tvLj2iG+AiV6RMO2U6ZTLVGaa10E4WZlPv2ZN61+5k0E+9sppiYSoTk8uZKOt0yux8NNcFHXdRZLI1SndqmGL3ntS7F1Lv3Tu+bkNVpdh9LK25MpPtUbpl0k6Z0s6ay9XWfNx5kV6U6Za5Z47MzaVemBsH9/xCMuinWFtLMTeV1kyRyfYo7TLplEVaaaVM55Q53vwyziJlWmmlVSTdss5Ee5TWTJHMTSUL86kXFnauyVIszKac66Y7sZFuWaVTJq2iSFmXKYqzudArXBrGH5Q3/t9WmbSKpFNU6XRHKSbb4yXl05PJ1FTqqelxdPc2U0xNJtOdtCbqdMpRWsX4I8XaRZGibm7/vR3EZVopU6ZVFlvPW6dTVik7Saa6qacmU2+PuSzHi9OnV5LJTorJVlrtKp2y3hp3sf1o4+cotpeWs81vA+ASUKVOVSeDqkzdr5L+INnsjV+Ub/bGf+4PM6qKVHVSbS3pOqcfM/IQ1alSJxnVRephmbo/TPqDFL3xuLO5mfSHqUd1RtV4mWllsSmXs635uP0ZvFXqjOp75kj6oxT9QdLvj+d2v58MBslodM88qccLTMfz69SPEjo/c33776J7xp2kPzxlvP2t8Q9SD0apRsV4u8RSUy57dT3+qpJUoyL1sEqGo/FOeTQcH2weDsf/Phomw2pr+2Ln7RZVndTFeH6fnzGP5/T4uYvx4EejFMNRsj3Wqtoa72j8kSnDOvXW641k++8kE/xMnOkGuIjVqcYf/VUMszZMlje76R8apfXFYymnJlOvrCabmym+eDjDu9eyvDKdlWGZ3miYYTYv0KDHLxRGGaQ3qrMyLLOyPJH5u5bT+sLdyXCUottJceJk6juPpX+ozlK/m7VhMsj48z0v5MECaNJ2LA8zSL+q0hslK8N2VlcmMnd0I+1Dx1NMT6ZYW0sGgxSHjqY+vJze8TIrg07WR0V6o1H6GWRUD1JnlPMR3HVdpcogg615vT4qsrzZzeaxpHNkKcVdh5OqTtFqJcsryaHjGR3tZXVtOqvDVtaHyWY9yqDYTFUNdv6egEvX+KDXqB5mlEEGGWVQ1dmsyqwP2+n1OqlWemktrac4uZR0OklvMxkNx38+uZJ6aTOD1VY2Bu1sVkX6o2RQVxlmlDqj8b7wHM+V7b+DRhlkVAwzqOoM66Q3KrIxaKe/WmZiqZdicTnF3Mz4TmU5XnVzcin10lpGK8Nsbk6nN2qlNyoyqKoMtx6zrrcPBprjpxLdABexuq4yrPtZL9Zzsj+fL65P5Zo7ZnPV1LF0FtdT7JpONgfj4P548sXluRzZLLI6HGRQrKeqhxdk3FU1TL9ez/JgkMO9yXxhcT7z/3A4c8PPp7zqWNJpp1pcT/+f1nP4C/O5c2MyJzfrrBYrGVWbGe+sHTXnMlRXGVWb2cxqVqrNLPa7ObLZzt1Ls5n7zEbmuodSrmykWJhOBsPUh5ez+cnVHD28kLs3JnOyn6wMRlkvVzMcrqeqmj5INX7xXNWDDKqNrJerWR3sysl+J4d7Ezl2aDbdTyynW92R4thi0mqlXlrP6I6lrH66zN3Lcznab2WpX2e1Hv/cVT3ceuFvjnNp2z4wPsxmNjPIxrDO6jBZGrSztDaZXYc2tq5SXqbY6I2XlI+GydJq6i8ez+DOXpYXZ7LY72ZpUGR9VKdXjQ9ODUeb4/md7fA+R/OlrlJVwwzrXjaLXnrVKGuDdlaGZU5sTmTl5GSm7txIZ+5YiiTF0sr4Pd0bveTIyVR3Lad3uMiJteksDVpZGyYbwyob6WVYb6aqB+ftLP2lRHRfzraXrjyAYji86JZyVnXGY64f+J0P9dBHD/DlYrzgejBcy8nOkdy9vjf/b7U7/liw3kT2fmY9k9OLGQ7LLK9M5wvL8/mH5al8YbXK0WIxvdFyqqp/3l/Y1qkzqvvpjU7maLmUL6xN5GNLM6k/c2WuObaSudm1lK06vfVOjq3sy6eXZ/Op1VYObWxmqTia4ajnTDeXpTp1qnqY4WgjvWopi8Vyjm1M585uOwvtmXTuqHL12nLm7jiW1kyRelCnd7LM0cML+fTiQu7YaOfQepUTo42slSczqDa2Dqw1O1/qusqo6qc/Ws1aazEnRntzeKOdz3U7mT2xK/U/Ftl7fDUTe1ZStIoMV+usHJnIF4/vyj+tTufO9SLHe4OsFEvpV6sZVZupL9ABQTg3xmdzq3qYUdXPZrW
a9WI1K8PZLPZbOdpv5a61mUzfNciBcjUT60fS2rucTHfGS8qXNjO4s5fFz0/krqW5HO51c7JfZLk/ymq9mY1i+Z6APWf7wzrZfmvL9kG09nJWqz1ZHnRyvN/Kkc1udi/NZfJzg8xnKe2lXoqFyaQskvVBRkc30vtClWOH5nJofWrngNrKcJhesZ7NeiWjqj8+sOZs92lE92VrlKc84/OZyZEH3KJKnb9ce/t5HNODe+fim3PHlWf+7M6TxXJ85h9fLup6mMFoLcuju/K5cm86J67MyqCbz67tyUJndybLKqO6yMqwzNHNInesVvnMxkqOFJ9Nr38yVd3P+d/pVamqfjYGJ3Nk6vP5zNp0ktkc2ZzNvuWZzLaqtIo6varM4qDI3etFPrc6yB31oayODo0/39MLci5LVertF+mjlSx3jubwcC6Ta7Npl+30qoUc3pjKnqObmWyPMqqKrAw6uXtjMl/YaOdzq3UO9/o5Xh7L+uh4BsP18Rw/l2fB7k89TFX3MxiuZ619LMeLXZlZ76ZbTiSZzMqwnSuWZzPbHaQs6vSGrRzfnMidG93csV7mi2ujHB6tZLE8nM3R8ikH1rwg51K2PZ83069Ws9pezFK1K8d77a2r9U+kXS6kP2xl7+JaJufX0pqoU42SwWory4szuWtpLp9fm84XN1o51qtzsj8+ONWrlzIYrWVUjffh5+rgeZ165++g4WgjvdZylouVnNicyGynzEy7lW45nSS5YmM1c4c20p3ppSiT0WaR9aVOTizO5wsrs/nCRjeHNooc3xxlqV7PWrmY/nAtw9HG1kE18/tUovsy9uGlX7/QQ3jINja/mA9svu5CDwMuGnU9ynC0ntXNu3PXRJl+sZGjJ6/IwtJkptqtdMrxhUx6o2FWh4McLRZzpPhslvp3pD9cSl31c/6Xadepq142h0s5ufnZfLpbZXn9YL6wvitzrU66rSJlUWRQ1dkYDnOy2siR8u4crz6btf6RjKr1rfepwmWoHmZU9dIfrGSlPJRDrYmUvWszqGay1G/lronJLHQm0i3rjOoia6MiJzaTIxtV7u5t5u4cyWJ9V3rDxQxGa+dhNUu980J9WG1kY3AiJ7t3pZOJZHVPeqNujm92srvbyUy7Tpk6/arI0qDIsc3k8Powdw/WcqS8K2vDI+kPV7YOFDiwxiWuHkf3cNTLYLSWtfJ4ThbzmdrspFtOpizKDOqpLA3a2bc2ndmjg3TKUaqM3zt9YnMihzc7uWujlbvW6xzaGOR4vZKl8mh6w+UMRxup6v45D9i6Hqaq+hmOetkcLWW5fTRT1VQm1ltpFe1UdSf9ajYnNrvZtdTPVGc4ntejVlYGnRzb7OZQr5M7N4rcvV7laH8jJ8vjWa9O7hwoqOuh6zbci+gGuKiNxgE7WExdV9nsLOd4e1cm6rm0hxNppZMqo/FFXIqNbIxOZnOwlM3h0gWN1zqjjEarWe8no6qf9c6x3N1aSKeaSqvqpEwrowwyrDezmZX0+ovp9U9mMFpNVW3G+7m5PI0Dtqr66Y9WUg7Gn2dbt6ps9K/I8mAuh9rtTLfLdMrxlYx7o1GW+6OcrDZyvDyWxfqurA2PZnOwlFHV24rXpl/cVqmrfkZFL/3hSlbLw0k7GWQza+t7c3xzMnOdViZbZYqMD6itD0dZGg5yvF7OifJQVkaHsjE4mcFo/ZQDBeY5l6rxXL5nFchaNsvlLLUPp1V0UvT2ZlRPZGNYZrHfza5uZ2eVV50iG6MiK8MixzeLHOvVOdIb5OhoNSfKw9moTqY/XNlaEXKu47VK6iJV3c+w2sjmaCVr5bEcLyfSGrWStZkMq3bWhq0c2RwfAJwsx88/qIusj8os9rcOBPZGOdLbzLHieJbrI+mNFk87UGCOn050A1zk6gxTVxvZHAwzGK1mvTiasminKO75HM/t95ZVVX+8w6v6W8F9oXZ4dep6kNFwORtbZ73Lor0z7u0x74y77o9fiNeDePsIl7dq/EK96mVzsDS+pR5ko7WUk8XuTA/nMzWYSiut1KnSzyDr5WrWypNZHx1Pb7g4fkFeraeuz1e8bh8s6GWQJP3xKpxBe7yk9Hi1K5O96UxkIkkyyigbxUbWi+Ws52TWh8ezOVgan+U+bwcKoGnjgB2vAhkfkFov2inaZVIng96urA0nsthvZaZTZqo1/oz7Okl/lKwN6yz3R1kcjM9wnygPZ6U6ko3BiQyGaw3F6z0HC0ZVL4NhO72inaLdSspkNLoivbWZrAw6Od4tM9Mu09n6jO5RnWyMkpV+leXBKCeHvRwvTuRk7s766Hj6g5UMq1MPFJjjpxLdAJeE8UeHjEaDVFl/wK3qrYuvXRxHl+utAwajVNVmihRn3PLiGTc0qR4fEKv6GSXpDbauZt5ayXrreDrlVNrF5PgMeKqM6kGGw/UMqo0MhusZVhsZVb1UVS91Pcr5O0i19XnCW+FdbS2r7bUWs9KaTauYSLuY2NpykGG1mUG1kWG1kcFwLcPtMV/wA4JwrpxytrvqZzBau+egcqtKv9zI6nBXFodTmS476ZZl2sV4P7hZVelV44umrRRLWSlPZK06tnNQbTzH+w3F69bZ7qqfYdbvGXN7lGG5mY16f1Y35zLd72a6bKdTjsc8rOv0qyrr1SCr2chSeTyr9fGsj8YH1Xbe7mIly/0S3QCXjDrJ6BLcjV2q44amnBLedbX1gn09m8X2ipBOiqIcrwbJKFU13FkRUtfDU17Uns9VIVvzuE6q0frWxZh6KYfd9IqTKcvxuJPtlTeDrTEPt1ax9LcOEji4xuVkfDAq6Wf7w3W2P+pzs7WStXIhU5nPRD2VzqibVt1OlSqjYpResZZesZpevZT+cDWbo5XxAarR+s51D5o5QDX++6eos7U0fvWeK7G3NrPZWs1KOZ/JzGaimkpr1EqZcvyZ3ulno1zLZlbTq5ayOVpJf7Cy9V7u3imrb5zlvreiruuz+i9ZFPocLlYP96NXzGu4eH0pH6lkbl9KiiRliqK19c/tj8zc/md1z5W+6+oiWRWyNeYUSVHmnnHfM+YkO+MW26ezz77c3DOHi6KbsuymXU6m3ZpKuzWVTjmVVjExfq/31hy592qQ4Wgjo6p/yhnupoL79HEXaSVFO2XZTaucTLucSru1NfZiIu1yImW2DqZtrbwZ1VvjHm19bY35y/3A2oPNa9ENlwE7cLj8iO4vN/d++8XpATt2Mb6QPXXc9zfm5OIc94Vjn305OvVA1Dhii6KdVtHdWQVSlvf896vramsFy2Drc623VrHUw/McrqceMGinLLbGXXZ3xnzPgcDca8zbq1iGWwcJzvfqm4vLg81rsxcA4IK79wvsS+XF66njvlTGDOfa9tuoiqSuU42GSdFOVfRSVONPKcgp8TpetVJtXVB068KCW28nuefxzte4t1bT1FVGxTBJmVG1dbHW+xl3tTXe8X2GF8nqm4uf6AYAAPiSnR7fdZ0U6Y+/dWq8JjsfBXbho7Xe+t9RUo+SlFsxnbMY8z3358xENwAAwDlzz1LrevstGPX9rQS5mIJ1eyzbFz4tHm
DMp27L2RLdAAAAjbhUA/VSHffFqXzwTQAAAICHQ3QDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDRDcAAAA0RHQDAABAQ0Q3AAAANER0AwAAQENENwAAADREdAMAAEBDirqu6ws9CAAAALgcOdMNAAAADRHdAAAA0BDRDQAAAA0R3QAAANAQ0Q0AAAANEd0AAADQENENAAAADRHdAAAA0BDRDQAAAA35/wPs3QkYNFO2MwAAAABJRU5ErkJggg==",
276 | "text/plain": [
277 | ""
278 | ]
279 | },
280 | "metadata": {},
281 | "output_type": "display_data"
282 | }
283 | ],
284 | "source": [
285 | "fig, ax = plt.subplots(2,4,figsize=(10,6))\n",
286 | "for i in range(4):\n",
287 | " ax[0,i].imshow(input[i].cpu().detach().numpy(), cmap='magma')\n",
288 | " ax[1,i].imshow(output_tot[i].cpu().detach().numpy(), cmap='magma')\n",
289 | " ax[0,i].set_title(f\"Distance {distances[i]} cm\")\n",
290 | "[a.axis('off') for a in ax.ravel()]\n",
291 | "fig.tight_layout()"
292 | ]
293 | },
294 | {
295 | "cell_type": "markdown",
296 | "metadata": {},
297 | "source": [
298 | "In general, we can combine operators together in complicated ways"
299 | ]
300 | },
301 | {
302 | "cell_type": "code",
303 | "execution_count": 13,
304 | "metadata": {},
305 | "outputs": [],
306 | "source": [
307 | "combined_operator = (ngon_operator + scint_operator) * ngon_operator + scint_operator\n",
308 | "\n",
309 | "output_weird= spect_psf_operator(input, xv_k, yv_k, distances, normalize=True)"
310 | ]
311 | },
312 | {
313 | "cell_type": "markdown",
314 | "metadata": {},
315 | "source": [
316 | "Let's check normalization. Since there were 2 squares each with $10 \\times 10 $ pixels of magnitude 1, the blurred image should sum to 200 at each plane:"
317 | ]
318 | },
319 | {
320 | "cell_type": "code",
321 | "execution_count": 14,
322 | "metadata": {},
323 | "outputs": [
324 | {
325 | "data": {
326 | "text/plain": [
327 | "tensor([200.0000, 200.0001, 200.0000, 200.0000], device='cuda:0',\n",
328 | " grad_fn=)"
329 | ]
330 | },
331 | "execution_count": 14,
332 | "metadata": {},
333 | "output_type": "execute_result"
334 | }
335 | ],
336 | "source": [
337 | "output_weird.sum(dim=(1,2))"
338 | ]
339 | },
340 | {
341 | "cell_type": "markdown",
342 | "metadata": {},
343 | "source": [
344 | "* Normalization is carefully tracked for combined operators"
345 | ]
346 | }
347 | ],
348 | "metadata": {
349 | "kernelspec": {
350 | "display_name": "python39_pytomo",
351 | "language": "python",
352 | "name": "python3"
353 | },
354 | "language_info": {
355 | "codemirror_mode": {
356 | "name": "ipython",
357 | "version": 3
358 | },
359 | "file_extension": ".py",
360 | "mimetype": "text/x-python",
361 | "name": "python",
362 | "nbconvert_exporter": "python",
363 | "pygments_lexer": "ipython3",
364 | "version": "3.11.6"
365 | }
366 | },
367 | "nbformat": 4,
368 | "nbformat_minor": 2
369 | }
370 |
--------------------------------------------------------------------------------
/docs/source/tutorials/sample_simind_script/positions.txt:
--------------------------------------------------------------------------------
1 | 0
2 | 0
3 | 0
4 | 0
5 | 0
6 | 0
7 | 0
8 | 0
9 | 0
10 | 0
11 | 0
12 | 0
13 | 0
14 | 0
15 | 0
16 | 0
17 | 0
18 | 0
19 | 0
20 | 0
21 | 0
22 | 0
23 | 0
24 | 0
25 | 0
26 | 0
27 | 0
28 | 0
29 | 0
30 | 0
31 | 0
32 | 0
33 | 0
34 | 0
35 | 0
36 | 0
37 | 0
38 | 0
39 | 0
40 | 0
41 | 0
42 | 0
43 | 0
44 | 0
45 | 0
46 | 0
47 | 0
48 | 0
49 | 0
50 | 0
51 | 0
52 | 0
53 | 0
54 | 0
55 | 0
56 | 0
57 | 0
58 | 0
59 | 0
60 | 0
61 | 0
62 | 0
63 | 0
64 | 0
65 | 0
66 | 0
67 | 0
68 | 0
69 | 0
70 | 0
71 | 0
72 | 0
73 | 0
74 | 0
75 | 0
76 | 0
77 | 0
78 | 0
79 | 0
80 | 0
81 | 0
82 | 0
83 | 0
84 | 0
85 | 0
86 | 0
87 | 0
88 | 0
89 | 0
90 | 0
91 | 0
92 | 0
93 | 0
94 | 0
95 | 0
96 | 0
97 | 0
98 | 0
99 | 0
100 | 0
101 | 0
102 | 0
103 | 0
104 | 0
105 | 0
106 | 0
107 | 0
108 | 0
109 | 0
110 | 0
111 | 0
112 | 0
113 | 0
114 | 0
115 | 0
116 | 0
117 | 0
118 | 0
119 | 0
120 | 0
121 | 0
122 | 0
123 | 0
124 | 0
125 | 0
126 | 0
127 | 0
128 | 0
129 | 0
130 | 0
131 | 0
132 | 0
133 | 0
134 | 0
135 | 0
136 | 0
137 | 0
138 | 0
139 | 0
140 | 0
141 | 0
142 | 0
143 | 0
144 | 0
145 | 0
146 | 0
147 | 0
148 | 0
149 | 0
150 | 0
151 | 0
152 | 0
153 | 0
154 | 0
155 | 0
156 | 0
157 | 0
158 | 0
159 | 0
160 | 0
161 | 0
162 | 0
163 | 0
164 | 0
165 | 0
166 | 0
167 | 0
168 | 0
169 | 0
170 | 0
171 | 0
172 | 0
173 | 0
174 | 0
175 | 0
176 | 0
177 | 0
178 | 0
179 | 0
180 | 0
181 | 0
182 | 0
183 | 0
184 | 0
185 | 0
186 | 0
187 | 0
188 | 0
189 | 0
190 | 0
191 | 0
192 | 0
193 | 0
194 | 0
195 | 0
196 | 0
197 | 0
198 | 0
199 | 0
200 | 0
201 | 0
202 | 0
203 | 0
204 | 0
205 | 0
206 | 0
207 | 0
208 | 0
209 | 0
210 | 0
211 | 0
212 | 0
213 | 0
214 | 0
215 | 0
216 | 0
217 | 0
218 | 0
219 | 0
220 | 0
221 | 0
222 | 0
223 | 0
224 | 0
225 | 0
226 | 0
227 | 0
228 | 0
229 | 0
230 | 0
231 | 0
232 | 0
233 | 0
234 | 0
235 | 0
236 | 0
237 | 0
238 | 0
239 | 0
240 | 0
241 | 0
242 | 0
243 | 0
244 | 0
245 | 0
246 | 0
247 | 0
248 | 0
249 | 0
250 | 0
251 | 0
252 | 0
253 | 0
254 | 0
255 | 0
256 | 0
257 | 0
258 | 0
259 | 0
260 | 0
261 | 0
262 | 0
263 | 0
264 | 0
265 | 0
266 | 0
267 | 0
268 | 0
269 | 0
270 | 0
271 | 0
272 | 0
273 | 0
274 | 0
275 | 0
276 | 0
277 | 0
278 | 0
279 | 0
280 | 0
281 | 0
282 | 0
283 | 0
284 | 0
285 | 0
286 | 0
287 | 0
288 | 0
289 | 0
290 | 0
291 | 0
292 | 0
293 | 0
294 | 0
295 | 0
296 | 0
297 | 0
298 | 0
299 | 0
300 | 0
301 | 0
302 | 0
303 | 0
304 | 0
305 | 0
306 | 0
307 | 0
308 | 0
309 | 0
310 | 0
311 | 0
312 | 0
313 | 0
314 | 0
315 | 0
316 | 0
317 | 0
318 | 0
319 | 0
320 | 0
321 | 0
322 | 0
323 | 0
324 | 0
325 | 0
326 | 0
327 | 0
328 | 0
329 | 0
330 | 0
331 | 0
332 | 0
333 | 0
334 | 0
335 | 0
336 | 0
337 | 0
338 | 0
339 | 0
340 | 0
341 | 0
342 | 0
343 | 0
344 | 0
345 | 0
346 | 0
347 | 0
348 | 0
349 | 0
350 | 0
351 | 0
352 | 0
353 | 0
354 | 0
355 | 0
356 | 0
357 | 0
358 | 0
359 | 0
360 | 0
361 | 0
362 | 0
363 | 0
364 | 0
365 | 0
366 | 0
367 | 0
368 | 0
369 | 0
370 | 0
371 | 0
372 | 0
373 | 0
374 | 0
375 | 0
376 | 0
377 | 0
378 | 0
379 | 0
380 | 0
381 | 0
382 | 0
383 | 0
384 | 0
385 | 0
386 | 0
387 | 0
388 | 0
389 | 0
390 | 0
391 | 0
392 | 0
393 | 0
394 | 0
395 | 0
396 | 0
397 | 0
398 | 0
399 | 0
400 | 0
401 | 0
402 | 0
403 | 0
404 | 0
405 | 0
406 | 0
407 | 0
408 | 0
409 | 0
410 | 0
411 | 0
412 | 0
413 | 0
414 | 0
415 | 0
416 | 0
417 | 0
418 | 0
419 | 0
420 | 0
421 | 0
422 | 0
423 | 0
424 | 0
425 | 0
426 | 0
427 | 0
428 | 0
429 | 0
430 | 0
431 | 0
432 | 0
433 | 0
434 | 0
435 | 0
436 | 0
437 | 0
438 | 0
439 | 0
440 | 0
441 | 0
442 | 0
443 | 0
444 | 0
445 | 0
446 | 0
447 | 0
448 | 0
449 | 0
450 | 0
451 | 0
452 | 0
453 | 0
454 | 0
455 | 0
456 | 0
457 | 0
458 | 0
459 | 0
460 | 0
461 | 0
462 | 0
463 | 0
464 | 0
465 | 0
466 | 0
467 | 0
468 | 0
469 | 0
470 | 0
471 | 0
472 | 0
473 | 0
474 | 0
475 | 0
476 | 0
477 | 0
478 | 0
479 | 0
480 | 0
481 | 0
482 | 0
483 | 0
484 | 0
485 | 0
486 | 0
487 | 0
488 | 0
489 | 0
490 | 0
491 | 0
492 | 0
493 | 0
494 | 0
495 | 0
496 | 0
497 | 0
498 | 0
499 | 0
500 | 0
501 | 0
502 | 0
503 | 0
504 | 0
505 | 0
506 | 0
507 | 0
508 | 0
509 | 0
510 | 0
511 | 0
512 | 0
513 | 0
514 | 0
515 | 0
516 | 0
517 | 0
518 | 0
519 | 0
520 | 0
521 | 0
522 | 0
523 | 0
524 | 0
525 | 0
526 | 0
527 | 0
528 | 0
529 | 0
530 | 0
531 | 0
532 | 0
533 | 0
534 | 0
535 | 0
536 | 0
537 | 0
538 | 0
539 | 0.04
540 | 0.09
541 | 0.15
542 | 0.2
543 | 0.25
544 | 0.31
545 | 0.36
546 | 0.41
547 | 0.47
548 | 0.52
549 | 0.57
550 | 0.63
551 | 0.68
552 | 0.73
553 | 0.79
554 | 0.84
555 | 0.89
556 | 0.95
557 | 1.0
558 | 1.05
559 | 1.11
560 | 1.16
561 | 1.21
562 | 1.27
563 | 1.32
564 | 1.37
565 | 1.43
566 | 1.48
567 | 1.53
568 | 1.59
569 | 1.64
570 | 1.69
571 | 1.75
572 | 1.8
573 | 1.85
574 | 1.91
575 | 1.96
576 | 2.01
577 | 2.07
578 | 2.12
579 | 2.17
580 | 2.23
581 | 2.28
582 | 2.33
583 | 2.39
584 | 2.44
585 | 2.49
586 | 2.55
587 | 2.6
588 | 2.65
589 | 2.71
590 | 2.76
591 | 2.81
592 | 2.87
593 | 2.92
594 | 2.97
595 | 3.03
596 | 3.08
597 | 3.13
598 | 3.19
599 | 3.24
600 | 3.29
601 | 3.35
602 | 3.4
603 | 3.45
604 | 3.51
605 | 3.56
606 | 3.61
607 | 3.67
608 | 3.72
609 | 3.77
610 | 3.83
611 | 3.88
612 | 3.93
613 | 3.99
614 | 4.04
615 | 4.09
616 | 4.15
617 | 4.2
618 | 4.25
619 | 4.31
620 | 4.36
621 | 4.41
622 | 4.47
623 | 4.52
624 | 4.57
625 | 4.63
626 | 4.68
627 | 4.73
628 | 4.79
629 | 4.84
630 | 4.89
631 | 4.95
632 | 5.0
633 | 5.05
634 | 5.11
635 | 5.16
636 | 5.21
637 | 5.27
638 | 5.32
639 | 5.37
640 | 5.43
641 | 5.48
642 | 5.53
643 | 5.59
644 | 5.64
645 | 5.69
646 | 5.75
647 | 5.8
648 | 5.85
649 | 5.91
650 | 5.96
651 | 6.01
652 | 6.07
653 | 6.12
654 | 6.17
655 | 6.23
656 | 6.28
657 | 6.33
658 | 6.39
659 | 6.44
660 | 6.49
661 | 6.55
662 | 6.6
663 | 6.65
664 | 6.71
665 | 6.76
666 | 6.81
667 | 6.87
668 | 6.92
669 | 6.97
670 | 7.03
671 | 7.08
672 | 7.13
673 | 7.19
674 | 7.24
675 | 7.29
676 | 7.35
677 | 7.4
678 | 7.45
679 | 7.51
680 | 7.56
681 | 7.61
682 | 7.67
683 | 7.72
684 | 7.77
685 | 7.83
686 | 7.88
687 | 7.93
688 | 7.99
689 | 8.04
690 | 8.09
691 | 8.15
692 | 8.2
693 | 8.25
694 | 8.31
695 | 8.36
696 | 8.41
697 | 8.47
698 | 8.52
699 | 8.57
700 | 8.63
701 | 8.68
702 | 8.73
703 | 8.79
704 | 8.84
705 | 8.89
706 | 8.95
707 | 9.0
708 | 9.05
709 | 9.11
710 | 9.16
711 | 9.21
712 | 9.27
713 | 9.32
714 | 9.37
715 | 9.43
716 | 9.48
717 | 9.53
718 | 9.59
719 | 9.64
720 | 9.69
721 | 9.75
722 | 9.8
723 | 9.85
724 | 9.91
725 | 9.96
726 | 10.01
727 | 10.07
728 | 10.12
729 | 10.17
730 | 10.23
731 | 10.28
732 | 10.33
733 | 10.39
734 | 10.44
735 | 10.49
736 | 10.55
737 | 10.6
738 | 10.65
739 | 10.71
740 | 10.76
741 | 10.81
742 | 10.87
743 | 10.92
744 | 10.97
745 | 11.03
746 | 11.08
747 | 11.13
748 | 11.19
749 | 11.24
750 | 11.29
751 | 11.35
752 | 11.4
753 | 11.45
754 | 11.51
755 | 11.56
756 | 11.61
757 | 11.67
758 | 11.72
759 | 11.77
760 | 11.83
761 | 11.88
762 | 11.93
763 | 11.99
764 | 12.04
765 | 12.09
766 | 12.15
767 | 12.2
768 | 12.25
769 | 12.31
770 | 12.36
771 | 12.41
772 | 12.47
773 | 12.52
774 | 12.57
775 | 12.63
776 | 12.68
777 | 12.73
778 | 12.79
779 | 12.84
780 | 12.89
781 | 12.95
782 | 13.0
783 | 13.05
784 | 13.11
785 | 13.16
786 | 13.21
787 | 13.27
788 | 13.32
789 | 13.37
790 | 13.43
791 | 13.48
792 | 13.53
793 | 13.59
794 | 13.64
795 | 13.69
796 | 13.75
797 | 13.8
798 | 13.85
799 | 13.91
800 | 13.96
801 | 14.01
802 | 14.07
803 | 14.12
804 | 14.17
805 | 14.23
806 | 14.28
807 | 14.33
808 | 14.39
809 | 14.44
810 | 14.49
811 | 14.55
812 | 14.6
813 | 14.65
814 | 14.71
815 | 14.76
816 | 14.81
817 | 14.87
818 | 14.92
819 | 14.97
820 | 15.03
821 | 15.08
822 | 15.13
823 | 15.19
824 | 15.24
825 | 15.29
826 | 15.35
827 | 15.4
828 | 15.45
829 | 15.51
830 | 15.56
831 | 15.61
832 | 15.67
833 | 15.72
834 | 15.77
835 | 15.83
836 | 15.88
837 | 15.93
838 | 15.99
839 | 16.04
840 | 16.09
841 | 16.15
842 | 16.2
843 | 16.25
844 | 16.31
845 | 16.36
846 | 16.41
847 | 16.47
848 | 16.52
849 | 16.57
850 | 16.63
851 | 16.68
852 | 16.73
853 | 16.79
854 | 16.84
855 | 16.89
856 | 16.95
857 | 17.0
858 | 17.05
859 | 17.11
860 | 17.16
861 | 17.21
862 | 17.27
863 | 17.32
864 | 17.37
865 | 17.43
866 | 17.48
867 | 17.53
868 | 17.59
869 | 17.64
870 | 17.69
871 | 17.75
872 | 17.8
873 | 17.85
874 | 17.91
875 | 17.96
876 | 18.01
877 | 18.07
878 | 18.12
879 | 18.17
880 | 18.23
881 | 18.28
882 | 18.33
883 | 18.39
884 | 18.44
885 | 18.49
886 | 18.55
887 | 18.6
888 | 18.65
889 | 18.71
890 | 18.76
891 | 18.81
892 | 18.87
893 | 18.92
894 | 18.97
895 | 19.03
896 | 19.08
897 | 19.13
898 | 19.19
899 | 19.24
900 | 19.29
901 | 19.35
902 | 19.4
903 | 19.45
904 | 19.51
905 | 19.56
906 | 19.61
907 | 19.67
908 | 19.72
909 | 19.77
910 | 19.83
911 | 19.88
912 | 19.93
913 | 19.99
914 | 20.04
915 | 20.09
916 | 20.15
917 | 20.2
918 | 20.25
919 | 20.31
920 | 20.36
921 | 20.41
922 | 20.47
923 | 20.52
924 | 20.57
925 | 20.63
926 | 20.68
927 | 20.73
928 | 20.79
929 | 20.84
930 | 20.89
931 | 20.95
932 | 21.0
933 | 21.05
934 | 21.11
935 | 21.16
936 | 21.21
937 | 21.27
938 | 21.32
939 | 21.37
940 | 21.43
941 | 21.48
942 | 21.53
943 | 21.59
944 | 21.64
945 | 21.69
946 | 21.75
947 | 21.8
948 | 21.85
949 | 21.91
950 | 21.96
951 | 22.01
952 | 22.07
953 | 22.12
954 | 22.17
955 | 22.23
956 | 22.28
957 | 22.33
958 | 22.39
959 | 22.44
960 | 22.49
961 | 22.55
962 | 22.6
963 | 22.65
964 | 22.71
965 | 22.76
966 | 22.81
967 | 22.87
968 | 22.92
969 | 22.97
970 | 23.03
971 | 23.08
972 | 23.13
973 | 23.19
974 | 23.24
975 | 23.29
976 | 23.35
977 | 23.4
978 | 23.45
979 | 23.51
980 | 23.56
981 | 23.61
982 | 23.67
983 | 23.72
984 | 23.77
985 | 23.83
986 | 23.88
987 | 23.93
988 | 23.99
989 | 24.04
990 | 24.09
991 | 24.15
992 | 24.2
993 | 24.25
994 | 24.31
995 | 24.36
996 | 24.41
997 | 24.47
998 | 24.52
999 | 24.57
1000 | 24.63
1001 | 24.68
1002 | 24.73
1003 | 24.79
1004 | 24.84
1005 | 24.89
1006 | 24.95
1007 | 25.0
1008 | 25.05
1009 | 25.11
1010 | 25.16
1011 | 25.21
1012 | 25.27
1013 | 25.32
1014 | 25.37
1015 | 25.43
1016 | 25.48
1017 | 25.53
1018 | 25.59
1019 | 25.64
1020 | 25.69
1021 | 25.75
1022 | 25.8
1023 | 25.85
1024 | 25.91
1025 | 25.96
1026 | 26.01
1027 | 26.07
1028 | 26.12
1029 | 26.17
1030 | 26.23
1031 | 26.28
1032 | 26.33
1033 | 26.39
1034 | 26.44
1035 | 26.49
1036 | 26.55
1037 | 26.6
1038 | 26.65
1039 | 26.71
1040 | 26.76
1041 | 26.81
1042 | 26.87
1043 | 26.92
1044 | 26.97
1045 | 27.03
1046 | 27.08
1047 | 27.13
1048 | 27.19
1049 | 27.24
1050 | 27.29
1051 | 27.35
1052 | 27.4
1053 | 27.45
1054 | 27.51
1055 | 27.56
1056 | 27.61
1057 | 27.67
1058 | 27.72
1059 | 27.77
1060 | 27.83
1061 | 27.88
1062 | 27.93
1063 | 27.99
1064 | 28.04
1065 | 28.09
1066 | 28.15
1067 | 28.2
1068 | 28.25
1069 | 28.31
1070 | 28.36
1071 | 28.41
1072 | 28.47
1073 | 28.52
1074 | 28.57
1075 | 28.63
1076 | 28.68
1077 | 28.73
1078 | 28.79
1079 | 28.84
1080 | 28.89
1081 | 28.95
1082 | 29.0
1083 | 29.05
1084 | 29.11
1085 | 29.16
1086 | 29.21
1087 | 29.27
1088 | 29.32
1089 | 29.37
1090 | 29.43
1091 | 29.48
1092 | 29.53
1093 | 29.59
1094 | 29.64
1095 | 29.69
1096 | 29.75
1097 | 29.8
1098 | 29.85
1099 | 29.91
1100 | 29.96
1101 | 30.01
1102 | 30.07
1103 | 30.12
1104 | 30.17
1105 | 30.23
1106 | 30.28
1107 | 30.33
1108 | 30.39
1109 | 30.44
1110 | 30.49
1111 | 30.55
1112 | 30.6
1113 | 30.65
1114 | 30.71
1115 | 30.76
1116 | 30.81
1117 | 30.87
1118 | 30.92
1119 | 30.97
1120 | 31.03
1121 | 31.08
1122 | 31.13
1123 | 31.19
1124 | 31.24
1125 | 31.29
1126 | 31.35
1127 | 31.4
1128 | 31.45
1129 | 31.51
1130 | 31.56
1131 | 31.61
1132 | 31.67
1133 | 31.72
1134 | 31.77
1135 | 31.83
1136 | 31.88
1137 | 31.93
1138 | 31.99
1139 | 32.04
1140 | 32.09
1141 | 32.15
1142 | 32.2
1143 | 32.25
1144 | 32.31
1145 | 32.36
1146 | 32.41
1147 | 32.47
1148 | 32.52
1149 | 32.57
1150 | 32.63
1151 | 32.68
1152 | 32.73
1153 | 32.79
1154 | 32.84
1155 | 32.89
1156 | 32.95
1157 | 33.0
1158 | 33.05
1159 | 33.11
1160 | 33.16
1161 | 33.21
1162 | 33.27
1163 | 33.32
1164 | 33.37
1165 | 33.43
1166 | 33.48
1167 | 33.53
1168 | 33.59
1169 | 33.64
1170 | 33.69
1171 | 33.75
1172 | 33.8
1173 | 33.85
1174 | 33.91
1175 | 33.96
1176 | 34.01
1177 | 34.07
1178 | 34.12
1179 | 34.17
1180 | 34.23
1181 | 34.28
1182 | 34.33
1183 | 34.39
1184 | 34.44
1185 | 34.49
1186 | 34.55
1187 | 34.6
1188 | 34.65
1189 | 34.71
1190 | 34.76
1191 | 34.81
1192 | 34.87
1193 | 34.92
1194 | 34.97
1195 | 35.03
1196 | 35.08
1197 | 35.13
1198 | 35.19
1199 | 35.24
1200 | 35.29
1201 | 35.35
1202 | 35.4
1203 | 35.45
1204 | 35.51
1205 | 35.56
1206 | 35.61
1207 | 35.67
1208 | 35.72
1209 | 35.77
1210 | 35.83
1211 | 35.88
1212 | 35.93
1213 | 35.99
1214 | 36.04
1215 | 36.09
1216 | 36.15
1217 | 36.2
1218 | 36.25
1219 | 36.31
1220 | 36.36
1221 | 36.41
1222 | 36.47
1223 | 36.52
1224 | 36.57
1225 | 36.63
1226 | 36.68
1227 | 36.73
1228 | 36.79
1229 | 36.84
1230 | 36.89
1231 | 36.95
1232 | 37.0
1233 | 37.05
1234 | 37.11
1235 | 37.16
1236 | 37.21
1237 | 37.27
1238 | 37.32
1239 | 37.37
1240 | 37.43
1241 | 37.48
1242 | 37.53
1243 | 37.59
1244 | 37.64
1245 | 37.69
1246 | 37.75
1247 | 37.8
1248 | 37.85
1249 | 37.91
1250 | 37.96
1251 | 38.01
1252 | 38.07
1253 | 38.12
1254 | 38.17
1255 | 38.23
1256 | 38.28
1257 | 38.33
1258 | 38.39
1259 | 38.44
1260 | 38.49
1261 | 38.55
1262 | 38.6
1263 | 38.65
1264 | 38.71
1265 | 38.76
1266 | 38.81
1267 | 38.87
1268 | 38.92
1269 | 38.97
1270 | 39.03
1271 | 39.08
1272 | 39.13
1273 | 39.19
1274 | 39.24
1275 | 39.29
1276 | 39.35
1277 | 39.4
1278 | 39.45
1279 | 39.51
1280 | 39.56
1281 | 39.61
1282 | 39.67
1283 | 39.72
1284 | 39.77
1285 | 39.83
1286 | 39.88
1287 | 39.93
1288 | 39.99
1289 | 40.04
1290 | 40.09
1291 | 40.15
1292 | 40.2
1293 | 40.25
1294 | 40.31
1295 | 40.36
1296 | 40.41
1297 | 40.47
1298 | 40.52
1299 | 40.57
1300 | 40.63
1301 | 40.68
1302 | 40.73
1303 | 40.79
1304 | 40.84
1305 | 40.89
1306 | 40.95
1307 | 41.0
1308 | 41.05
1309 | 41.11
1310 | 41.16
1311 | 41.21
1312 | 41.27
1313 | 41.32
1314 | 41.37
1315 | 41.43
1316 | 41.48
1317 | 41.53
1318 | 41.59
1319 | 41.64
1320 | 41.69
1321 | 41.75
1322 | 41.8
1323 | 41.85
1324 | 41.91
1325 | 41.96
1326 | 42.01
1327 | 42.07
1328 | 42.12
1329 | 42.17
1330 | 42.23
1331 | 42.28
1332 | 42.33
1333 | 42.39
1334 | 42.44
1335 | 42.49
1336 | 42.55
1337 | 42.6
1338 | 42.65
1339 | 42.71
1340 | 42.76
1341 | 42.81
1342 | 42.87
1343 | 42.92
1344 | 42.97
1345 | 43.03
1346 | 43.08
1347 | 43.13
1348 | 43.19
1349 | 43.24
1350 | 43.29
1351 | 43.35
1352 | 43.4
1353 | 43.45
1354 | 43.51
1355 | 43.56
1356 | 43.61
1357 | 43.67
1358 | 43.72
1359 | 43.77
1360 | 43.83
1361 | 43.88
1362 | 43.93
1363 | 43.99
1364 | 44.04
1365 | 44.09
1366 | 44.15
1367 | 44.2
1368 | 44.25
1369 | 44.31
1370 | 44.36
1371 | 44.41
1372 | 44.47
1373 | 44.52
1374 | 44.57
1375 | 44.63
1376 | 44.68
1377 | 44.73
1378 | 44.79
1379 | 44.84
1380 | 44.89
1381 | 44.95
1382 | 45.0
1383 | 45.05
1384 | 45.11
1385 | 45.16
1386 | 45.21
1387 | 45.27
1388 | 45.32
1389 | 45.37
1390 | 45.43
1391 | 45.48
1392 | 45.53
1393 | 45.59
1394 | 45.64
1395 | 45.69
1396 | 45.75
1397 | 45.8
1398 | 45.85
1399 | 45.91
1400 | 45.96
1401 | 46.01
1402 | 46.07
1403 | 46.12
1404 | 46.17
1405 | 46.23
1406 | 46.28
1407 | 46.33
1408 | 46.39
1409 | 46.44
1410 | 46.49
1411 | 46.55
1412 | 46.6
1413 | 46.65
1414 | 46.71
1415 | 46.76
1416 | 46.81
1417 | 46.87
1418 | 46.92
1419 | 46.97
1420 | 47.03
1421 | 47.08
1422 | 47.13
1423 | 47.19
1424 | 47.24
1425 | 47.29
1426 | 47.35
1427 | 47.4
1428 | 47.45
1429 | 47.51
1430 | 47.56
1431 | 47.61
1432 | 47.67
1433 | 47.72
1434 | 47.77
1435 | 47.83
1436 | 47.88
1437 | 47.93
1438 | 47.99
1439 | 48.04
1440 | 48.09
1441 | 48.15
1442 | 48.2
1443 | 48.25
1444 | 48.31
1445 | 48.36
1446 | 48.41
1447 | 48.47
1448 | 48.52
1449 | 48.57
1450 | 48.63
1451 | 48.68
1452 | 48.73
1453 | 48.79
1454 | 48.84
1455 | 48.89
1456 | 48.95
1457 | 49.0
1458 | 49.05
1459 | 49.11
1460 | 49.16
1461 | 49.21
1462 | 49.27
1463 | 49.32
1464 | 49.37
1465 | 49.43
1466 | 49.48
1467 | 49.53
1468 | 49.59
1469 | 49.64
1470 | 49.69
1471 | 49.75
1472 | 49.8
1473 | 49.85
1474 | 49.91
1475 | 49.96
1476 | 50.01
1477 | 50.07
1478 | 50.12
1479 | 50.17
1480 | 50.23
1481 | 50.28
1482 | 50.33
1483 | 50.39
1484 | 50.44
1485 | 50.49
1486 | 50.55
1487 | 50.6
1488 | 50.65
1489 | 50.71
1490 | 50.76
1491 | 50.81
1492 | 50.87
1493 | 50.92
1494 | 50.97
1495 | 51.03
1496 | 51.08
1497 | 51.13
1498 | 51.19
1499 | 51.24
1500 | 51.29
1501 | 51.35
1502 | 51.4
1503 | 51.45
1504 | 51.51
1505 | 51.56
1506 | 51.61
1507 | 51.67
1508 | 51.72
1509 | 51.77
1510 | 51.83
1511 | 51.88
1512 | 51.93
1513 | 51.99
1514 | 52.04
1515 | 52.09
1516 | 52.15
1517 | 52.2
1518 | 52.25
1519 | 52.31
1520 | 52.36
1521 | 52.41
1522 | 52.47
1523 | 52.52
1524 | 52.57
1525 | 52.63
1526 | 52.68
1527 | 52.73
1528 | 52.79
1529 | 52.84
1530 | 52.89
1531 | 52.95
1532 | 53.0
1533 | 53.05
1534 | 53.11
1535 | 53.16
1536 | 53.21
1537 | 53.27
1538 | 53.32
1539 | 53.37
1540 | 53.43
1541 | 53.48
1542 | 53.53
1543 | 53.59
1544 | 53.64
1545 | 53.69
1546 | 53.75
1547 | 53.8
1548 | 53.85
1549 | 53.91
1550 | 53.96
1551 | 54.01
1552 | 54.07
1553 | 54.12
1554 | 54.17
1555 | 54.23
1556 | 54.28
1557 | 54.33
1558 | 54.39
1559 | 54.44
1560 | 54.49
1561 | 54.55
1562 | 54.6
1563 | 54.65
1564 | 54.71
1565 | 54.76
1566 | 54.81
1567 | 54.87
1568 | 54.92
1569 | 54.97
1570 | 55.03
1571 | 55.08
1572 | 55.13
1573 | 55.19
1574 | 55.24
1575 | 55.29
1576 | 55.35
1577 | 55.4
1578 | 55.45
1579 | 55.51
1580 | 55.56
1581 | 55.61
1582 | 55.67
1583 | 55.72
1584 | 55.77
1585 | 55.83
1586 | 55.88
1587 | 55.93
1588 | 55.99
1589 | 56.04
1590 | 56.09
1591 | 56.15
1592 | 56.2
1593 | 56.25
1594 | 56.31
1595 | 56.36
1596 | 56.41
1597 | 56.47
1598 | 56.52
1599 | 56.57
1600 | 56.63
1601 | 56.68
1602 | 56.73
1603 | 56.79
1604 | 56.84
1605 | 56.89
1606 | 56.95
1607 | 57.0
1608 | 57.05
1609 | 57.11
1610 | 57.16
1611 | 57.21
1612 | 57.27
1613 | 57.32
1614 | 57.37
1615 | 57.43
1616 | 57.48
1617 | 57.53
1618 | 57.59
1619 | 57.64
1620 | 57.69
1621 | 57.75
1622 | 57.8
1623 | 57.85
1624 | 57.91
1625 | 57.96
1626 | 58.01
1627 | 58.07
1628 | 58.12
1629 | 58.17
1630 | 58.23
1631 | 58.28
1632 | 58.33
1633 | 58.39
1634 | 58.44
1635 | 58.49
1636 | 58.55
1637 | 58.6
1638 | 58.65
1639 |
--------------------------------------------------------------------------------
/docs/source/tutorials/sample_simind_script/run_multi_lu177.sh:
--------------------------------------------------------------------------------
1 | path=208keV_ME_PSF # signifies "medium energy" collimator
2 |
3 | index_numbers=($(cat "positions.txt"))
4 |
5 | for i in "${!index_numbers[@]}";
6 | do
7 | # 53:1 and 59:1 are for full collimator modeling and random collimator shift. 01:208 means that the photon energy is 208keV. 26:20 is the number of photons, 12:${index_numbers[$i]} gives the radial position, and cc:sy-me is the collimator type.
8 | simind simind point_position${i}/53:1/59:1/01:208/26:20/12:${index_numbers[$i]}/cc:sy-me &
9 | done
10 | wait
11 |
12 | # Create a directory and move the output files to that directory
13 | mkdir $path
14 | mv point_position* $path
--------------------------------------------------------------------------------
/docs/source/tutorials/sample_simind_script/simind128_shift.smc:
--------------------------------------------------------------------------------
1 | SMCV2
2 | An example of a point source simulation
3 | 120 # Basic Change data
4 | 0.20800E+03 0.00000E+00 0.10000E-01 0.10000E-01 0.10000E+02
5 | 0.11000E+02 0.11000E+02 0.61440E+02 0.95200E+00 0.00000E+00
6 | 0.50000E+01 0.15000E+02 0.20000E+00 0.50000E+01 0.50000E+01
7 | 0.24000E+00 0.24000E+00 0.00000E+00 0.30000E+01-0.20000E+02
8 | -0.20000E+02 0.10000E+02 0.38000E+00 0.89100E+00 0.10000E+01
9 | 0.20000E+02 0.10000E+01 0.48000E+00 0.10000E+01 0.00000E+00
10 | 0.50000E+00 0.00000E+00 0.10000E+01 0.10000E+01 0.00000E+00
11 | 0.00000E+00 0.00000E+00 0.50000E+00 0.00000E+00 0.00000E+00
12 | 0.00000E+00 0.10000E+01 0.00000E+00 0.00000E+00 0.10000E+01
13 | 0.40000E+00 0.46188E+00 0.20000E+00 0.57735E+00 0.30000E+00
14 | 0.51962E+00 0.59700E+01 0.00000E+00-0.30000E+01 0.00000E+00
15 | 0.00000E+00-0.99990E+01-0.99990E+01 0.00000E+00 0.00000E+00
16 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
17 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
18 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
19 | 0.12800E+03 0.12800E+03 0.64000E+02 0.64000E+02 0.51200E+03
20 | 0.00000E+00 0.00000E+00 0.00000E+00 0.10000E+01 0.00000E+00
21 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
22 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
23 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
24 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
25 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
26 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
27 | 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00 0.00000E+00
28 | 30 # Simulation flags
29 | TTTTTTFTFFFTFTFFFFFFFFFFFFFFFF
30 | 8 # Text Variables
31 | sy-he
32 | none
33 | none
34 | none
35 | none
36 | none
37 | none
38 | none
39 | 12 # Data files
40 | h2o
41 | bone
42 | al
43 | nai
44 | vox_man
45 | vox_man
46 | pmt
47 | none
48 | none
49 | none
50 | none
51 | none
--------------------------------------------------------------------------------
/docs/source/tutorials/tutorials.rst:
--------------------------------------------------------------------------------
1 | .. _tutorial-index:
2 |
3 | ************************
4 | Tutorials
5 | ************************
6 | The tutorials use data that can be downloaded `here `_.
7 |
8 | .. grid:: 1 2 4 4
9 | :gutter: 2
10 |
11 | .. grid-item-card:: 1: Kernel1D
12 | :link: 1_kernel1d_intro
13 | :link-type: doc
14 | :link-alt: Tutorial 1
15 | :text-align: center
16 |
17 | :material-outlined:`psychology;4em;sd-text-secondary`
18 |
19 | .. grid-item-card:: 2: Kernel2D
20 | :link: 2_kernel2d_intro
21 | :link-type: doc
22 | :link-alt: Tutorial 2
23 | :text-align: center
24 |
25 | :material-outlined:`psychology;4em;sd-text-secondary`
26 |
27 | .. grid-item-card:: 3: Operators
28 | :link: 3_operators
29 | :link-type: doc
30 | :link-alt: Tutorial 3
31 | :text-align: center
32 |
33 | :material-outlined:`psychology;4em;sd-text-secondary`
34 |
35 | .. grid-item-card:: 4: Optimization Intro
36 | :link: 4_optimization_intro
37 | :link-type: doc
38 | :link-alt: Tutorial 4
39 | :text-align: center
40 |
41 | :material-outlined:`psychology;4em;sd-text-secondary`
42 |
43 | .. grid-item-card:: 5: Optimization Ac225
44 | :link: 5_optimization_ac225
45 | :link-type: doc
46 | :link-alt: Tutorial 5
47 | :text-align: center
48 |
49 | :material-outlined:`psychology;4em;sd-text-secondary`
50 |
51 | .. grid-item-card:: 6: NearestKernel Ac225
52 | :link: 6_nearestkernel_ac225
53 | :link-type: doc
54 | :link-alt: Tutorial 6
55 | :text-align: center
56 |
57 | :material-outlined:`psychology;4em;sd-text-secondary`
58 |
59 | .. grid-item-card:: 7: Ac225 Reconstruction
60 | :link: 7_pytomography_ac225recon
61 | :link-type: doc
62 | :link-alt: Tutorial 7
63 | :text-align: center
64 |
65 | :material-outlined:`psychology;4em;sd-text-secondary`
66 |
67 | .. grid-item-card:: 8: Lu177 Low Energy Collimator
68 | :link: 8_pytomography_lu177_le_recon
69 | :link-type: doc
70 | :link-alt: Tutorial 8
71 | :text-align: center
72 |
73 | :material-outlined:`psychology;4em;sd-text-secondary`
74 |
75 | .. grid-item-card:: 9: Lu177 Medium Energy Collimator
76 | :link: 9_lu177_me_modeling_and_recon
77 | :link-type: doc
78 | :link-alt: Tutorial 9
79 | :text-align: center
80 |
81 | :material-outlined:`psychology;4em;sd-text-secondary`
82 |
83 | .. toctree::
84 | :maxdepth: 1
85 | :hidden:
86 |
87 | 1_kernel1d_intro
88 | 2_kernel2d_intro
89 | 3_operators
90 | 4_optimization_intro
91 | 5_optimization_ac225
92 | 6_nearestkernel_ac225
93 | 7_pytomography_ac225recon
94 | 8_pytomography_lu177_le_recon
95 | 9_lu177_me_modeling_and_recon
--------------------------------------------------------------------------------
/figures/ac_recon_modeling.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PyTomography/SPECTPSFToolbox/47ab089c9838342c418009e01590e104c5dfdc88/figures/ac_recon_modeling.png
--------------------------------------------------------------------------------
/paper/compile.sh:
--------------------------------------------------------------------------------
1 | docker run --rm \
2 | --volume $PWD:/data \
3 | --user $(id -u):$(id -g) \
4 | --env JOURNAL=joss \
5 | openjournals/inara
--------------------------------------------------------------------------------
/paper/fig1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PyTomography/SPECTPSFToolbox/47ab089c9838342c418009e01590e104c5dfdc88/paper/fig1.png
--------------------------------------------------------------------------------
/paper/paper.bib:
--------------------------------------------------------------------------------
1 | @ARTICLE{osem,
2 | author={Hudson, H.M. and Larkin, R.S.},
3 | journal={IEEE Transactions on Medical Imaging},
4 | title={Accelerated image reconstruction using ordered subsets of projection data},
5 | year={1994},
6 | volume={13},
7 | number={4},
8 | pages={601-609},
9 | doi={10.1109/42.363108}}
10 |
11 | @ARTICLE{OSL,
12 | author={Green, P.J.},
13 | journal={IEEE Transactions on Medical Imaging},
14 | title={Bayesian reconstructions from emission tomography data using a modified EM algorithm},
15 | year={1990},
16 | volume={9},
17 | number={1},
18 | pages={84-93},
19 | doi={10.1109/42.52985}}
20 |
21 | @article{simind,
22 | title = {A Monte Carlo program for the simulation of scintillation camera characteristics},
23 | journal = {Computer Methods and Programs in Biomedicine},
24 | volume = {29},
25 | number = {4},
26 | pages = {257-272},
27 | year = {1989},
28 | issn = {0169-2607},
29 | doi = {10.1016/0169-2607(89)90111-9},
30 | url = {https://www.sciencedirect.com/science/article/pii/0169260789901119},
31 | author = {Michael Ljungberg and Sven-Erik Strand},
32 | keywords = {Monte Carlo, SPECT, Scatter simulation, Photon transport, Pile-up, Imaging, Spectrum, Scintillation camera},
33 | abstract = {There is a need for mathematical modelling for the evaluation of important parameters for photon imaging systems. A Monte Carlo program which simulates medical imaging nuclear detectors has been developed. Different materials can be chosen for the detector, a cover and a phantom. Cylindrical, spherical, rectangular and more complex phantom and source shapes can be simulated. Photoelectric, incoherent, coherent interactions and pair production are simulated. Different detector parameters, e.g. the energy pulse-height distribution and pulse pile-up due to finite decay time of the scintillation light emission, can be calculated. An energy resolution of the system is simulated by convolving the energy imparted with an energy-dependent Gaussian function. An image matrix of the centroid of the events in the detector can be simulated. Simulation of different collimators permits studies on spatial resolution and sensitivity. Comparisons of our results with experimental data and other published results have shown good agreement. The usefulness of the Monte Carlo code for the accurate simulation of important parameters in scintillation camera systems, stationary as well as SPECT (single-photon emission computed tomography) systems, has been demonstrated.}
34 | }
35 |
36 | @ARTICLE{RDP,
37 | author={Nuyts, J. and Beque, D. and Dupont, P. and Mortelmans, L.},
38 | journal={IEEE Transactions on Nuclear Science},
39 | title={A concave prior penalizing relative differences for maximum-a-posteriori reconstruction in emission tomography},
40 | year={2002},
41 | volume={49},
42 | number={1},
43 | pages={56-60},
44 | doi={10.1109/TNS.2002.998681}}
45 |
46 | @INPROCEEDINGS{STIR,
47 | author={Thielemans, Kris and Mustafovic, Sanida and Tsoumpas, Charalampos},
48 | booktitle={2006 IEEE Nuclear Science Symposium Conference Record},
49 | title={STIR: Software for Tomographic Image Reconstruction Release 2},
50 | year={2006},
51 | volume={4},
52 | number={},
53 | pages={2174-2176},
54 | doi={10.1109/NSSMIC.2006.354345}}
55 |
56 | @article{Ashrafinia_2017,
57 | doi = {10.1088/1361-6560/aa6911},
58 | url = {https://dx.doi.org/10.1088/1361-6560/aa6911},
59 | year = {2017},
60 | month = {may},
61 | publisher = {IOP Publishing},
62 | volume = {62},
63 | number = {12},
64 | pages = {5149},
65 | author = {Saeed Ashrafinia and Hassan Mohy-ud-Din and Nicolas A Karakatsanis and Abhinav K Jha and Michael E Casey and Dan J Kadrmas and Arman Rahmim},
66 | title = {Generalized PSF modeling for optimized quantitation in PET imaging},
67 | journal = {Physics in Medicine & Biology},
68 | }
69 |
70 | @Article{hermes_spect_ap,
71 | author={Vuohijoki, Hanna E.
72 | and Constable, Christopher J.
73 | and Sohlberg, Antti O.},
74 | title={Anatomically guided reconstruction improves lesion quantitation and detectability in bone SPECT/CT},
75 | journal={Nuclear Medicine Communications},
76 | year={2023},
77 | volume={44},
78 | number={4},
79 | keywords={image reconstruction; Kernel-method; single-photon emission computed tomography; single-photon emission computed tomography-computed tomography},
80 | abstract={Bone single-photon emission computed tomography (SPECT)/computed tomography (CT) imaging suffers from poor spatial resolution, but the image quality can be improved during SPECT reconstruction by using anatomical information derived from CT imaging. The purpose of this work was to compare two different anatomically guided SPECT reconstruction methods to ordered subsets expectation maximization (OSEM) which is the most commonly used reconstruction method in nuclear medicine. The comparison was done in terms of lesion quantitation and lesion detectability. Anatomically guided Bayesian reconstruction (AMAP) and kernelized ordered subset expectation maximization (KEM) algorithms were implemented and compared against OSEM. Artificial lesions with a wide range of lesion-to-background contrasts were added to normal bone SPECT/CT studies. The quantitative accuracy was assessed by the error in lesion standardized uptake values and lesion detectability by the area under the receiver operating characteristic curve generated by a non-prewhitening matched filter. AMAP and KEM provided significantly better quantitative accuracy than OSEM at all contrast levels. Accuracy was the highest when SPECT lesions were matched to a lesion on CT. Correspondingly, AMAP and KEM also had significantly better lesion detectability than OSEM at all contrast levels and reconstructions with matching CT lesions performed the best. Quantitative differences between AMAP and KEM algorithms were minor. Visually AMAP and KEM images looked similar. Anatomically guided reconstruction improves lesion quantitation and detectability markedly compared to OSEM. Differences between AMAP and KEM algorithms were small and thus probably clinically insignificant.},
81 | issn={0143-3636},
82 | url={https://journals.lww.com/nuclearmedicinecomm/Fulltext/2023/04000/Anatomically_guided_reconstruction_improves_lesion.12.aspx},
83 | doi={10.1097/MNM.0000000000001675}
84 | }
85 |
86 | @article{castor,
87 | doi = {10.1088/1361-6560/aadac1},
88 | url = {https://dx.doi.org/10.1088/1361-6560/aadac1},
89 | year = {2018},
90 | month = {sep},
91 | publisher = {IOP Publishing},
92 | volume = {63},
93 | number = {18},
94 | pages = {185005},
95 | author = {Thibaut Merlin and Simon Stute and Didier Benoit and Julien Bert and Thomas Carlier and Claude Comtat and Marina Filipovic and Frédéric Lamare and Dimitris Visvikis},
96 | title = {CASToR: a generic data organization and processing code framework for multi-modal and multi-dimensional tomographic reconstruction},
97 | journal = {Physics in Medicine & Biology},
98 | abstract = {In tomographic medical imaging (PET, SPECT, CT), differences in data acquisition and organization are a major hurdle for the development of tomographic reconstruction software. The implementation of a given reconstruction algorithm is usually limited to a specific set of conditions, depending on the modality, the purpose of the study, the input data, or on the characteristics of the reconstruction algorithm itself. It causes restricted or limited use of algorithms, differences in implementation, code duplication, impractical code development, and difficulties for comparing different methods. This work attempts to address these issues by proposing a unified and generic code framework for formatting, processing and reconstructing acquired multi-modal and multi-dimensional data.
99 |
100 | The proposed iterative framework processes in the same way elements from list-mode (i.e. events) and histogrammed (i.e. sinogram or other bins) data sets. Each element is processed separately, which opens the way for highly parallel execution. A unique iterative algorithm engine makes use of generic core components corresponding to the main parts of the reconstruction process. Features that are specific to different modalities and algorithms are embedded into specific components inheriting from the generic abstract components. Temporal dimensions are taken into account in the core architecture.
101 |
102 | The framework is implemented in an open-source C++ parallel platform, called CASToR (customizable and advanced software for tomographic reconstruction). Performance assessments show that the time loss due to genericity remains acceptable, being one order of magnitude slower compared to a manufacturer’s software optimized for computational efficiency for a given system geometry. Specific optimizations were made possible by the underlying data set organization and processing and allowed for an average speed-up factor ranging from 1.54 to 3.07 when compared to more conventional implementations. Using parallel programming, an almost linear speed-up increase (factor of 0.85 times number of cores) was obtained in a realistic clinical PET setting. In conclusion, the proposed framework offers a substantial flexibility for the integration of new reconstruction algorithms while maintaining computation efficiency.}
103 | }
104 |
105 | @ARTICLE{BSREM,
106 | author={Sangtae Ahn and Fessler, J.A.},
107 | journal={IEEE Transactions on Medical Imaging},
108 | title={Globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms},
109 | year={2003},
110 | volume={22},
111 | number={5},
112 | pages={613-626},
113 | doi={10.1109/TMI.2003.812251}}
114 |
115 | @incollection{Kikinis2014-3DSlicer,
116 | author = {Kikinis, R. and Pieper, S.D. and Vosburgh, K.},
117 | year = {2014},
118 | title = {{3D Slicer}: a platform for subject-specific image analysis, visualization, and clinical support},
119 | booktitle = {Intraoperative {I}maging {I}mage-{G}uided {T}herapy},
120 | editor = {Jolesz, F.A.},
121 | volume = {3(19)},
122 | pages = {277--289},
123 | isbn = {978-1-4614-7656-6},
124 | doi = {10.1007/978-1-4614-7657-3_19},
125 | }
126 |
127 | @misc{pytomography,
128 | title={PyTomography: A Python Library for Quantitative Medical Image Reconstruction},
129 | author={Lucas Polson and Roberto Fedrigo and Chenguang Li and Maziar Sabouri and Obed Dzikunu and Shadab Ahamed and Nikolaos Karakatsanis and Arman Rahmim and Carlos Uribe},
130 | year={2024},
131 | eprint={2309.01977},
132 | archivePrefix={arXiv},
133 | primaryClass={physics.med-ph},
134 | url={https://arxiv.org/abs/2309.01977},
135 | doi={10.48550/arXiv.2309.01977}
136 | }
137 |
138 | @INPROCEEDINGS{spect_proj_interp,
139 | author={Chrysostomou, Charalambos and Koutsantonis, Loizos and Lemesios, Christos and Papanicolas, Costas N.},
140 | booktitle={2020 IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC)},
141 | title={SPECT Angle Interpolation Based on Deep Learning Methodologies},
142 | year={2020},
143 | volume={},
144 | number={},
145 | pages={1-4},
146 | doi={10.1109/NSS/MIC42677.2020.9507966}}
147 |
148 | @article{spect_ai,
149 | title={Artificial intelligence in single photon emission computed tomography (SPECT) imaging: a narrative review},
150 | author={Shao, W. and Rowe, S. P. and Du, Y.},
151 | journal={Annals of Translational Medicine},
152 | volume={9},
153 | number={9},
154 | pages={820},
155 | year={2021},
156 | month={May},
157 | doi={10.21037/atm-20-5988},
158 | pmid={34268433},
159 | pmcid={PMC8246162}
160 | }
161 |
162 | @article{Barrett2003FoundationsOI,
163 | title={Foundations of Image Science},
164 | author={Harrison H. Barrett and Kyle J. Myers and Sreeram Dhurjaty},
165 | journal={J. Electronic Imaging},
166 | year={2003},
167 | volume={14},
168 | pages={029901},
169 | url={https://api.semanticscholar.org/CorpusID:206393854},
170 | doi={10.1117/1.1905634}
171 | }
172 |
173 | @article{ac1,
174 | title={Targeted $\alpha$-therapy of metastatic castration-resistant prostate cancer with 225Ac-PSMA-617: swimmer-plot analysis suggests efficacy regarding duration of tumor control},
175 | author={Kratochwil, C. and others},
176 | journal={Journal of Nuclear Medicine},
177 | volume={59},
178 | number={5},
179 | pages={795--802},
180 | year={2018},
181 | doi={10.2967/jnumed.117.203539}
182 | }
183 |
184 | @article{ac2,
185 | title={Predictors of overall and disease-free survival in metastatic castration-resistant prostate cancer patients receiving 225Ac-PSMA-617 radioligand therapy},
186 | author={Sathekge, M. and others},
187 | journal={Journal of Nuclear Medicine},
188 | volume={61},
189 | number={1},
190 | pages={62--69},
191 | year={2020},
192 | doi={10.2967/jnumed.119.229229}
193 | }
194 |
195 | @article{ac3,
196 | title={First clinical results for PSMA-targeted $\alpha$-therapy using 225Ac-PSMA-I\&T in advanced-mCRPC patients},
197 | author={Zacherl, M.J. and others},
198 | journal={Journal of Nuclear Medicine},
199 | volume={62},
200 | number={5},
201 | pages={669--674},
202 | year={2021},
203 | doi={10.2967/jnumed.120.251017}
204 | }
205 |
206 | @article{ac4,
207 | title={Molecular imaging and biochemical response assessment after a single cycle of [225Ac]Ac-PSMA-617/[177Lu]Lu-PSMA-617 tandem therapy in mCRPC patients who have progressed on [177Lu]Lu-PSMA-617 monotherapy},
208 | author={Rosar, F. and others},
209 | journal={Theranostics},
210 | volume={11},
211 | number={9},
212 | pages={4050},
213 | year={2021},
214 | doi={10.7150/thno.56211}
215 | }
216 |
217 | @article{ac5,
218 | title={Biodistribution and dosimetry for combined [{177Lu}]Lu-PSMA-I\&T/[{225Ac}]Ac-PSMA-I\&T therapy using multi-isotope quantitative SPECT imaging},
219 | author={Delker, A. and Schleske, M. and Liubchenko, G. and others},
220 | journal={European Journal of Nuclear Medicine and Molecular Imaging},
221 | volume={50},
222 | pages={1280--1290},
223 | year={2023},
224 | doi={10.1007/s00259-022-06092-1}
225 | }
226 |
227 | @article{ac6,
228 | title={Image-based dosimetry for [{225Ac}]Ac-PSMA-I\&T therapy and the effect of daughter-specific pharmacokinetics},
229 | author={Liubchenko, Grigory and others},
230 | journal={European Journal of Nuclear Medicine and Molecular Imaging},
231 | volume={51},
232 | number={8},
233 | pages={2504--2514},
234 | year={2024},
235 | doi={10.1007/s00259-024-06681-2}
236 | }
237 |
238 | @article{GRF_2,
239 | doi = {10.1088/0031-9155/35/1/008},
240 | url = {https://dx.doi.org/10.1088/0031-9155/35/1/008},
241 | year = {1990},
242 | month = {jan},
243 | publisher = {},
244 | volume = {35},
245 | number = {1},
246 | pages = {81},
247 | author = {B M W Tsui and G T Gullberg},
248 | title = {The geometric transfer function for cone and fan beam collimators},
249 | journal = {Physics in Medicine & Biology},
250 | abstract = {Geometric response functions are derived for both cone and fan beam collimators for the scintillation camera. The formulation is based on an effective response function which is determined by the geometric response of a single hole. The technique provides an accurate description of the spatial resolution by characterising the complete geometric response function which includes the effects of the shape and orientation of the collimator holes. The theoretical formulation was used to design a fan beam collimator for SPECT imaging and was shown to agree well with the experimental results.}
251 | }
252 |
253 | @article{GRF_3,
254 | doi = {10.1088/0031-9155/43/4/021},
255 | url = {https://dx.doi.org/10.1088/0031-9155/43/4/021},
256 | year = {1998},
257 | month = {apr},
258 | publisher = {},
259 | volume = {43},
260 | number = {4},
261 | pages = {941},
262 | author = {E C Frey and B M W Tsui and G T Gullberg},
263 | title = {Improved estimation of the detector response function for converging beam collimators},
264 | journal = {Physics in Medicine & Biology},
265 | abstract = {Converging beam collimator geometries offer improved tradeoffs between resolution and noise for single photon emission computed tomography (SPECT). The major factor limiting the resolution in SPECT is the collimator-detector response blurring. In order to compensate for this blurring it is useful to be able to calculate the collimator response function. A previous formulation presented a method for calculating the response for parallel and converging beam collimators that assumed that the shape of the holes did not change over the face of the collimator. However, cast collimators are fabricated using pins with a constant cross-section (shape perpendicular to the pin axis). As a result, due to the angulation of the pins, the holes made by these pins have shapes on the front and back faces of the collimator that change with position. This change in hole shape is especially pronounced when the angle between the collimator hole and the collimator normal is large, as is the case for half-fan-beam or short-focal-length collimators. This paper presents a derivation of a modification to the original method that accounts for the change in shape of the collimator holes. The method has been verified by comparing predicted line spread functions to experimentally measured ones for a collimator with a maximum hole angle of with respect to the normal. This formulation is useful for predicting the response of fan-beam collimators in the design process and for use in detector response compensation algorithms.}
266 | }
267 |
268 | @article{GRF_4,
269 | doi = {10.1088/0031-9155/43/11/013},
270 | url = {https://dx.doi.org/10.1088/0031-9155/43/11/013},
271 | year = {1998},
272 | month = {nov},
273 | publisher = {},
274 | volume = {43},
275 | number = {11},
276 | pages = {3359},
277 | author = {Andreas Robert Formiconi},
278 | title = {Geometrical response of multihole collimators},
279 | journal = {Physics in Medicine & Biology},
280 | abstract = {A complete theory of camera multihole collimators is presented. The geometrical system response is determined in closed form in frequency space. This closed form accounts for the known efficiency and resolution formulae for parallel beam, fan beam, cone beam and astigmatic collimators as well as for the most frequent hole array patterns and hole shapes. The point spread function in the space domain for a certain collimator and source position can be calculated via a discrete fast Fourier transform. Beside the complete theoretical definition of the response of multihole collimators, this theory allows the definition of accurate models of the geometrical response for SPECT reconstruction and it is suitable for designing new collimators.}
281 | }
282 |
283 | @article{metz1980geometric,
284 | title={The geometric transfer function component for scintillation camera collimators with straight parallel holes},
285 | author={Metz, C E and Atkins, F B and Beck, R N},
286 | journal={Physics in Medicine and Biology},
287 | volume={25},
288 | number={6},
289 | pages={1059--1070},
290 | year={1980},
291 | publisher={IOP Publishing},
292 | doi={10.1088/0031-9155/25/6/003}
293 | }
294 |
295 | @ARTICLE{septal,
296 | author={Du, Y. and Frey, E.C. and Wang, W.T. and Tocharoenchai, C. and Baird, W.H. and Tsui, B.M.W.},
297 | journal={IEEE Transactions on Nuclear Science},
298 | title={Combination of MCNP and SimSET for Monte Carlo simulation of SPECT with medium- and high-energy photons},
299 | year={2002},
300 | volume={49},
301 | number={3},
302 | pages={668-674},
303 | keywords={Single photon emission computed tomography;Computational modeling;Imaging phantoms;Medical simulation;Optical collimators;Particle scattering;Nuclear medicine;Reconstruction algorithms;Electromagnetic scattering;X-ray scattering},
304 | doi={10.1109/TNS.2002.1039547}}
305 |
306 | @Inbook{Zaidi2006,
307 | author="Zaidi, H.
308 | and Hasegawa, B. H.",
309 | editor="Zaidi, Habib",
310 | title="Overview of Nuclear Medical Imaging: Physics and Instrumentation",
311 | bookTitle="Quantitative Analysis in Nuclear Medicine Imaging",
312 | year="2006",
313 | publisher="Springer US",
314 | address="Boston, MA",
315 | pages="1--34",
316 | isbn="978-0-387-25444-9",
317 | doi="10.1007/0-387-25444-7_1",
318 | url="https://doi.org/10.1007/0-387-25444-7_1"
319 | }
320 |
321 | @article {bi213_amyloid,
322 | author = {Bender, Aidan A. and Kirkeby, Emily K. and Cross, Donna J. and Minoshima, Satoshi and Roberts, Andrew G. and Mastren, Tara E.},
323 | title = {Development of a 213Bi-Labeled Pyridyl Benzofuran for Targeted α-Therapy of Amyloid-β Aggregates},
324 | elocation-id = {jnumed.124.267482},
325 | year = {2024},
326 | doi = {10.2967/jnumed.124.267482},
327 | publisher = {Society of Nuclear Medicine},
328 | abstract = {Alzheimer disease is a neurodegenerative disorder with limited treatment options. It is characterized by the presence of several biomarkers, including amyloid-β aggregates, which lead to oxidative stress and neuronal decay. Targeted α-therapy (TAT) has been shown to be efficacious against metastatic cancer. TAT takes advantage of tumor-localized α-particle emission to break disease-associated covalent bonds while minimizing radiation dose to healthy tissues due to the short, micrometer-level, distances traveled. We hypothesized that TAT could be used to break covalent bonds within amyloid-β aggregates and facilitate natural plaque clearance mechanisms. Methods: We synthesized a 213Bi-chelate{\textendash}linked benzofuran pyridyl derivative (BiBPy) and generated [213Bi]BiBPy, with a specific activity of 120.6 GBq/μg, dissociation constant of 11 {\textpm} 1.5 nM, and logP of 0.14 {\textpm} 0.03. Results: As the first step toward the validation of [213Bi]BiBPy as a TAT agent for the reduction of Alzheimer disease{\textendash}associated amyloid-β, we showed that brain homogenates from APP/PS1 double-transgenic male mice (6{\textendash}9 mo old) incubated with [213Bi]BiBPy exhibited a marked reduction in amyloid-β plaque concentration as measured using both enzyme-linked immunosorbent and Western blotting assays, with a half-maximal effective concentration of 3.72 kBq/pg. Conclusion: This [213Bi]BiBPy-concentration{\textendash}dependent activity shows that TAT can reduce amyloid plaque concentration in~vitro and supports the development of targeting systems for in~vivo validations.},
329 | issn = {0161-5505},
330 | URL = {https://jnm.snmjournals.org/content/early/2024/07/25/jnumed.124.267482},
331 | eprint = {https://jnm.snmjournals.org/content/early/2024/07/25/jnumed.124.267482.full.pdf},
332 | journal = {Journal of Nuclear Medicine}
333 | }
334 |
--------------------------------------------------------------------------------
/paper/paper.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: 'SPECTPSFToolbox: A Python Toolbox for SPECT Point Spread Function Modeling'
3 | tags:
4 | - 3D slicer
5 | - nuclear medicine
6 | - tomography
7 | - spect
8 | - image reconstruction
9 | authors:
10 | - name: Luke Polson
11 | orcid: 0000-0002-3182-2782
12 | corresponding: true
13 | affiliation: "1, 2" # (Multiple affiliations must be quoted)
14 | - name: Carlos Uribe
15 | affiliation: "2, 4, 5"
16 | - name: Arman Rahmim
17 | affiliation: "1, 2, 3, 5"
18 | affiliations:
19 |   - name: Department of Physics & Astronomy, University of British Columbia, Vancouver Canada
20 | index: 1
21 | - name: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver Canada
22 | index: 2
23 | - name: School of Biomedical Engineering, University of British Columbia, Vancouver Canada
24 | index: 3
25 | - name: Department of Radiology, University of British Columbia, Vancouver Canada
26 | index: 4
27 | - name: Molecular Imaging and Therapy Department, BC Cancer, Vancouver Canada
28 | index: 5
29 | date: 06 July 2024
30 | bibliography: paper.bib
31 |
32 | # Optional fields if submitting to a AAS journal too, see this blog post:
33 | # https://blog.joss.theoj.org/2018/12/a-new-collaboration-with-aas-publishing
34 | aas-doi: 10.3847/xxxxx <- update this with the DOI from AAS once you know it.
35 | aas-journal: Astrophysical Journal <- The name of the AAS journal.
36 | ---
37 |
38 | # Summary
39 |
40 | `SPECTPSFToolbox` is a Python toolbox built with PyTorch for point spread function (PSF) modeling in clinical single photon emission computed tomography (SPECT) imaging. The toolbox provides functions and classes that model distinct components of SPECT PSFs for parallel hole collimator systems. The individual components can be chained through addition and multiplication to construct a full PSF model. Developed models may also contain parameters which can be fit to Monte Carlo (MC) or real SPECT projection data. The toolbox is maintained as an independent repository within the PyTomography [@pytomography] project; developed models can be used directly in the main PyTomography library for SPECT image reconstruction.
41 |
42 |
43 | # Statement of need
44 |
45 | SPECT is an \textit{in vivo} imaging modality used to estimate the 3D radiopharmaceutical distribution within a patient [@Zaidi2006]. It requires (i) acquisition of 2D ``projection'' images at different angles using a gamma camera followed by (ii) use of a tomographic image reconstruction algorithm to obtain a 3D radioactivity distribution consistent with the acquired data. In order to reconstruct SPECT projection data, a system model is needed that captures all features of the imaging system, such as resolution modeling. The gamma cameras used in SPECT imaging have finite resolution, meaning point sources of radioactivity appear as point spread functions (PSFs) on the camera. The PSF has two main components: (i) the intrinsic response function (IRF), which results from the inability of the scintillator to precisely locate the point of interaction and (ii) the collimator detector response function (CDRF), which results from the inability of the collimator to select for photons travelling perpendicular to the bores.
46 |
47 | The CDRF itself consists of three main components: (i) the geometric response function (GRF) [@metz1980geometric; @GRF_2;@GRF_3; @GRF_4] which results from photons that pass through the collimator bores without intersecting the septa, (ii) the septal penetration response function (SPRF) which results from photons that travel through the septa without being attenuated, and (iii) the septal scatter response function (SSRF), which consists of photons that scatter within the collimator material and subsequently get detected in the scintillator [@septal]. As the thickness of the collimator increases and the diameter of the collimator bores decreases, the relative contribution from the SPRF and SSRF decreases. When the SPRF and SSRF are sufficiently small, the net PSF is dominated by the IRF and GRF and can be reasonably approximated using a Gaussian function. The trade-off, however, is that a thick collimator with narrow bores also has lower detector sensitivity. It may be necessary to accept non-negligible contributions from the SPRF and SSRF in order to increase detector sensitivity. In certain situations, the imaged photons have energies so high that the available commercial collimators are unable to suppress the SPRF and SSRF.
48 |
49 | Unfortunately, the existing open source reconstruction libraries only provide support for Gaussian PSF modeling and thus can only be used to reconstruct SPECT data where the PSF is dominated by the IRF and GRF. In many recent SPECT applications, this does not hold. For example, ${}^{225}$Ac based treatments have recently shown promise in clinical studies [@ac1; @ac2; @ac3; @ac4; @ac5; @ac6], and targeted alpha therapy with ${}^{213}$Bi has shown promise in reducing amyloid plaque concentrations in male mice, alluding to a potential treatment option for Alzheimer disease [@bi213_amyloid]. Both ${}^{225}$Ac and ${}^{213}$Bi are imaged using 440 keV photon emissions that result in significant SPRF and SSRF components in the PSF. If the nuclear medicine community is to explore and develop novel reconstruction techniques in these domains, then there is a need for open source tools that provide comprehensive and computationally efficient SPECT PSF modeling. This Python-based toolbox provides those tools.
50 |
51 | # Overview of SPECTPSFToolbox
52 |
53 | The purpose of SPECT reconstruction is to estimate the 3D radionuclide concentration $f$ that produces the acquired 2D image data $g$ given an analytical model for the imaging system, known as the system matrix. Under standard conditions, the SPECT system matrix estimates the projection $g_{\theta}$ at angle $\theta$ as
54 |
55 | \begin{equation}
56 | g_{\theta}(x,y) = \sum_{d} \mathrm{PSF}(d) \left[f'(x,y,d)\right]
57 | \label{eq:model_approx}
58 | \end{equation}
59 |
60 | where $(x,y)$ is the position on the detector, $d$ is the perpendicular distance to the detector, $f'$ is the attenuation-adjusted image corresponding to the detector angle, and $\mathrm{PSF}(d)$ is a 2D linear operator that operates separately on $f'$ at each distance $d$. The toolbox provides the necessary tools to obtain $\mathrm{PSF}(d)$.
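To make the role of $\mathrm{PSF}(d)$ concrete, the minimal sketch below shows the call pattern shared by all operators in the toolbox: an operator is called with the stack of attenuation-adjusted planes, the detector meshgrid, and the source-detector distances, and the projection is obtained by summing the result over $d$. The base `Operator` class used here applies no blurring (it only multiplies by the pixel area) and merely stands in for the realistic PSF models constructed below; the grid spacing and tensor shapes are illustrative assumptions.

```python
import torch
from spectpsftoolbox.operator2d import Operator

# Detector meshgrid (128 x 128 pixels, 0.48 cm spacing; illustrative values)
x = torch.arange(-63.5, 64.5) * 0.48
xv, yv = torch.meshgrid(x, x, indexing="xy")
d = torch.linspace(1.0, 40.0, 64)       # source-detector distance of each plane (cm)

f_prime = torch.rand(64, 128, 128)      # attenuation-adjusted planes f'(x, y, d)
psf_op = Operator()                     # placeholder: real PSF models subclass Operator
blurred = psf_op(f_prime, xv, yv, d)    # apply PSF(d) to each plane
g_theta = blurred.sum(dim=0)            # sum over distances to form the projection
```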
61 |
62 | The toolbox is separated into three main class types; they are best described by the implementation of their `__call__` methods:
63 |
64 | 1. `Kernel1D`: called with 1D positions $x$, source-detector distances $d$, and hyperparameters $b$; returns a 1D kernel at each source-detector distance.
65 | 2. `Kernel2D`: called with a 2D meshgrid $(x,y)$, source-detector distances $d$, and hyperparameters $b$; returns a 2D kernel at each source-detector distance.
66 | 3. `Operator`: called with a 2D meshgrid $(x,y)$, source-detector distances $d$, and an input $f$; returns the operation $\mathrm{PSF}(d) \left[f'(x,y,d)\right]$.
67 |
68 | Various subclasses of `Kernel1D` have their own instantiation methods. For example, the `__init__` method of `FunctionKernel1D` requires a 1D function definition $k(x)$, an amplitude function $A(d,b_A)$ and its hyperparameters $b_A$, and a scaling function $\sigma(d,b_{\sigma})$ and its hyperparameters $b_{\sigma}$. The `__call__` method returns $A(d,b_A)k(x/\sigma(d,b_{\sigma})) \Delta x$ where $\Delta x$ is the spacing of the kernel. The `ArbitraryKernel1D` is similar except that it requires a 1D array $k$ in place of $k(x)$, and $k(x/\sigma(d,b_{\sigma}))$ is obtained via interpolation between array values. Subclasses of `Kernel1D` must implement a `normalization_constant(x,d)` method that returns the sum of the kernel from $x=-\infty$ to $x=\infty$ at each detector distance $d$ given $x$ input with constant spacing $\Delta x$. This is not as simple as summing over the kernel output since the range of $x$ provided might be less than the size of the kernel. The `Kernel2D` class and corresponding subclasses are analogous to `Kernel1D`, except they require a 2D input $(x,y)$ and return a corresponding 2D kernel at each detector distance $d$.
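As a concrete illustration of the `Kernel1D` interface, the sketch below constructs a `GaussianKernel1D` whose width grows linearly with source-detector distance and evaluates it on a 1D grid. The amplitude function, scaling function, and hyperparameter values are illustrative assumptions rather than fitted values.

```python
import torch
from spectpsftoolbox.kernel1d import GaussianKernel1D

amplitude_fn = lambda d, b: b[0] * torch.ones_like(d)   # A(d, b_A): constant amplitude
sigma_fn = lambda d, b: b[0] + b[1] * d                  # sigma(d, b_sigma): linear growth
kernel = GaussianKernel1D(
    amplitude_fn, sigma_fn,
    amplitude_params=torch.tensor([1.0]),
    sigma_params=torch.tensor([0.2, 0.03]),
)

x = torch.linspace(-10, 10, 201)        # 1D positions (cm), spacing 0.1 cm
d = torch.tensor([5.0, 15.0, 30.0])     # source-detector distances (cm)
k = kernel(x, d, normalize=True)        # shape [3, 201]: one kernel per distance
print(k.sum(dim=1))                     # approximately 1 at each distance
```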
69 |
70 | Subclasses of the `Operator` class form the main components of the library, and are built using various `Kernel1D` and `Kernel2D` classes. Currently, the library supports linear shift invariant (LSI) operators, since these are sufficient for SPECT PSF modeling. LSI operators can always be implemented via convolution with a 2D kernel, but this is often computationally expensive. In tutorial 5 on the documentation website, the form of the SPECT PSF is exploited and a 2D LSI operator is built using 1D convolutions and rotations. In tutorial 7, this is shown to lead to faster reconstruction than application of a 2D convolution but with nearly identical results. Operators must also implement a `normalization_constant(xv,yv,d)` method. The `__add__` and `__mul__` methods of operators have been implemented so that multiple PSF operators can be chained together; adding operators yields an operator that returns the sum of the linear operators, while multiplication yields the equivalent of the linear operator matrix product (composition). Propagation of normalization is implemented to ensure the chained operator is also properly normalized. For example, if the response functions are implemented in operators with variable names `irf_op`, `grf_op`, `ssrf_op`, and `sprf_op`, then the total response would be defined as `psf_op=irf_op*(grf_op+ssrf_op+sprf_op)` and the `__call__` method would implement
71 |
72 | \begin{equation}
73 | \mathrm{PSF} \left[f \right] = \mathrm{IRF}\left(\mathrm{GRF}+\mathrm{SSRF}+\mathrm{SPRF}\right)\left[f \right]
74 | \label{eq:psf_tot}
75 | \end{equation}
76 |
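As a sketch of this chaining mechanism (and not of the fitted models used in the results below), the example combines two operators built from distance-dependent Gaussian kernels: a narrow kernel standing in for the GRF core and a broad, low-amplitude kernel standing in for the SPRF/SSRF tails. All functional forms and parameter values here are illustrative assumptions; the tutorials on the documentation website show how such components are actually parameterized and fit.

```python
import torch
from spectpsftoolbox.kernel1d import GaussianKernel1D
from spectpsftoolbox.operator2d import Rotate1DConvOperator

amp = lambda d, b: b[0] * torch.ones_like(d)
sig = lambda d, b: b[0] + b[1] * d

# Narrow Gaussian core: sequential 1D convolutions at 0 and 90 degrees give a separable 2D blur
grf_op = Rotate1DConvOperator(
    GaussianKernel1D(amp, sig, torch.tensor([1.0]), torch.tensor([0.1, 0.02])), N_angles=2)
# Broad, low-amplitude tails applied additively along several directions
tail_op = Rotate1DConvOperator(
    GaussianKernel1D(amp, sig, torch.tensor([0.05]), torch.tensor([1.0, 0.05])),
    N_angles=6, additive=True)

psf_op = grf_op + tail_op   # __add__: sum of the two linear operators
# e.g. psf_op = irf_op * (grf_op + tail_op) would compose an IRF operator with the sum

x = torch.arange(-63.5, 64.5) * 0.48
xv, yv = torch.meshgrid(x, x, indexing="xy")
d = torch.linspace(1.0, 40.0, 16)
f_prime = torch.rand(16, 128, 128)
out = psf_op(f_prime, xv, yv, d, normalize=True)   # normalization propagates through the chain
```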
77 | The library is presently demonstrated on two SPECT image reconstruction examples where the ordered subset expectation maximization (OSEM) [@osem] reconstruction algorithm is used. The first use case considers reconstruction of MC ${}^{177}$Lu data. ${}^{177}$Lu is typically acquired using a medium energy collimator and the PSF is dominated by the GRF. When acquired using a low energy collimator, there is significant SPRF and SSRF, and a more sophisticated PSF model is required during reconstruction. This use case considers a cylindrical phantom with standard NEMA sphere sizes filled at a 10:1 source to background concentration with a total activity of 1000 MBq. SPECT acquisition was simulated in SIMIND [@simind] using (i) low energy collimators and (ii) medium energy collimators with 96 projection angles at 0.48 cm $\times$ 0.48 cm resolution and a 128 $\times$ 128 matrix size. Firstly, each case was reconstructed with only the GRF (Gaussian PSF) using OSEM (4 iterations, 8 subsets) for the medium energy data and OSEM (40 iterations, 8 subsets) for the low energy data. An MC based PSF model that encompasses the GRF, SPRF, and SSRF was then obtained by (i) simulating a point source at 1100 distances between 0 cm and 55 cm and normalizing the kernel data and (ii) using a `NearestKernelOperator`, which convolves each plane with the PSF kernel closest to its source-detector distance. The low energy collimator data was then reconstructed using the MC PSF model with OSEM (40 iterations, 8 subsets). \autoref{fig:fig1} shows the sample PSF at six sample distances (left), the reconstructed images (center) and sample 1D profiles of the reconstructed images (right). When the MC PSF kernel that includes GRF+SSRF+SPRF is used, the activity in the spheres is significantly higher than when the GRF-only (Gaussian) kernel is used.
78 |
79 | The second use case considers reconstruction of MC ${}^{225}$Ac data. ${}^{225}$Ac emits 440 keV photons that have significant SPRF and SSRF even when a high energy collimator is used. This use case considers a cylindrical phantom with 3 spheres of diameters 60 mm, 37 mm, and 28 mm filled at a 6.4:1 source to background ratio with 100 MBq of total activity. 440 keV point source data was simulated via SIMIND at 1100 positions between 0 cm and 55 cm. 12 of these positions were used for developing and fitting a PSF operator built using the `GaussianOperator`, `Rotate1DConvOperator`, and `RotateSeperable2DConvOperator`; each of these classes performs 2D shift invariant convolutions using only 1D convolutions and rotations
80 | (1D-R). More details on this model and how it is fit are shown in tutorial 5 on the documentation website. The developed model is compared to an MC based model (2D) that uses the `NearestKernelOperator` with the 1100 PSFs acquired from SIMIND. Use of the 1D-R model in image reconstruction reduces the required computation time by more than a factor of two; this occurs since rotations and 1D convolutions are significantly faster than direct 2D convolution, even when fast Fourier transform techniques are used.
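The parameter-fitting step referred to above can be sketched as a standard PyTorch optimization loop: the operator's hyperparameters are exposed through `set_requires_grad` and `params`, the operator is applied to point-source images at the simulated distances, and the output is compared against the reference PSFs. Everything below is a simplified stand-in for the real procedure in tutorial 5: the reference kernels are random placeholders, the model contains only a single additive tail component, and the `GaussianOperator` used in the actual ${}^{225}$Ac model is not shown in this sketch.

```python
import torch
from spectpsftoolbox.kernel1d import GaussianKernel1D
from spectpsftoolbox.operator2d import Rotate1DConvOperator

amp = lambda d, b: b[0] * torch.ones_like(d)
sig = lambda d, b: b[0] + b[1] * d
psf_op = Rotate1DConvOperator(
    GaussianKernel1D(amp, sig, torch.tensor([0.05]), torch.tensor([1.0, 0.05])),
    N_angles=6, additive=True)

# Point-source inputs and (placeholder) reference PSFs at 12 simulated distances
d = torch.linspace(5.0, 50.0, 12)
x = torch.arange(-64, 65) * 0.48
xv, yv = torch.meshgrid(x, x, indexing="xy")
point_source = torch.zeros(12, 129, 129)
point_source[:, 64, 64] = 1.0
psf_reference = torch.rand(12, 129, 129)    # would be the SIMIND point-source kernels

psf_op.set_requires_grad()
optimizer = torch.optim.Adam(psf_op.params, lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    prediction = psf_op(point_source, xv, yv, d, normalize=True)
    loss = torch.mean((prediction - psf_reference) ** 2)
    loss.backward()
    optimizer.step()
```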
81 |
82 | 
83 |
84 |
85 | # References
--------------------------------------------------------------------------------
/pyproject.toml:
--------------------------------------------------------------------------------
1 | [build-system]
2 | requires = ["hatchling", "toml"]
3 | build-backend = "hatchling.build"
4 |
5 | [project]
6 | name = "spectpsftoolbox"
7 | version = "0.1.2"
8 |
9 | authors = [
10 | { name="Luke Polson", email="lukepolson@outlook.com" },
11 | ]
12 | description = "Library for creation of point spread functions"
13 | readme = "README.md"
14 | requires-python = ">=3.9"
15 | classifiers = [
16 | "Programming Language :: Python :: 3",
17 | "License :: OSI Approved :: MIT License",
18 | "Operating System :: OS Independent",
19 | ]
20 |
21 | dependencies = [
22 | "numpy>=1.24.2",
23 | "torch>=1.10.2",
24 | "fft-conv-pytorch>=1.2.0",
25 | "torchquad==0.4.1",
26 | "torchvision==0.20.1",
27 | "dill==0.3.9"
28 | ]
29 |
30 | [tool.hatch.metadata]
31 | allow-direct-references = true
32 |
33 | [tool.hatch.build.targets.wheel]
34 | packages = ["src/spectpsftoolbox"]
35 |
36 | [project.urls]
37 | "Homepage" = "https://github.com/qurit/SPECTPSFToolbox"
38 | "Bug Tracker" = "https://github.com/qurit/SPECTPSFToolbox/issues"
39 |
40 | [project.optional-dependencies]
41 | doc = [
42 | "toml",
43 | "sphinx~=6.2.1",
44 | "myst-parser",
45 | "furo",
46 | "nbsphinx",
47 | "sphinx-autoapi~=3.0.0",
48 | "ipykernel",
49 | "pydata-sphinx-theme",
50 | "sphinx-design"
51 | ]
52 |
53 |
--------------------------------------------------------------------------------
/src/spectpsftoolbox/kernel1d.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from typing import Callable
3 | import torch
4 | import numpy as np
5 | from torch.nn.functional import grid_sample
6 | from torchquad import Simpson
7 |
8 | class Kernel1D:
9 | def __init__(self, a_min: float, a_max: float) -> None:
10 | """Super class for all implemented 1D kernels in the library. All child classes should implement the normalization_factor and __call__ methods.
11 |
12 | Args:
13 | a_min (float): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value.
14 | a_max (float): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value.
15 | """
16 | self.a_min = a_min
17 | self.a_max = a_max
18 | def normalization_constant(self, x: torch.Tensor,a: torch.Tensor) -> torch.Tensor:
19 | """Computes the normalization constant for the kernel
20 |
21 | Args:
22 | x (torch.Tensor[Lx]): Positions where the kernel is being evaluated
23 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the normalization constant
24 |
25 | Returns:
26 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
27 | """
28 | ...
29 | def __call__(self, x: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
30 | """Returns the kernel evaluated at each source detector distance
31 |
32 | Args:
33 | x (torch.Tensor[Lx]): x values at which to evaluate the kernel
34 | a (torch.Tensor[Ld]): source-detector distances at which to evaluate the kernel
35 | Returns:
36 | torch.Tensor[Ld, Lx]: Kernel evaluated at each source-detector distance and x value
37 | """
38 | ...
39 |
40 | class ArbitraryKernel1D(Kernel1D):
41 | def __init__(
42 | self,
43 | kernel: torch.Tensor,
44 | amplitude_fn: Callable,
45 | sigma_fn: Callable,
46 | amplitude_params: torch.Tensor,
47 | sigma_params: torch.Tensor,
48 | dx0: float,
49 | a_min: float = -torch.inf,
50 | a_max: float = torch.inf,
51 | grid_sample_mode: str = 'bilinear'
52 | ) -> None:
53 | """1D Kernel defined using an arbitrary 1D array as the kernel with spacing dx0. The kernel is evaluated as f(x) = amplitude(a,b) * kernel_interp(x/sigma(a,b)) where amplitude and sigma are functions of the source-detector distance a and additional hyperparameters b. kernel_interp is the 1D interpolation of the kernel at the provided x values.
54 |
55 | Args:
56 | kernel (torch.Tensor): 1D array that defines the kernel
57 | amplitude_fn (Callable): Amplitude function that depends on the source-detector distance and additional hyperparameters
58 | sigma_fn (Callable): Scaling function that depends on the source-detector distance and additional hyperparameters
59 | amplitude_params (torch.Tensor): Hyperparameters for the amplitude function
60 | sigma_params (torch.Tensor): Hyperparameters for the sigma function
61 | dx0 (float): Spacing of the 1D kernel provided in the same units as x used in __call__
62 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
63 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
64 | grid_sample_mode (str, optional): How to sample the kernel for general grids. Defaults to 'bilinear'.
65 | """
66 | self.kernel = kernel
67 | self.amplitude_fn = amplitude_fn
68 | self.sigma_fn = sigma_fn
69 | self.amplitude_params = amplitude_params
70 | self.sigma_params = sigma_params
71 | self.params = [self.amplitude_params, self.sigma_params, self.kernel]
72 | self.dx0 = dx0
73 | self.Nx0 = kernel.shape[0]
74 | self.a_min = a_min
75 | self.a_max = a_max
76 | self.grid_sample_mode = grid_sample_mode
77 |
78 | def normalization_constant(self, x: torch.Tensor,a: torch.Tensor) -> torch.Tensor:
79 | """Computes the normalization constant for the kernel
80 |
81 | Args:
82 | x (torch.Tensor[Lx]): Positions where the kernel is being evaluated
83 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the normalization constant
84 |
85 | Returns:
86 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
87 | """
88 | return self.amplitude_fn(a, self.amplitude_params) * torch.abs(self.kernel.sum()) * self.sigma_fn(a, self.sigma_params) * self.dx0 / (x[1]-x[0])
89 |
90 | def __call__(self, x: torch.Tensor, a: torch.Tensor, normalize: bool = False) -> torch.Tensor:
91 | """Calls the kernel function
92 |
93 | Args:
94 | x (torch.Tensor[Lx]): Positions where the kernel is being evaluated
95 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the kernel
96 | normalize (bool, optional): Whether or not to normalize the output of the kernel. Defaults to False.
97 |
98 | Returns:
99 | torch.Tensor[Ld,Lx]: Kernel evaluated at each source-detector distance and x value
100 | """
101 | a = torch.clamp(a, self.a_min, self.a_max)
102 | sigma = self.sigma_fn(a, self.sigma_params)
103 | amplitude = self.amplitude_fn(a, self.amplitude_params)
104 | grid = (2*x / (self.Nx0*self.dx0 * sigma.unsqueeze(1).unsqueeze(1)))
105 | grid = torch.stack([grid, 0*grid], dim=-1)
106 | kernel = torch.abs(self.kernel).reshape(1,1,-1).repeat(a.shape[0],1,1)
107 | kernel = grid_sample(kernel.unsqueeze(1), grid, mode=self.grid_sample_mode, align_corners=False)[:,0,0]
108 | kernel = amplitude.reshape(-1,1) * kernel
109 | if normalize:
110 | kernel = kernel / self.normalization_constant(x,a.reshape(-1,1))
111 | return kernel
112 |
113 | class FunctionKernel1D(Kernel1D):
114 | def __init__(
115 | self,
116 | kernel_fn: Callable,
117 | amplitude_fn: Callable,
118 | sigma_fn: Callable,
119 | amplitude_params: torch.Tensor,
120 | sigma_params: torch.Tensor,
121 | a_min=-torch.inf,
122 | a_max=torch.inf,
123 | ) -> None:
124 | """Implementation of kernel1D where an explicit functional form is provided for the kernel. The kernel is evaluated as f(x) = amplitude(a,b) * k(x/sigma(a,b)) where amplitude and sigma are functions of the source-detector distance a and additional hyperparameters b.
125 |
126 | Args:
127 | kernel_fn (Callable): Kernel function k(x)
128 | amplitude_fn (Callable): Amplitude function amplitude(a,b)
129 | sigma_fn (Callable): Scaling function sigma(a,b)
130 | amplitude_params (torch.Tensor): Hyperparameters for the amplitude function
131 | sigma_params (torch.Tensor): Hyperparameters for the sigma function
132 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
133 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
134 | """
135 | self.kernel_fn = kernel_fn
136 | self.amplitude_fn = amplitude_fn
137 | self.sigma_fn = sigma_fn
138 | self.amplitude_params = amplitude_params
139 | self.sigma_params = sigma_params
140 | self.params = [self.amplitude_params, self.sigma_params]
141 | self.a_min = a_min
142 | self.a_max = a_max
143 | self._compute_norm_via_integral()
144 |
145 | def _compute_norm_via_integral(self) -> torch.Tensor:
146 | """Compute the normalization constant by integrating k(x) from -infinity to infinity. To do this, a variable transformation is used to convert the integral to a definite integral over the range [-pi/2, pi/2]. The definite integral is computed using the Simpson's rule.
147 |
148 | Returns:
149 | torch.Tensor: Integral of k(x) from -infinity to infinity
150 | """
151 | # Convert to definite integral
152 | kernel_fn_definite = lambda t: self.kernel_fn(torch.tan(t)) * torch.cos(t)**(-2)
153 | # Should be good for most simple function cases
154 | self.kernel_fn_norm = Simpson().integrate(kernel_fn_definite, dim=1, N=1001, integration_domain=[[-torch.pi/2, torch.pi/2]])
155 |
156 | def normalization_constant(self,x: torch.Tensor,a: torch.Tensor) -> torch.Tensor:
157 | """Computes the normalization constant for the kernel
158 |
159 | Args:
160 | x (torch.Tensor[Lx]): Positions where the kernel is being evaluated
161 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the normalization constant
162 | Returns:
163 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
164 | """
165 | return self.kernel_fn_norm*self.amplitude_fn(a, self.amplitude_params)*self.sigma_fn(a, self.sigma_params) / (x[1]-x[0])
166 |
167 | def __call__(self, x: torch.Tensor, a: torch.Tensor, normalize: bool = False) -> torch.Tensor:
168 | """Computes the kernel at each source-detector distance
169 |
170 | Args:
171 | x (torch.Tensor[Lx]): Positions where the kernel is being evaluated
172 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the kernel
173 | normalize (bool, optional): Whether or not to normalize the output. Defaults to False.
174 |
175 | Returns:
176 | torch.Tensor[Ld,Lx]: Kernel evaluated at each source-detector distance and x value
177 | """
178 | a = a.reshape(-1,1)
179 | a = torch.clamp(a, self.a_min, self.a_max)
180 | sigma = self.sigma_fn(a, self.sigma_params)
181 | amplitude = self.amplitude_fn(a, self.amplitude_params)
182 | kernel = amplitude*self.kernel_fn(x/sigma)
183 | if normalize:
184 | kernel = kernel / self.normalization_constant(x,a)
185 | return kernel
186 |
187 | class GaussianKernel1D(FunctionKernel1D):
188 | def __init__(
189 | self,
190 | amplitude_fn: Callable,
191 | sigma_fn: Callable,
192 | amplitude_params: torch.Tensor,
193 | sigma_params: torch.Tensor,
194 | a_min: float = -torch.inf,
195 | a_max: float = torch.inf
196 | ) -> None:
197 | """Subclass of FunctionKernel1D that implements a Gaussian kernel. The kernel is evaluated as f(x) = amplitude(a,b) * exp(-0.5*x^2/sigma(a,b)^2) where amplitude and sigma are functions of the source-detector distance a and additional hyperparameters b.
198 |
199 | Args:
200 | amplitude_fn (Callable): Amplitude function amplitude(a,b)
201 | sigma_fn (Callable): Scaling function sigma(a,b)
202 | amplitude_params (torch.Tensor): Amplitude hyperparameters
203 | sigma_params (torch.Tensor): Sigma hyperparameters
204 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
205 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
206 | """
207 | kernel_fn = lambda x: torch.exp(-0.5*x**2)
208 | super(GaussianKernel1D, self).__init__(kernel_fn, amplitude_fn, sigma_fn, amplitude_params, sigma_params,a_min, a_max)
209 |
210 | def normalization_constant(self, x: torch.Tensor, a: torch.Tensor):
211 | """Computes the normalization constant for the kernel. For small FOV to sigma ratios, the normalization constant is computed using the definite integral of the kernel. For large FOV to sigma ratios, the normalization constant is computed by summing the values of the kernel; this prevents normalization errors when the Gaussian function is large compared to the pixel size.
212 |
213 | Args:
214 | x (torch.Tensor[Lx]): Values at which to compute the kernel
215 | a (torch.Tensor[Ld]): Source-detector distances at which to compute the normalization constant
216 |
217 | Returns:
218 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
219 | """
220 | dx = x[1] - x[0]
221 | sigma = self.sigma_fn(a, self.sigma_params)
222 | FOV_to_sigma_ratio = x.max() / sigma.max()
223 | if FOV_to_sigma_ratio > 4:
224 | return self(x,a).sum(dim=1).reshape(-1,1)
225 | else:
226 | a = torch.clamp(a, self.a_min, self.a_max)
227 | a = a.reshape(-1,1)
228 | return np.sqrt(2 * torch.pi) * self.amplitude_fn(a, self.amplitude_params) * self.sigma_fn(a, self.sigma_params) / dx
--------------------------------------------------------------------------------
/src/spectpsftoolbox/kernel2d.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from typing import Callable
3 | import torch
4 | import numpy as np
5 | from functools import partial
6 | from torch.nn.functional import conv1d, conv2d, grid_sample
7 | from torchvision.transforms.functional import rotate
8 | from torchquad import Simpson
9 | device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
10 |
11 | class Kernel2D:
12 | def __init__(self, a_min: float, a_max: float) -> None:
13 | """Parent class for 2D kernels. All children class should inherit from this class and implement the __call__ and normalization_constant methods.
14 |
15 | Args:
16 | a_min (float): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value.
17 | a_max (float): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value.
18 | """
19 | self.a_min = a_min
20 | self.a_max = a_max
21 | def normalization_constant(self, a: torch.Tensor) -> torch.Tensor:
22 | """Computes the normalization constant for the kernel at each source-detector distance a. This method should be implemented in the child class.
23 |
24 | Args:
25 | a (torch.Tensor[Ld]): Source detector distances
26 |
27 | Returns:
28 | torch.Tensor[Ld]: Normalization at each source-detector distance
29 | """
30 | ...
31 | def __call__(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
32 | """Computes the kernel value at each point in the meshgrid defined by xv and yv for each source-detector distance a. This method should be implemented in the child class.
33 |
34 | Args:
35 | xv (torch.Tensor[Lx,Ly]): Meshgrid x-coordinates
36 | yv (torch.Tensor[Lx,Ly]): Meshgrid y-coordinates
37 | a (torch.Tensor[Ld]): Source-detector distances
38 |
39 | Returns:
40 | torch.Tensor[Ld,Lx,Ly]: Kernel values at each point in the meshgrid for each source-detector distance
41 | """
42 | ...
43 |
44 | class FunctionalKernel2D(Kernel2D):
45 | def __init__(
46 | self,
47 | kernel_fn: Callable,
48 | amplitude_fn: Callable,
49 | sigma_fn: Callable,
50 | amplitude_params: torch.Tensor,
51 | sigma_params: torch.Tensor,
52 | a_min: float = -torch.inf,
53 | a_max: float = torch.inf
54 | ) -> None:
55 | """2D kernel where the kernel is specified explicitly given a function of x and y. The kernel is evaluated as f(x,y) = amplitude(a,b) * k(x/sigma(a,b),y/sigma(a,b)) where amplitude and sigma are functions of the source-detector distance a and additional hyperparameters b.
56 |
57 | Args:
58 | kernel_fn (Callable): Kernel function k(x,y)
59 | amplitude_fn (Callable): Amplitude function amplitude(a,b)
60 | sigma_fn (Callable): Scaling function sigma(a,b)
61 | amplitude_params (torch.Tensor): Amplitude hyperparameters b
62 | sigma_params (torch.Tensor): Scaling hyperparameters b
63 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
64 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
65 | """
66 | super(FunctionalKernel2D, self).__init__(a_min, a_max)
67 | self.kernel_fn = kernel_fn
68 | self.amplitude_fn = amplitude_fn
69 | self.sigma_fn = sigma_fn
70 | self.amplitude_params = amplitude_params
71 | self.sigma_params = sigma_params
72 | self.params = self.all_parameters = [amplitude_params, sigma_params]
73 | self._compute_norm_via_integral()
74 |
75 | def _compute_norm_via_integral(self) -> None:
76 | """Compute the normalization constant by integrating k(x,y) from -infinity to infinity. To do this, a variable transformation is used to convert the integral to a definite integral over the range [-pi/2, pi/2]. The definite integral is computed using the Simpson's rule.
77 | """
78 | # Convert to definite integral
79 | kernel_fn_definite = lambda t: self.kernel_fn(torch.tan(t[:,0]), torch.tan(t[:,1])) * torch.cos(t[:,0])**(-2) * torch.cos(t[:,1])**(-2)
80 | # Should be good for most simple function cases
81 | self.kernel_fn_norm = Simpson().integrate(kernel_fn_definite, dim=2, N=1001, integration_domain=[[-torch.pi/2, torch.pi/2], [-torch.pi/2, torch.pi/2]])
82 |
83 | def normalization_constant(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
84 | """Obtains the normalization constant for the 2D kernel
85 |
86 | Args:
87 | xv (torch.Tensor[Lx,Ly]): Meshgrid x-coordinates
88 | yv (torch.Tensor[Lx,Ly]): Meshgrid y-coordinates
89 | a (torch.Tensor[Ld]): Source-detector distances
90 |
91 | Returns:
92 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
93 | """
94 | return self.kernel_fn_norm*self.amplitude_fn(a, self.amplitude_params)*self.sigma_fn(a, self.sigma_params)**2 / (xv[0,1]-xv[0,0]) / (yv[1,0] - yv[0,0])
95 |
96 | def __call__(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor, normalize: bool = False) -> torch.Tensor:
97 | """Computes the kernel at each source detector distance
98 |
99 | Args:
100 | xv (torch.Tensor[Lx,Ly]): Meshgrid x-coordinates
101 | yv (torch.Tensor[Lx,Ly]): Meshgrid y-coordinates
102 | a (torch.Tensor[Ld]): Source-detector distances
103 | normalize (bool, optional): Whether or not to normalize the output of the kernel. Defaults to False.
104 |
105 | Returns:
106 | torch.Tensor[Ld,Lx,Ly]: Kernel at each source-detector distance
107 | """
108 | a = torch.clamp(a, self.a_min, self.a_max)
109 | a = a.reshape(-1,1,1)
110 | N = 1 if normalize is False else self.normalization_constant(xv, yv, a)
111 | return self.amplitude_fn(a, self.amplitude_params) * self.kernel_fn(xv/self.sigma_fn(a, self.sigma_params), yv/self.sigma_fn(a, self.sigma_params)) / N
112 |
113 | class NGonKernel2D(Kernel2D):
114 | def __init__(
115 | self,
116 | N_sides: int,
117 | Nx: int,
118 | collimator_width: float,
119 | amplitude_fn: Callable,
120 | sigma_fn: Callable,
121 | amplitude_params: torch.Tensor,
122 | sigma_params: torch.Tensor,
123 | a_min = -torch.inf,
124 | a_max = torch.inf,
125 | rot: float = 0
126 | ) -> None:
127 | """Implementation of the arbitrary polygon kernel. This kernel is composed of a polygon shape convolved with itself, which is shown to be the true geometric component of the SPECT PSF when averaged over random collimator movement to get a linear shift invariant approximation. The kernel is computed as f(x,y) = amplitude(a,b) * k(x/sigma(a,b),y/sigma(a,b)) where k(x,y) is the convolved polygon shape.
128 |
129 | Args:
130 | N_sides (int): Number of sides of the polygon. Currently only supports an even number of sides
131 | Nx (int): Number of voxels to use for constructing the polygon (separate from any meshgrid used later on)
132 | collimator_width (float): Width of the polygon (from flat edge to flat edge)
133 | amplitude_fn (Callable): Amplitude function amplitude(a,b)
134 | sigma_fn (Callable): Scaling function sigma(a,b)
135 | amplitude_params (torch.Tensor): Amplitude hyperparameters b
136 | sigma_params (torch.Tensor): Scaling hyperparameters b
137 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
138 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
139 | rot (float, optional): Initial rotation of the polygon flat side. Defaults to 0 (first flat side aligned with +y axis).
140 | """
141 | self.N_sides = N_sides
142 | self.Nx = Nx
143 | self.N_voxels_to_face = int(np.floor(Nx/6 * np.cos(np.pi/self.N_sides)))
144 | self.collimator_width_voxels = 2 * self.N_voxels_to_face
145 | self.pixel_size = collimator_width / self.collimator_width_voxels
146 | self.collimator_width = collimator_width
147 | self.amplitude_fn = amplitude_fn
148 | self.sigma_fn = sigma_fn
149 | self.amplitude_params = amplitude_params
150 | self.sigma_params = sigma_params
151 | self.rot = rot
152 | self._compute_convolved_polygon()
153 | self.a_min = a_min
154 | self.a_max = a_max
155 | self.params = [amplitude_params, sigma_params]
156 |
157 |
158 | def _compute_convolved_polygon(self):
159 | """Computes the convolved polygon
160 | """
161 | # Create the polygon as the intersection of N_sides half-planes, each rotated by 360/N_sides degrees
162 | x = torch.zeros((self.Nx,self.Nx)).to(device)
163 | x[:(self.Nx-1)//2 + self.N_voxels_to_face] = 1
164 | polygon = []
165 | for i in range(self.N_sides):
166 | polygon.append(rotate(x.unsqueeze(0), 360*i/self.N_sides+self.rot).squeeze())
167 | polygon = torch.stack(polygon).prod(dim=0)
168 | convolved_polygon = conv2d(polygon.unsqueeze(0).unsqueeze(0), polygon.unsqueeze(0).unsqueeze(0), padding='same').squeeze()
169 | self.convolved_polygon = convolved_polygon / convolved_polygon.max()
170 | self.convolved_polygon_sum = self.convolved_polygon.sum()
171 |
172 | def normalization_constant(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
173 | """Computes the normalization constant for the kernel at each source-detector distance a.
174 |
175 | Args:
176 | xv (torch.Tensor[Lx,Ly]): Meshgrid x-coordinates
177 | yv (torch.Tensor[Lx,Ly]): Meshgrid y-coordinates
178 | a (torch.Tensor[Ld]): Source-detector distances
179 |
180 | Returns:
181 | torch.Tensor[Ld]: Normalization constant at each source detector distance.
182 | """
183 | if self.grid_max<1:
184 | # For cases where the polygon kernel exceeds the boundary
185 | dx = xv[0,1] - xv[0,0]
186 | dy = yv[1,0] - yv[0,0]
187 | a = torch.clamp(a, self.a_min, self.a_max)
188 | a = a.reshape(-1,1,1)
189 | return self.amplitude_fn(a, self.amplitude_params) * self.pixel_size**2 * self.sigma_fn(a, self.sigma_params)**2 * self.convolved_polygon_sum / dx / dy
190 | else:
191 | # This is called nearly 100% of the time
192 | return self.kernel.sum(dim=(1,2)).reshape(-1,1,1)
193 |
194 | def __call__(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor, normalize: bool = False):
195 | """Computes the kernel at each source detector distance
196 |
197 | Args:
198 | xv (torch.Tensor[Lx,Ly]): Meshgrid x-coordinates
199 | yv (torch.Tensor[Lx,Ly]): Meshgrid y-coordinates
200 | a (torch.Tensor[Ld]): Source-detector distances
201 | normalize (bool, optional): Whether or not to normalize the output of the kernel. Defaults to False.
202 |
203 | Returns:
204 | torch.Tensor[Ld,Lx,Ly]: Kernel at each source-detector distance
205 | """
206 | a = torch.clamp(a, self.a_min, self.a_max)
207 | a = a.reshape(-1,1,1)
208 | sigma = self.sigma_fn(a, self.sigma_params)
209 | grid = torch.stack([
210 | 2*xv/(self.Nx*self.pixel_size*sigma),
211 | 2*yv/(self.Nx*self.pixel_size*sigma)],
212 | dim=-1)
213 | self.grid_max = grid.max()
214 | amplitude = self.amplitude_fn(a, self.amplitude_params).reshape(a.shape[0],1,1)
215 | self.kernel = amplitude * grid_sample(self.convolved_polygon.unsqueeze(0).unsqueeze(0).repeat(a.shape[0],1,1,1), grid, mode = 'bilinear', align_corners=False)[:,0]
216 | if normalize:
217 | self.kernel = self.kernel / self.normalization_constant(xv, yv, a)
218 | return self.kernel
219 |
220 |
221 |
222 |
--------------------------------------------------------------------------------
/src/spectpsftoolbox/operator2d.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from typing import Callable, Sequence
3 | import torch
4 | from torch.nn.functional import conv1d, conv2d
5 | from torchvision.transforms.functional import rotate
6 | from torchvision.transforms import InterpolationMode
7 | from torch.nn.functional import grid_sample
8 | from fft_conv_pytorch import fft_conv
9 | from spectpsftoolbox.utils import pad_object, unpad_object
10 | from spectpsftoolbox.kernel1d import GaussianKernel1D, Kernel1D
11 | from spectpsftoolbox.kernel2d import Kernel2D
12 | import dill
13 |
14 | class Operator:
15 | """Base class for operators; operators are used to apply linear shift invariant operations to a sequence of 2D images.
16 | """
17 | def __call__(
18 | self,
19 | input: torch.Tensor,
20 | xv: torch.Tensor,
21 | yv: torch.Tensor,
22 | a: torch.Tensor
23 | ) -> torch.Tensor:
24 | """Evaluates the operator on the input. The meshgrid xv and yv is used to compute the kernel size; it is assumed that the spacing in xv and yv is the same as that in input. The output is multiplied by the area of a pixel in the meshgrid.
25 |
26 | Args:
27 | input (torch.Tensor[Ld,Li,Lj]): Input 3D map to be operated on
28 | xv (torch.Tensor[Lx,Ly]): Meshgrid x coordinates
29 | yv (torch.Tensor[Lx,Ly]): Meshgrid y coordinates
30 | a (torch.Tensor[Ld]): Source-detector distances
31 |
32 | Returns:
33 | torch.Tensor[Ld,Li,Lj]: Output of the operator
34 | """
35 | return input * self._area(xv,yv)
36 | def _area(self, xv: torch.Tensor, yv: torch.Tensor) -> float:
37 | """Compute pixel volume in meshgrid
38 |
39 | Args:
40 | xv (torch.Tensor): Meshgrid x coordinates
41 | yv (torch.Tensor): Meshgrid y coordinates
42 |
43 | Returns:
44 | float: Area of a single pixel in the meshgrid
45 | """
46 | return (xv[0,1]-xv[0,0])*(yv[1,0]-yv[0,0])
47 | def set_device(self, device: str):
48 | """Sets the device of all parameters in the operator
49 |
50 | Args:
51 | device (str): Device to set parameters to
52 | """
53 | for p in self.params:
54 | p.data = p.data.to(device)
55 | def detach(self):
56 | """Detaches all parameters from autograd.
57 | """
58 | for p in self.params:
59 | p.detach_()
60 | def set_requires_grad(self):
61 | """Sets all parameters to require grad
62 | """
63 | for p in self.params:
64 | p.requires_grad_(True)
65 | def save(self, path: str):
66 | """Saves the operator
67 |
68 | Args:
69 | path (str): Path where to save the operator
70 | """
71 | self.set_device('cpu')
72 | self.detach()
73 | dill.dump(self, open(path, 'wb'))
74 | def normalization_constant(
75 | self,
76 | xv: torch.Tensor,
77 | yv: torch.Tensor,
78 | a: torch.Tensor
79 | ) -> torch.Tensor:
80 | """Computes the normalization constant of the operator
81 |
82 | Args:
83 | xv (torch.Tensor[Lx,Ly]): Meshgrid x coordinates
84 | yv (torch.Tensor[Lx,Ly]): Meshgrid y coordinates
85 | a (torch.Tensor[Ld]): Source-detector distances
86 |
87 | Returns:
88 | torch.Tensor[Ld]: Normalization constant at each source-detector distance
89 | """
90 | ...
91 | def normalize(
92 | self,
93 | input: torch.Tensor,
94 | xv: torch.Tensor,
95 | yv: torch.Tensor,
96 | a: torch.Tensor
97 | ) -> torch.Tensor:
98 | """Normalizes the input by the normalization constant. This ensures that the operator maintains the total sum of the input at each source-detector distance.
99 |
100 | Args:
101 | input (torch.Tensor[Ld,Li,Lj]): Input to be normalized
102 | xv (torch.Tensor[Lx,Ly]): Meshgrid x coordinates
103 | yv (torch.Tensor[Lx,Ly]): Meshgrid y coordinates
104 | a (torch.Tensor[Ld]): Source-detector distances
105 |
106 | Returns:
107 | torch.Tensor[Ld,Li,Lj]: Normalized input
108 | """
109 | return input / self.normalization_constant(xv, yv, a)
110 | def __add__(self, other: Operator) -> Operator:
111 | """Implementation of addition to allow for adding operators. Addition of two operators yields a new operator that corresponds to the sum of two linear operators
112 |
113 | Args:
114 | other (Operator): Operator to add
115 |
116 | Returns:
117 | Operator: New operator corresponding to the sum of the two operators
118 | """
119 | def combined_operator(x, *args, **kwargs):
120 | return self(x, *args, **kwargs) + other(x, *args, **kwargs)
121 | return CombinedOperator(combined_operator, [self, other], type='additive')
122 | def __mul__(self, other: Operator) -> Operator:
123 | """Implementation of multiplication to allow for multiplying operators. Multiplication of two operators yields a new operator that corresponds to the composition of the two operators
124 |
125 | Args:
126 | other (Operator): Operator to use in composition
127 |
128 | Returns:
129 | Operator: Composed operators
130 | """
131 | def combined_operator(x, *args, **kwargs):
132 | return self(other(x, *args, **kwargs), *args, **kwargs)
133 | return CombinedOperator(combined_operator, [self,other], type='sequential')
134 |
135 | class CombinedOperator(Operator):
136 | """Operator that has been constructed using two other operators
137 |
138 | Args:
139 | func (Callable): Function that specifies how the two operators are combined
140 | operators (Sequence[Operator]): Sequence of operators
141 | type (str): Type of operator: either 'sequential' or 'additive'
142 | """
143 | def __init__(
144 | self,
145 | func: Callable,
146 | operators: Sequence[Operator],
147 | type: str
148 | ) -> None:
149 | self.params = [*operators[0].params, *operators[1].params]
150 | self.func = func
151 | self.type = type
152 | self.operators = operators
153 |
154 | def set_device(self, device: str) -> None:
155 | """Sets the device of all the parameters in the composed operator
156 |
157 | Args:
158 | device (str): Device to set parameters to
159 | """
160 | for operator in self.operators:
161 | operator.set_device(device)
162 |
163 | def detach(self) -> None:
164 | """Detaches all parameters of the composed operator
165 | """
166 | for operator in self.operators:
167 | operator.detach()
168 |
169 | def normalization_constant(
170 | self,
171 | xv: torch.Tensor,
172 | yv: torch.Tensor,
173 | a: torch.Tensor
174 | ) -> torch.Tensor:
175 | """Computes the normalization constant of the combined operator using the normalization constants of its components
176 |
177 | Args:
178 | xv (torch.Tensor): Meshgrid x coordinates
179 | yv (torch.Tensor): Meshgrid y coordinates
180 | a (torch.Tensor): Source-detector distances
181 |
182 | Returns:
183 | torch.Tensor: Normalization constant
184 | """
185 | if self.type=='additive':
186 | return self.operators[0].normalization_constant(xv, yv, a) + self.operators[1].normalization_constant(xv, yv, a)
187 | else:
188 | return self.operators[0].normalization_constant(xv, yv, a) * self.operators[1].normalization_constant(xv, yv, a)
189 |
190 | def __call__(
191 | self,
192 | input: torch.Tensor,
193 | xv: torch.Tensor,
194 | yv: torch.Tensor,
195 | a: torch.Tensor,
196 | normalize: bool = False
197 | ) -> torch.Tensor:
198 | """Computes the output of the combined operator
199 |
200 | Args:
201 | input (torch.Tensor[Ld,Li,Lj]): Input to the operator
202 | xv (torch.Tensor[Lx,Ly]): Meshgrid x coordinates
203 | yv (torch.Tensor[Lx,Ly]): Meshgrid y coordinates
204 | a (torch.Tensor[Ld]): Source-detector distances
205 | normalize (bool, optional): Whether to normalize the output. Defaults to False.
206 |
207 | Returns:
208 | torch.Tensor[Ld,Li,Lj]: Output of the operator
209 | """
210 | if normalize:
211 | return self.func(input, xv, yv, a) / self.normalization_constant(xv, yv, a)
212 | else:
213 | return self.func(input,xv, yv, a)
214 |
215 | class Rotate1DConvOperator(Operator):
216 | """Operator that functions by rotating the input by a number of angles and applying a 1D convolution at each angle
217 |
218 | Args:
219 | kernel1D (Kernel1D): 1D kernel to apply at each rotation angle
220 | N_angles (int): Number of angles to convolve at. Evenly distributes these angles between 0 and 180 degrees (2 angles would be 0, 90 degrees)
221 | additive (bool, optional): Use in additive mode; in this case, the initial input is used at each rotation angle. If False, then output from each previous angle is used in succeeding angles. Defaults to False.
222 | use_fft_conv (bool, optional): Whether or not to use FFT based convolution. Defaults to False.
223 | rot (float, optional): Initial angle offset. Defaults to 0.
224 | """
225 | def __init__(
226 | self,
227 | kernel1D: Kernel1D,
228 | N_angles: int,
229 | additive: bool = False,
230 | use_fft_conv: bool = False,
231 | rot: float = 0
232 | ) -> None:
233 | self.params = kernel1D.params
234 | self.kernel1D = kernel1D
235 | self.N_angles = N_angles
236 | self.angles = [180*i/N_angles + rot for i in range(N_angles)]
237 | self.additive = additive
238 | self.angle_delta = 1e-4
239 | self.use_fft_conv = use_fft_conv
240 |
241 | def _conv(self, input: torch.Tensor) -> torch.Tensor:
242 | """Applies convolution to the input
243 |
244 | Args:
245 | input (torch.Tensor): Input tensor
246 |
247 | Returns:
248 | torch.Tensor: Convolved input tensor
249 | """
250 | if self.use_fft_conv:
251 | return fft_conv(input, self.kernel, padding='same', groups=self.kernel.shape[0])
252 | else:
253 | return conv1d(input, self.kernel, padding='same', groups=self.kernel.shape[0])
254 |
255 | def normalization_constant(
256 | self,
257 | xv: torch.Tensor,
258 | yv: torch.Tensor,
259 | a: torch.Tensor
260 | ) -> torch.Tensor:
261 | # """Uses recursive docstring"""
262 | if self.additive:
263 | return (self.N_angles*self.kernel.sum(dim=-1)).unsqueeze(-1) * torch.sqrt(self._area(xv,yv))
264 | else:
265 | return ((self.kernel.sum(dim=-1))**self.N_angles).unsqueeze(-1) * self._area(xv,yv)
266 |
267 | def _rotate(self, input: torch.Tensor, angle: float) -> torch.Tensor:
268 | """Rotates the input at the desired angle
269 |
270 | Args:
271 | input (torch.Tensor): Input tensor
272 | angle (float): Angle to rotate by
273 |
274 | Returns:
275 | torch.Tensor: Rotated input
276 | """
277 | if abs(angle) < self.angle_delta:
278 | return input
279 | else:
280 | return rotate(input, angle)
281 | 
282 | def _apply_additive(self, input: torch.Tensor) -> torch.Tensor:
285 | """Applies the operator in additive mode
286 |
287 | Args:
288 | input (torch.Tensor): Input tensor
289 |
290 | Returns:
291 | torch.Tensor: Output tensor, which is rotated + convolved input tensor
292 | """
293 | output = 0
294 | for angle in self.angles:
295 | output_i = self._rotate(input, angle)
296 | output_i = output_i.swapaxes(0,1)
297 | output_i = self._conv(output_i)
298 | output_i = output_i.swapaxes(0,1)
299 | output_i = self._rotate(output_i, -angle)
300 | output += output_i
301 | return output
302 |
303 | def _apply_regular(self, input: torch.Tensor) -> torch.Tensor:
304 | """Applies operator in non-additive mode
305 |
306 | Args:
307 | input (torch.Tensor): Input tensor
308 |
309 | Returns:
310 | torch.Tensor: Output tensor, which is rotated + convolved input tensor
311 | """
312 | for angle in self.angles:
313 | input = self._rotate(input, angle)
314 | input = input.swapaxes(0,1)
315 | input = self._conv(input)
316 | input = input.swapaxes(0,1)
317 | input = self._rotate(input, -angle)
318 | return input
319 |
320 | def __call__(
321 | self,
322 | input: torch.Tensor,
323 | xv: torch.Tensor,
324 | yv: torch.Tensor,
325 | a: torch.Tensor,
326 | normalize: bool = False
327 | ) -> torch.Tensor:
328 | # """Uses recursive docstring"""
329 | input = pad_object(input)
330 | # Get padded kernel shape
331 | dx = xv[0,1] - xv[0,0]
332 | Nx_padded = input.shape[-1]
333 | if Nx_padded%2==0: Nx_padded +=1 # kernel must be odd
334 | x_padded = torch.arange(-(Nx_padded-1)/2, (Nx_padded+1)/2, 1).to(input.device) * dx
335 | self.kernel = self.kernel1D(x_padded,a,normalize=False).unsqueeze(1)
336 | # Apply operations
337 | if self.additive:
338 | input = self._apply_additive(input)
339 | else:
340 | input = self._apply_regular(input)
341 | input = unpad_object(input)
342 | if normalize: input = self.normalize(input, xv, yv, a)
343 | if self.additive:
344 | return input * torch.sqrt(self._area(xv,yv))
345 | else:
346 | return input * self._area(xv,yv)
347 |
348 | class RotateSeperable2DConvOperator(Operator):
349 | """Operator that applies rotations followed by convolutions with two perpendicular 1D kernels (x/y) at each angle
350 |
351 | Args:
352 | kernel1D (Kernel1D): Kernel1D to use for convolution
353 | N_angles (int): Number of angles to rotate at
354 | additive (bool, optional): Use in additive mode; in this case, the initial input is used at each rotation angle. If False, then output from each previous angle is used in succeeding angles. Defaults to False.
355 | use_fft_conv (bool, optional): Whether or not to use FFT based convolution. Defaults to False.
356 | rot (float, optional): Initial rotation angle. Defaults to 0.
357 | """
358 | def __init__(
359 | self,
360 | kernel1D: Kernel1D,
361 | N_angles: int,
362 | additive: bool = False,
363 | use_fft_conv: bool = False,
364 | rot: float = 0
365 | ) -> None:
366 | self.params = kernel1D.params
367 | self.kernel1D = kernel1D
368 | self.N_angles = N_angles
369 | self.angles = [90*i/N_angles + rot for i in range(N_angles)]
370 | self.additive = additive
371 | self.angle_delta = 1e-4
372 | self.use_fft_conv = use_fft_conv
373 |
374 | def _conv(self, input: torch.Tensor) -> torch.Tensor:
375 | """Applies convolution
376 |
377 | Args:
378 | input (torch.Tensor): Input tensor to convolve
379 |
380 | Returns:
381 | torch.Tensor: Convolved input tensor
382 | """
383 | if self.use_fft_conv:
384 | return fft_conv(input, self.kernel, padding='same', groups=self.kernel.shape[0])
385 | else:
386 | return conv1d(input, self.kernel, padding='same', groups=self.kernel.shape[0])
387 |
388 | def normalization_constant(self, xv: torch.Tensor, yv: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
389 | # """Uses recursive docstring"""
390 | dx = xv[0,1] - xv[0,0]
391 | if self.additive:
392 | return (self.N_angles*self.kernel.sum(dim=-1)**2).unsqueeze(-1) * self._area(xv,yv)
393 | else:
394 | return ((self.kernel.sum(dim=-1)**2)**self.N_angles).unsqueeze(-1) * self._area(xv,yv)
395 |
396 | def _rotate(self, input: torch.Tensor, angle: float) -> torch.Tensor:
397 | """Applies rotation to input tensor
398 |
399 | Args:
400 | input (torch.Tensor): Input tensor to be rotated
401 | angle (float): Rotation angle
402 |
403 | Returns:
404 | torch.Tensor: Rotated input tensor
405 | """
406 | if abs(angle) < self.angle_delta:
407 | return input
408 | else:
409 | return rotate(input, angle)
410 | 
411 | def _apply_additive(self, input: torch.Tensor) -> torch.Tensor:
414 | """Applies operator in additive mode
415 |
416 | Args:
417 | input (torch.Tensor): Input tensor
418 |
419 | Returns:
420 | torch.Tensor: Output tensor
421 | """
422 | output = 0
423 | for angle in self.angles:
424 | output_i = self._rotate(input, angle)
425 | output_i = output_i.swapaxes(0,1)
426 | # Perform 2D conv
427 | output_i = self._conv(output_i)
428 | output_i = output_i.swapaxes(0,-1)
429 | output_i = self._conv(output_i)
430 | output_i = output_i.swapaxes(0,-1)
431 | # ----
432 | output_i = output_i.swapaxes(0,1)
433 | output_i = self._rotate(output_i, -angle)
434 | output += output_i
435 | return output
436 |
437 | def _apply_regular(self, input: torch.Tensor) -> torch.Tensor:
438 | """Applies operator in non-additive mode
439 |
440 | Args:
441 | input (torch.Tensor): Input tensor
442 |
443 | Returns:
444 | torch.Tensor: Output tensor
445 | """
446 | for angle in self.angles:
447 | input = self._rotate(input, angle)
448 | input = input.swapaxes(0,1)
449 | # Perform 2D conv
450 | input = self._conv(input)
451 | input = input.swapaxes(0,-1)
452 | input = self._conv(input)
453 | input = input.swapaxes(0,-1)
454 | # ----
455 | input = input.swapaxes(0,1)
456 | input = self._rotate(input, -angle)
457 | return input
458 |
459 | def __call__(
460 | self,
461 | input: torch.Tensor,
462 | xv: torch.Tensor,
463 | yv: torch.Tensor,
464 | a: torch.Tensor,
465 | normalize: bool = False
466 | ) -> torch.Tensor:
467 | # """Uses recursive docstring"""
468 | self.kernel = self.kernel1D(xv[0],a,normalize=False).unsqueeze(1) # always false
469 | input = pad_object(input)
470 | if self.additive:
471 | input = self._apply_additive(input)
472 | else:
473 | input = self._apply_regular(input)
474 | input = unpad_object(input)
475 | if normalize: input = self.normalize(input, xv, yv, a)
476 | return input * self._area(xv,yv)
477 |
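# Usage sketch (illustrative only, not part of this module): approximating a
# radially-symmetric exponential PSF by summing rotated separable 1D
# convolutions with RotateSeperable2DConvOperator. The kernel/operator
# signatures follow the definitions in this file and in the tests; the grid
# size, spacing, and parameter values below are assumptions for illustration.
import torch
from spectpsftoolbox.kernel1d import FunctionKernel1D
from spectpsftoolbox.operator2d import RotateSeperable2DConvOperator

kernel_fn = lambda x: torch.exp(-torch.abs(x))
amplitude_fn = lambda a, bs: bs[0] * torch.ones_like(a)
sigma_fn = lambda a, bs: bs[0] * (a + 0.1)
kernel1D = FunctionKernel1D(
    kernel_fn, amplitude_fn, sigma_fn,
    torch.tensor([1.0]), torch.tensor([0.3]),
)
# Additive mode: the original input is rotated/convolved at each of the 4 angles and summed
op = RotateSeperable2DConvOperator(kernel1D, N_angles=4, additive=True)

Nx0, dx0 = 129, 0.1  # assumed 129 x 129 grid with 0.1 cm spacing
x = y = torch.arange(-(Nx0 - 1) / 2, (Nx0 + 1) / 2, 1) * dx0
xv, yv = torch.meshgrid(x, y, indexing='xy')
distances = torch.linspace(1.0, 10.0, 5)
input = torch.zeros_like(xv).unsqueeze(0).repeat(distances.shape[0], 1, 1)
input[:, 64, 64] = 1.0  # point source at the centre of each plane
output = op(input, xv, yv, distances, normalize=True)  # shape [5, 129, 129]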
478 | class Kernel2DOperator(Operator):
479 | """Operator built using a general 2D kernel; the output of this operator is 2D convolution with the Kernel2D instance
480 |
481 | Args:
482 | kernel2D (Kernel2D): Kernel2D instance used for obtaining the generic 2D kernel
483 | use_fft_conv (bool, optional): Whether or not to use FFT based convolution. Defaults to False.
484 | """
485 | def __init__(
486 | self,
487 | kernel2D: Kernel2D,
488 | use_fft_conv: bool = False,
489 | ) -> None:
490 | self.params = kernel2D.params
491 | self.kernel2D = kernel2D
492 | self.use_fft_conv = use_fft_conv
493 |
494 | def _conv(self, input: torch.Tensor) -> torch.Tensor:
495 | """Applies convolution to the input
496 |
497 | Args:
498 | input (torch.Tensor): Input
499 |
500 | Returns:
501 | torch.Tensor: Output
502 | """
503 | if self.use_fft_conv:
504 | return fft_conv(input, self.kernel.unsqueeze(1), padding='same', groups=self.kernel.shape[0])
505 | else:
506 | return conv2d(input, self.kernel.unsqueeze(1), padding='same', groups=self.kernel.shape[0])
507 |
508 | def normalization_constant(
509 | self,
510 | xv: torch.Tensor,
511 | yv: torch.Tensor,
512 | a: torch.Tensor
513 | ) -> torch.Tensor:
514 | # """Uses recursive docstring"""
515 | return self.kernel2D.normalization_constant(xv, yv, a) * self._area(xv,yv)
516 |
517 | def __call__(
518 | self,
519 | input: torch.Tensor,
520 | xv: torch.Tensor,
521 | yv: torch.Tensor,
522 | a: torch.Tensor,
523 | normalize: bool = False
524 | ) -> torch.Tensor:
525 | # """Uses recursive docstring"""
526 | self.kernel = self.kernel2D(xv,yv,a)
527 | if normalize:
528 | self.kernel = self.kernel / self.normalization_constant(xv, yv, a)
529 | return self._conv(input) * self._area(xv,yv)
530 |
531 |
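# Usage sketch (illustrative only, not part of this module): the output of
# Kernel2DOperator is a per-distance 2D convolution with kernel2D(xv, yv, a),
# scaled by the pixel area, so for a point-source input each output plane is
# (approximately) the kernel evaluated at that distance. Grid sizes and
# parameter values below are assumptions for illustration.
import torch
from spectpsftoolbox.kernel2d import FunctionalKernel2D
from spectpsftoolbox.operator2d import Kernel2DOperator
from spectpsftoolbox.utils import get_kernel_meshgrid

kernel_fn = lambda xv, yv: torch.exp(-(xv**2 + yv**2))
amplitude_fn = lambda a, bs: bs[0] * torch.exp(-bs[1] * a)
sigma_fn = lambda a, bs: bs[0] * (a + 0.1)
kernel2D = FunctionalKernel2D(
    kernel_fn, amplitude_fn, sigma_fn,
    torch.tensor([2.0, 0.1]), torch.tensor([0.3]),
)
op = Kernel2DOperator(kernel2D)

x = y = torch.arange(-64, 65, 1.0) * 0.1
xv, yv = torch.meshgrid(x, y, indexing='xy')
xv_k, yv_k = get_kernel_meshgrid(xv, yv, k_width=6.0)  # odd-sized kernel grid
a = torch.linspace(1.0, 10.0, 5)
input = torch.zeros_like(xv).unsqueeze(0).repeat(a.shape[0], 1, 1)
input[:, 64, 64] = 1.0
output = op(input, xv_k, yv_k, a)  # shape [5, 129, 129]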
532 | class NearestKernelOperator(Operator):
533 | """Operator that uses a set of PSFs and distances to compute the output of the operator. The PSF is obtained by selecting the nearest PSF to each distance provided in __call__ so that each plane in input is convolved with the appropriate kernel.
534 |
535 | Args:
536 | psf_data (torch.Tensor[Ld,Lx,Ly]): Provided PSF data
537 | distances (torch.Tensor[Ld]): Source-detector distance for each PSF
538 | dr0 (Sequence[float]): Pixel spacing (dx, dy) of the provided PSF data
539 | use_fft_conv (bool, optional): Whether or not to use FFT based convolutions. Defaults to True.
540 | grid_sample_mode (str, optional): How to sample the PSF when the input spacing is not the same as the PSF. Defaults to 'bilinear'.
541 | """
542 | def __init__(
543 | self,
544 | psf_data: torch.Tensor,
545 | distances: torch.Tensor,
546 | dr0: float,
547 | use_fft_conv: bool = True,
548 | grid_sample_mode: str = 'bilinear'
549 | ) -> None:
550 | self.psf_data = psf_data
551 | self.Nx0 = psf_data.shape[1]
552 | self.Ny0 = psf_data.shape[2]
553 | self.distances_original = distances
554 | self.dr0 = dr0
555 | self.use_fft_conv = use_fft_conv
556 | self.params = []
557 | self.grid_sample_mode = grid_sample_mode
558 |
559 | def set_device(self, device: str) -> None:
560 | # """Uses recursive docstring"""
561 | self.psf_data = self.psf_data.to(device)
562 | self.distances_original = self.distances_original.to(device)
563 |
564 | def _conv(self, input: torch.Tensor, kernel: torch.Tensor) -> torch.Tensor:
565 | """Performs convolution on the input data
566 |
567 | Args:
568 | input (torch.Tensor): Input data
569 | kernel (torch.Tensor): Kernel to convolve with
570 |
571 | Returns:
572 | torch.Tensor: Convolved input data
573 | """
574 | groups = input.shape[0]
575 | if self.use_fft_conv:
576 | return fft_conv(input.unsqueeze(0), kernel.unsqueeze(1), padding='same', groups=groups).squeeze()
577 | else:
578 | return conv2d(input.unsqueeze(0), kernel.unsqueeze(1), padding='same', groups=groups).squeeze()
579 |
580 | def _get_nearest_distance_idxs(self, distances: torch.Tensor) -> torch.Tensor:
581 | """Obtains the indices of the nearest PSF to each distance
582 |
583 | Args:
584 | distances (torch.Tensor): Distances to find the nearest PSF for
585 |
586 | Returns:
587 | torch.Tensor: PSF data corresponding to the nearest stored distance for each requested distance
588 | """
589 | differences = torch.abs(distances[:, None] - self.distances_original)
590 | indices = torch.argmin(differences, dim=1)
591 | return self.psf_data[indices]
592 |
593 | def _get_kernel(
594 | self,
595 | xv: torch.Tensor,
596 | yv: torch.Tensor,
597 | a: torch.Tensor
598 | ) -> torch.Tensor:
599 | """Obtains the kernel by sampling the nearest PSF at the appropriate location
600 |
601 | Args:
602 | xv (torch.Tensor[Lx,Ly]): Meshgrid x coordinates
603 | yv (torch.Tensor[Lx,Ly]): Meshgrid y coordinates
604 | a (torch.Tensor[Ld]): Source-detector distances
605 |
606 | Returns:
607 | torch.Tensor[Ld,Lx,Ly]: Kernel obtained by sampling the nearest PSF
608 | """
609 | dx = xv[0, 1] - xv[0, 0]
610 | psf = self._get_nearest_distance_idxs(a)
611 | grid = torch.stack([
612 | 2*xv/(self.Nx0 * self.dr0[0]),
613 | 2*yv/(self.Ny0 * self.dr0[1])],
614 | dim=-1).unsqueeze(0).repeat(a.shape[0], 1, 1, 1)
615 | return (dx/self.dr0[0])**2 * grid_sample(psf.unsqueeze(1), grid, align_corners=False, mode=self.grid_sample_mode).squeeze()
616 |
617 | def __call__(
618 | self,
619 | input: torch.Tensor,
620 | xv: torch.Tensor,
621 | yv: torch.Tensor,
622 | a: torch.Tensor,
623 | normalize: bool = False
624 | ) -> torch.Tensor:
625 | # """Uses recursive docstring"""
626 | kernel = self._get_kernel(xv, yv, a)
627 | return self._conv(input, kernel).squeeze()
628 |
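# Usage sketch (illustrative only, not part of this module): a toy PSF table
# with Gaussians stored at 5, 10, and 15 cm; for each requested distance the
# nearest stored PSF is resampled onto the input grid and applied. All shapes,
# spacings, and values below are assumptions for illustration.
import torch
from spectpsftoolbox.operator2d import NearestKernelOperator

dr0 = torch.tensor([0.1, 0.1])                  # (dx, dy) spacing of the stored PSFs
x0 = torch.arange(-32, 33, 1.0) * dr0[0]        # 65 samples -> odd PSF width
xv0, yv0 = torch.meshgrid(x0, x0, indexing='xy')
table_distances = torch.tensor([5.0, 10.0, 15.0])
psf_data = torch.stack([torch.exp(-(xv0**2 + yv0**2) / (0.05 * d)) for d in table_distances])
op = NearestKernelOperator(psf_data, table_distances, dr0, use_fft_conv=False)

# Point-source input on a coarser grid (0.2 cm spacing)
x = torch.arange(-64, 65, 1.0) * 0.2
xv, yv = torch.meshgrid(x, x, indexing='xy')
a = torch.tensor([6.0, 12.0])                   # nearest stored PSFs: 5 cm and 10 cm
input = torch.zeros_like(xv).unsqueeze(0).repeat(a.shape[0], 1, 1)
input[:, 64, 64] = 1.0
output = op(input, xv, yv, a)                   # shape [2, 129, 129]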
629 | # TODO: Make subclass of RotateSeperable2DConvOperator with one angle and Gaussian kernel
630 | class GaussianOperator(Operator):
631 | """Gaussian operator; works by convolving the input with two perpendicular 1D kernels. This is implemented seperately from the Kernel2DOperator since it is more efficient to convolve with two 1D kernels than a 2D kernel.
632 |
633 | Args:
634 | amplitude_fn (Callable): Amplitude function for 1D Gaussian kernel
635 | sigma_fn (Callable): Scale function for 1D Gaussian kernel
636 | amplitude_params (torch.Tensor): Amplitude hyperparameters
637 | sigma_params (torch.Tensor): Scaling hyperparameters
638 | a_min (float, optional): Minimum source-detector distance for the kernel; any distance values passed to __call__ below this value will be clamped to this value. Defaults to -torch.inf.
639 | a_max (float, optional): Maximum source-detector distance for the kernel; any distance values passed to __call__ above this value will be clamped to this value. Defaults to torch.inf.
640 | use_fft_conv (bool, optional): Whether or not to use FFT based convolution. Defaults to False.
641 | """
642 | def __init__(
643 | self,
644 | amplitude_fn: Callable,
645 | sigma_fn: Callable,
646 | amplitude_params: torch.Tensor,
647 | sigma_params: torch.Tensor,
648 | a_min: float = -torch.inf,
649 | a_max: float = torch.inf,
650 | use_fft_conv: bool = False,
651 | ) -> None:
652 | self.amplitude_fn = amplitude_fn
653 | self.sigma_fn = sigma_fn
654 | self.amplitude_params = amplitude_params
655 | self.sigma_params = sigma_params
656 | amplitude_fn1D = lambda a, bs: torch.sqrt(torch.abs(amplitude_fn(a, bs)))
657 | self.kernel1D = GaussianKernel1D(amplitude_fn1D, sigma_fn, amplitude_params, sigma_params, a_min, a_max)
658 | self.params = self.kernel1D.params
659 | self.a_min = a_min
660 | self.a_max = a_max
661 | self.use_fft_conv = use_fft_conv
662 |
663 | def normalization_constant(
664 | self,
665 | xv: torch.Tensor,
666 | yv: torch.Tensor,
667 | a: torch.Tensor
668 | ) -> torch.Tensor:
669 | # """Uses recursive docstring"""
670 | return self.kernel1D.normalization_constant(xv[0], a).unsqueeze(1)**2 * self._area(xv,yv)
671 |
672 | def _conv(self, input: torch.Tensor, kernel1D: torch.Tensor) -> torch.Tensor:
673 | """Performs convolution on input
674 |
675 | Args:
676 | input (torch.Tensor): Input tensor
677 | kernel1D (torch.Tensor): Gaussian 1D kernel
678 |
679 | Returns:
680 | torch.Tensor: Output convolved tensor
681 | """
682 | if self.use_fft_conv:
683 | return fft_conv(input, kernel1D, padding='same', groups=kernel1D.shape[0])
684 | else:
685 | return conv1d(input, kernel1D, padding='same', groups=kernel1D.shape[0])
686 |
687 | def __call__(
688 | self,
689 | input: torch.Tensor,
690 | xv: torch.Tensor,
691 | yv: torch.Tensor,
692 | a: torch.Tensor,
693 | normalize: bool = False
694 | ) -> torch.Tensor:
695 | # """Uses recursive docstring"""
696 | x = xv[0]
697 | kernel = self.kernel1D(x,a).unsqueeze(1)
698 | input = input.swapaxes(0,1) # x needs to be channel index
699 | for i in [0,2]:
700 | input = input.swapaxes(i,2)
701 | input = self._conv(input, kernel)
702 | input = input.swapaxes(i,2)
703 | input = input.swapaxes(0,1)
704 | if normalize: input = self.normalize(input, xv, yv, a)
705 | return input * self._area(xv,yv)
--------------------------------------------------------------------------------
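A note on the GaussianOperator defined above: it exploits the separability of the 2D Gaussian, convolving once along x and once along y instead of performing a single, more expensive 2D convolution, so the cost scales with the kernel width rather than its square. The sketch below shows the intended call pattern for a distance-independent detector blur; the operator signature follows operator2d.py and the tests, while the grid size, spacing, and sigma value are assumptions for illustration only.

import torch
from spectpsftoolbox.operator2d import GaussianOperator

amplitude_fn = lambda a, bs: torch.ones_like(a)
sigma_fn = lambda a, bs: bs[0] * torch.ones_like(a)     # constant sigma, independent of distance
op = GaussianOperator(
    amplitude_fn, sigma_fn,
    torch.tensor([1.0]), torch.tensor([0.16]),
)

x = y = torch.arange(-64, 65, 1.0) * 0.1
xv, yv = torch.meshgrid(x, y, indexing='xy')
a = torch.linspace(1.0, 10.0, 5)
input = torch.zeros_like(xv).unsqueeze(0).repeat(a.shape[0], 1, 1)
input[:, 64, 64] = 1.0
output = op(input, xv, yv, a, normalize=True)           # two 1D convolutions per plane

This is the same pattern used for the detector (intrinsic resolution) component in tests/test_functionality.py, where it is composed with a collimator Kernel2DOperator.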
/src/spectpsftoolbox/simind_io.py:
--------------------------------------------------------------------------------
1 | from __future__ import annotations
2 | from typing import Sequence
3 | import numpy as np
4 | import os
5 | import re
6 | import torch
7 | from pathlib import Path
8 |
9 | relation_dict = {'unsignedinteger': 'int',
10 | 'shortfloat': 'float',
11 | 'int': 'int'}
12 |
13 | def get_header_value(
14 | list_of_attributes: list[str],
15 | header: str,
16 | dtype: type = np.float32,
17 | split_substr = ':=',
18 | split_idx = -1,
19 | return_all = False
20 | ) -> float|str|int:
21 | """Finds the first entry in an Interfile with the string ``header``
22 |
23 | Args:
24 | list_of_attributes (list[str]): Simind data file, as a list of lines.
25 | header (str): The header looked for
26 | dtype (type, optional): The data type to be returned corresponding to the value of the header. Defaults to np.float32.
27 |
28 | Returns:
29 | float|str|int: The value corresponding to the header.
30 | """
31 | header = header.replace('[', '\\[').replace(']', '\\]').replace('(', '\\(').replace(')', '\\)')
32 | y = np.vectorize(lambda y, x: bool(re.compile(x).search(y)))
33 | selection = y(list_of_attributes, header).astype(bool)
34 | lines = list_of_attributes[selection]
35 | if len(lines)==0:
36 | return False
37 | values = []
38 | for i, line in enumerate(lines):
39 | if dtype == np.float32:
40 | values.append(np.float32(line.replace('\n', '').split(split_substr)[split_idx]))
41 | elif dtype == str:
42 | values.append(line.replace('\n', '').split(split_substr)[split_idx].replace(' ', ''))
43 | elif dtype == int:
44 | values.append(int(line.replace('\n', '').split(split_substr)[split_idx].replace(' ', '')))
45 | if not(return_all):
46 | return values[0]
47 | return values
48 |
49 | def get_projections_from_single_file(headerfile: str) -> torch.Tensor:
50 | """Gets projection data from a SIMIND header file.
51 |
52 | Args:
53 | headerfile (str): Path to the header file
55 |
56 | Returns:
57 | torch.Tensor[Lx, Ly]: Simulated SPECT projection data (a single projection, as required for PSF fitting).
58 | """
59 | with open(headerfile) as f:
60 | headerdata = f.readlines()
61 | headerdata = np.array(headerdata)
62 | num_proj = get_header_value(headerdata, 'total number of images', int)
63 | if num_proj>1:
64 | raise ValueError('Only one projection is supported for PSF fitting')
65 | proj_dim1 = get_header_value(headerdata, 'matrix size [1]', int)
66 | proj_dim2 = get_header_value(headerdata, 'matrix size [2]', int)
67 | number_format = get_header_value(headerdata, 'number format', str)
68 | number_format = relation_dict[number_format]
69 | num_bytes_per_pixel = get_header_value(headerdata, 'number of bytes per pixel', int)
70 | imagefile = get_header_value(headerdata, 'name of data file', str)
71 | dtype = eval(f'np.{number_format}{num_bytes_per_pixel*8}')
72 | projections = np.fromfile(os.path.join(str(Path(headerfile).parent), imagefile), dtype=dtype)
73 | projections = np.transpose(projections.reshape((num_proj,proj_dim2,proj_dim1))[:,::-1], (0,2,1))[0]
74 | projections = torch.tensor(projections.copy())
75 | return projections
76 |
77 | def get_projections(headerfiles: str | Sequence[str]) -> torch.Tensor:
78 | """Obtains projection PSF data from a list of headerfiles and concatenates them together
79 |
80 | Args:
81 | headerfiles (str | Sequence[str]): List of length Ld corresponding to all projections at different source-detector distances.
82 |
83 | Returns:
84 | torch.Tensor[Ld,Lx,Ly]: Output tensor of PSF data at each source-detector distance
85 | """
86 | projectionss = []
87 | for headerfile in headerfiles:
88 | projections = get_projections_from_single_file(headerfile)
89 | projectionss.append(projections)
90 | return torch.stack(projectionss)
91 |
92 | def get_source_detector_distances(resfiles: Sequence[str]) -> torch.Tensor:
93 | """Obtains the source-detector distance from a list of resfiles
94 |
95 | Args:
96 | resfiles (Sequence[str]): List of .res files (of length Ld) corresponding to each simulated PSF projection
97 |
98 | Returns:
99 | torch.Tensor[Ld]: List of source-detector distances
100 | """
101 | radii = []
102 | for resfile in resfiles:
103 | with open(resfile) as f:
104 | resdata = f.readlines()
105 | resdata = np.array(resdata)
106 | radius = float(get_header_value(resdata, 'UpperEneWindowTresh:', str).split(':')[-1])
107 | radii.append(radius)
108 | return torch.tensor(radii)
109 |
110 | def get_meshgrid(resfiles: Sequence[str], device = 'cpu') -> tuple[torch.Tensor, torch.Tensor]:
111 | """Obtains a meshgrid of the x and y coordinates corresponding to the simulated PSF data
112 |
113 | Args:
114 | resfiles (Sequence[str]): List of .res files (of length Ld) corresponding to each simulated PSF projection
115 | device (str, optional): Device to place the output meshgrid on. Defaults to 'cpu'.
116 |
117 | Returns:
118 | tuple[torch.Tensor, torch.Tensor]: Meshgrid of x and y coordinates
119 | """
120 | with open(resfiles[0]) as f:
121 | resdata = f.readlines()
122 | resdata = np.array(resdata)
123 | dx = float(get_header_value(resdata, 'PixelSize I', str).split(':')[1].split('S')[0])
124 | dy = float(get_header_value(resdata, 'PixelSize J', str).split(':')[1].split('S')[0])
125 | Nx = int(get_header_value(resdata, 'MatrixSize I', str).split(':')[1].split('I')[0])
126 | Ny = int(get_header_value(resdata, 'MatrixSize J', str).split(':')[1].split('A')[0])
127 | x = torch.arange(-(Nx-1)/2, (Nx+1)/2, 1).to(device).to(torch.float32) * dx
128 | y = torch.arange(-(Ny-1)/2, (Ny+1)/2, 1).to(device).to(torch.float32) * dy
129 | xv, yv = torch.meshgrid(x, y, indexing='xy')
130 | return xv, yv
131 |
--------------------------------------------------------------------------------
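The three helpers in simind_io.py are typically used together to turn a set of SIMIND PSF simulations into the psf_data/distances/meshgrid triple expected by NearestKernelOperator. A minimal sketch follows; the file names are placeholders for whatever a SIMIND run actually produced and are not shipped with the repository.

import torch
from spectpsftoolbox.simind_io import get_projections, get_source_detector_distances, get_meshgrid
from spectpsftoolbox.operator2d import NearestKernelOperator

# Placeholder file names for a SIMIND PSF simulation repeated at several distances
headerfiles = [f'psf_{i}.h00' for i in range(1, 4)]
resfiles = [f'psf_{i}.res' for i in range(1, 4)]

psf_data = get_projections(headerfiles)               # [Ld, Lx, Ly]
distances = get_source_detector_distances(resfiles)   # [Ld]
xv, yv = get_meshgrid(resfiles)                       # sampling grid of the simulated PSFs

dr0 = torch.tensor([xv[0, 1] - xv[0, 0], yv[1, 0] - yv[0, 0]])  # (dx, dy) of the PSF grid
op = NearestKernelOperator(psf_data, distances, dr0)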
/src/spectpsftoolbox/utils.py:
--------------------------------------------------------------------------------
1 | import torch
2 | import numpy as np
3 | from torch.nn.functional import pad
4 |
5 | def compute_pad_size(width: int) -> int:
6 | """Computes the pad size required for rotation / inverse rotation so that pad + rotate + inverse rotate + unpad = original object
7 |
8 | Args:
9 | width (int): Width of input tensor (assumed to be square)
10 |
11 | Returns:
12 | int: How much to pad each side by
13 | """
14 | return int(np.ceil((np.sqrt(2)*width - width)/2))
15 |
16 | def compute_pad_size_padded(width: int) -> int:
17 | """Given a padded tensor, computes how much padding it has had
18 |
19 | Args:
20 | width (int): Width of input padded tensor (assumed square)
21 |
22 | Returns:
23 | int: How much padding was applied to the input tensor
24 | """
25 | a = (np.sqrt(2) - 1)/2
26 | if width%2==0:
27 | width_old = int(2*np.floor((width/2)/(1+2*a)))
28 | else:
29 | width_old = int(2*np.floor(((width-1)/2)/(1+2*a)))
30 | return int((width-width_old)/2)
31 |
32 | def pad_object(object: torch.Tensor, mode='constant') -> torch.Tensor:
33 | """Pads an input tensor so that pad + rotate + inverse rotate + unpad = original object. This is useful for rotating objects without losing information at the edges.
34 |
35 | Args:
36 | object (torch.Tensor): Object to be padded
37 | mode (str, optional): Padding mode used to extrapolate beyond the object boundary. Defaults to 'constant'.
38 |
39 | Returns:
40 | torch.Tensor: Padded object
41 | """
42 | pad_size = compute_pad_size(object.shape[-2])
43 | return pad(object, [pad_size,pad_size,pad_size,pad_size], mode=mode)
44 |
45 | def unpad_object(object: torch.Tensor) -> torch.Tensor:
46 | """Given a padded object, removes the padding to return the original object
47 |
48 | Args:
49 | object (torch.Tensor): Padded object
50 |
51 | Returns:
52 | torch.Tensor: Unpadded, original object
53 | """
54 | pad_size = compute_pad_size_padded(object.shape[-2])
55 | return object[:,pad_size:-pad_size,pad_size:-pad_size]
56 |
57 | def get_kernel_meshgrid(
58 | xv_input: torch.Tensor,
59 | yv_input: torch.Tensor,
60 | k_width: float
61 | ) -> tuple[torch.Tensor, torch.Tensor]:
62 | """Obtains a kernel meshgrid of given spatial width k_width (in same units as meshgrid). Enforces the kernel size is odd
63 |
64 | Args:
65 | xv_input (torch.Tensor): Meshgrid x-coordinates corresponding to the input of some operator
66 | yv_input (torch.Tensor): Meshgrid y-coordinates corresponding to the input of some operator
67 | k_width (float): Width of kernel in same units as meshgrid
68 |
69 | Returns:
70 | tuple[torch.Tensor, torch.Tensor]: Meshgrid of kernel
71 | """
72 | dx = xv_input[0,1] - xv_input[0,0]
73 | dy = yv_input[1,0] - yv_input[0,0]
74 | x_kernel = torch.arange(0,k_width/2,dx).to(xv_input.device)
75 | x_kernel = torch.cat([-x_kernel.flip(dims=(0,))[:-1], x_kernel])
76 | y_kernel = torch.arange(0,k_width/2,dy).to(xv_input.device)
77 | y_kernel = torch.cat([-y_kernel.flip(dims=(0,))[:-1], y_kernel])
78 | xv_kernel, yv_kernel = torch.meshgrid(x_kernel, y_kernel, indexing='xy')
79 | return xv_kernel, yv_kernel
--------------------------------------------------------------------------------
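The padding helpers in utils.py form an exact round trip: pad_object enlarges a square object so that a rotate/inverse-rotate pair cannot push content off the grid, and unpad_object recovers the original array; get_kernel_meshgrid always returns an odd number of samples so that 'same'-padded convolutions stay centred. A short sketch, with the array sizes assumed for illustration:

import torch
from spectpsftoolbox.utils import pad_object, unpad_object, get_kernel_meshgrid

obj = torch.rand(3, 128, 128)
padded = pad_object(obj)           # zero-padded so rotation cannot lose edge information
restored = unpad_object(padded)    # exact round trip
assert torch.equal(restored, obj)

x = y = torch.arange(-64, 65, 1.0) * 0.1
xv, yv = torch.meshgrid(x, y, indexing='xy')
xv_k, yv_k = get_kernel_meshgrid(xv, yv, k_width=6.0)
assert xv_k.shape[-1] % 2 == 1     # kernel grid has an odd number of samples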
/tests/test_functionality.py:
--------------------------------------------------------------------------------
1 | import pytest
2 | import torch
3 | from spectpsftoolbox.kernel1d import FunctionKernel1D
4 | from spectpsftoolbox.kernel2d import NGonKernel2D, FunctionalKernel2D
5 | from spectpsftoolbox.utils import get_kernel_meshgrid
6 | from spectpsftoolbox.operator2d import Kernel2DOperator, GaussianOperator
7 | device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
8 |
9 | def test_1Dkernel():
10 | amplitude_fn = lambda a, bs: bs[0]*torch.exp(-bs[1]*a)
11 | sigma_fn = lambda a, bs: bs[0]*(a+0.1)
12 | amplitude_params = torch.tensor([2,0.1], device=device, dtype=torch.float32)
13 | sigma_params = torch.tensor([0.3], device=device, dtype=torch.float32)
14 | kernel_fn = lambda x: torch.exp(-torch.abs(x))
15 | kernel1D = FunctionKernel1D(kernel_fn, amplitude_fn, sigma_fn, amplitude_params, sigma_params)
16 | x = torch.linspace(-5,5,100).to(device)
17 | a = torch.linspace(1,10,5).to(device)
18 | kernel_value = kernel1D(x, a)
19 |
20 | def test_FunctionalKernel2D():
21 | Nx0 = 255
22 | dx0 = 0.05
23 | x = y = torch.arange(-(Nx0-1)/2, (Nx0+1)/2, 1).to(device) * dx0
24 | xv, yv = torch.meshgrid(x, y, indexing='xy')
25 | kernel_fn = lambda xv, yv: torch.exp(-torch.abs(xv))*torch.exp(-torch.abs(yv)) * torch.sin(xv*3)**2 * torch.cos(yv*3)**2
26 | amplitude_fn = lambda a, bs: bs[0]*torch.exp(-bs[1]*a)
27 | sigma_fn = lambda a, bs: bs[0]*(a+0.1)
28 | amplitude_params = torch.tensor([2,0.1], device=device, dtype=torch.float32)
29 | sigma_params = torch.tensor([0.3], device=device, dtype=torch.float32)
30 | # Define the kernel
31 | kernel2D = FunctionalKernel2D(kernel_fn, amplitude_fn, sigma_fn, amplitude_params, sigma_params)
32 | a = torch.linspace(1,10,5).to(device)
33 | kernel = kernel2D(xv, yv, a, normalize=True)
34 |
35 | def test_NGonKernel2D():
36 | collimator_length = 2.405
37 | collimator_width = 0.254 #flat side to flat side
38 | sigma_fn = lambda a, bs: (bs[0]+a) / bs[0]
39 | sigma_params = torch.tensor([collimator_length], requires_grad=True, dtype=torch.float32, device=device)
40 | # Set amplitude to 1
41 | amplitude_fn = lambda a, bs: torch.ones_like(a)
42 | amplitude_params = torch.tensor([1.], requires_grad=True, dtype=torch.float32, device=device)
43 | ngon_kernel = NGonKernel2D(
44 | N_sides = 6, # sides of polygon
45 | Nx = 255, # resolution of polygon
46 | collimator_width=collimator_width, # width of polygon
47 | amplitude_fn=amplitude_fn,
48 | sigma_fn=sigma_fn,
49 | amplitude_params=amplitude_params,
50 | sigma_params=sigma_params,
51 | rot=90
52 | )
53 | Nx0 = 255
54 | dx0 = 0.048
55 | x = y = torch.arange(-(Nx0-1)/2, (Nx0+1)/2, 1).to(device) * dx0
56 | xv, yv = torch.meshgrid(x, y, indexing='xy')
57 | distances = torch.tensor([1,5,10,15,20,25], dtype=torch.float32, device=device)
58 | kernel = ngon_kernel(xv, yv, distances, normalize=True).cpu().detach()
59 |
60 | def test_Operator1():
61 | # Tests Kernel2DOperator, GaussianOperator, and Operator __mult__
62 | # -------------------
63 | # Collimator Component
64 | # -------------------
65 | collimator_length = 2.405
66 | collimator_width = 0.254 #flat side to flat side
67 | mu = 28.340267562430935
68 | sigma_fn = lambda a, bs: (bs[0]+a) / bs[0]
69 | sigma_params = torch.tensor([collimator_length-2/mu], requires_grad=True, dtype=torch.float32, device=device)
70 | # Set amplitude to 1
71 | amplitude_fn = lambda a, bs: torch.ones_like(a)
72 | amplitude_params = torch.tensor([1.], requires_grad=True, dtype=torch.float32, device=device)
73 | ngon_kernel = NGonKernel2D(
74 | N_sides = 6, # sides of polygon
75 | Nx = 255, # resolution of polygon
76 | collimator_width=collimator_width, # width of polygon
77 | amplitude_fn=amplitude_fn,
78 | sigma_fn=sigma_fn,
79 | amplitude_params=amplitude_params,
80 | sigma_params=sigma_params,
81 | rot=90
82 | )
83 | ngon_operator = Kernel2DOperator(ngon_kernel)
84 | # -------------------
85 | # Detector component
86 | # -------------------
87 | intrinsic_sigma = 0.1614 # typical for NaI 140keV detection
88 | gauss_amplitude_fn = lambda a, bs: torch.ones_like(a)
89 | gauss_sigma_fn = lambda a, bs: bs[0]*torch.ones_like(a)
90 | gauss_amplitude_params = torch.tensor([1.], requires_grad=True, dtype=torch.float32, device=device)
91 | gauss_sigma_params = torch.tensor([intrinsic_sigma], requires_grad=True, device=device, dtype=torch.float32)
92 | scint_operator = GaussianOperator(
93 | gauss_amplitude_fn,
94 | gauss_sigma_fn,
95 | gauss_amplitude_params,
96 | gauss_sigma_params,
97 | )
98 | # Total combined:
99 | psf_operator = scint_operator * ngon_operator
100 | Nx0 = 512
101 | dx0 = 0.24
102 | x = y = torch.arange(-(Nx0-1)/2, (Nx0+1)/2, 1).to(device) * dx0
103 | xv, yv = torch.meshgrid(x, y, indexing='xy')
104 | distances = torch.arange(0.36, 57.9600, 0.48).to(device)
105 | # Get kernel meshgrid
106 | k_width = 24 #cm
107 | xv_k, yv_k = get_kernel_meshgrid(xv, yv, k_width)
108 | # Create input with point source at origin
109 | input = torch.zeros_like(xv).unsqueeze(0).repeat(distances.shape[0], 1, 1)
110 | input[:,256,256] = 1
111 | output = psf_operator(input, xv_k, yv_k, distances, normalize=True)
--------------------------------------------------------------------------------
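Several of the tests above create parameter tensors with requires_grad=True: the operators are differentiable with respect to their kernel hyperparameters, so they can be fit to reference PSF data with an ordinary PyTorch optimizer. A minimal sketch of such a fitting loop is shown below; psf_ref stands in for measured or simulated reference PSFs on the same grid, and the model, learning rate, and iteration count are assumptions for illustration, not part of the test suite.

import torch
from spectpsftoolbox.operator2d import GaussianOperator

amplitude_fn = lambda a, bs: bs[0] * torch.ones_like(a)
sigma_fn = lambda a, bs: bs[0] + bs[1] * a              # assumed linear distance dependence
amplitude_params = torch.tensor([1.0], requires_grad=True)
sigma_params = torch.tensor([0.2, 0.05], requires_grad=True)
op = GaussianOperator(amplitude_fn, sigma_fn, amplitude_params, sigma_params)

x = y = torch.arange(-64, 65, 1.0) * 0.1
xv, yv = torch.meshgrid(x, y, indexing='xy')
distances = torch.linspace(1.0, 10.0, 5)
point_source = torch.zeros_like(xv).unsqueeze(0).repeat(distances.shape[0], 1, 1)
point_source[:, 64, 64] = 1.0
psf_ref = torch.rand_like(point_source)                 # placeholder for real reference PSF data

optimizer = torch.optim.Adam([amplitude_params, sigma_params], lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    psf_pred = op(point_source, xv, yv, distances)
    loss = torch.mean((psf_pred - psf_ref) ** 2)
    loss.backward()
    optimizer.step()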